Sample records for compressible hydrodynamics codes

  1. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    DOE PAGES

    Laney, Daniel; Langer, Steven; Weber, Christopher; ...

    2014-01-01

    This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3–5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
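
    As a schematic illustration of this approach (not the paper's compressors or codes), the sketch below substitutes a uniform quantizer for a real lossy compressor and compares a pointwise signal-processing metric with a physics-motivated one, the drift of a conserved total:

    ```python
    import numpy as np

    def lossy_quantize(field, bits=8):
        """Uniform quantization to 2**bits levels, standing in for a real
        lossy compressor; returns the decompressed approximation."""
        lo, hi = field.min(), field.max()
        levels = 2**bits - 1
        q = np.round((field - lo) / (hi - lo) * levels)
        return lo + q * (hi - lo) / levels

    rng = np.random.default_rng(0)
    density = 1.0 + 0.1 * rng.standard_normal((64, 64))
    approx = lossy_quantize(density, bits=8)

    # Signal-processing metric: worst pointwise relative error.
    print("max pointwise rel. error:", np.max(np.abs(approx - density) / density))
    # Physics-motivated metric: drift in a conserved quantity (total mass here).
    print("rel. mass drift:", abs(approx.sum() - density.sum()) / density.sum())
    ```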

  2. Implementation of Hydrodynamic Simulation Code in Shock Experiment Design for Alkali Metals

    NASA Astrophysics Data System (ADS)

    Coleman, A. L.; Briggs, R.; Gorman, M. G.; Ali, S.; Lazicki, A.; Swift, D. C.; Stubley, P. G.; McBride, E. E.; Collins, G.; Wark, J. S.; McMahon, M. I.

    2017-10-01

    Shock compression techniques enable the investigation of extreme P-T states. In order to probe off-Hugoniot regions of P-T space, target makeup and laser pulse parameters must be carefully designed. HYADES is a hydrodynamic simulation code which has been successfully utilised to simulate shock compression events and refine the experimental parameters required in order to explore new P-T states in alkali metals. Here we describe simulations and experiments on potassium, along with the techniques required to access off-Hugoniot states.
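
    A minimal sketch of the jump conditions that underlie this kind of design work (not HYADES itself); the linear Us-up fit coefficients below are placeholders, not measured potassium values:

    ```python
    def hugoniot_state(rho0, c0, s, up):
        """Rankine-Hugoniot state behind a steady shock, assuming the common
        linear fit Us = c0 + s*up. With rho0 in g/cm^3 and velocities in km/s,
        the momentum jump rho0*Us*up conveniently comes out in GPa."""
        us = c0 + s * up                # shock velocity
        p = rho0 * us * up              # pressure behind the shock (GPa)
        rho = rho0 * us / (us - up)     # compressed density (mass conservation)
        return us, p, rho

    # Placeholder fit parameters, for illustration only (not measured values):
    print(hugoniot_state(rho0=0.86, c0=1.97, s=1.18, up=2.0))
    ```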

  3. Hydrocode and Molecular Dynamics modelling of uniaxial shock wave experiments on Silicon

    NASA Astrophysics Data System (ADS)

    Stubley, Paul; McGonegle, David; Patel, Shamim; Suggit, Matthew; Wark, Justin; Higginbotham, Andrew; Comley, Andrew; Foster, John; Rothman, Steve; Eggert, Jon; Kalantar, Dan; Smith, Ray

    2015-06-01

    Recent experiments have provided further evidence that the response of silicon to shock compression has anomalous properties, not described by the usual two-wave elastic-plastic response. A recent experimental campaign on the Orion laser in particular has indicated a complex multi-wave response. While Molecular Dynamics (MD) simulations can offer a detailed insight into the response of crystals to uniaxial compression, they are extremely computationally expensive. For this reason, we are adapting a simple quasi-2D hydrodynamics code to capture phase change under uniaxial compression, and the intervening mixed phase region, keeping track of the stresses and strains in each of the phases. This strain information is particularly important because a large number of shock experiments use diffraction as a key diagnostic, and these diffraction patterns depend solely on the elastic strains in the sample. We present here a comparison of the new hydrodynamics code with MD simulations, and show that the simulated diffraction taken from the code agrees qualitatively with measured diffraction from our recent Orion campaign.

  4. PELEC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-05-17

    PeleC is an adaptive-mesh compressible hydrodynamics code for reacting flows. It solves the compressible Navier-Stokes equations with multispecies transport in a block-structured framework. The resulting algorithm is well suited for flows with localized resolution requirements and robust to discontinuities. User-controllable refinement criteria have the potential to result in extremely small numerical dissipation and dispersion, making this code appropriate for both research and applied usage. The code is built on the AMReX library, which facilitates hierarchical parallelism and manages distributed memory parallelism. PeleC algorithms are implemented to express shared memory parallelism.
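
    As an illustration of what a user-controllable refinement criterion can look like, here is a generic gradient-based tagging rule (not PeleC's actual interface):

    ```python
    import numpy as np

    def tag_cells(rho, threshold=0.1):
        """Tag cells whose relative density jump to the right-hand neighbor
        exceeds `threshold` along any axis -- a generic gradient-based
        refinement criterion (not PeleC's actual interface)."""
        tags = np.zeros(rho.shape, dtype=bool)
        for axis in range(rho.ndim):
            lo = rho.take(np.arange(rho.shape[axis] - 1), axis=axis)
            hi = rho.take(np.arange(1, rho.shape[axis]), axis=axis)
            jump = np.abs(hi - lo) / np.minimum(lo, hi)
            sel = [slice(None)] * rho.ndim
            sel[axis] = slice(0, -1)
            tags[tuple(sel)] |= jump > threshold
        return tags

    rho = np.ones((32, 32))
    rho[16:, :] = 2.0                      # a density discontinuity
    print(tag_cells(rho).sum(), "cells tagged for refinement")
    ```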

  5. GASOLINE: Smoothed Particle Hydrodynamics (SPH) code

    NASA Astrophysics Data System (ADS)

    N-Body Shop

    2017-10-01

    Gasoline solves the equations of gravity and hydrodynamics in astrophysical problems, including simulations of planets, stars, and galaxies. It uses an SPH method that features correct mixing behavior in multiphase fluids and minimal artificial viscosity. This method is identical to the SPH method used in the ChaNGa code (ascl:1105.005), allowing users to extend results to problems requiring >100,000 cores. Gasoline uses a fast, memory-efficient O(N log N) KD-Tree to solve Poisson's Equation for gravity and avoids artificial viscosity in non-shocking compressive flows.
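
    A generic sketch of the two ingredients highlighted here, a tree-accelerated neighbor search and an SPH density sum (illustrative only, not Gasoline's implementation):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def w_cubic(r, h):
        """Standard M4 cubic-spline SPH kernel in 3D (support radius 2h)."""
        q = r / h
        sigma = 1.0 / (np.pi * h**3)
        return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                                np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

    def sph_density(pos, mass, h):
        """Density at each particle from the usual SPH sum, with neighbors
        found through a KD-tree -- the O(N log N) structure the record cites."""
        tree = cKDTree(pos)
        rho = np.empty(len(pos))
        for i, nbrs in enumerate(tree.query_ball_point(pos, 2.0 * h)):
            r = np.linalg.norm(pos[nbrs] - pos[i], axis=1)
            rho[i] = np.sum(mass[nbrs] * w_cubic(r, h))
        return rho

    pos = np.random.default_rng(1).random((1000, 3))
    print(sph_density(pos, np.full(1000, 1e-3), h=0.1)[:3])
    ```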

  6. GENASIS: General Astrophysical Simulation System. I. Refinable Mesh and Nonrelativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.; Endeve, Eirik; Mezzacappa, Anthony

    2014-02-01

    GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper—the first in a series—demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.
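
    The HLLC/HLL comparison mentioned above comes down to how much wave structure the approximate Riemann solver retains. A minimal HLL flux for the 1D Euler equations; HLLC restores the contact wave that HLL smears out:

    ```python
    import numpy as np

    def euler_flux(u_vec, gamma=1.4):
        """Physical flux for the 1D Euler equations, u_vec = (rho, rho*u, E)."""
        rho, mom, e_tot = u_vec
        u = mom / rho
        p = (gamma - 1.0) * (e_tot - 0.5 * rho * u**2)
        return np.array([mom, mom * u + p, (e_tot + p) * u])

    def hll_flux(ul, ur, gamma=1.4):
        """HLL flux: one averaged intermediate state between the fastest left-
        and right-going waves; HLLC adds the contact wave this averages away."""
        def u_c(state):
            rho, mom, e_tot = state
            u = mom / rho
            c = np.sqrt(gamma * (gamma - 1.0) * (e_tot - 0.5 * rho * u**2) / rho)
            return u, c
        u_l, c_l = u_c(ul)
        u_r, c_r = u_c(ur)
        s_l, s_r = min(u_l - c_l, u_r - c_r), max(u_l + c_l, u_r + c_r)
        if s_l >= 0.0:
            return euler_flux(ul, gamma)
        if s_r <= 0.0:
            return euler_flux(ur, gamma)
        fl, fr = euler_flux(ul, gamma), euler_flux(ur, gamma)
        return (s_r * fl - s_l * fr + s_l * s_r * (ur - ul)) / (s_r - s_l)

    # Sod-like interface states (rho, rho*u, E):
    print(hll_flux(np.array([1.0, 0.0, 2.5]), np.array([0.125, 0.0, 0.25])))
    ```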

  7. Maestro and Castro: Simulation Codes for Astrophysical Flows

    NASA Astrophysics Data System (ADS)

    Zingale, Michael; Almgren, Ann; Beckner, Vince; Bell, John; Friesen, Brian; Jacobs, Adam; Katz, Maximilian P.; Malone, Christopher; Nonaka, Andrew; Zhang, Weiqun

    2017-01-01

    Stellar explosions are multiphysics problems—modeling them requires the coordinated input of gravity solvers, reaction networks, radiation transport, and hydrodynamics together with microphysics recipes to describe the physics of matter under extreme conditions. Furthermore, these models involve following a wide range of spatial and temporal scales, which puts tough demands on simulation codes. We developed the codes Maestro and Castro to meet the computational challenges of these problems. Maestro uses a low Mach number formulation of the hydrodynamics to efficiently model convection. Castro solves the fully compressible radiation hydrodynamics equations to capture the explosive phases of stellar phenomena. Both codes are built upon the BoxLib adaptive mesh refinement library, which prepares them for next-generation exascale computers. Common microphysics shared between the codes allows us to transfer a problem from the low Mach number regime in Maestro to the explosive regime in Castro. Importantly, both codes are freely available (https://github.com/BoxLib-Codes). We will describe the design of the codes and some of their science applications, as well as future development directions. Support for development was provided by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook and by the Applied Mathematics Program of the DOE Office of Advanced Scientific Computing Research under US DOE contract DE-AC02-05CH11231 to LBNL.

  8. CRKSPH: A new meshfree hydrodynamics method with applications to astrophysics

    NASA Astrophysics Data System (ADS)

    Owen, John Michael; Raskin, Cody; Frontiere, Nicholas

    2018-01-01

    The study of astrophysical phenomena such as supernovae, accretion disks, galaxy formation, and large-scale structure formation requires computational modeling of, at a minimum, hydrodynamics and gravity. Developing numerical methods appropriate for these kinds of problems requires a number of properties: shock-capturing hydrodynamics benefits from rigorous conservation of invariants such as total energy, linear momentum, and mass; lack of obvious symmetries or a simplified spatial geometry to exploit necessitates 3D methods that ideally are Galilean invariant; the dynamic range of mass and spatial scales that need to be resolved can span many orders of magnitude, requiring methods that are highly adaptable in their space and time resolution. We have developed a new Lagrangian meshfree hydrodynamics method called Conservative Reproducing Kernel Smoothed Particle Hydrodynamics, or CRKSPH, in order to meet these goals. CRKSPH is a conservative generalization of the meshfree reproducing kernel method, combining the high-order accuracy of reproducing kernels with the explicit conservation of mass, linear momentum, and energy necessary to study shock-driven hydrodynamics in compressible fluids. CRKSPH's Lagrangian, particle-like nature makes it simple to combine with well-known N-body methods for modeling gravitation, similar to the older Smoothed Particle Hydrodynamics (SPH) method. Indeed, CRKSPH can be substituted for SPH in existing SPH codes due to these similarities. In comparison to SPH, CRKSPH is able to achieve substantially higher accuracy for a given number of points due to the explicitly consistent (and higher-order) interpolation theory of reproducing kernels, while maintaining the same conservation principles (and therefore applicability) as SPH. There are currently two implementations of CRKSPH available: one in the open-source research code Spheral, and the other in the high-performance cosmological code HACC. Using these codes we have applied CRKSPH to a number of astrophysical scenarios, such as rotating gaseous disks, supernova remnants, and large-scale cosmological structure formation. In this poster we present an overview of CRKSPH and show examples of these astrophysical applications.
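
    The core idea behind the reproducing-kernel half of CRKSPH can be shown in 1D: correct the kernel with a linear polynomial so that constants and linear fields are interpolated exactly. This is a sketch of the general technique, not CRKSPH's conservative formulation:

    ```python
    import numpy as np

    def kernel(r, h):
        """Gaussian kernel (1D, normalized); support assumed wide enough here."""
        return np.exp(-(r / h)**2) / (h * np.sqrt(np.pi))

    def rk_weights(x, xj, vol, h):
        """Linearly-consistent reproducing-kernel correction: replace W_j by
        (A + B*(xj - x))*W_j with A, B solved from the zeroth/first moment
        conditions so constants and linear fields are reproduced exactly."""
        w = kernel(xj - x, h)
        d = xj - x
        m0, m1, m2 = (np.sum(vol * d**k * w) for k in range(3))
        a, b = np.linalg.solve([[m0, m1], [m1, m2]], [1.0, 0.0])
        return (a + b * d) * w

    # On an irregular point set, plain SPH interpolation of f(x) = x is biased;
    # the corrected weights reproduce it exactly:
    xj = np.sort(np.random.default_rng(2).random(50))
    vol = np.gradient(xj)                      # crude per-point volumes
    wts = rk_weights(0.5, xj, vol, h=0.08)
    print("interpolant of f(x)=x at x=0.5:", np.sum(vol * wts * xj))
    ```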

  9. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. II. GRAY RADIATION HYDRODYNAMICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, W.; Almgren, A.; Bell, J.

    We describe the development of a flux-limited gray radiation solver for the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. The gray radiation solver is based on a mixed-frame formulation of radiation hydrodynamics. In our approach, the system is split into two parts, one part that couples the radiation and fluid in a hyperbolic subsystem, and another parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem is solved explicitly with a high-order Godunov scheme, whereas the parabolic part is solved implicitly with a first-order backward Euler method.

  10. The escape of high explosive products: An exact-solution problem for verification of hydrodynamics codes

    DOE PAGES

    Doebling, Scott William

    2016-10-22

    This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.
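
    A worked illustration of why a judicious specific-heat ratio linearizes the characteristics (a textbook building block, not the full Fickett & Rivard solution): for a polytropic gas with gamma = 3, the sound speed is proportional to density, u + c is uniform across a fan expanding into a void, and all variables become linear in x/t:

    ```python
    import numpy as np

    gamma, c0, rho0 = 3.0, 1.0, 1.0    # gamma = 3: c is proportional to rho

    def fan(x, t):
        """Centered rarefaction of a gamma = 3 gas, initially at rest at x < 0,
        expanding into a void: the C+ Riemann invariant gives u + c = c0
        everywhere, the fan characteristics are the straight lines x/t = u - c,
        and every variable is linear in x/t."""
        xi = np.clip(x / t, -c0, c0)   # head at x = -c0*t, vacuum front at +c0*t
        u = 0.5 * (xi + c0)
        c = c0 - u                     # from the constant Riemann invariant
        return u, c, rho0 * c / c0     # velocity, sound speed, density

    print(fan(np.array([-1.5, 0.0, 0.9]), t=1.0))
    ```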

  11. Numerical Viscosity and the Survival of Gas Giant Protoplanets in Disk Simulations

    NASA Astrophysics Data System (ADS)

    Pickett, Megan K.; Durisen, Richard H.

    2007-01-01

    We present three-dimensional hydrodynamic simulations of a gravitationally unstable protoplanetary disk model under the condition of local isothermality. Ordinarily, local isothermality precludes the need for an artificial viscosity (AV) scheme to mediate shocks. Without AV, the disk evolves violently, shredding into dense (although short-lived) clumps. When we introduce our AV treatment in the momentum equation, but without heating due to irreversible compression, our grid-based simulations begin to resemble smoothed particle hydrodynamics (SPH) calculations, where clumps are more likely to survive many orbits. In fact, the standard SPH viscosity appears comparable in strength to the AV that leads to clump longevity in our code. This sensitivity to one numerical parameter suggests extreme caution in interpreting simulations by any code in which long-lived gaseous protoplanetary bodies appear.
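
    For readers unfamiliar with the numerical parameter at issue, here is a sketch of a classic von Neumann-Richtmyer-type artificial viscosity, active only in compression; the tunable coefficients are exactly the kind of knob the authors caution about (forms differ between codes, including the ones compared here):

    ```python
    import numpy as np

    def artificial_viscosity(rho, u, c, dx, c_lin=0.5, c_quad=1.0):
        """Von Neumann-Richtmyer-type artificial viscous pressure with the
        usual linear + quadratic terms, active only where the flow compresses
        (du/dx < 0); c_lin and c_quad set its strength."""
        dudx = np.gradient(u, dx)
        comp = np.minimum(dudx, 0.0)            # zero in expansion regions
        return rho * (c_quad * (dx * comp)**2 - c_lin * dx * c * comp)

    x = np.linspace(0.0, 1.0, 101)
    dx = x[1] - x[0]
    u = np.where(x < 0.5, 1.0, 0.0)             # converging velocity jump
    q = artificial_viscosity(np.ones_like(x), u, np.ones_like(x), dx)
    print("peak viscous pressure:", q.max())
    ```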

  12. White Dwarf Mergers On Adaptive Meshes. I. Methodology And Code Verification

    DOE PAGES

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; ...

    2016-03-02

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first study in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Finally, future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  13. Flash Kα radiography of laser-driven solid sphere compression for fast ignition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawada, H.; Lee, S.; Shiroto, T.

    2016-06-20

    Time-resolved compression of a laser-driven solid deuterated plastic sphere with a cone was measured with flash Kα x-ray radiography. A spherically converging shockwave launched by nanosecond GEKKO XII beams was used for compression while a flash of 4.51 keV Ti Kα x-ray backlighter was produced by a high-intensity, picosecond laser LFEX (Laser for Fast ignition EXperiment) near peak compression for radiography. Areal densities of the compressed core were inferred from two-dimensional backlit x-ray images recorded with a narrow-band spherical crystal imager. The maximum areal density in the experiment was estimated to be 87 ± 26 mg/cm². Lastly, the temporal evolution of the experimental areal densities is in good agreement with simulations from a 2-D radiation-hydrodynamics code.

  14. Flash Kα radiography of laser-driven solid sphere compression for fast ignition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawada, H.; Lee, S.; Nagatomo, H.

    2016-06-20

    Time-resolved compression of a laser-driven solid deuterated plastic sphere with a cone was measured with flash Kα x-ray radiography. A spherically converging shockwave launched by nanosecond GEKKO XII beams was used for compression while a flash of 4.51 keV Ti Kα x-ray backlighter was produced by a high-intensity, picosecond laser LFEX (Laser for Fast ignition EXperiment) near peak compression for radiography. Areal densities of the compressed core were inferred from two-dimensional backlit x-ray images recorded with a narrow-band spherical crystal imager. The maximum areal density in the experiment was estimated to be 87 ± 26 mg/cm². The temporal evolution of the experimental areal densities is in good agreement with simulations from a 2-D radiation-hydrodynamics code.

  15. Thermonuclear targets for direct-drive ignition by a megajoule laser pulse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bel’kov, S. A.; Bondarenko, S. V.; Vergunova, G. A.

    2015-10-15

    Central ignition of a thin two-layer-shell fusion target that is directly driven by a 2-MJ profiled pulse of Nd laser second-harmonic radiation has been studied. The parameters of the target were selected so as to provide effective acceleration of the shell toward the center, which was sufficient for the onset of ignition under conditions of increased hydrodynamic stability of the ablator acceleration and compression. The aspect ratio of the inner deuterium-tritium layer of the shell does not exceed 15, provided that a major part (above 75%) of the outer layer (plastic ablator) is evaporated by the instant of maximum compression. The investigation is based on two series of numerical calculations that were performed using one-dimensional (1D) hydrodynamic codes. The first 1D code was used to calculate the absorption of the profiled laser-radiation pulse (including calculation of the total absorption coefficient with allowance for the inverse bremsstrahlung and resonance mechanisms) and the spatial distribution of target heating for a real geometry of irradiation using 192 laser beams in a scheme of focusing with a cubo-octahedral symmetry. The second 1D code was used for simulating the total cycle of target evolution under the action of absorbed laser radiation and for determining the thermonuclear gain that was achieved with a given target.

  16. One-Dimensional Burn Dynamics of Plasma-Jet Magneto-Inertial Fusion

    NASA Astrophysics Data System (ADS)

    Santarius, John

    2009-11-01

    This poster will discuss several issues related to using plasma jets to implode a Magneto-Inertial Fusion (MIF) liner onto a magnetized plasmoid and compress it to fusion-relevant temperatures [1]. The problem of pure plasma jet convergence and compression without a target present will be investigated. Cases with a target present will explore how well the liner's inertia provides transient plasma stability and confinement. The investigation uses UW's 1-D Lagrangian radiation-hydrodynamics code, BUCKY, which solves single-fluid equations of motion with ion-electron interactions, PdV work, table-lookup equations of state, fast-ion energy deposition, and pressure contributions from all species. Extensions to the code include magnetic field evolution as the plasmoid compresses plus dependence of the thermal conductivity and fusion product energy deposition on the magnetic field. [1] Y. C. F. Thio, et al., "Magnetized Target Fusion in a Spheroidal Geometry with Standoff Drivers," in Current Trends in International Fusion Research, E. Panarella, ed. (National Research Council of Canada, Ottawa, Canada, 1999), p. 113.

  17. Computational-hydrodynamic studies of the Noh compressible flow problem using non-ideal equations of state

    NASA Astrophysics Data System (ADS)

    Honnell, Kevin; Burnett, Sarah; Yorke, Chloe'; Howard, April; Ramsey, Scott

    2017-06-01

    The Noh problem is a classic verification problem in the field of compressible flows. Simple to conceptualize, it is nonetheless difficult for numerical codes to predict correctly, making it an ideal code-verification test bed. In its original incarnation, the fluid is a simple ideal gas; once validated, however, these codes are often used to study highly non-ideal fluids and solids. In this work the classic Noh problem is extended beyond the commonly-studied polytropic ideal gas to more realistic equations of state (EOS) including the stiff gas, the Noble-Abel gas, and the Carnahan-Starling hard-sphere fluid, thus enabling verification studies to be performed on more physically-realistic fluids. Exact solutions are compared with numerical results obtained from the Lagrangian hydrocode FLAG, developed at Los Alamos. For these more realistic EOSs, the simulation errors decreased in magnitude both at the origin and at the shock, but also spread more broadly about these points compared to the ideal EOS. The overall spatial convergence rate remained first order.
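
    For reference, the ideal-gas baseline that this work generalizes has a simple closed form (standard textbook result for the strong-shock Noh setup):

    ```python
    import numpy as np

    def noh_exact(r, t, gamma=5.0/3.0, n=3, rho0=1.0, u0=1.0):
        """Exact ideal-gas Noh solution (uniform inflow at speed u0 toward the
        origin, zero initial pressure) for planar (n=1), cylindrical (n=2), or
        spherical (n=3) geometry."""
        r_shock = 0.5 * (gamma - 1.0) * u0 * t
        jump = (gamma + 1.0) / (gamma - 1.0)
        shocked = r < r_shock
        rho = np.where(shocked, rho0 * jump**n, rho0 * (1.0 + u0 * t / r)**(n - 1))
        u = np.where(shocked, 0.0, -u0)
        p = np.where(shocked, 0.5 * (gamma - 1.0) * rho0 * jump**n * u0**2, 0.0)
        return rho, u, p

    # gamma = 5/3, spherical: shock at r = t/3, post-shock density 64
    print(noh_exact(np.array([0.05, 0.5]), t=0.6))
    ```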

  18. Observation of Compressible Plasma Mix in Cylindrically Convergent Implosions

    NASA Astrophysics Data System (ADS)

    Barnes, Cris W.; Batha, Steven H.; Lanier, Nicholas E.; Magelssen, Glenn R.; Tubbs, David L.; Dunne, A. M.; Rothman, Steven R.; Youngs, David L.

    2000-10-01

    An understanding of hydrodynamic mix in convergent geometry will be of key importance in the development of a robust ignition/burn capability on NIF, LMJ and future pulsed power machines. We have made use of the OMEGA laser facility at the University of Rochester to investigate directly the mix evolution in a convergent geometry, compressible plasma regime. The experiments comprise a plastic cylindrical shell imploded by direct laser irradiation. The cylindrical shell surrounds a lower density plastic foam which provides sufficient back pressure to allow the implosion to stagnate at a sufficiently high radius to permit quantitative radiographic diagnosis of the interface evolution near turnaround. The susceptibility to mix of the shell-foam interface is varied by choosing different density material for the inner shell surface (thus varying the Atwood number). This allows the study of shock-induced Richtmyer-Meshkov growth during the coasting phase, and Rayleigh-Taylor growth during the stagnation phase. The experimental results will be described along with calculational predictions using various radiation hydrodynamics codes and turbulent mix models.

  19. Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

    NASA Astrophysics Data System (ADS)

    Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun

    2017-12-01

    The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using a Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in optically thin limits and assuming ionization equilibrium, the code calculates heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
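
    A minimal 1D version of the Cloud-in-Cell deposition mentioned above (illustrative; Nyx applies the same linear weighting per dimension on 3D AMR grids):

    ```python
    import numpy as np

    def cic_deposit(positions, masses, n_cells, dx):
        """1D Cloud-in-Cell deposition on a periodic grid: each particle's
        mass is split between its two nearest cell centers with weights
        linear in distance."""
        rho = np.zeros(n_cells)
        xi = positions / dx - 0.5            # position relative to cell centers
        i = np.floor(xi).astype(int)
        frac = xi - i
        np.add.at(rho, i % n_cells, masses * (1.0 - frac) / dx)
        np.add.at(rho, (i + 1) % n_cells, masses * frac / dx)
        return rho

    rho = cic_deposit(np.array([0.49, 0.51]), np.array([1.0, 1.0]),
                      n_cells=10, dx=0.1)
    print(rho, "total mass:", rho.sum() * 0.1)   # deposition conserves mass
    ```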

  20. Plasma-Jet Magneto-Inertial Fusion Burn Calculations

    NASA Astrophysics Data System (ADS)

    Santarius, John

    2010-11-01

    Several issues exist related to using plasma jets to implode a Magneto-Inertial Fusion (MIF) liner onto a magnetized plasmoid and compress it to fusion-relevant temperatures [1]. The poster will explore how well the liner's inertia provides transient plasma confinement and affects the burn dynamics. The investigation uses the University of Wisconsin's 1-D Lagrangian radiation-hydrodynamics code, BUCKY, which solves single-fluid equations of motion with ion-electron interactions, PdV work, table-lookup equations of state, fast-ion energy deposition, pressure contributions from all species, and one or two temperatures. Extensions to the code include magnetic field evolution as the plasmoid compresses plus dependence of the thermal conductivity on the magnetic field. [1] Y. C. F. Thio, et al., "Magnetized Target Fusion in a Spheroidal Geometry with Standoff Drivers," in Current Trends in International Fusion Research, E. Panarella, ed. (National Research Council of Canada, Ottawa, Canada, 1999), p. 113.

  1. Three-dimensional modeling of the neutron spectrum to infer plasma conditions in cryogenic inertial confinement fusion implosions

    NASA Astrophysics Data System (ADS)

    Weilacher, F.; Radha, P. B.; Forrest, C.

    2018-04-01

    Neutron-based diagnostics are typically used to infer compressed core conditions such as areal density and ion temperature in deuterium-tritium (D-T) inertial confinement fusion (ICF) implosions. Asymmetries in the observed neutron-related quantities are important to understanding failure modes in these implosions. Neutrons from fusion reactions and their subsequent interactions including elastic scattering and neutron-induced deuteron breakup reactions are tracked to create spectra. It is shown that background subtraction is important for inferring areal density from backscattered neutrons and is less important for the forward-scattered neutrons. A three-dimensional hydrodynamic simulation of a cryogenic implosion on the OMEGA Laser System [Boehly et al., Opt. Commun. 133, 495 (1997)] using the hydrodynamic code HYDRA [Marinak et al., Phys. Plasmas 8, 2275 (2001)] is post-processed using the tracking code IRIS3D. It is shown that different parts of the neutron spectrum from a given view can be mapped into different regions of the implosion, enabling an inference of an areal-density map. It is also shown that the average areal-density and an areal-density map of the compressed target can be reconstructed with a finite number of detectors placed around the target chamber. Ion temperatures are inferred from the width of the D-D and D-T fusion neutron spectra. Backgrounds can significantly alter the inferred ion temperatures from the D-D reaction, whereas they insignificantly influence the inferred D-T ion temperatures for the areal densities typical of OMEGA implosions. Asymmetries resulting in fluid flow in the core are shown to influence the absolute inferred ion temperatures from both reactions, although relative inferred values continue to reflect the underlying asymmetry pattern. The work presented here is part of the wide range of the first set of studies performed with IRIS3D. This code will continue to be used for post-processing detailed hydrodynamic simulations and interpreting observed neutron spectra in ICF implosions.
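
    The final inference step can be sketched with the commonly quoted Brysk Doppler-width coefficients (an assumption of this illustration), ignoring the background and bulk-flow corrections that the paper shows are important:

    ```python
    def t_ion_from_fwhm(fwhm_kev, reaction="DT"):
        """Ion temperature from the thermal Doppler width of a fusion neutron
        peak via the Brysk relation; the commonly quoted coefficients give
        FWHM ~ 177*sqrt(Ti) keV for D-T (14.1 MeV neutrons) and ~82.5*sqrt(Ti)
        keV for D-D (2.45 MeV), with Ti in keV."""
        coeff = {"DT": 177.0, "DD": 82.5}[reaction]
        return (fwhm_kev / coeff)**2

    print(t_ion_from_fwhm(250.0, "DT"), "keV")   # ~2 keV plasma
    ```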

  2. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    NASA Astrophysics Data System (ADS)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
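
    The essence of a Jacobian-free Newton-Krylov iteration fits in a few lines: the Krylov solver only ever needs Jacobian-vector products, which can be approximated by differencing the residual, so no Jacobian matrix is formed. A generic sketch (MUSIC's preconditioned, stellar-physics version is far more involved):

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jfnk_step(residual, u, eps=1e-7):
        """One Newton step solved matrix-free: J(u)v is approximated by a
        finite difference of the residual, and GMRES solves J du = -F."""
        r = residual(u)
        jv = lambda v: (residual(u + eps * v) - r) / eps
        jac = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(jac, -r, atol=1e-10)
        return u + du

    # Toy nonlinear system with root (1, 1):
    residual = lambda u: np.array([u[0]**2 + u[1] - 2.0, u[0] + u[1]**3 - 2.0])
    u = np.array([2.0, 2.0])
    for _ in range(8):
        u = jfnk_step(residual, u)
    print(u)
    ```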

  3. Application of the High Gradient hydrodynamics code to simulations of a two-dimensional zero-pressure-gradient turbulent boundary layer over a flat plate

    NASA Astrophysics Data System (ADS)

    Kaiser, Bryan E.; Poroseva, Svetlana V.; Canfield, Jesse M.; Sauer, Jeremy A.; Linn, Rodman R.

    2013-11-01

    The High Gradient hydrodynamics (HIGRAD) code is an atmospheric computational fluid dynamics code created by Los Alamos National Laboratory to accurately represent flows characterized by sharp gradients in velocity, concentration, and temperature. HIGRAD uses a fully compressible finite-volume formulation for explicit Large Eddy Simulation (LES) and features an advection scheme that is second-order accurate in time and space. In the current study, boundary conditions implemented in HIGRAD are varied to find those that better reproduce the reduced physics of a flat plate boundary layer to compare with complex physics of the atmospheric boundary layer. Numerical predictions are compared with available DNS, experimental, and LES data obtained by other researchers. High-order turbulence statistics are collected. The Reynolds number based on the free-stream velocity and the momentum thickness is 120 at the inflow and the Mach number for the flow is 0.2. Results are compared at Reynolds numbers of 670 and 1410. A part of the material is based upon work supported by NASA under award NNX12AJ61A and by the Junior Faculty UNM-LANL Collaborative Research Grant.

  4. Numerical simulation of tornado wind loading on structures

    NASA Technical Reports Server (NTRS)

    Maiden, D. E.

    1976-01-01

    A numerical simulation of a tornado interacting with a building was undertaken in order to compare the pressures due to a rotational unsteady wind with that due to steady straight winds used in design of nuclear facilities. The numerical simulations were performed on a two-dimensional compressible hydrodynamics code. Calculated pressure profiles for a typical building were then subjected to a tornado wind field and the results were compared with current quasisteady design calculations. The analysis indicates that current design practices are conservative.

  5. Wave journal bearing with compressible lubricant--Part 1: The wave bearing concept and a comparison to the plain circular bearing

    NASA Technical Reports Server (NTRS)

    Dimofte, Florin

    1995-01-01

    To improve hydrodynamic journal bearing steady-state and dynamic performance, a new bearing concept, the wave journal bearing, was developed at the author's lab. This concept features a waved inner bearing diameter. Compared to other alternative bearing geometries used to improve bearing performance such as spiral or herring-bone grooves, steps, etc., the wave bearing's design is relatively simple and allows the shaft to rotate in either direction. A three-wave bearing operating with a compressible lubricant, i.e., a gas, is analyzed using a numerical code. Its performance is compared to a plain (truly) circular bearing over a broad range of bearing working parameters, e.g., bearing numbers from 0.01 to 100.

  6. X-ray absorption radiography for high pressure shock wave studies

    NASA Astrophysics Data System (ADS)

    Antonelli, L.; Atzeni, S.; Batani, D.; Baton, S. D.; Brambrink, E.; Forestier-Colleoni, P.; Koenig, M.; Le Bel, E.; Maheut, Y.; Nguyen-Bui, T.; Richetta, M.; Rousseaux, C.; Ribeyre, X.; Schiavi, A.; Trela, J.

    2018-01-01

    The study of laser compressed matter, both warm dense matter (WDM) and hot dense matter (HDM), is relevant to several research areas, including materials science, astrophysics, and inertial confinement fusion. X-ray absorption radiography is a unique tool to diagnose compressed WDM and HDM. The application of radiography to shock-wave studies is presented and discussed. In addition to the standard Abel inversion to recover a density map from a transmission map, a procedure has been developed to generate synthetic radiographs using density maps produced by the hydrodynamics code DUED. This procedure takes into account both source-target geometry and source size (which plays a non-negligible role in the interpretation of the data), and allows transmission data to be reproduced with a good degree of accuracy.
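
    The synthetic-radiograph procedure can be sketched as a line-of-sight integration of an axisymmetric density map (a discrete forward Abel projection) followed by Beer-Lambert attenuation. The density profile and opacity below are placeholders, and the source-size and geometry effects discussed in the paper are omitted:

    ```python
    import numpy as np

    def synthetic_radiograph(rho_of_r, r_max, kappa, n=200):
        """Transmission vs. impact parameter for an axisymmetric object:
        integrate density along each chord (a discrete forward Abel
        transform), then apply exp(-kappa * areal_density)."""
        y = np.linspace(0.0, r_max, n)           # impact parameter
        x = np.linspace(-r_max, r_max, 2 * n)    # coordinate along the chord
        X, Y = np.meshgrid(x, y)
        r = np.sqrt(X**2 + Y**2)
        rho = np.where(r < r_max, rho_of_r(r), 0.0)
        areal = rho.sum(axis=1) * (x[1] - x[0])
        return y, np.exp(-kappa * areal)

    # Placeholder parabolic profile (g/cm^3 vs cm) and opacity (cm^2/g):
    profile = lambda r: 2.0 * (1.0 - (r / 0.05)**2)
    y, T = synthetic_radiograph(profile, r_max=0.05, kappa=50.0)
    print("on-axis transmission:", T[0])
    ```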

  7. The simulations of indirect-drive targets for ignition on megajoule lasers.

    NASA Astrophysics Data System (ADS)

    Lykov, Vladimir; Andreev, Eugene; Ardasheva, Ludmila; Avramenko, Michael; Chernyakov, Valerian; Chizhkov, Maxim; Karlykhanov, Nikalai; Kozmanov, Michael; Lebedev, Serge; Rykovanov, George; Seleznev, Vladimir; Sokolov, Lev; Timakova, Margaret; Shestakov, Alexander; Shushlebin, Aleksander

    2013-10-01

    The calculations were performed with radiation hydrodynamic codes developed at RFNC-VNIITF. An analysis of published calculations of indirect-drive targets intended to achieve ignition on the NIF and LMJ lasers has shown that these targets have very low margins for ignition: according to 1D-ERA code calculations, they fail to ignite even if the thermonuclear reaction rate is reduced by less than a factor of 2. The purpose of the new calculations is to find indirect-drive targets with increased ignition margins. The calculations of target compression and thermonuclear burn are carried out for the X-ray flux asymmetry conditions obtained in simulations of a Rugby hohlraum performed with the 2D-SINARA code. The requirements on target manufacturing accuracy and irradiation symmetry were studied with the 2D-TIGR-OMEGA-3T code. This research is motivated by the construction of a megajoule laser in Russia.

  8. Three-dimensional modeling of the neutron spectrum to infer plasma conditions in cryogenic inertial confinement fusion implosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weilacher, F.; Radha, P. B.; Forrest, C.

    Neutron-based diagnostics are typically used to infer compressed core conditions such as areal density and ion temperature in deuterium–tritium (D–T) inertial confinement fusion (ICF) implosions. Asymmetries in the observed neutron-related quantities are important to understanding failure modes in these implosions. Neutrons from fusion reactions and their subsequent interactions including elastic scattering and neutron-induced deuteron breakup reactions are tracked to create spectra. Here, it is shown that background subtraction is important for inferring areal density from backscattered neutrons and is less important for the forward-scattered neutrons. A three-dimensional hydrodynamic simulation of a cryogenic implosion on the OMEGA Laser System [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)] using the hydrodynamic code HYDRA [M. M. Marinak et al., Phys. Plasmas 8, 2275 (2001)] is post-processed using the tracking code IRIS3D. It is shown that different parts of the neutron spectrum from a given view can be mapped into different regions of the implosion, enabling an inference of an areal-density map. It is also shown that the average areal-density and an areal-density map of the compressed target can be reconstructed with a finite number of detectors placed around the target chamber. Ion temperatures are inferred from the width of the D–D and D–T fusion neutron spectra. Backgrounds can significantly alter the inferred ion temperatures from the D–D reaction, whereas they insignificantly influence the inferred D–T ion temperatures for the areal densities typical of OMEGA implosions. Asymmetries resulting in fluid flow in the core are shown to influence the absolute inferred ion temperatures from both reactions, although relative inferred values continue to reflect the underlying asymmetry pattern. The work presented here is part of the wide range of the first set of studies performed with IRIS3D. Finally, this code will continue to be used for post-processing detailed hydrodynamic simulations and interpreting observed neutron spectra in ICF implosions.

  9. Bulk hydrodynamic stability and turbulent saturation in compressing hot spots

    NASA Astrophysics Data System (ADS)

    Davidovits, Seth; Fisch, Nathaniel J.

    2018-04-01

    For hot spots compressed at constant velocity, we give a hydrodynamic stability criterion that describes the expected energy behavior of non-radial hydrodynamic motion for different classes of trajectories (in ρR–T space). For a given compression velocity, this criterion depends on ρR, T, and dT/d(ρR) (the trajectory slope) and applies point-wise so that the expected behavior can be determined instantaneously along the trajectory. Among the classes of trajectories are those where the hydromotion is guaranteed to decrease and those where the hydromotion is bounded by a saturated value. We calculate this saturated value and find the compression velocities for which hydromotion may be a substantial fraction of hot-spot energy at burn time. The Lindl [Phys. Plasmas 2, 3933 (1995)] "attractor" trajectory is shown to experience non-radial hydrodynamic energy that grows towards this saturated state. Comparing the saturation value with the available detailed 3D simulation results, we find that the fluctuating velocities in these simulations reach substantial fractions of the saturated value.

  10. A weakly-compressible Cartesian grid approach for hydrodynamic flows

    NASA Astrophysics Data System (ADS)

    Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.

    2017-11-01

    The present article proposes an original strategy for solving hydrodynamic flows. The motivations for this strategy, developed in the introduction, are to model viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers usually based on implicit incompressible formulations, a fully-explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity purposes. This characteristic allows an easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR compatible treatment. The method uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which is another original feature of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method, which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of the WCCH method are presented and validated in this article.
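
    A minimal sketch of the weakly-compressible closure idea (using a Tait-type equation of state commonly adopted in such methods, which may differ in detail from the paper's choice): pressure follows density through a stiff barotropic law, and the fully explicit update pays for this with an acoustic CFL limit:

    ```python
    import numpy as np

    def tait_pressure(rho, rho0=1000.0, c0=150.0, gamma=7.0):
        """Tait-type barotropic closure often used in weakly-compressible
        solvers: a stiff pressure response keeps relative density variations
        of order Mach^2 when c0 is chosen ~10x the fastest flow speed."""
        b = rho0 * c0**2 / gamma
        return b * ((rho / rho0)**gamma - 1.0)

    def explicit_dt(dx, u_max, c0, cfl=0.8):
        """Acoustic CFL limit: the price of a fully explicit scheme is a time
        step set by the (artificial) sound speed, not the flow speed."""
        return cfl * dx / (c0 + u_max)

    print(tait_pressure(np.array([1000.5, 1001.0])))       # ~ kPa-scale
    print("dt =", explicit_dt(dx=1e-3, u_max=10.0, c0=150.0))
    ```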

  11. A well-balanced finite volume scheme for the Euler equations with gravitation. The exact preservation of hydrostatic equilibrium with arbitrary entropy stratification

    NASA Astrophysics Data System (ADS)

    Käppeli, R.; Mishra, S.

    2016-03-01

    Context. Many problems in astrophysics feature flows which are close to hydrostatic equilibrium. However, standard numerical schemes for compressible hydrodynamics may be deficient in approximating this stationary state, where the pressure gradient is nearly balanced by gravitational forces. Aims: We aim to develop a second-order well-balanced scheme for the Euler equations. The scheme is designed to mimic a discrete version of the hydrostatic balance. It therefore can resolve a discrete hydrostatic equilibrium exactly (up to machine precision) and propagate perturbations, on top of this equilibrium, very accurately. Methods: A local second-order hydrostatic equilibrium preserving pressure reconstruction is developed. Combined with a standard central gravitational source term discretization and numerical fluxes that resolve stationary contact discontinuities exactly, the well-balanced property is achieved. Results: The resulting well-balanced scheme is robust and simple enough to be very easily implemented within any existing computer code that solves time explicitly or implicitly the compressible hydrodynamics equations. We demonstrate the performance of the well-balanced scheme for several astrophysically relevant applications: wave propagation in stellar atmospheres, a toy model for core-collapse supernovae, convection in carbon shell burning, and a realistic proto-neutron star.
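
    What "well-balanced" buys can be checked in a few lines: on an exact hydrostatic profile, a naive discretization of dp/dx + ρg = 0 leaves an O(Δx²) residual that acts as a spurious source, whereas a well-balanced scheme cancels it to machine precision. A minimal illustration:

    ```python
    import numpy as np

    # Isothermal atmosphere: p = rho*RT with dp/dx = -rho*g, so rho ~ exp(-g*x/RT)
    g, rt = 1.0, 1.0
    x = np.linspace(0.0, 10.0, 101)
    rho = np.exp(-g * x / rt)                  # exact hydrostatic profile
    p = rho * rt

    # Residual of a naive centered discretization of dp/dx + rho*g = 0:
    resid = (p[2:] - p[:-2]) / (x[2] - x[0]) + rho[1:-1] * g
    print("max discrete residual:", np.abs(resid).max())   # O(dx^2), not zero
    # A well-balanced scheme is constructed so this residual vanishes exactly
    # on the equilibrium, so it generates no spurious velocities.
    ```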

  12. Experimental design to understand the interaction of stellar radiation with molecular clouds

    NASA Astrophysics Data System (ADS)

    VanDervort, Robert; Davis, Josh; Trantham, Matt; Klein, Sallee; Frank, Yechiel; Raicher, Erez; Fraenkel, Moshe; Shvarts, Dov; Keiter, Paul; Drake, R. Paul

    2017-06-01

    Enhanced star formation triggered by local O and B type stars is an astrophysical problem of interest. O and B type stars are massive, hot stars that emit an enormous amount of radiation. This radiation acts to either compress or blow apart clumps of gas in the interstellar media. For example, in the optically thick limit, when the x-ray radiation has a short mean free path in the gas clump, it is absorbed near the clump edge and compresses the clump. In the optically thin limit, when the mean free path is long, the radiation is absorbed throughout the clump, acting to heat it. This heating explodes the gas clump. Careful selection of parameters, such as foam density or source temperature, allows the experimental platform to access different hydrodynamic regimes. The stellar radiation source is mimicked by a laser-irradiated thin gold foil. This provides a source of thermal x-rays (around 100 eV). The gas clump is mimicked by a low-density foam of around 0.150 g/cc. Simulations were performed using radiation hydrodynamics codes to tune the experimental parameters. The experiment will be carried out at the Omega laser facility on OMEGA 60.

  13. TESS: A RELATIVISTIC HYDRODYNAMICS CODE ON A MOVING VORONOI MESH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duffell, Paul C.; MacFadyen, Andrew I., E-mail: pcd233@nyu.edu, E-mail: macfadyen@nyu.edu

    2011-12-01

    We have generalized a method for the numerical solution of hyperbolic systems of equations using a dynamic Voronoi tessellation of the computational domain. The Voronoi tessellation is used to generate moving computational meshes for the solution of multidimensional systems of conservation laws in finite-volume form. The mesh-generating points are free to move with arbitrary velocity, with the choice of zero velocity resulting in an Eulerian formulation. Moving the points at the local fluid velocity makes the formulation effectively Lagrangian. We have written the TESS code to solve the equations of compressible hydrodynamics and magnetohydrodynamics for both relativistic and non-relativistic fluids on a dynamic Voronoi mesh. When run in Lagrangian mode, TESS is significantly less diffusive than fixed mesh codes and thus preserves contact discontinuities to high precision while also accurately capturing strong shock waves. TESS is written for Cartesian, spherical, and cylindrical coordinates and is modular so that auxiliary physics solvers are readily integrated into the TESS framework and so that this can be readily adapted to solve general systems of equations. We present results from a series of test problems to demonstrate the performance of TESS and to highlight some of the advantages of the dynamic tessellation method for solving challenging problems in astrophysical fluid dynamics.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yanagawa, T.; Sakagami, H.; Nagatomo, H.

    In inertial confinement fusion, the implosion process is important in forming a high-density plasma core. In the case of a fast ignition scheme using a cone-guided target, the fuel target is imploded with a cone inserted. This scheme is advantageous for efficiently heating the imploded fuel core; however, asymmetric implosion is essentially inevitable. Moreover, the effect of cone position and opening angle on implosion also becomes critical. Focusing on these problems, the effect of the asymmetric implosion, the initial position, and the opening angle on the compression rate of the fuel is investigated using a three-dimensional pure hydrodynamic code.

  15. Comparisons of CTH simulations with measured wave profiles for simple flyer plate experiments

    DOE PAGES

    Thomas, S. A.; Veeser, L. R.; Turley, W. D.; ...

    2016-06-13

    We conducted detailed 2-dimensional hydrodynamics calculations to assess the quality of simulations commonly used to design and analyze simple shock compression experiments. Such simple shock experiments also contain data where dynamic properties of materials are integrated together. We wished to assess how well the chosen computer hydrodynamic code could do at capturing both the simple parts of the experiments and the integral parts. We began with very simple shock experiments, in which we examined the effects of the equation of state and the compressional and tensile strength models. We increased complexity to include spallation in copper and iron and a solid-solid phase transformation in iron to assess the quality of the damage and phase transformation simulations. For experiments with a window, the response of both the sample and the window are integrated together, providing a good test of the material models. While CTH physics models are not perfect and do not reproduce all experimental details well, we find the models are useful; the simulations are adequate for understanding much of the dynamic process and for planning experiments. However, higher complexity in the simulations, such as adding in spall, led to greater differences between simulation and experiment. Lastly, this comparison of simulation to experiment may help guide future development of hydrodynamics codes so that they better capture the underlying physics.

  16. Revealing the Physics of Galactic Winds Through Massively-Parallel Hydrodynamics Simulations

    NASA Astrophysics Data System (ADS)

    Schneider, Evan Elizabeth

    This thesis documents the hydrodynamics code Cholla and a numerical study of multiphase galactic winds. Cholla is a massively-parallel, GPU-based code designed for astrophysical simulations that is freely available to the astrophysics community. A static-mesh Eulerian code, Cholla is ideally suited to carrying out massive simulations (> 2048³ cells) that require very high resolution. The code incorporates state-of-the-art hydrodynamics algorithms including third-order spatial reconstruction, exact and linearized Riemann solvers, and unsplit integration algorithms that account for transverse fluxes on multidimensional grids. Operator-split radiative cooling and a dual-energy formalism for high Mach number flows are also included. An extensive test suite demonstrates Cholla's superior ability to model shocks and discontinuities, while the GPU-native design makes the code extremely computationally efficient - speeds of 5-10 million cell updates per GPU-second are typical on current hardware for 3D simulations with all of the aforementioned physics. The latter half of this work comprises a comprehensive study of the mixing between a hot, supernova-driven wind and cooler clouds representative of those observed in multiphase galactic winds. Both adiabatic and radiatively-cooling clouds are investigated. The analytic theory of cloud-crushing is applied to the problem, and adiabatic turbulent clouds are found to be mixed with the hot wind on similar timescales as the classic spherical case (4–5 t_cc) with an appropriate rescaling of the cloud-crushing time. Radiatively cooling clouds survive considerably longer, and the differences in evolution between turbulent and spherical clouds cannot be reconciled with a simple rescaling. The rapid incorporation of low-density material into the hot wind implies efficient mass-loading of hot phases of galactic winds. At the same time, the extreme compression of high-density cloud material leads to long-lived but slow-moving clumps that are unlikely to escape the galaxy.
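
    The cloud-crushing timescale used throughout such analyses is a one-line formula; the numbers below are illustrative, not taken from the thesis:

    ```python
    import numpy as np

    def cloud_crushing_time(chi, r_cloud, v_wind):
        """t_cc = sqrt(chi) * R_cl / v_wind, with chi = rho_cloud/rho_wind;
        adiabatic clouds here mix in roughly 4-5 t_cc."""
        return np.sqrt(chi) * r_cloud / v_wind

    pc, km = 3.086e16, 1.0e3                     # meters
    t_cc = cloud_crushing_time(chi=1000.0, r_cloud=10 * pc, v_wind=1000 * km)
    print(t_cc / 3.156e13, "Myr")                # seconds -> Myr
    ```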

  17. Fluid Film Bearing Code Development

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The next generation of rocket engine turbopumps is being developed by industry through Government-directed contracts. These turbopumps will use fluid film bearings because they eliminate the life and shaft-speed limitations of rolling-element bearings, increase turbopump design flexibility, and reduce the need for turbopump overhauls and maintenance. The design of the fluid film bearings for these turbopumps, however, requires sophisticated analysis tools to model the complex physical behavior characteristic of fluid film bearings operating at high speeds with low viscosity fluids. State-of-the-art analysis and design tools are being developed at the Texas A&M University under a grant guided by the NASA Lewis Research Center. The latest version of the code, HYDROFLEXT, is a thermohydrodynamic bulk flow analysis with fluid compressibility, full inertia, and fully developed turbulence models. It can predict the static and dynamic force response of rigid and flexible pad hydrodynamic bearings and of rigid and tilting pad hydrostatic bearings. The Texas A&M code is a comprehensive analysis tool, incorporating key fluid phenomenon pertinent to bearings that operate at high speeds with low-viscosity fluids typical of those used in rocket engine turbopumps. Specifically, the energy equation was implemented into the code to enable fluid properties to vary with temperature and pressure. This is particularly important for cryogenic fluids because their properties are sensitive to temperature as well as pressure. As shown in the figure, predicted bearing mass flow rates vary significantly depending on the fluid model used. Because cryogens are semicompressible fluids and the bearing dynamic characteristics are highly sensitive to fluid compressibility, fluid compressibility effects are also modeled. The code contains fluid properties for liquid hydrogen, liquid oxygen, and liquid nitrogen as well as for water and air. Other fluids can be handled by the code provided that the user inputs information that relates the fluid transport properties to the temperature.

  18. Numerical studies of the use of thin high-Z layers for reducing laser imprint in direct-drive inertial-fusion targets

    NASA Astrophysics Data System (ADS)

    Bates, Jason; Schmitt, Andrew; Karasik, Max; Obenschain, Steve

    2012-10-01

    Using the FAST code, we present numerical studies of the effect of thin metallic layers with high atomic number (high-Z) on the hydrodynamics of directly-driven inertial-confinement-fusion (ICF) targets. Previous experimental work on the NIKE Laser Facility at the U.S. Naval Research Laboratory demonstrated that the use of high-Z layers may be efficacious in reducing laser non-uniformities imprinted on the target during the start-up phase of the implosion. Such a reduction is highly desirable in a direct-drive ICF scenario because laser non-uniformities seed hydrodynamic instabilities that can amplify during the implosion process, prevent uniform compression and spoil high gain. One of the main objectives of the present work is to assess the utility of high-Z layers for achieving greater laser uniformity in polar-drive target designs planned for the National Ignition Facility. To address this problem, new numerical routines have recently been incorporated in the FAST code, including an improved radiation-transfer package and a three-dimensional ray-tracing algorithm. We will discuss these topics, and present initial simulation results for high-Z planar-target experiments planned on the NIKE Laser Facility later this year.

  1. Hydrodynamically Lubricated Rotary Shaft Having Twist Resistant Geometry

    DOEpatents

    Dietle, Lannie; Gobeli, Jeffrey D.

    1993-07-27

    A hydrodynamically lubricated squeeze packing type rotary shaft seal with a cross-sectional geometry suitable for pressurized lubricant retention is provided. In the preferred embodiment, it incorporates a protuberant static sealing interface that, compared to prior art, dramatically improves the exclusionary action of the dynamic sealing interface in low pressure and unpressurized applications by achieving symmetrical deformation of the seal at the static and dynamic sealing interfaces. In abrasive environments, the improved exclusionary action results in a dramatic reduction of seal and shaft wear, compared to prior art, and provides a significant increase in seal life. The invention also increases seal life by making higher levels of initial compression possible, compared to prior art, without compromising hydrodynamic lubrication; this added compression makes the seal more tolerant of compression set, abrasive wear, mechanical misalignment, dynamic runout, and manufacturing tolerances, and also makes hydrodynamic seals with smaller cross-sections more practical. In alternate embodiments, the benefits enumerated above are achieved by cooperative configurations of the seal and the gland which achieve symmetrical deformation of the seal at the static and dynamic sealing interfaces. The seal may also be configured such that predetermined radial compression deforms it to a desired operative configuration, even though symmetrical deformation is lacking.

  2. New numerical solutions of three-dimensional compressible hydrodynamic convection. [in stars

    NASA Technical Reports Server (NTRS)

    Hossain, Murshed; Mullan, D. J.

    1990-01-01

    Numerical solutions of three-dimensional compressible hydrodynamics (including sound waves) in a stratified medium with open boundaries are presented. Convergent/divergent points play a controlling role in the flows, which are dominated by a single frequency related to the mean sound crossing time. Superposed on these rapid compressive flows, slower eddy-like flows eventually create convective transport. The solutions contain small structures stacked on top of larger ones, with vertical scales equal to the local pressure scale height, H_p. Although convective transport starts later in the evolution, vertical scales of H_p are apparently selected at much earlier times by nonlinear compressive effects.
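
    As a quick illustration of the two scales this abstract links, the following sketch (not code from the paper; the ideal-gas formulas and solar-photosphere-like numbers are assumptions) estimates the local pressure scale height H_p and a sound crossing time:

    ```python
    # Estimate the local pressure scale height H_p = P/(rho*g) and the sound
    # crossing time of a stratified layer, assuming an ideal gas throughout.
    import numpy as np

    R_GAS = 8.314  # J / (mol K)

    def pressure_scale_height(T, mu, g):
        """H_p = R*T / (mu*g) for an ideal gas of molar mass mu."""
        return R_GAS * T / (mu * g)

    def sound_crossing_time(depth, T, mu, gamma=5.0 / 3.0):
        """Depth divided by the adiabatic sound speed c_s = sqrt(gamma*R*T/mu)."""
        return depth / np.sqrt(gamma * R_GAS * T / mu)

    # Rough solar-photosphere-like values (illustrative only).
    T, mu, g = 5800.0, 1.3e-3, 274.0  # K, kg/mol, m/s^2
    Hp = pressure_scale_height(T, mu, g)
    print(f"H_p ~ {Hp / 1e3:.0f} km; sound crossing of 5 H_p ~ "
          f"{sound_crossing_time(5 * Hp, T, mu):.0f} s")
    ```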

  3. The Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, B.; Wood, K.

    2018-04-01

    We present the public Monte Carlo photoionization and moving-mesh radiation hydrodynamics code CMACIONIZE, which can be used to simulate the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code and as a moving-mesh code.
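
    The core Monte Carlo photoionization iteration described here can be sketched in a few lines. This is a schematic toy, not CMACIONIZE's actual API or grid structure: photon packets are marched through a 1D column until a sampled optical depth is reached, and the neutral fraction is then updated from ionization/recombination balance; all physical numbers are illustrative.

    ```python
    # Toy Monte Carlo photoionization balance on a 1D hydrogen column.
    import numpy as np

    rng = np.random.default_rng(1)
    n_cells, n_packets, n_iter = 64, 5000, 40
    n_H, dx = 100.0, 5.0e18            # density (cm^-3), cell size (cm)
    sigma, alpha_B = 6.3e-18, 2.6e-13  # cross section (cm^2), recomb (cm^3/s)
    Q = 1.0e49                         # ionizing photons/s from the source
    x = np.ones(n_cells)               # neutral fraction, start fully neutral

    for _ in range(n_iter):
        hits = np.zeros(n_cells)
        for _ in range(n_packets):
            tau_goal, tau = -np.log(rng.random()), 0.0
            for i in range(n_cells):   # march packet until absorbed
                tau += n_H * x[i] * sigma * dx
                if tau >= tau_goal:
                    hits[i] += 1.0
                    break
        # per-atom photoionization rate from the absorption tallies
        gamma = (hits / n_packets) * Q / np.maximum(n_H * x * dx**3, 1e-30)
        # solve the balance a*(1-x)^2 = gamma*x for x, with a = alpha_B*n_H
        a = alpha_B * n_H
        x = (2 * a + gamma - np.sqrt((2 * a + gamma)**2 - 4 * a**2)) / (2 * a)
    print("ionization front near cell", int(np.argmax(x > 0.5)))
    ```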

  4. Modeling the Compression of Merged Compact Toroids by Multiple Plasma Jets

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. Francis; Knapp, Charles E.; Kirkpatrick, Ron; Rodgers, Stephen L. (Technical Monitor)

    2000-01-01

    A fusion propulsion scheme has been proposed that makes use of the merging of a spherical distribution of plasma jets to dynamically form a gaseous liner. The gaseous liner is used to implode a magnetized target to produce the fusion reaction in a standoff manner. In this paper, the merging of the plasma jets to form the gaseous liner is investigated numerically. The Los Alamos SPHINX code, based on the smoothed particle hydrodynamics method, is used to model the interaction of the jets. 2-D and 3-D simulations have been performed to study the characteristics of the resulting flow when these jets collide. The results show that the jets merge to form a plasma liner that converges radially and may be used to compress the central plasma to fusion conditions. Details of the computational model and the SPH numerical methods will be presented together with the numerical results.
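
    The density estimate at the heart of any SPH scheme can be illustrated compactly. A minimal sketch assuming a generic cubic-spline kernel; this is textbook SPH, not the SPHINX code itself:

    ```python
    # SPH density estimate: rho_i = sum_j m_j W(|r_i - r_j|, h).
    import numpy as np

    def cubic_spline_W(r, h):
        """Standard 3D cubic-spline (M4) kernel with support radius 2h."""
        q = r / h
        w = np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
            np.where(q < 2.0, 0.25 * (2 - q)**3, 0.0))
        return w / (np.pi * h**3)

    def sph_density(pos, masses, h):
        diff = pos[:, None, :] - pos[None, :, :]     # all pairwise offsets
        r = np.sqrt((diff**2).sum(axis=-1))
        return (masses[None, :] * cubic_spline_W(r, h)).sum(axis=1)

    rng = np.random.default_rng(0)
    pos = rng.uniform(0.0, 1.0, size=(500, 3))       # random particle cloud
    m = np.full(500, 1.0 / 500)                      # unit total mass
    rho = sph_density(pos, m, h=0.12)
    print(f"mean density ~ {rho.mean():.2f} (near 1 in the box interior)")
    ```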

  5. Frequency-dependent hydrodynamic interaction between two solid spheres

    NASA Astrophysics Data System (ADS)

    Jung, Gerhard; Schmid, Friederike

    2017-12-01

    Hydrodynamic interactions play an important role in many areas of soft matter science. In simulations with implicit solvent, various techniques such as Brownian or Stokesian dynamics explicitly include hydrodynamic interactions a posteriori by using hydrodynamic diffusion tensors derived from the Stokes equation. However, this equation assumes the interaction to be instantaneous, which is an idealized approximation valid only on long time scales. In the present paper, we go one step further and analyze the time-dependence of hydrodynamic interactions between finite-sized particles in a compressible fluid on the basis of the linearized Navier-Stokes equation. The theoretical results show that at high frequencies, the compressibility of the fluid has a significant impact on the frequency-dependent pair interactions. The predictions of hydrodynamic theory are compared to molecular dynamics simulations of two nanocolloids in a Lennard-Jones fluid. For this system, we reconstruct memory functions by extending the inverse Volterra technique. The simulation data agree very well with the theory; the theory can therefore be used to implement dynamically consistent hydrodynamic interactions in the increasingly popular field of non-Markovian modeling.

  6. Evaluation of Multi-Vessel Ship Motion Prediction Codes

    DTIC Science & Technology

    2008-09-01

    each other, and accounting for the hydrodynamic effects between the hulls. The major differences in the capabilities of the codes were in the non-hydrodynamic capabilities. (Figure 28 of the report shows the effect of irregular-frequency smoothing on the resultant pitch transfer function for a three-meter separation and a 135-degree heading.)

  7. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    NASA Astrophysics Data System (ADS)

    Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Folini, D.; Popov, M. V.; Walder, R.; Viallet, M.

    2017-08-01

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ~50 Myr to ~4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.
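
    The kind of extra-mixing law described here can be sketched as a 1D diffusion problem. The exponential decay of the diffusion coefficient below the convective boundary and the crude burning sink below it are assumed illustrative forms, not the paper's calibrated coefficient:

    ```python
    # Deplete a normalized Li profile with a diffusion coefficient that decays
    # exponentially below the convective boundary (explicit finite differences).
    import numpy as np

    n = 200
    r = np.linspace(0.0, 1.0, n)               # fractional radius
    dr = r[1] - r[0]
    r_cb, D0, width = 0.7, 1.0, 0.02           # boundary, D scale, decay width
    D = np.where(r >= r_cb, D0, D0 * np.exp((r - r_cb) / width))
    lam = np.where(r < 0.6, 50.0, 0.0)         # crude "Li burning" sink rate
    X = np.ones(n)                             # normalized Li abundance

    dt = 0.4 * dr**2 / D0                      # explicit-diffusion stability
    for step in range(20000):
        flux = D[:-1] * np.diff(X) / dr        # F_{i+1/2} ~ D dX/dr
        X[1:-1] += dt * (np.diff(flux) / dr - lam[1:-1] * X[1:-1])
        X[0] += dt * (flux[0] / dr - lam[0] * X[0])
        X[-1] += dt * (-flux[-1] / dr)         # surface cell, no sink
    print(f"surface Li after t={20000 * dt:.2f}: {X[-1]:.3f} of initial")
    ```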

  8. Lithium Depletion in Solar-like Stars: Effect of Overshooting Based on Realistic Multi-dimensional Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baraffe, I.; Pratt, J.; Goffrey, T.

    We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.

  9. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    NASA Astrophysics Data System (ADS)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP-parallelized C++ and OpenCL and includes octree-based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.

  10. The effects of wedge roughness on Mach formation

    NASA Astrophysics Data System (ADS)

    Needham, C. E.; Happ, H. J.; Dawson, D. F.

    A modified HULL hydrodynamic model was used to simulate shock reflection on wedges fitted with bumps representing varying degrees of roughness. The protuberances ranged from 0.02 to 0.2 cm in size. The study was directed at the feasibility of and techniques for defining parametric fits for surface roughness in the HULL code. Of particular interest was the self-similarity of the flows: if the flows were self-similar, increasingly larger protuberances would simply enhance the resolution of the calculations. The code was designed for compressible, inviscid, nonconducting fluid flows. An equation of state provides closure, and a finite difference algorithm is applied to solve the governing equations for conservation of mass, momentum, and energy. Self-similarity failed as the surface bumps grew larger and protruded further into the flowfield. It is noted that bumps spaced further apart produced greater interference with the passage of the Mach stem than did bumps placed closer together.

  11. pyro: Python-based tutorial for computational methods for hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zingale, Michael

    2015-07-01

    pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.

  12. Extended x-ray absorption fine structure measurements of quasi-isentropically compressed vanadium targets on the OMEGA laser

    NASA Astrophysics Data System (ADS)

    Yaakobi, B.; Boehly, T. R.; Sangster, T. C.; Meyerhofer, D. D.; Remington, B. A.; Allen, P. G.; Pollaine, S. M.; Lorenzana, H. E.; Lorenz, K. T.; Hawreliak, J. A.

    2008-06-01

    The use of in situ extended x-ray absorption fine structure (EXAFS) for characterizing nanosecond laser-shocked vanadium, titanium, and iron has recently been demonstrated. These measurements are extended to laser-driven, quasi-isentropic compression experiments (ICE). The radiation source (backlighter) for EXAFS in all of these experiments is obtained by imploding a spherical target on the OMEGA laser [T. R. Boehly et al., Rev. Sci. Instrum. 66, 508 (1995)]. Isentropic compression (where the entropy is kept constant) makes it possible to reach high compressions at relatively low temperatures. The absorption spectra are used to determine the temperature and compression in a vanadium sample quasi-isentropically compressed to pressures of up to ~0.75 Mbar. The ability to measure the temperature and compression directly is unique to EXAFS. The drive pressure is calibrated by substituting aluminum for the vanadium and interferometrically measuring the velocity of the back target surface with the velocity interferometer system for any reflector (VISAR). The experimental results obtained by EXAFS and VISAR agree with each other and with the simulations of a hydrodynamic code. The role of a shield to protect the sample from impact heating is studied. It is shown that the shield produces an initial weak shock that is followed by a quasi-isentropic compression at a relatively low temperature. The role of radiation heating from the imploding target as well as from the laser-absorption region is studied. The results show that in laser-driven ICE, as compared with laser-driven shocks, comparable compressions can be achieved at lower temperatures. The EXAFS results show important details not seen in the VISAR results.

  13. CMacIonize: Monte Carlo photoionisation and moving-mesh radiation hydrodynamics

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, Bert; Wood, Kenneth

    2018-02-01

    CMacIonize simulates the self-consistent evolution of HII regions surrounding young O and B stars, or other sources of ionizing radiation. The code combines a Monte Carlo photoionization algorithm that uses a complex mix of hydrogen, helium and several coolants in order to self-consistently solve for the ionization and temperature balance at any given time, with a standard first order hydrodynamics scheme. The code can be run as a post-processing tool to get the line emission from an existing simulation snapshot, but can also be used to run full radiation hydrodynamical simulations. Both the radiation transfer and the hydrodynamics are implemented in a general way that is independent of the grid structure that is used to discretize the system, allowing it to be run both as a standard fixed grid code and as a moving-mesh code.

  14. A hydrodynamic approach to cosmology - Methodology

    NASA Technical Reports Server (NTRS)

    Cen, Renyue

    1992-01-01

    The present study describes an accurate and efficient hydrodynamic code for evolving self-gravitating cosmological systems. The hydrodynamic code is a flux-based mesh code originally designed for engineering hydrodynamical applications. A variety of checks were performed which indicate that the resolution of the code is a few cells, providing accuracy for integral energy quantities in the present simulations of 1-3 percent over the whole runs. Six species (H I, H II, He I, He II, He III, and free electrons) are tracked separately, and relevant ionization and recombination processes, as well as line and continuum heating and cooling, are computed. The background radiation field is simultaneously determined in the range 1 eV to 100 keV, allowing for absorption, emission, and cosmological effects. It is shown how the inevitable numerical inaccuracies can be estimated and to some extent overcome.

  15. Numerical study of core formation of asymmetrically driven cone-guided targets

    DOE PAGES

    Sawada, Hiroshi; Sakagami, Hitoshi

    2017-09-22

    Compression of a directly driven fast ignition cone-sphere target with a finite number of laser beams is numerically studied using a three-dimensional hydrodynamics code IMPACT-3D. The formation of a dense plasma core is simulated for 12-, 9-, 6-, and 4-beam configurations of the GEKKO XII laser. The complex 3D shapes of the cores are analyzed by elucidating synthetic 2D x-ray radiographic images in two orthogonal directions. Finally, the simulated x-ray images show significant differences in the core shape between the two viewing directions and rotation of the stagnating core axis in the top view for the axisymmetric 9- and 6-beam configurations.

  16. Numerical study of core formation of asymmetrically driven cone-guided targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawada, Hiroshi; Sakagami, Hitoshi

    Compression of a directly driven fast ignition cone-sphere target with a finite number of laser beams is numerically studied using a three-dimensional hydrodynamics code IMPACT-3D. The formation of a dense plasma core is simulated for 12-, 9-, 6-, and 4-beam configurations of the GEKKO XII laser. The complex 3D shapes of the cores are analyzed by elucidating synthetic 2D x-ray radiographic images in two orthogonal directions. Finally, the simulated x-ray images show significant differences in the core shape between the two viewing directions and rotation of the stagnating core axis in the top view for the axisymmetric 9- and 6-beam configurations.

  17. Numerical analysis of the effects of radiation heat transfer and ionization energy loss on the cavitation bubble's dynamics

    NASA Astrophysics Data System (ADS)

    Mahdi, M.; Ebrahimi, R.; Shams, M.

    2011-06-01

    A numerical scheme for simulating acoustic and hydrodynamic cavitation was developed. The instantaneous bubble radius was obtained using the Gilmore equation, which accounts for the compressibility of the liquid. A uniform temperature was assumed for the gas inside the bubble during collapse. Radiation heat transfer inside the bubble and heat conduction to the bubble were considered. The numerical code was validated against experimental data, and good correspondence was observed. The dynamics of a hydrofoil cavitation bubble were also investigated. It was concluded that the thermal radiation heat transfer rate depends strongly on the cavitation number, initial bubble radius, and hydrofoil angle of attack.
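
    For a runnable flavor of this kind of bubble-dynamics integration, the sketch below integrates the simpler incompressible Rayleigh-Plesset equation rather than the Gilmore equation used in the paper (Gilmore adds liquid-compressibility corrections); all parameter values are illustrative water-like numbers.

    ```python
    # Rayleigh-Plesset bubble collapse under a step overpressure.
    import numpy as np
    from scipy.integrate import solve_ivp

    rho, S, mu = 998.0, 0.0725, 1.0e-3        # density, surface tension, viscosity
    p_inf, p_v, kappa = 101325.0, 2330.0, 1.4
    R0 = 50e-6                                 # initial radius (m)
    p_g0 = p_inf - p_v + 2 * S / R0            # gas pressure at equilibrium

    def p_ext(t):                              # step overpressure drives collapse
        return p_inf * (1.0 if t < 1e-6 else 5.0)

    def rayleigh_plesset(t, y):
        R, Rdot = y
        p_gas = p_g0 * (R0 / R)**(3 * kappa)   # polytropic gas inside the bubble
        p_L = p_v + p_gas - 2 * S / R - 4 * mu * Rdot / R
        Rddot = ((p_L - p_ext(t)) / rho - 1.5 * Rdot**2) / R
        return [Rdot, Rddot]

    sol = solve_ivp(rayleigh_plesset, (0.0, 2e-5), [R0, 0.0],
                    max_step=1e-8, rtol=1e-8)
    print(f"minimum radius: {sol.y[0].min() / R0:.3f} R0")
    ```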

  18. Warm Dense Matter: Another Application for Pulsed Power Hydrodynamics

    DTIC Science & Technology

    2009-06-01

    Pulsed power hydrodynamic techniques, such as large-convergence liner compression of a large-volume, modest-density, low-temperature plasma, ... controlled than are similar high-explosive-powered hydrodynamic experiments. While the precision and controllability of gas-gun experiments is ... well established, pulsed power techniques using imploding liners offer access to convergent conditions difficult to obtain with guns, and essential ...

  19. Smoothed Particle Hydrodynamic Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2016-10-05

    This code is a highly modular framework for developing smoothed particle hydrodynamic (SPH) simulations running on parallel platforms. The compartmentalization of the code allows for rapid development of new SPH applications and modifications of existing algorithms. The compartmentalization also allows changes in one part of the code used by many applications to instantly be made available to all applications.

  20. A hybrid numerical fluid dynamics code for resistive magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jeffrey

    2006-04-01

    Spasmos is a computational fluid dynamics code that uses two numerical methods to solve the equations of resistive magnetohydrodynamic (MHD) flows in compressible, inviscid, conducting media[1]. The code is implemented as a set of libraries for the Python programming language[2]. It represents conducting and non-conducting gases and materials with uncomplicated (analytic) equations of state. It supports calculations in 1D, 2D, and 3D geometry, though only the 1D configuration has received significant testing to date. Because it uses the Python interpreter as a front end, users can easily write test programs to model systems with a variety of different numerical and physical parameters. Currently, the code includes 1D test programs for hydrodynamics (linear acoustic waves, the Sod weak shock[3], the Noh strong shock[4], the Sedov explosion[5]), magnetic diffusion (decay of a magnetic pulse[6], a driven oscillatory "wine-cellar" problem[7], magnetic equilibrium), and magnetohydrodynamics (an advected magnetic pulse[8], linear MHD waves, a magnetized shock tube[9]). Spasmos currently runs only in a serial configuration. In the future, it will use MPI for parallel computation.

  1. TORUS: Radiation transport and hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Harries, Tim

    2014-04-01

    TORUS is a flexible radiation transfer and radiation-hydrodynamics code. The code has a basic infrastructure that includes an AMR mesh scheme used by several physics modules, including atomic line transfer in a moving medium, molecular line transfer, photoionization, radiation hydrodynamics and radiative equilibrium. TORUS is useful for a variety of problems, including magnetospheric accretion onto T Tauri stars, spiral nebulae around Wolf-Rayet stars, discs around Herbig AeBe stars, structured winds of O supergiants and Raman-scattered line formation in symbiotic binaries, and dust emission and molecular line formation in star forming clusters. The code is written in Fortran 2003 and is compiled using a standard Gnu makefile. The code is parallelized using both MPI and OpenMP, and can use these parallel sections either separately or in a hybrid mode.

  2. Adaptive mesh fluid simulations on GPU

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Abel, Tom; Kaehler, Ralf

    2010-10-01

    We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally on this architecture. Using the method of lines approach with the second order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve an overall speedup of approximately 10 times on one graphics card compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. We show this directly by implementing a magneto-hydrodynamic solver and comparing its performance to the pure hydrodynamic case. Finally, we combined our CUDA parallel scheme with MPI to make the code run on GPU clusters. Close to ideal speedup is observed on up to four GPUs.
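
    A condensed CPU analogue of the scheme described (method of lines, TVD RK2 time integration, piecewise-linear minmod reconstruction, HLL Riemann solver) is sketched below on the Sod shock tube. This is a generic NumPy illustration, not the paper's CUDA code:

    ```python
    # 1D Euler solver: minmod reconstruction + HLL flux + SSP-RK2.
    import numpy as np

    gamma = 1.4

    def prim(U):
        """Conserved (rho, rho*v, E) -> primitive (rho, v, p)."""
        rho, mom, E = U
        v = mom / rho
        return rho, v, (gamma - 1) * (E - 0.5 * rho * v**2)

    def flux(U):
        rho, v, p = prim(U)
        return np.array([rho * v, rho * v**2 + p, (U[2] + p) * v])

    def minmod(a, b):
        return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

    def rhs(U, dx):
        slopes = np.zeros_like(U)          # piecewise-linear minmod slopes
        slopes[:, 1:-1] = minmod(U[:, 1:-1] - U[:, :-2], U[:, 2:] - U[:, 1:-1])
        UL = (U + 0.5 * slopes)[:, :-1]    # state left of interface i+1/2
        UR = (U - 0.5 * slopes)[:, 1:]     # state right of interface i+1/2
        rhoL, vL, pL = prim(UL)
        rhoR, vR, pR = prim(UR)
        cL, cR = np.sqrt(gamma * pL / rhoL), np.sqrt(gamma * pR / rhoR)
        sL = np.minimum(vL - cL, vR - cR)  # simple HLL wave-speed estimates
        sR = np.maximum(vL + cL, vR + cR)
        FL, FR = flux(UL), flux(UR)
        F = (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)
        F = np.where(sL > 0, FL, np.where(sR < 0, FR, F))
        dUdt = np.zeros_like(U)
        dUdt[:, 1:-1] = -(F[:, 1:] - F[:, :-1]) / dx
        return dUdt

    N = 400
    dx = 1.0 / N
    x = (np.arange(N) + 0.5) * dx
    rho = np.where(x < 0.5, 1.0, 0.125)    # Sod shock tube initial data
    p = np.where(x < 0.5, 1.0, 0.1)
    U = np.array([rho, np.zeros(N), p / (gamma - 1)])

    t = 0.0
    while t < 0.2:
        _, v, pr = prim(U)
        dt = min(0.4 * dx / np.max(np.abs(v) + np.sqrt(gamma * pr / U[0])),
                 0.2 - t)
        U1 = U + dt * rhs(U, dx)           # TVD RK2 (Heun / SSP-RK2)
        U = 0.5 * (U + U1 + dt * rhs(U1, dx))
        t += dt
    print(f"density range at t=0.2: {U[0].min():.3f} .. {U[0].max():.3f}")
    ```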

  3. Narrative-compression coding for a channel with errors. Professional paper for period ending June 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, J.W.

    1988-01-01

    Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine whether data-compression codes could be used to provide message compression in a channel with a bit error rate of up to 0.10. The data-compression capabilities of the codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average number of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, resident on an IBM-PC and using a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to offer the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code, with much less error propagation than the Huffman codes. Greater data compression can be achieved through the use of comma-free code word assignments based on conditional probabilities of character occurrence.
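
    The bits-per-character yardstick used in the study is easy to reproduce for a Huffman code. A minimal sketch with the textbook Huffman construction (not the report's specific 58-character code):

    ```python
    # Compute average Huffman bits/character for a sample text.
    import heapq
    from collections import Counter

    def huffman_lengths(text):
        """Return (counts, {char: codeword length}) via the Huffman merge."""
        counts = Counter(text)
        heap = [(w, [ch]) for ch, w in counts.items()]
        heapq.heapify(heap)
        length = {ch: 0 for ch in counts}
        while len(heap) > 1:
            w1, g1 = heapq.heappop(heap)
            w2, g2 = heapq.heappop(heap)
            for ch in g1 + g2:      # every merge adds one bit to the group
                length[ch] += 1
            heapq.heappush(heap, (w1 + w2, g1 + g2))
        return counts, length

    text = "this is a small narrative sample for estimating bits per character"
    counts, length = huffman_lengths(text)
    bits = sum(counts[c] * length[c] for c in counts)
    print(f"average bits/character: {bits / len(text):.2f} (vs 8 for ASCII)")
    ```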

  4. Simulating Coupling Complexity in Space Plasmas: First Results from a new code

    NASA Astrophysics Data System (ADS)

    Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.

    2005-12-01

    The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing three simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will greatly advance our understanding of the physics of neutral and charged gases. Besides making major advances in basic plasma physics and neutral gas problems, this project will address three Grand Challenge space physics problems that reflect our research interests: 1) to develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, and interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle; 2) to develop a coronal mass ejection and interplanetary shock propagation model for the inner and outer heliosphere, including, at a test-particle level, wave-particle interactions and particle acceleration at traveling shock waves and compression regions; and 3) to develop an advanced Geospace General Circulation Model (GGCM) capable of realistically modeling space weather events, in particular the interaction with CMEs and geomagnetic storms. Furthermore, by implementing scalable run-time supports and sophisticated off- and on-line prediction algorithms, we anticipate important advances in the development of automatic and intelligent system software to optimize a wide variety of 'embedded' computations on parallel computers. Finally, public domain MHD and hydrodynamic codes had a transforming effect on space and astrophysics. We expect that our new-generation, open source, public domain multi-scale code will have a similar transformational effect in a variety of disciplines, opening up new classes of problems to physicists and engineers alike.

  5. Shadowfax: Moving mesh hydrodynamical integration code

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, Bert

    2016-05-01

    Shadowfax simulates galaxy evolution. Written in object-oriented modular C++, it evolves a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. For the hydrodynamical integration, it makes use of a (co-) moving Lagrangian mesh. The code has a 2D and 3D version, contains utility programs to generate initial conditions and visualize simulation snapshots, and its input/output is compatible with a number of other simulation codes, e.g. Gadget2 (ascl:0003.001) and GIZMO (ascl:1410.003).

  6. Effects of thermal fluctuations and fluid compressibility on hydrodynamic synchronization of microrotors at finite oscillatory Reynolds number: a multiparticle collision dynamics simulation study.

    PubMed

    Theers, Mario; Winkler, Roland G

    2014-08-28

    We investigate the emergent dynamical behavior of hydrodynamically coupled microrotors by means of multiparticle collision dynamics (MPC) simulations. The two rotors are confined in a plane and move along circles driven by active forces. Comparing simulations to theoretical results based on linearized hydrodynamics, we demonstrate that time-dependent hydrodynamic interactions lead to synchronization of the rotational motion. Thermal noise implies large fluctuations of the phase-angle difference between the rotors, but synchronization prevails and the ensemble-averaged time dependence of the phase-angle difference agrees well with analytical predictions. Moreover, we demonstrate that compressibility effects lead to longer synchronization times. In addition, the relevance of the inertia terms of the Navier-Stokes equation is discussed, specifically the linear unsteady acceleration term characterized by the oscillatory Reynolds number Re_T. We illustrate the continuous breakdown of synchronization with the Reynolds number Re_T, in analogy to the continuous breakdown of the scallop theorem with decreasing Reynolds number.

  7. Data Compression Techniques for Maps

    DTIC Science & Technology

    1989-01-01

    Lempel-Ziv compression is applied to the classified and unclassified images as well as to the output of the compression algorithms. The algorithms ... resulted in a compression of 7:1. The output of the quadtree coding algorithm was then compressed using Lempel-Ziv coding. The compression ratio achieved ... using Lempel-Ziv coding. The unclassified image gave a compression ratio of only 1.4:1. The K-means classified image ...

  8. General Relativistic Smoothed Particle Hydrodynamics code developments: A progress report

    NASA Astrophysics Data System (ADS)

    Faber, Joshua; Silberman, Zachary; Rizzo, Monica

    2017-01-01

    We report on our progress in developing a new general relativistic Smoothed Particle Hydrodynamics (SPH) code, which will be appropriate for studying the properties of accretion disks around black holes as well as compact object binary mergers and their ejecta. We will discuss in turn the relativistic formalisms being used to handle the evolution, our techniques for dealing with conservative and primitive variables, as well as those used to ensure proper conservation of various physical quantities. Code tests and performance metrics will be discussed, as will the prospects for including smoothed particle hydrodynamics codes within other numerical relativity codebases, particularly the publicly available Einstein Toolkit. We acknowledge support from NSF award ACI-1550436 and an internal RIT D-RIG grant.

  9. Displaying radiologic images on personal computers: image storage and compression--Part 2.

    PubMed

    Gillespy, T; Rowberg, A H

    1994-02-01

    This is part 2 of our article on image storage and compression, the third article of our series for radiologists and imaging scientists on displaying, manipulating, and analyzing radiologic images on personal computers. Image compression is classified as lossless (nondestructive) or lossy (destructive). Common lossless compression algorithms include variable-length bit codes (Huffman codes and variants), dictionary-based compression (Lempel-Ziv variants), and arithmetic coding. Huffman codes and the Lempel-Ziv-Welch (LZW) algorithm are commonly used for image compression. All of these compression methods are enhanced if the image has first been transformed into a differential image based on a differential pulse-code modulation (DPCM) algorithm. LZW compression after the DPCM image transformation performed best on our example images, and performed almost as well as the best of the three commercial compression programs tested. Lossy compression techniques are capable of much higher data compression, but reduced image quality and compression artifacts may be noticeable. Lossy compression comprises three steps: transformation, quantization, and coding. Two commonly used transformation methods are the discrete cosine transformation and the discrete wavelet transformation. In both methods, most of the image information is contained in a relatively small number of the transformation coefficients. The quantization step reduces many of the lower-order coefficients to 0, which greatly improves the efficiency of the coding (compression) step. In fractal-based image compression, image patterns are stored as equations that can be reconstructed at different levels of resolution.
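
    The point that a DPCM transform improves lossless dictionary compression is easy to demonstrate. In the sketch below, zlib's LZ77-based compressor stands in for LZW, and a synthetic ramp image stands in for radiologic data (both substitutions are assumptions made for brevity):

    ```python
    # Compare zlib compression of a smooth 8-bit image before and after a
    # horizontal DPCM (difference) transform; mod-256 differences are lossless.
    import zlib
    import numpy as np

    rng = np.random.default_rng(0)
    row = np.linspace(0, 150, 512)
    img = 10 + row[None, :] + row[:, None] / 2 + rng.normal(0, 2, (512, 512))
    img = np.clip(img, 0, 255).astype(np.uint8)

    dpcm = np.diff(img.astype(np.int16), axis=1, prepend=0)  # row-wise DPCM
    dpcm_bytes = (dpcm & 0xFF).astype(np.uint8).tobytes()    # wrap to bytes

    raw = len(zlib.compress(img.tobytes(), 9))
    diff = len(zlib.compress(dpcm_bytes, 9))
    print(f"plain: {raw} bytes, after DPCM: {diff} bytes "
          f"(uncompressed {img.size} bytes)")
    ```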

  10. Lossless compression of VLSI layout image data.

    PubMed

    Dai, Vito; Zakhor, Avideh

    2006-09-01

    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.

  11. GIZMO: Multi-method magneto-hydrodynamics+gravity code

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2014-10-01

    GIZMO is a flexible, multi-method magneto-hydrodynamics+gravity code that solves the hydrodynamic equations using a variety of different methods. It introduces new Lagrangian Godunov-type methods that allow solving the fluid equations with a moving particle distribution that is automatically adaptive in resolution and avoids the advection errors, angular momentum conservation errors, and excessive diffusion problems that seriously limit the applicability of “adaptive mesh refinement” (AMR) codes, while simultaneously avoiding the low-order errors inherent to simpler methods like smoothed-particle hydrodynamics (SPH). GIZMO also allows the use of SPH either in “traditional” form or “modern” (more accurate) forms, or use of a mesh. Self-gravity is solved quickly with a BH-Tree (optionally a hybrid PM-Tree for periodic boundaries) and on-the-fly adaptive gravitational softenings. The code is descended from P-GADGET, itself descended from GADGET-2 (ascl:0003.001), and many of the naming conventions remain (for the sake of compatibility with the large library of GADGET work and analysis software).

  12. Prototype Mixed Finite Element Hydrodynamics Capability in ARES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rieben, R N

    This document describes work on a prototype Mixed Finite Element Method (MFEM) hydrodynamics algorithm in the ARES code, and its application to a set of standard test problems. This work is motivated by the need for improvements to the algorithms used in the Lagrange hydrodynamics step to make them more robust. We begin by identifying the outstanding issues with traditional numerical hydrodynamics algorithms followed by a description of the proposed method and how it may address several of these longstanding issues. We give a theoretical overview of the proposed MFEM algorithm as well as a summary of the coding additions and modifications that were made to add this capability to the ARES code. We present results obtained with the new method on a set of canonical hydrodynamics test problems and demonstrate significant improvement in comparison to results obtained with traditional methods. We conclude with a summary of the issues still at hand and motivate the need for continued research to develop the proposed method into maturity.

  13. Low torque hydrodynamic lip geometry for bi-directional rotation seals

    DOEpatents

    Dietle, Lannie L [Houston, TX; Schroeder, John E [Richmond, TX

    2009-07-21

    A hydrodynamically lubricating geometry for the generally circular dynamic sealing lip of rotary seals that are employed to partition a lubricant from an environment. The dynamic sealing lip is provided for establishing compressed sealing engagement with a relatively rotatable surface, and for wedging a film of lubricating fluid into the interface between the dynamic sealing lip and the relatively rotatable surface in response to relative rotation that may occur in the clockwise or the counter-clockwise direction. A wave form incorporating an elongated dimple provides the gradual convergence, efficient impingement angle, and gradual interfacial contact pressure rise that are conducive to efficient hydrodynamic wedging. Skewed elevated contact pressure zones produced by compression edge effects provide for controlled lubricant movement within the dynamic sealing interface between the seal and the relatively rotatable surface, producing enhanced lubrication and low running torque.

  14. Low torque hydrodynamic lip geometry for rotary seals

    DOEpatents

    Dietle, Lannie L.; Schroeder, John E.

    2015-07-21

    A hydrodynamically lubricating geometry for the generally circular dynamic sealing lip of rotary seals that are employed to partition a lubricant from an environment. The dynamic sealing lip is provided for establishing compressed sealing engagement with a relatively rotatable surface, and for wedging a film of lubricating fluid into the interface between the dynamic sealing lip and the relatively rotatable surface in response to relative rotation that may occur in the clockwise or the counter-clockwise direction. A wave form incorporating an elongated dimple provides the gradual convergence, efficient impingement angle, and gradual interfacial contact pressure rise that are conducive to efficient hydrodynamic wedging. Skewed elevated contact pressure zones produced by compression edge effects provide for controlled lubricant movement within the dynamic sealing interface between the seal and the relatively rotatable surface, producing enhanced lubrication and low running torque.

  15. Low torque hydrodynamic lip geometry for bi-directional rotation seals

    DOEpatents

    Dietle, Lannie L [Houston, TX; Schroeder, John E [Richmond, TX

    2011-11-15

    A hydrodynamically lubricating geometry for the generally circular dynamic sealing lip of rotary seals that are employed to partition a lubricant from an environment. The dynamic sealing lip is provided for establishing compressed sealing engagement with a relatively rotatable surface, and for wedging a film of lubricating fluid into the interface between the dynamic sealing lip and the relatively rotatable surface in response to relative rotation that may occur in the clockwise or the counter-clockwise direction. A wave form incorporating an elongated dimple provides the gradual convergence, efficient impingement angle, and gradual interfacial contact pressure rise that are conducive to efficient hydrodynamic wedging. Skewed elevated contact pressure zones produced by compression edge effects provide for controlled lubricant movement within the dynamic sealing interface between the seal and the relatively rotatable surface, producing enhanced lubrication and low running torque.

  16. Using Pulsed Power for Hydrodynamic Code Validation

    DTIC Science & Technology

    2001-06-01

    James Degnan, George Kiuttu (Air Force Research Laboratory, Albuquerque, NM 87117). As part of ongoing hydrodynamic code ... a ... bank at the Air Force Research Laboratory (AFRL). A cylindrical aluminum liner that is magnetically imploded onto a central target by self-induced ...

  17. Solitonic Dispersive Hydrodynamics: Theory and Observation

    NASA Astrophysics Data System (ADS)

    Maiden, Michelle D.; Anderson, Dalton V.; Franco, Nevil A.; El, Gennady A.; Hoefer, Mark A.

    2018-04-01

    Ubiquitous nonlinear waves in dispersive media include localized solitons and extended hydrodynamic states such as dispersive shock waves. Despite their physical prominence and the development of thorough theoretical and experimental investigations of each separately, experiments and a unified theory of solitons and dispersive hydrodynamics are lacking. Here, a general soliton-mean field theory is introduced and used to describe the propagation of solitons in macroscopic hydrodynamic flows. Two universal adiabatic invariants of motion are identified that predict trapping or transmission of solitons by hydrodynamic states. The result of solitons incident upon smooth expansion waves or compressive, rapidly oscillating dispersive shock waves is the same, an effect termed hydrodynamic reciprocity. Experiments on viscous fluid conduits quantitatively confirm the soliton-mean field theory with broader implications for nonlinear optics, superfluids, geophysical fluids, and other dispersive hydrodynamic media.

  18. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van der Holst, B.; Toth, G.; Sokolov, I. V.

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
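
    The operator-split structure described here can be miniaturized to one scalar field in 1D. The sketch below is a schematic, not the CRASH/BATS-R-US implementation: it pairs an explicit upwind "hydro" substep with an unconditionally stable backward-Euler diffusion solve, the same explicit/implicit split the abstract outlines.

    ```python
    # Operator splitting: explicit advection substep + implicit diffusion substep.
    import numpy as np

    n, dt = 100, 2e-3
    dx = 1.0 / n
    x = (np.arange(n) + 0.5) * dx
    E = np.exp(-((x - 0.3) / 0.05)**2)   # radiation-energy-like pulse
    v, D = 0.5, 0.01                     # advection speed, diffusion coefficient

    # Backward-Euler matrix (I - dt*D*Laplacian) with zero-flux boundaries.
    r = dt * D / dx**2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1 + 2 * r
        if i > 0:
            A[i, i - 1] = -r
        if i < n - 1:
            A[i, i + 1] = -r
    A[0, 0] = A[-1, -1] = 1 + r          # Neumann (reflecting) adjustment

    total0 = E.sum() * dx
    for step in range(100):
        # substep 1: explicit upwind advection (stands in for the hydro solver)
        E[1:] = E[1:] - v * dt / dx * (E[1:] - E[:-1])
        # substep 2: implicit diffusion solve, stable for any dt
        E = np.linalg.solve(A, E)
    print(f"total 'energy' before/after: {total0:.4f} / {E.sum() * dx:.4f}")
    ```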

  19. CoCoNuT: General relativistic hydrodynamics code with dynamical space-time evolution

    NASA Astrophysics Data System (ADS)

    Dimmelmeier, Harald; Novak, Jérôme; Cerdá-Durán, Pablo

    2012-02-01

    CoCoNuT is a general relativistic hydrodynamics code with dynamical space-time evolution. The main aim of this numerical code is the study of several astrophysical scenarios in which general relativity can play an important role, namely the collapse of rapidly rotating stellar cores and the evolution of isolated neutron stars. The code has two flavors: CoCoA, the axisymmetric (2D) magnetized version, and CoCoNuT, the 3D non-magnetized version.

  20. Invariant Functional Forms for K(r,P) Type Equations of State for Hydrodynamically Driven Flow

    NASA Astrophysics Data System (ADS)

    Hrbek, George

    2001-06-01

    At the 11th American Physical Society Topical Group Meeting on Shock Compression of Condensed Matter, group-theoretic methods, as defined by Lie, were applied to the problem of temperature-independent, hydrodynamic shock in a Birch-Murnaghan continuum. (1) Group parameter ratios were linked to the physical quantities (i.e., K_T, K'_T, and K''_T) specified for the various order Birch-Murnaghan approximations. This technique has now been generalized to provide a mathematical formalism applicable to a wide class of forms (i.e., K(r,P)) for the equation of state. Variations in material expansion and resistance (i.e., counter pressure) are shown to be functions of compression and material variation ahead of the expanding front. Illustrative examples include the Birch-Murnaghan, Vinet, Brennan-Stacey, Shanker, Tait, Poirier, and Jones-Wilkins-Lee (JWL) forms. The results of this study will allow the various equations of state, and their respective fitting coefficients, to be compared with experiments. To do this, one must introduce the group ratios into a numerical simulation for the flow and generate the density, pressure, and particle velocity profiles as the shock moves through the material. (2) (1) Hrbek, G. M., Invariant Functional Forms For The Second, Third, And Fourth Order Birch-Murnaghan Equation of State For Materials Subject to Hydrodynamic Shock, Proceedings of the 11th American Physical Society Topical Group Meeting on Shock Compression of Condensed Matter (SCCM Shock 99), Snowbird, Utah. (2) Hrbek, G. M., Physical Interpretation of Mathematically Invariant K(r,P) Type Equations Of State For Hydrodynamically Driven Flows, submitted to the 12th American Physical Society Topical Group Meeting on Shock Compression of Condensed Matter (SCCM Shock 01), Atlanta, Georgia.
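
    For reference, the standard third-order Birch-Murnaghan form in which the quantities K_T and K'_T above appear is (this is the textbook expression, not the paper's group-theoretic derivation):

    ```latex
    P(\rho) = \frac{3K_T}{2}
      \left[\left(\frac{\rho}{\rho_0}\right)^{7/3}
          - \left(\frac{\rho}{\rho_0}\right)^{5/3}\right]
      \left\{1 + \frac{3}{4}\left(K_T' - 4\right)
      \left[\left(\frac{\rho}{\rho_0}\right)^{2/3} - 1\right]\right\}
    ```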

  1. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Justin; Hund, Lauren

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
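
    The likelihood-scaling idea can be sketched directly. Everything below is an illustrative assumption, in particular the AR(1)-based effective-sample-size estimate and the toy velocity trace; the paper's actual estimator may differ:

    ```python
    # Gaussian log-likelihood scaled by n_eff/n for autocorrelated residuals.
    import numpy as np

    def effective_sample_size(residual):
        """n_eff = n * (1 - rho1) / (1 + rho1) under an AR(1) approximation."""
        r = residual - residual.mean()
        rho1 = (r[:-1] * r[1:]).sum() / (r * r).sum()
        return len(r) * (1 - rho1) / (1 + rho1)

    def scaled_loglike(sim_velocity, measured, sigma):
        resid = measured - sim_velocity
        n = len(resid)
        # Gaussian log-likelihood (additive constants dropped)
        logL = -0.5 * np.sum(resid**2) / sigma**2 - n * np.log(sigma)
        return (effective_sample_size(resid) / n) * logL

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 500)
    true_v = 1e3 * t**2                       # toy interface-velocity trace
    noise = np.convolve(rng.normal(0, 5, 520), np.ones(21) / 21, "valid")
    print(f"scaled logL: {scaled_loglike(true_v, true_v + noise, 5.0):.1f}")
    ```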

  2. Computer modeling and simulation in inertial confinement fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCrory, R.L.; Verdon, C.P.

    1989-03-01

    The complex hydrodynamic and transport processes associated with the implosion of an inertial confinement fusion (ICF) pellet place considerable demands on numerical simulation programs. Processes associated with implosion can usually be described using relatively simple models, but their complex interplay requires that programs model most of the relevant physical phenomena accurately. Most hydrodynamic codes used in ICF incorporate a one-fluid, two-temperature model. Electrons and ions are assumed to flow as one fluid (no charge separation). Due to the relatively weak coupling between the ions and electrons, each species is treated separately in terms of its temperature. In this paper we describe some of the major components associated with an ICF hydrodynamics simulation code. To serve as an example we draw heavily on a two-dimensional Lagrangian hydrodynamic code (ORCHID) written at the University of Rochester's Laboratory for Laser Energetics. 46 refs., 19 figs., 1 tab.

  3. Stability and nonlinear adjustment of vortices in Keplerian flows

    NASA Astrophysics Data System (ADS)

    Bodo, G.; Tevzadze, A.; Chagelishvili, G.; Mignone, A.; Rossi, P.; Ferrari, A.

    2007-11-01

    Aims: We investigate the stability, nonlinear development and equilibrium structure of vortices in a background shearing Keplerian flow. Methods: We make use of high-resolution global two-dimensional compressible hydrodynamic simulations. We introduce the concept of nonlinear adjustment to describe the transition of unbalanced vortical fields to a long-lived configuration. Results: We discuss the conditions under which vortical perturbations evolve into long-lived persistent structures and we describe the properties of these equilibrium vortices. The properties of equilibrium vortices appear to be independent of the initial conditions and depend only on the local disk parameters. In particular we find that the ratio of the vortex size to the local disk scale height increases with decreasing sound speed, reaching values well above unity. The process of spiral density wave generation by the vortex, discussed in our previous work, appears to maintain its efficiency at nonlinear amplitudes as well, and we observe the formation of spiral shocks attached to the vortex. The shocks may have important consequences for the long-term vortex evolution and possibly for the global disk dynamics. Conclusions: Our study strengthens the arguments in favor of anticyclonic vortices as candidates for the promotion of planetary formation. Hydrodynamic shocks that are an intrinsic property of persistent vortices in compressible Keplerian flows are an important contributor to the overall balance. These shocks support vortices against viscous dissipation by generating local potential vorticity and should be responsible for the eventual fate of the persistent anticyclonic vortices. Numerical codes have to be able to resolve shock waves to describe the vortex dynamics correctly.

  4. Experimental design to understand the interaction of stellar radiation with molecular clouds

    NASA Astrophysics Data System (ADS)

    Vandervort, Robert; Davis, Josh; Trantham, Matt; Klein, Sallee; Frank, Yechiel; Raicher, Erez; Fraenkel, Moshe; Shvarts, Dov; Keiter, Paul; Drake, R. Paul

    2016-10-01

    Enhanced star formation triggered by local O and B type stars is an astrophysical problem of interest. O and B type stars are massive, hot stars that emit an enormous amount of radiation. This radiation acts to either compress or blow apart clumps of gas in the interstellar medium. For example, in the optically thick limit, when the x-ray radiation in the gas clump has a short mean free path, the radiation is absorbed near the clump edge and compresses the clump. In the optically thin limit, when the mean free path is long, the radiation is absorbed throughout, acting to heat and ultimately explode the gas clump. Careful selection of parameters, such as foam density or source temperature, allows the experimental platform to access different hydrodynamic regimes. The stellar radiation source is mimicked by a laser-irradiated thin gold foil, which provides a source of thermal x-rays (around 100 eV). The gas clump is mimicked by a low-density foam of around 0.12 g/cc. Simulations were done using radiation hydrodynamics codes to tune the experimental parameters. The experiment will be carried out at the Omega laser facility on OMEGA 60. Funding acknowledgements: This work is funded by the U.S. DOE, through the NNSA-DS and SC-OFES Joint Program in HEDPLP, Grant No. DE-NA0001840, and the NLUF Program, Grant No. DE-NA0000850, and through LLE, University of Rochester by the NNSA/OICF under Agreement No. DE-FC52-08NA28302.

  5. Performance of data-compression codes in channels with errors. Final report, October 1986-January 1987

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1987-10-01

    Huffman codes, comma-free codes, and block codes with shift indicators are important candidate message-compression codes for improving the efficiency of communications systems. This study was undertaken to determine whether these codes could be used to increase the throughput of the fixed very-low-frequency (FVLF) communication system. This application involves the use of compression codes in a channel with errors.

  6. Range shortening, radiation transport, and Rayleigh-Taylor instability phenomena in ion-beam-driven inertial-fusion-reactor-size targets: Implosion, ignition, and burn phases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, K.A.; Tahir, N.A.

    In this paper we present an analysis of the theory of the energy deposition of ions in cold materials and hot dense plasmas, together with numerical calculations for heavy and light ions of interest to ion-beam fusion. We have used the GORGON computer code of Long, Moritz, and Tahir (which is an extension of the code originally written for protons by Nardi, Peleg, and Zinamon) to carry out these calculations. The energy-deposition data calculated in this manner have been used in the design of heavy-ion-beam-driven fusion targets suitable for a reactor, by their inclusion in the MEDUSA code of Christiansen, Ashby, and Roberts as extended by Tahir and Long. A number of other improvements have been made in this code and these are also discussed. Various aspects of the theoretical analysis of such targets are discussed, including the calculation of the hydrodynamic stability, the hydrodynamic efficiency, and the gain. Various different target designs have been used, some of them new. In general these targets are driven by Bi+ ions of energy 8–12 GeV, with an input energy of 4–6.5 MJ, with output energies in the range 600–900 MJ, and with gains in the range 120–180. The peak powers are in the range of 500–750 TW. We present detailed calculations of the ablation, compression, ignition, and burn phases. By the application of a new stability analysis which includes ablation and density-gradient effects, we show that these targets appear to implode in a stable manner. Thus the targets designed offer working examples suited for use in a future inertial-confinement fusion reactor.

  7. Simulations of Converging Shock Collisions for Shock Ignition

    NASA Astrophysics Data System (ADS)

    Sauppe, Joshua; Dodd, Evan; Loomis, Eric

    2016-10-01

    Shock ignition (SI) has been proposed as an alternative to achieving high gain in inertial confinement fusion (ICF) targets. A central hot spot below the ignition threshold is created by an initial compression pulse, and a second laser pulse drives a strong converging shock into the fuel. The collision between the rebounding shock from the compression pulse and the converging shock results in amplification of the converging shock and increases the hot spot pressure above the ignition threshold. We investigate shock collision in SI drive schemes for cylindrical targets with a polystyrene foam interior using radiation-hydrodynamics simulations with the RAGE code. The configuration is similar to previous targets fielded on the Omega laser. The CH interior results in a lower convergence ratio and the cylindrical geometry facilitates visualization of the shock transit using an axial X-ray backlighter, both of which are important for comparison to potential experimental measurements. One-dimensional simulations are used to determine shock timing, and the effects of low mode asymmetries in 2D computations are also quantified. LA-UR-16-24773.

  8. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
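
    The construction is concrete enough to sketch in a few lines of Python. The snippet below is an illustrative toy, assuming the (7,4) Hamming code: a 7-bit source block is treated as an error pattern and compressed to its 3-bit syndrome, and decompression returns the minimum-weight (coset-leader) pattern with that syndrome, which reproduces any block of weight at most one exactly.

    ```python
    import itertools

    # Parity-check matrix H of the (7,4) Hamming code.
    H = [
        [1, 0, 1, 0, 1, 0, 1],
        [0, 1, 1, 0, 0, 1, 1],
        [0, 0, 0, 1, 1, 1, 1],
    ]

    def syndrome(block):
        """Compress a 7-bit source block to its 3-bit syndrome H.block (mod 2)."""
        return tuple(sum(h * b for h, b in zip(row, block)) % 2 for row in H)

    def coset_leader(s):
        """Decompress: the minimum-weight 7-bit pattern whose syndrome is s."""
        for weight in range(8):
            for ones in itertools.combinations(range(7), weight):
                cand = [1 if i in ones else 0 for i in range(7)]
                if syndrome(cand) == s:
                    return cand

    src = [0, 0, 0, 0, 1, 0, 0]    # sparse 7-bit source block (weight 1)
    s = syndrome(src)              # 3 compressed bits instead of 7
    assert coset_leader(s) == src  # weight <= 1 blocks decode without distortion
    ```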

  9. ICF Implosions, Space-Charge Electric Fields, and Their Impact on Mix and Compression

    NASA Astrophysics Data System (ADS)

    Knoll, Dana; Chacon, Luis; Simakov, Andrei

    2013-10-01

    The single-fluid, quasi-neutral radiation hydrodynamics codes used to design the NIF targets predict thermonuclear ignition for the conditions that have been achieved experimentally. A logical conclusion is that the physics model used in these codes is missing one or more key phenomena. Two key model-experiment inconsistencies on NIF are: 1) a lower implosion velocity than predicted by the design codes, and 2) transport of pusher material deep into the hot spot. We hypothesize that both of these model-experiment inconsistencies may be a result of a large space-charge electric field residing on the distinct interfaces in a NIF target. Large space-charge fields have been experimentally observed in Omega experiments. Given our hypothesis, this presentation will: 1) develop a more complete physics picture of the initiation, sustainment, and dissipation of a current-driven plasma sheath / double layer at the fuel-pusher interface of an ablating plastic shell implosion on Omega; 2) characterize the mix that can result from a double-layer field at the fuel-pusher interface, prior to the onset of fluid instabilities; and 3) quantify the impact of the double-layer-induced surface tension at the fuel-pusher interface on the peak observed implosion velocity in Omega.

  10. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

    classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding ... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when ... investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the
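
    Of the two baseline methods named in these excerpts, run-length coding is the simplest to illustrate. A minimal Python sketch for one bi-level scan line (illustrative only, not the report's implementation):

    ```python
    def rle_encode(pixels):
        """Collapse a scan line into (value, run-length) pairs."""
        runs = []
        for p in pixels:
            if runs and runs[-1][0] == p:
                runs[-1][1] += 1
            else:
                runs.append([p, 1])
        return runs

    def rle_decode(runs):
        return [value for value, length in runs for _ in range(length)]

    row = [0] * 20 + [1] * 5 + [0] * 15
    assert rle_encode(row) == [[0, 20], [1, 5], [0, 15]]
    assert rle_decode(rle_encode(row)) == row
    ```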

  11. Convective penetration in a young sun

    NASA Astrophysics Data System (ADS)

    Pratt, Jane; Baraffe, Isabelle; Goffrey, Tom; MUSIC developers group

    2018-01-01

    To interpret the high-quality data produced from recent space missions it is necessary to study convection under realistic stellar conditions. We describe the multi-dimensional, time implicit, fully compressible, hydrodynamic, implicit large eddy simulation code MUSIC. We use MUSIC to study convection during an early stage in the evolution of our sun where the convection zone covers approximately half of the solar radius. This model of the young sun possesses a realistic stratification in density, temperature, and luminosity. We approach convection in a stellar context using extreme value theory and derive a new model for convective penetration, targeted for one-dimensional stellar evolution calculations. This model provides a scenario that can explain the observed lithium abundance in the sun and in solar-like stars at a range of ages.

  12. Recent advances in coding theory for near error-free communications

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.

    1991-01-01

    Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.

  13. A Vorticity-preserving Hydrodynamical Scheme for Modeling Accretion Disk Flows

    NASA Astrophysics Data System (ADS)

    Seligman, Darryl; Laughlin, Gregory

    2017-10-01

    Vortices, turbulence, and unsteady nonlaminar flows are likely both prominent and dynamically important features of astrophysical disks. Such strongly nonlinear phenomena are often difficult, however, to simulate accurately, and are generally amenable to analytic treatment only in idealized form. In this paper, we explore the evolution of compressible two-dimensional flows using an implicit dual-time hydrodynamical scheme that strictly conserves vorticity (if applied to simulate inviscid flows for which Kelvin’s Circulation Theorem is applicable). The algorithm is based on the work of Lerat et al., who proposed it in the context of terrestrial applications such as the blade-vortex interactions generated by helicopter rotors. We present several tests of Lerat et al.'s vorticity-preserving approach, which we have implemented to second-order accuracy, providing side-by-side comparisons with other algorithms that are frequently used in protostellar disk simulations. The comparison codes include one based on explicit, second-order van Leer advection, one based on spectral methods, and another that implements a higher-order Godunov solver. Our results suggest that the Lerat et al. algorithm will be useful for simulations of astrophysical environments in which vortices play a dynamical role, and where strong shocks are not expected.

  14. Protostellar hydrodynamics: Constructing and testing a spatially and temporally second-order accurate method. 2: Cartesian coordinates

    NASA Technical Reports Server (NTRS)

    Myhill, Elizabeth A.; Boss, Alan P.

    1993-01-01

    In Boss & Myhill (1992) we described the derivation and testing of a spherical coordinate-based scheme for solving the hydrodynamic equations governing the gravitational collapse of nonisothermal, nonmagnetic, inviscid, radiative, three-dimensional protostellar clouds. Here we discuss a Cartesian coordinate-based scheme based on the same set of hydrodynamic equations. As with the spherical coordinate-based code, the Cartesian coordinate-based scheme employs explicit Eulerian methods which are both spatially and temporally second-order accurate. We begin by describing the hydrodynamic equations in Cartesian coordinates and the numerical methods used in this particular code. Following Finn & Hawley (1989), we pay special attention to the proper implementation of high-order-accuracy finite difference methods. We evaluate the ability of the Cartesian scheme to handle shock propagation problems, and through convergence testing, we show that the code is indeed second-order accurate. To compare the Cartesian scheme discussed here with the spherical coordinate-based scheme discussed in Boss & Myhill (1992), the two codes are used to calculate the standard isothermal collapse test case described by Bodenheimer & Boss (1981). We find that with the improved codes, the intermediate bar-configuration found previously disappears, and the cloud fragments directly into a binary protostellar system. Finally, we present the results from both codes of a new test for nonisothermal protostellar collapse.

  15. Converting Panax ginseng DNA and chemical fingerprints into two-dimensional barcode.

    PubMed

    Cai, Yong; Li, Peng; Li, Xi-Wen; Zhao, Jing; Chen, Hai; Yang, Qing; Hu, Hao

    2017-07-01

    In this study, we investigated how to convert the Panax ginseng DNA sequence code and chemical fingerprints into a two-dimensional code. In order to improve the compression efficiency, GATC2Bytes and digital merger compression algorithms are proposed. HPLC chemical fingerprint data of 10 groups of P. ginseng from Northeast China and the internal transcribed spacer 2 (ITS2) sequence code as the DNA sequence code were ready for conversion. In order to convert such data into a two-dimensional code, the following six steps were performed: First, the chemical fingerprint characteristic data sets were obtained through the inflection filtering algorithm. Second, these data sets were precompressed. Third, the P. ginseng DNA (ITS2) sequence codes were precompressed. Fourth, the precompressed chemical fingerprint data and the DNA (ITS2) sequence code were combined in accordance with the set data format. Fifth, the combined data were compressed by Zlib, an open source data compression algorithm. Finally, the compressed data generated a two-dimensional code called a quick response code (QR code). Through this conversion process, the number of bytes needed for storing P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can be greatly reduced. After GATC2Bytes algorithm processing, the ITS2 compression rate reaches 75%, and the chemical fingerprint compression rate exceeds 99.65% via the filtration and digital merger compression algorithms; the overall compression ratio therefore exceeds 99.36%. The capacity of the resulting QR code is around 0.5 kB, which can easily and successfully be read and identified by any smartphone. P. ginseng chemical fingerprints and its DNA (ITS2) sequence code can thus form a QR code after data processing, and the QR code can serve as a carrier of P. ginseng authenticity and quality information. This study provides a theoretical basis for the development of a quality traceability system for traditional Chinese medicine based on a two-dimensional code.
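
    The Zlib stage of the pipeline is easy to reproduce. A minimal Python sketch, with placeholder byte strings standing in for the real ITS2 sequence and HPLC fingerprint data:

    ```python
    import zlib

    its2 = b"ATCGATCGGGCCTTAA" * 20        # placeholder DNA (ITS2) sequence
    fingerprint = b"12.3,45.6,78.9;" * 40  # placeholder HPLC fingerprint data
    combined = its2 + b"|" + fingerprint   # a simple combined data format

    packed = zlib.compress(combined, 9)    # maximum compression level
    print(len(combined), "->", len(packed), "bytes")
    # The packed bytes can then be passed to any QR-code generator, as long
    # as the result stays within QR capacity (a few kB of binary data).
    ```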

  16. Testing hydrodynamics schemes in galaxy disc simulations

    NASA Astrophysics Data System (ADS)

    Few, C. G.; Dobbs, C.; Pettitt, A.; Konstandin, L.

    2016-08-01

    We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (SPHNG), and a volume-discretized mesh-less code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach such high densities as the SPHNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate for the RAMSES simulations compared to the other codes. There is also a greater degree of substructure in the SPHNG and GIZMO runs and secondary spiral arms are more pronounced. By resolving the Jeans length with a greater number of grid cells, we achieve more similar results to the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and SPHNG/GIZMO. Although more similar, SPHNG displays different density distributions and vertical mass profiles to all modes of GIZMO (including the smoothed particle hydrodynamics version). This suggests differences also arise which are not intrinsic to the particular method but rather due to its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and time-scales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.

  17. Simulating X-ray bursts with a radiation hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Seong, Gwangeon; Kwak, Kyujin

    2018-04-01

    Previous simulations of X-ray bursts (XRBs), for example those performed with MESA (Modules for Experiments in Stellar Astrophysics), could not address the dynamical effects of strong radiation, which are important for explaining the photospheric radius expansion (PRE) phenomena seen in many XRBs. In order to study the effects of strong radiation, we propose to use SNEC (the SuperNova Explosion Code), a 1D Lagrangian open-source code that is designed to solve hydrodynamics and equilibrium-diffusion radiation transport together. Because SNEC's radiation-hydrodynamics modules can be controlled for properly mapped inputs, the radiation-dominated pressure occurring in PRE XRBs can be handled. Here we present simulation models for PRE XRBs obtained by applying SNEC together with MESA.

  18. Two-dimensional implosion simulations with a kinetic particle code

    DOE PAGES

    Sagert, Irina; Even, Wesley Paul; Strother, Terrance Timothy

    2017-05-17

    Here, we perform two-dimensional implosion simulations using a Monte Carlo kinetic particle code. The application of a kinetic transport code is motivated, in part, by the occurrence of nonequilibrium effects in inertial confinement fusion capsule implosions, which cannot be fully captured by hydrodynamic simulations. Kinetic methods, on the other hand, are able to describe both continuum and rarefied flows. We perform simple two-dimensional disk implosion simulations using one particle species and compare the results to simulations with the hydrodynamics code RAGE. The impact of the particle mean free path on the implosion is also explored. In a second study, we focus on the formation of fluid instabilities from induced perturbations. We find good agreement with hydrodynamic studies regarding the location of the shock and the implosion dynamics. Differences are found in the evolution of fluid instabilities, originating from the higher resolution of RAGE and statistical noise in the kinetic studies.

  19. Binary video codec for data reduction in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    Wireless Visual Sensor Networks (WVSN) are formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSN include environmental monitoring, health care, industrial process monitoring, and stadium/airport monitoring for security reasons, among many others. The energy budget in outdoor applications of WVSN is limited to batteries, and frequent replacement of batteries is usually not desirable, so the processing as well as the communication energy consumption of the VSN needs to be optimized in such a way that the network remains functional for a long duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing and wide communication bandwidth for the transmission of the results. Image processing algorithms must be designed and developed in such a way that they are computationally less complex and provide a high compression rate. For some applications of WVSN, the captured images can be segmented into bi-level images, and hence bi-level image coding methods will efficiently reduce the information content of these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm. Hence there is a need for other intelligent and efficient algorithms which are computationally less complex and provide a better compression rate than bi-level image coding methods. Change coding is one such algorithm: it is computationally less complex (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications having slight changes between adjacent frames of the video. The detection and coding of Regions of Interest (ROIs) in the change frame efficiently reduce the information content of the change frame. But if the number of objects in the change frames rises above a certain level, then the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSN. We propose to implement all three compression techniques, i.e., image coding, change coding, and ROI coding, at the VSN and then select the smallest bit stream among the results of the three, as sketched below. In this way the compression performance of BVC never becomes worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding and always better than or equal to that of ROI coding and image coding.
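
    A minimal sketch of the selection idea referenced above: each frame is encoded directly and as an XOR change frame against the previous one, with plain run-length coding standing in for the underlying bi-level coder, and the smaller stream is kept. ROI coding is omitted for brevity.

    ```python
    def rle(bits):
        """Run-length code a bit list: first value, then the run lengths."""
        runs, count = [], 1
        for prev, cur in zip(bits, bits[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append(count)
                count = 1
        runs.append(count)
        return [bits[0]] + runs

    def encode_frame(frame, prev_frame=None):
        candidates = {"image": rle(frame)}
        if prev_frame is not None:
            change = [a ^ b for a, b in zip(frame, prev_frame)]
            candidates["change"] = rle(change)   # XOR against previous frame
        mode = min(candidates, key=lambda m: len(candidates[m]))
        return mode, candidates[mode]            # keep the smallest bit stream

    prev = [0] * 30 + [1] * 10 + [0] * 24
    cur = list(prev)
    cur[50] = 1                                  # one changed pixel
    print(encode_frame(cur, prev))               # ('change', [0, 50, 1, 13])
    ```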

  1. Hydrodynamic compression of young and adult rat osteoblast-like cells on titanium fiber mesh.

    PubMed

    Walboomers, X F; Elder, S E; Bumgardner, J D; Jansen, J A

    2006-01-01

    Living bone cells are responsive to mechanical loading. Consequently, numerous in vitro models have been developed to examine the application of loading to cells. However, not all systems are suitable for the fibrous and porous three-dimensional materials which are preferable for tissue repair purposes, or for the production of tissue engineering scaffolds. For three-dimensional applications, mechanical loading of cells with either fluid flow systems or hydrodynamic pressure systems has to be considered. Here, we aimed to evaluate the response of osteoblast-like cells to hydrodynamic compression while growing in a three-dimensional titanium fiber mesh scaffolding material. For this purpose, a custom hydrodynamic compression chamber was built. Bone marrow cells were obtained from the femora of young (12-day-old) or old (1-year-old) rats and precultured in the presence of dexamethasone and beta-glycerophosphate to achieve an osteoblast-like phenotype. Subsequently, cells were seeded onto the titanium mesh scaffolds and subjected to hydrodynamic pressure alternating between 0.3 and 5.0 MPa at 1 Hz, at 15-min intervals, for a total of 60 min per day for up to 3 days. After pressurization, cell viability was checked. Afterward, DNA levels, alkaline phosphatase (ALP) activity, and extracellular calcium content were measured. Finally, all specimens were observed with scanning electron microscopy. Cell viability studies showed that the applied pressure was not harmful to the cells. Furthermore, we found that cells were able to detect the compression forces, because evident effects on cell numbers were seen for cells derived from old animals. However, there were no other changes in the cells under pressure. It was also noticeable that cells from old animals did not express ALP activity, but did show calcified extracellular matrix formation similar to that of cells from young animals. In conclusion, the difference in DNA levels as a reaction to pressure, and the difference in ALP levels, suggest that the osteogenic properties of bone marrow-derived osteoblast-like cells differ with the age of the donor.

  2. Side information in coded aperture compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Galvis, Laura; Arguello, Henry; Lau, Daniel; Arce, Gonzalo R.

    2017-02-01

    Coded aperture compressive spectral imagers sense a three-dimensional cube by using two-dimensional projections of the coded and spectrally dispersed source. These imaging systems often rely on focal plane array (FPA) detectors, spatial light modulators (SLMs), digital micromirror devices (DMDs), and dispersive elements. The use of DMDs to implement the coded apertures facilitates the capture of multiple projections, each admitting a different coded aperture pattern. The DMD makes it possible not only to collect a sufficient number of measurements for spectrally rich or very detailed spatial scenes, but also to design the spatial structure of the coded apertures to maximize the information content of the compressive measurements. Although sparsity is the only signal characteristic usually assumed for reconstruction in compressive sensing, other forms of prior information, such as side information, have been included as a way to improve the quality of the reconstructions. This paper presents the coded aperture design in a compressive spectral imager with side information in the form of RGB images of the scene. The use of RGB images as side information in the compressive sensing architecture has two main advantages: the RGB image is used not only to improve the reconstruction quality but also to optimally design the coded apertures for the sensing process. The coded aperture design is based on the RGB scene, and thus the coded aperture structure exploits key features such as scene edges. Reconstructions from real, noisy compressed measurements demonstrate the benefit of the designed coded apertures in addition to the improvement in reconstruction quality obtained by the use of side information.
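
    A minimal numerical sketch of the forward model behind such imagers, assuming the standard single-disperser layout: each spectral band is masked by the same coded aperture and sheared by one detector column per band before integration on the FPA. Sizes and data are toy placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    H, W, L = 8, 8, 4                    # rows, columns, spectral bands
    cube = rng.random((H, W, L))         # toy spectral data cube
    code = rng.integers(0, 2, (H, W))    # binary coded aperture pattern

    fpa = np.zeros((H, W + L - 1))       # detector widened by the dispersion
    for k in range(L):
        fpa[:, k:k + W] += code * cube[:, :, k]   # mask, shear by k, integrate
    ```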

  3. Fluid mechanics in fluids at rest.

    PubMed

    Brenner, Howard

    2012-07-01

    Using readily available experimental thermophoretic particle-velocity data it is shown, contrary to current teachings, that for the case of compressible flows independent dye- and particle-tracer velocity measurements of the local fluid velocity at a point in a flowing fluid do not generally result in the same fluid velocity measure. Rather, tracer-velocity equality holds only for incompressible flows. For compressible fluids, each type of tracer is shown to monitor a fundamentally different fluid velocity, with (i) a dye (or any other such molecular-tagging scheme) measuring the fluid's mass velocity v appearing in the continuity equation and (ii) a small, physicochemically and thermally inert, macroscopic (i.e., non-Brownian), solid particle measuring the fluid's volume velocity v(v). The term "compressibility" as used here includes not only pressure effects on density, but also temperature effects thereon. (For example, owing to a liquid's generally nonzero isobaric coefficient of thermal expansion, nonisothermal liquid flows are to be regarded as compressible despite the general perception of liquids as being incompressible.) Recognition of the fact that two independent fluid velocities, mass- and volume-based, are formally required to model continuum fluid behavior impacts on the foundations of contemporary (monovelocity) fluid mechanics. Included therein are the Navier-Stokes-Fourier equations, which are now seen to apply only to incompressible fluids (a fact well-known, empirically, to experimental gas kineticists). The finding of a difference in tracer velocities heralds the introduction into fluid mechanics of a general bipartite theory of fluid mechanics, bivelocity hydrodynamics [Brenner, Int. J. Eng. Sci. 54, 67 (2012)], differing from conventional hydrodynamics in situations entailing compressible flows and reducing to conventional hydrodynamics when the flow is incompressible, while being applicable to both liquids and gases.

  4. Modeling Laboratory Astrophysics Experiments in the High-Energy-Density Regime Using the CRASH Radiation-Hydrodynamics Model

    NASA Astrophysics Data System (ADS)

    Grosskopf, M. J.; Drake, R. P.; Trantham, M. R.; Kuranz, C. C.; Keiter, P. A.; Rutter, E. M.; Sweeney, R. M.; Malamud, G.

    2012-10-01

    The radiation hydrodynamics code developed by the Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan has been used to model experimental designs for high-energy-density physics campaigns on OMEGA and other high-energy laser facilities. This code is an Eulerian, block-adaptive AMR hydrodynamics code with implicit multigroup radiation transport and electron heat conduction. CRASH model results have shown good agreement with experimental results from a variety of applications, including radiative shock, Kelvin-Helmholtz, and Rayleigh-Taylor experiments on the OMEGA laser, as well as laser-driven ablative plumes in experiments by the Astrophysical Collisionless Shocks Experiments with Lasers (ACSEL) collaboration. We report a series of results with the CRASH code in support of design work for upcoming high-energy-density physics experiments, as well as comparison between existing experimental data and simulation results. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DEFC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  5. A Lossless Multichannel Bio-Signal Compression Based on Low-Complexity Joint Coding Scheme for Portable Medical Devices

    PubMed Central

    Kim, Dong-Sun; Kwon, Jin-San

    2014-01-01

    Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to the single-channel-based biosignal lossless data compressor. PMID:25237900
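
    A minimal sketch of the joint-coding decision described above, assuming the decision reduces to thresholding the correlation between channel residuals (the threshold value is illustrative, not the standard's):

    ```python
    import math

    def correlation(x, y):
        """Normalized cross correlation of two residual sequences."""
        mx, my = sum(x) / len(x), sum(y) / len(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = math.sqrt(sum((a - mx) ** 2 for a in x)
                        * sum((b - my) ** 2 for b in y))
        return num / den if den else 0.0

    def choose_mode(ref, ch, threshold=0.8):
        """Return ('joint', difference signal) or ('independent', channel)."""
        if correlation(ref, ch) >= threshold:
            return "joint", [b - a for a, b in zip(ref, ch)]
        return "independent", ch

    ref = [10, 12, 15, 11, 9, 8, 13, 14]   # reference channel residuals
    ch = [11, 13, 17, 12, 9, 9, 14, 16]    # tracks the reference closely
    mode, residual = choose_mode(ref, ch)
    print(mode, residual)                   # joint coding of a small residual
    ```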

  6. Modelling the effect of shear strength on isentropic compression experiments

    NASA Astrophysics Data System (ADS)

    Thomson, Stuart; Howell, Peter; Ockendon, John; Ockendon, Hilary

    2017-01-01

    Isentropic compression experiments (ICE) are a way of obtaining equation of state information for metals undergoing violent plastic deformation. In a typical experiment, millimetre thick metal samples are subjected to pressures on the order of 10–10² GPa, while the yield strength of the material can be as low as 10⁻² GPa. The analysis of such experiments has so far neglected the effect of shear strength, instead treating the highly plasticised metal as an inviscid compressible fluid. However making this approximation belies the basic elastic nature of a solid object. A more accurate method should strive to incorporate the small but measurable effects of shear strength. Here we present a one-dimensional mathematical model for elastoplasticity at high stress which allows for both compressibility and the shear strength of the material. In the limit of zero yield stress this model reproduces the hydrodynamic models currently used to analyse ICEs. Numerical solutions of the governing equations will then be presented for problems relevant to ICEs in order to investigate the effects of shear strength compared with a model based purely on hydrodynamics.

  7. Two-dimensional simulations of thermonuclear burn in ignition-scale inertial confinement fusion targets under compressed axial magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, L. J.; Logan, B. G.; Zimmerman, G. B.

    2013-07-15

    We report for the first time on full 2-D radiation-hydrodynamic implosion simulations that explore the impact of highly compressed imposed magnetic fields on the ignition and burn of perturbed spherical implosions of ignition-scale cryogenic capsules. Using perturbations that highly convolute the cold fuel boundary of the hotspot and prevent ignition without applied fields, we impose initial axial seed fields of 20–100 T (potentially attainable using present experimental methods) that compress to greater than 4 × 10⁴ T (400 MG) under implosion, thereby relaxing hotspot areal densities and pressures required for ignition and propagating burn by ∼50%. The compressed field is high enough to suppress transverse electron heat conduction, and to allow alphas to couple energy into the hotspot even when highly deformed by large low-mode amplitudes. This might permit the recovery of ignition, or at least significant alpha particle heating, in submarginal capsules that would otherwise fail because of adverse hydrodynamic instabilities.

  8. Syndrome source coding and its universal generalization

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1975-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly effective, distortionless coding of source ensembles.

  9. Data compression for satellite images

    NASA Technical Reports Server (NTRS)

    Chen, P. H.; Wintz, P. A.

    1976-01-01

    An efficient data compression system is presented for satellite pictures and two grey-level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
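
    Delta and double delta coding are easy to make concrete. A minimal Python sketch (illustrative, not the paper's implementation): first differences remove the correlation between adjacent picture elements, and differencing again shrinks the values further on smooth scan lines, which favors the subsequent source code.

    ```python
    def delta(xs):
        """Keep the first value, then store successive differences."""
        return [xs[0]] + [b - a for a, b in zip(xs, xs[1:])]

    def undelta(ds):
        out = [ds[0]]
        for d in ds[1:]:
            out.append(out[-1] + d)
        return out

    scan = [100, 102, 105, 109, 114, 120, 127]   # smooth scan line
    d1 = delta(scan)                             # [100, 2, 3, 4, 5, 6, 7]
    d2 = delta(d1)                               # [100, -98, 1, 1, 1, 1, 1]
    assert undelta(undelta(d2)) == scan          # lossless round trip
    ```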

  10. Radiation hydrodynamics of triggered star formation: the effect of the diffuse radiation field

    NASA Astrophysics Data System (ADS)

    Haworth, Thomas J.; Harries, Tim J.

    2012-02-01

    We investigate the effect of including diffuse field radiation when modelling the radiatively driven implosion of a Bonnor-Ebert sphere (BES). Radiation-hydrodynamical calculations are performed by using operator splitting to combine Monte Carlo photoionization with grid-based Eulerian hydrodynamics that includes self-gravity. It is found that the diffuse field has a significant effect on the nature of radiatively driven collapse which is strongly coupled to the strength of the driving shock that is established before impacting the BES. This can result in either slower or more rapid star formation than expected using the on-the-spot approximation, depending on the distance of the BES from the source object. As well as directly compressing the BES, stronger shocks increase the thickness and density in the shell of accumulated material, which leads to short, strong, photoevaporative ejections that reinforce the compression whenever it slows. This happens particularly effectively when the diffuse field is included, as rocket motion is induced over a larger area of the shell surface. The formation and evolution of 'elephant trunks' via instability is also found to vary significantly when the diffuse field is included. Since the perturbations that seed instabilities are smeared out, elephant trunks form less readily and, once formed, are exposed to enhanced thermal compression.

  11. Toward a Multi-scale Phase Transition Kinetics Methodology: From Non-Equilibrium Statistical Mechanics to Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Belof, Jonathan; Orlikowski, Daniel; Wu, Christine; McLaughlin, Keith

    2013-06-01

    Shock and ramp compression experiments are allowing us to probe condensed matter under extreme conditions where phase transitions and other non-equilibrium aspects can now be directly observed, but first principles simulation of kinetics remains a challenge. A multi-scale approach is presented here, with non-equilibrium statistical mechanical quantities calculated by molecular dynamics (MD) and then leveraged to inform a classical nucleation and growth kinetics model at the hydrodynamic scale. Of central interest is the free energy barrier for the formation of a critical nucleus, with direct NEMD presenting the challenge of relatively long timescales necessary to resolve nucleation. Rather than attempt to resolve the time-dependent nucleation sequence directly, the methodology derived here is built upon the non-equilibrium work theorem in order to bias the formation of a critical nucleus and thus construct the nucleation and growth rates. Having determined these kinetic terms from MD, a hydrodynamics implementation of Kolmogorov-Johnson-Mehl-Avrami (KJMA) kinetics and metastability is applied to the dynamic compressive freezing of water and compared with recent ramp compression experiments [Dolan et al., Nature (2007)]. Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security, LLC, for the U.S. Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344.

  12. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  13. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
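
    A minimal sketch of the index-manipulation idea common to these two patents, assuming the simplest case in which adjacent quantizer indices are perceptually interchangeable: each auxiliary bit is stored in the parity of one index, nudging the index by one level only when its parity disagrees with the bit.

    ```python
    def embed(indices, bits):
        """Store each auxiliary bit in the parity of one quantizer index."""
        out = list(indices)
        for i, bit in enumerate(bits):
            if out[i] % 2 != bit:
                out[i] += 1          # move to the adjacent index value
        return out

    def extract(indices, nbits):
        return [indices[i] % 2 for i in range(nbits)]

    coeffs = [14, 7, 22, 3, 9, 40]   # quantized transform indices (host data)
    hidden = [1, 0, 1, 1]            # auxiliary data to embed
    stego = embed(coeffs, hidden)    # [15, 8, 23, 3, 9, 40]
    assert extract(stego, 4) == hidden
    ```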

  14. SPHYNX: an accurate density-based SPH method for astrophysical applications

    NASA Astrophysics Data System (ADS)

    Cabezón, R. M.; García-Senz, D.; Figueira, J.

    2017-10-01

    Aims: Hydrodynamical instabilities and shocks are ubiquitous in astrophysical scenarios. Therefore, an accurate numerical simulation of these phenomena is mandatory to correctly model and understand many astrophysical events, such as supernovas, stellar collisions, or planetary formation. In this work, we attempt to address many of the problems that a commonly used technique, smoothed particle hydrodynamics (SPH), has when dealing with subsonic hydrodynamical instabilities or shocks. To that aim we built a new SPH code named SPHYNX, which includes many of the recent advances in the SPH technique and some other new ones, which we present here. Methods: SPHYNX is of Newtonian type and grounded in the Euler-Lagrange formulation of the smoothed-particle hydrodynamics technique. Its distinctive features are: the use of an integral approach to estimating the gradients; the use of a flexible family of interpolators called sinc kernels, which suppress pairing instability; and the incorporation of a new type of volume element which provides a better partition of unity. Unlike other modern formulations, which consider volume elements linked to pressure, our volume element choice relies on density. SPHYNX is, therefore, a density-based SPH code. Results: A novel computational hydrodynamic code oriented to astrophysical applications is described, discussed, and validated in the following pages. The ensuing code conserves mass, linear and angular momentum, energy, and entropy, and preserves kernel normalization even in strong shocks. In our proposal, the estimation of gradients is enhanced using an integral approach. Additionally, we introduce a new family of volume elements which reduce the so-called tensile instability. Both features help to suppress the damping which often prevents the growth of hydrodynamic instabilities in regular SPH codes. Conclusions: On the whole, SPHYNX has passed the verification tests described below. For identical particle settings and initial conditions the results were similar to (or in some particular cases better than) those obtained with other SPH schemes such as GADGET-2, PSPH, or the recent density-independent formulation (DISPH) and conservative reproducing kernel (CRKSPH) techniques.
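
    For orientation, the density estimate at the core of any density-based SPH code can be sketched in a few lines. This toy Python version uses the standard M4 cubic spline kernel rather than SPHYNX's sinc family:

    ```python
    import math

    def w_cubic(r, h):
        """Standard M4 cubic spline kernel in 3D."""
        q = r / h
        sigma = 1.0 / (math.pi * h ** 3)          # 3D normalization constant
        if q < 1.0:
            return sigma * (1 - 1.5 * q ** 2 + 0.75 * q ** 3)
        if q < 2.0:
            return sigma * 0.25 * (2 - q) ** 3
        return 0.0

    def density(positions, masses, h):
        """SPH estimate rho_i = sum_j m_j W(|r_i - r_j|, h)."""
        return [
            sum(mj * w_cubic(math.dist(xi, xj), h)
                for xj, mj in zip(positions, masses))
            for xi in positions
        ]
    ```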

  15. The mathematical theory of signal processing and compression-designs

    NASA Astrophysics Data System (ADS)

    Feria, Erlan H.

    2006-05-01

    The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational-time dual of Shannon's mathematical theory of communication, which is also known as source coding. Source coding is concerned with compressing signal-source memory space, while processor coding deals with compressing signal-processor computational time. Their combination is named compression-designs, referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed, highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.

  16. Bit-wise arithmetic coding for data compression

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
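
    The interval-narrowing mechanism is compact enough to sketch. The toy Python version below models each codeword bit position with its own probability of being 1, as in the article, but uses exact rational arithmetic in place of the scaled-integer coder used in practice:

    ```python
    from fractions import Fraction

    def encode(bits, p1):
        """Narrow [0, 1) once per bit; any number in the final interval works."""
        low, width = Fraction(0), Fraction(1)
        for bit, p in zip(bits, p1):
            split = width * (1 - p)          # measure of the '0' branch
            if bit:
                low, width = low + split, width - split
            else:
                width = split
        return low, width

    def decode(code, p1):
        bits, low, width = [], Fraction(0), Fraction(1)
        for p in p1:
            split = width * (1 - p)
            if code >= low + split:          # code fell in the '1' branch
                bits.append(1)
                low, width = low + split, width - split
            else:
                bits.append(0)
                width = split
        return bits

    p1 = [Fraction(1, 8), Fraction(1, 2), Fraction(1, 8)]   # per-position P(1)
    code, _ = encode([0, 1, 0], p1)
    assert decode(code, p1) == [0, 1, 0]
    ```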

  17. Some Practical Universal Noiseless Coding Techniques

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.

    1994-01-01

    Report discusses noiseless data-compression-coding algorithms, performance characteristics, and practical considerations in implementation of algorithms in coding modules composed of very-large-scale integrated circuits. Report also has value as tutorial document on data-compression-coding concepts. Coding techniques and concepts in question are "universal" in the sense that, in principle, they are applicable to streams of data from a variety of sources. However, discussion is oriented toward compression of high-rate data generated by spaceborne sensors for lower-rate transmission back to Earth.
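
    Rice coding, the workhorse of these techniques, splits each nonnegative integer by a power-of-two parameter into a unary quotient and a fixed-width remainder. A minimal sketch:

    ```python
    def rice_encode(n, k):
        """Unary quotient, '0' separator, then k-bit binary remainder."""
        q, r = n >> k, n & ((1 << k) - 1)
        return "1" * q + "0" + format(r, f"0{k}b")

    def rice_decode(bits, k):
        q = bits.index("0")                  # length of the unary prefix
        r = int(bits[q + 1 : q + 1 + k], 2)
        return (q << k) | r

    assert rice_encode(9, 2) == "11001"      # q = 2, r = 1
    assert rice_decode("11001", 2) == 9
    ```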

  18. Convective penetration in stars

    NASA Astrophysics Data System (ADS)

    Pratt, Jane; Baraffe, Isabelle; Goffrey, Tom; Constantino, Tom; Popov, M. V.; Walder, Rolf; Folini, Doris; TOFU Collaboration

    To interpret the high-quality data produced from recent space missions it is necessary to study convection under realistic stellar conditions. We describe the multi-dimensional, time implicit, fully compressible, hydrodynamic, implicit large eddy simulation code MUSIC, currently being developed at the University of Exeter. We use MUSIC to study convection during an early stage in the evolution of our sun where the convection zone covers approximately half of the solar radius. This model of the young sun possesses a realistic stratification in density, temperature, and luminosity. We approach convection in a stellar context using extreme value theory and derive a new model for convective penetration, targeted for one-dimensional stellar evolution calculations. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework (FP7/2007-2013)/ERC Grant agreement no. 320478.

  19. Wavelet-based audio embedding and audio/video compression

    NASA Astrophysics Data System (ADS)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  20. Adding kinetics and hydrodynamics to the CHEETAH thermochemical code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, L.E., Howard, W.M., Souers, P.C.

    1997-01-15

    In FY96 we released CHEETAH 1.40, which made extensive improvements on the stability and user friendliness of the code. CHEETAH now has over 175 users in government, academia, and industry. Efforts have also been focused on adding new advanced features to CHEETAH 2.0, which is scheduled for release in FY97. We have added a new chemical kinetics capability to CHEETAH. In the past, CHEETAH assumed complete thermodynamic equilibrium and independence of time. The addition of a chemical kinetic framework will allow for modeling of time-dependent phenomena, such as partial combustion and detonation in composite explosives with large reaction zones. We have implemented a Wood-Kirkwood detonation framework in CHEETAH, which allows for the treatment of nonideal detonations and explosive failure. A second major effort in the project this year has been linking CHEETAH to hydrodynamic codes to yield an improved HE product equation of state. We have linked CHEETAH to 1- and 2-D hydrodynamic codes, and have compared the code to experimental data. 15 refs., 13 figs., 1 tab.

  1. Non-linear hydrodynamical evolution of rotating relativistic stars: numerical methods and code tests

    NASA Astrophysics Data System (ADS)

    Font, José A.; Stergioulas, Nikolaos; Kokkotas, Kostas D.

    2000-04-01

    We present numerical hydrodynamical evolutions of rapidly rotating relativistic stars, using an axisymmetric, non-linear relativistic hydrodynamics code. We use four different high-resolution shock-capturing (HRSC) finite-difference schemes (based on approximate Riemann solvers) and compare their accuracy in preserving uniformly rotating stationary initial configurations in long-term evolutions. Among these four schemes, we find that the third-order piecewise parabolic method scheme is superior in maintaining the initial rotation law in long-term evolutions, especially near the surface of the star. It is further shown that HRSC schemes are suitable for the evolution of perturbed neutron stars and for the accurate identification (via Fourier transforms) of normal modes of oscillation. This is demonstrated for radial and quadrupolar pulsations in the non-rotating limit, where we find good agreement with frequencies obtained with a linear perturbation code. The code can be used for studying small-amplitude or non-linear pulsations of differentially rotating neutron stars, while our present results serve as testbed computations for three-dimensional general-relativistic evolution codes.

  2. Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions

    NASA Astrophysics Data System (ADS)

    Kwak, Kyujin; Yang, Seungwon

    2015-08-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which are still unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines drew a lot of attention not only from astronomers but also from chemists, both experimental and theoretical. Theoretical calculations for the formation of these astrochemical molecules have been carried out, providing reaction rates for some important molecules, and some of the theoretical predictions have been measured in laboratories. The reaction rates for the astronomically important molecules are now collected in databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code is able to trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects the spatial distribution of some specific molecules. We present the development procedure of this code and some test problems in order to verify and validate the developed code.
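
    A minimal sketch of what tracing molecular formation in a non-equilibrium fashion means in practice: the abundance of a molecule AB is advanced in time by its rate equation rather than being set to its equilibrium value. Rate constants and densities here are illustrative placeholders, not values from an astrochemical database.

    ```python
    def advance_chemistry(n_a, n_b, n_ab, k_form, k_dest, dt):
        """One explicit step of dn_AB/dt = k_form*n_A*n_B - k_dest*n_AB."""
        rate = k_form * n_a * n_b - k_dest * n_ab
        return n_a - rate * dt, n_b - rate * dt, n_ab + rate * dt

    n_a, n_b, n_ab = 1.0, 0.5, 0.0          # number densities, arbitrary units
    for _ in range(10000):                  # coupled to a hydro step in practice
        n_a, n_b, n_ab = advance_chemistry(n_a, n_b, n_ab, 1e-2, 1e-3, 0.1)
    print(n_ab)                             # relaxes toward its equilibrium value
    ```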

  3. Collisionless stellar hydrodynamics as an efficient alternative to N-body methods

    NASA Astrophysics Data System (ADS)

    Mitchell, Nigel L.; Vorobyov, Eduard I.; Hensler, Gerhard

    2013-01-01

    The dominant constituents of the Universe's matter are believed to be collisionless in nature and thus their modelling in any self-consistent simulation is extremely important. For simulations that deal only with dark matter or stellar systems, the conventional N-body technique is fast, memory efficient and relatively simple to implement. However, when extending simulations to include the effects of gas physics, mesh codes are at a distinct disadvantage compared to Smoothed Particle Hydrodynamics (SPH) codes. Whereas implementing the N-body approach into SPH codes is fairly trivial, the particle-mesh technique used in mesh codes to couple collisionless stars and dark matter to the gas on the mesh has a series of significant scientific and technical limitations. These include spurious entropy generation resulting from discreteness effects, poor load balancing and increased communication overhead which spoil the excellent scaling in massively parallel grid codes. In this paper we propose the use of the collisionless Boltzmann moment equations as a means to model the collisionless material as a fluid on the mesh, implementing it into the massively parallel FLASH Adaptive Mesh Refinement (AMR) code. This approach, which we term `collisionless stellar hydrodynamics', enables us to do away with the particle-mesh approach and, since the parallelization scheme is identical to that used for the hydrodynamics, it preserves the excellent scaling of the FLASH code already demonstrated on peta-flop machines. We find that the classic hydrodynamic equations and the Boltzmann moment equations can be reconciled under specific conditions, allowing us to generate analytic solutions for collisionless systems using conventional test problems. We confirm the validity of our approach using a suite of demanding test problems, including the use of a modified Sod shock test. By deriving the relevant eigenvalues and eigenvectors of the Boltzmann moment equations, we are able to use high order accurate characteristic tracing methods with Riemann solvers to generate numerical solutions which show excellent agreement with our analytic solutions. We conclude by demonstrating the ability of our code to model complex phenomena by simulating the evolution of a two-armed spiral galaxy whose properties agree with those predicted by the swing amplification theory.

  4. On the Representation of Aquifer Compressibility in General Subsurface Flow Codes: How an Alternate Definition of Aquifer Compressibility Matches Results from the Groundwater Flow Equation

    NASA Astrophysics Data System (ADS)

    Birdsell, D.; Karra, S.; Rajaram, H.

    2016-12-01

    The governing equations for subsurface flow codes in deformable porous media are derived from the fluid mass balance equation. One class of these codes, which we call general subsurface flow (GSF) codes, does not explicitly track the motion of the solid porous media but does accept general constitutive relations for porosity, density, and fluid flux. Examples of GSF codes include PFLOTRAN, FEHM, STOMP, and TOUGH2. Meanwhile, analytical and numerical solutions based on the groundwater flow equation have assumed forms for porosity, density, and fluid flux. We review the derivation of the groundwater flow equation, which uses the form of Darcy's equation that accounts for the velocity of fluids with respect to solids and defines the soil matrix compressibility accordingly. We then show how GSF codes have a different governing equation if they use the form of Darcy's equation that is written only in terms of fluid velocity. The difference is seen in the porosity change, which is part of the specific storage term in the groundwater flow equation. We propose an alternative definition of soil matrix compressibility to correct for the untracked solid velocity. Simulation results show significantly less error for our new compressibility definition than the traditional compressibility when compared to analytical solutions from the groundwater literature. For example, the error in one calculation for a pumped sandstone aquifer goes from 940 to <70 Pa when the new compressibility is used. Code users and developers need to be aware of assumptions in the governing equations and constitutive relations in subsurface flow codes, and our newly-proposed compressibility function should be incorporated into GSF codes.
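
    For reference, the standard relations at issue, as given in the groundwater literature, are the groundwater flow equation and the specific storage, whose matrix-compressibility term α is the quantity being redefined:

    ```latex
    % Groundwater flow equation for hydraulic head h and conductivity K:
    S_s \frac{\partial h}{\partial t} = \nabla \cdot \left( K \, \nabla h \right)
    % Specific storage: fluid density \rho, gravity g, matrix compressibility
    % \alpha, porosity n, fluid compressibility \beta:
    S_s = \rho g \, (\alpha + n \beta)
    ```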

  6. Application discussion of source coding standard in voyage data recorder

    NASA Astrophysics Data System (ADS)

    Zong, Yonggang; Zhao, Xiandong

    2018-04-01

This paper analyzes the disadvantages of the audio and video compression coding technology currently used in voyage data recorders and considers improvements in the performance of audio and video acquisition equipment. An approach to improving the audio and video compression coding of the voyage data recorder is proposed, and the feasibility of adopting the new compression coding technology is analyzed from both economic and technical perspectives.

  7. Predictive Capability of the Compressible MRG Equation for an Explosively Driven Particle with Validation

    NASA Astrophysics Data System (ADS)

    Garno, Joshua; Ouellet, Frederick; Koneru, Rahul; Balachandar, Sivaramakrishnan; Rollin, Bertrand

    2017-11-01

An analytic model to describe the hydrodynamic forces on an explosively driven particle is not currently available. The Maxey-Riley-Gatignol (MRG) particle force equation generalized for compressible flows is well-studied in shock-tube applications, and captures the evolution of particle force extracted from controlled shock-tube experiments. In these experiments only the shock-particle interaction was examined, and the effects of the contact line were not investigated. In the present work, the predictive capability of this model is considered for the case where a particle is explosively ejected from a rigid barrel into ambient air. Particle trajectory information extracted from simulations is compared with experimental data. This configuration ensures that both the shock and contact produced by the detonation will influence the motion of the particle. The simulations are carried out using a finite-volume Euler-Lagrange code with the JWL equation of state to handle the explosive products. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.

  8. The role of viscosity in TATB hot spot ignition

    NASA Astrophysics Data System (ADS)

    Fried, Laurence E.; Zepeda-Ruis, Luis; Howard, W. Michael; Najjar, Fady; Reaugh, John E.

    2012-03-01

    The role of dissipative effects, such as viscosity, in the ignition of high explosive pores is investigated using a coupled chemical, thermal, and hydrodynamic model. Chemical reactions are tracked with the Cheetah thermochemical code coupled to the ALE3D hydrodynamic code. We perform molecular dynamics simulations to determine the viscosity of liquid TATB. We also analyze shock wave experiments to obtain an estimate for the shock viscosity of TATB. Using the lower bound liquid-like viscosities, we find that the pore collapse is hydrodynamic in nature. Using the upper bound viscosity from shock wave experiments, we find that the pore collapse is closest to the viscous limit.

  9. The moving mesh code SHADOWFAX

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, B.; De Rijcke, S.

    2016-07-01

    We introduce the moving mesh code SHADOWFAX, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public Licence. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic test problems, which are also part of the public version. We also compare SHADOWFAX with a number of other publicly available codes using different hydrodynamical integration schemes, illustrating the advantages and disadvantages of the moving mesh technique.

  10. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
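
    To make the adaptive choice concrete, here is a toy Python sketch (names and thresholds are hypothetical, and a literal fallback stands in for the paper's syndrome coding) of a low-complexity encoder that summarizes blocks close to the reference by a short hash and sends strongly differing blocks explicitly:

      import hashlib

      def encode_block(block: bytes, ref_block: bytes, threshold: int = 2):
          """Toy encoder: blocks that differ little from the decoder's
          reference are summarized by a short hash (the decoder searches
          candidate variants and checks it); strongly differing blocks
          are sent literally. The actual protocol uses syndrome coding
          for the first branch."""
          n_diff = sum(a != b for a, b in zip(block, ref_block))
          if n_diff <= threshold:
              return ("hash", hashlib.sha1(block).digest()[:4])
          return ("literal", bytes(block))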

  11. Coupled Hydrodynamic and Wave Propagation Modeling for the Source Physics Experiment: Study of Rg Wave Sources for SPE and DAG series.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Delorey, A.; Rougier, E.; Knight, E. E.; Steedman, D. W.; Bradley, C. R.

    2017-12-01

This presentation reports numerical modeling efforts to improve knowledge of the processes that affect seismic wave generation and propagation from underground explosions, with a focus on Rg waves. The numerical model is based on the coupling of hydrodynamic simulation codes (Abaqus, CASH and HOSS) with a 3D full waveform propagation code, SPECFEM3D. Validation datasets are provided by the Source Physics Experiment (SPE), a series of highly instrumented chemical explosions at the Nevada National Security Site with yields from 100 kg to 5000 kg. A first series of explosions in a granite emplacement has just been completed, and a second series in an alluvium emplacement is planned for 2018. The long-term goal of this research is to review and improve existing seismic source models (e.g., Mueller & Murphy, 1971; Denny & Johnson, 1991) using first-principles calculations from the coupled-codes capability. The hydrodynamic codes, Abaqus, CASH and HOSS, model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming, and jointed/weathered granite. A new material model for unconsolidated alluvium materials has been developed and validated against past nuclear explosions, including the 10 kT 1965 Merlin event (Perret, 1971; Perret and Bass, 1975). We use the efficient Spectral Element Method code SPECFEM3D (e.g., Komatitsch, 1998; 2002) and Geologic Framework Models to model the evolution of the wavefield as it propagates across 3D complex structures. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. We will present validation tests and waveforms modeled for several SPE tests, which provide evidence that the damage processes in the vicinity of the explosions create secondary seismic sources. These sources interfere with the original explosion moment and reduce the apparent seismic moment at the origin of Rg waves by up to 20%.

  12. An Implementation Of Elias Delta Code And ElGamal Algorithm In Image Compression And Security

    NASA Astrophysics Data System (ADS)

    Rachmawati, Dian; Andri Budiman, Mohammad; Saffiera, Cut Amalia

    2018-01-01

In data transmission such as transferring an image, confidentiality, integrity, and efficiency of data storage are highly needed. To maintain the confidentiality and integrity of data, one of the techniques used is ElGamal. The strength of this algorithm rests on the difficulty of calculating discrete logarithms in a large prime modulus. ElGamal belongs to the class of asymmetric key algorithms and enlarges the file size; therefore, data compression is required. Elias Delta Code is a compression algorithm that uses the delta code table. The image was first compressed using the Elias Delta Code algorithm, and the result of the compression was then encrypted using the ElGamal algorithm. Primality testing was implemented using the Agrawal-Biswas algorithm. The results showed that the ElGamal method could maintain the confidentiality and integrity of data, with MSE and PSNR values of 0 and infinity, respectively. The Elias Delta Code method achieved a compression ratio and space savings with average values of 62.49% and 37.51%, respectively.
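
    Elias delta coding itself is compact enough to sketch; the following Python functions (illustrative only, not the paper's implementation) encode a positive integer by prefixing its gamma-coded bit length to its trailing bits:

      def elias_delta_encode(n: int) -> str:
          # Valid for n >= 1; returns the code word as a bit string.
          nb = n.bit_length()                        # number of bits in n
          lb = nb.bit_length()                       # number of bits in that length
          gamma = "0" * (lb - 1) + format(nb, "b")   # Elias gamma code of the length
          return gamma + format(n, "b")[1:]          # n without its leading 1-bit

      def elias_delta_decode(bits: str) -> int:
          zeros = len(bits) - len(bits.lstrip("0"))  # leading zeros give the gamma length
          nb = int(bits[zeros:2 * zeros + 1], 2)     # recover the bit length of n
          rest = bits[2 * zeros + 1:2 * zeros + nb]  # remaining nb - 1 bits of n
          return int("1" + rest, 2)                  # restore the implicit leading 1

    For example, elias_delta_encode(10) gives "00100010", which decodes back to 10.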

  13. Comparison of Hydrodynamic Load Predictions Between Engineering Models and Computational Fluid Dynamics for the OC4-DeepCwind Semi-Submersible: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benitz, M. A.; Schmidt, D. P.; Lackner, M. A.

Hydrodynamic loads on the platforms of floating offshore wind turbines are often predicted with computer-aided engineering tools that employ Morison's equation and/or potential-flow theory. This work compares results from one such tool, FAST, NREL's wind turbine computer-aided engineering tool, and the computational fluid dynamics package, OpenFOAM, for the OC4-DeepCwind semi-submersible analyzed in the International Energy Agency Wind Task 30 project. Load predictions from HydroDyn, the offshore hydrodynamics module of FAST, are compared with high-fidelity results from OpenFOAM. HydroDyn uses a combination of Morison's equations and potential flow to predict the hydrodynamic forces on the structure. The implications of the assumptions in HydroDyn are evaluated based on this code-to-code comparison.
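
    For readers unfamiliar with the Morison formulation that HydroDyn combines with potential flow, the textbook per-unit-length load on a fixed cylinder is easy to state in code (a generic illustration with placeholder coefficient values, not HydroDyn's actual implementation):

      import math

      RHO = 1025.0  # sea-water density, kg/m^3

      def morison_force_per_length(u, du_dt, D, Cd=1.0, Cm=2.0):
          """Inline Morison load per unit length of a fixed cylinder of
          diameter D: an inertia term driven by the fluid acceleration
          du_dt plus a quadratic drag term in the fluid velocity u."""
          inertia = RHO * Cm * math.pi * D**2 / 4.0 * du_dt
          drag = 0.5 * RHO * Cd * D * u * abs(u)
          return inertia + drag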

  14. Bit-Wise Arithmetic Coding For Compression Of Data

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    1996-01-01

Bit-wise arithmetic coding is a data-compression scheme intended especially for use with uniformly quantized data from a source with a Gaussian, Laplacian, or similar probability distribution function. Code words are of fixed length, and bits are treated as being independent. The scheme serves as a means of progressive transmission or of overcoming buffer-overflow or rate-constraint limitations that sometimes arise when data compression is used.
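
    The record does not spell out the coder, but the interval-narrowing idea behind any arithmetic coder can be shown in a few lines of Python (a float-precision toy for short sequences, not the fixed-length bit-wise scheme of the record; probs must iterate in the same fixed order in both functions):

      def arith_encode(symbols, probs):
          """Narrow [0, 1) once per symbol; any number in the final
          interval identifies the sequence. probs: symbol -> probability."""
          low, high = 0.0, 1.0
          for s in symbols:
              span = high - low
              c = 0.0
              for sym, p in probs.items():
                  if sym == s:
                      low, high = low + span * c, low + span * (c + p)
                      break
                  c += p
          return (low + high) / 2

      def arith_decode(x, n, probs):
          out = []
          for _ in range(n):
              c = 0.0
              for sym, p in probs.items():
                  if c <= x < c + p:
                      out.append(sym)
                      x = (x - c) / p
                      break
                  c += p
          return out

    With probs = {"a": 0.8, "b": 0.2}, arith_encode("ab", probs) returns approximately 0.72, and arith_decode(0.72, 2, probs) recovers ["a", "b"].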

  15. Telemetry advances in data compression and channel coding

    NASA Technical Reports Server (NTRS)

    Miller, Warner H.; Morakis, James C.; Yeh, Pen-Shu

    1990-01-01

    Addressed in this paper is the dependence of telecommunication channel, forward error correcting coding and source data compression coding on integrated circuit technology. Emphasis is placed on real time high speed Reed Solomon (RS) decoding using full custom VLSI technology. Performance curves of NASA's standard channel coder and a proposed standard lossless data compression coder are presented.

  16. Three-Dimensional Simulations of the Convective Urca Process in Pre-Supernova White Dwarfs

    NASA Astrophysics Data System (ADS)

    Willcox, Donald E.; Townsley, Dean; Zingale, Michael; Calder, Alan

    2017-01-01

A significant source of uncertainty in modeling the progenitor systems of Type Ia supernovae is the dynamics of the convective Urca process in which beta decay and electron capture reactions remove energy from and decrease the buoyancy of carbon-fueled convection in the progenitor white dwarf. The details of the Urca process during this simmering phase have long remained computationally intractable in three-dimensional simulations because of the very low convective velocities and the associated timestep constraints of compressible hydrodynamics methods. We report on recent work simulating the A=23 (Ne/Na) Urca process in convecting white dwarfs in three dimensions using the low-Mach hydrodynamics code MAESTRO. We simulate white dwarf models inspired by one-dimensional stellar evolution calculations at the stage when the outer edge of the convection zone driven by core carbon burning reaches the A=23 Urca shell. We compare our methods and results to those of previous work in one and two dimensions, discussing the implications of three dimensional turbulence. We also comment on the prospect of our results informing one-dimensional stellar evolution calculations and the Type Ia supernovae progenitor problem. This work was supported in part by the Department of Energy under grant DE-FG02-87ER40317.

  17. Physical Interpretation of Mathematically Invariant K(r,P) Type Equations of State for Hydrodynamically Driven Flow

    NASA Astrophysics Data System (ADS)

    Hrbek, George

    2001-06-01

    At SCCM Shock 99, Lie Group Theory was applied to the problem of temperature independent, hydrodynamic shock in a Birch-Murnaghan continuum. (1) Ratios of the group parameters were shown to be linked to the physical parameters specified in the second, third, and fourth order BM-EOS approximations. This effort has subsequently been extended to provide a general formalism for a wide class of mathematical forms (i.e., K(r,P)) of the equation of state. Variations in material expansion and resistance (i.e., counter pressure) are shown to be functions of compression and material variation ahead of the expanding front. Specific examples included the Birch-Murnaghan, Vinet, Brennan-Stacey, Shanker, Tait, Poirier, and Jones-Wilkins-Lee (JWL) forms. (2) With these ratios defined, the next step is to predict the behavior of these K(r,P) type solids. To do this, one must introduce the group ratios into a numerical simulation for the flow and generate the density, pressure, and particle velocity profiles as the shock moves through the material. This will allow the various equations of state, and their respective fitting coefficients, to be compared with experiments, and additionally, allow the empirical coefficients for these EOS forms to be adjusted accordingly. (1) Hrbek, G. M., Invariant Functional Forms For The Second, Third, And Fourth Order Birch-Murnaghan Equation of State For Materials Subject to Hydrodynamic Shock, Proceedings of the 11th American Physical Society Topical Group Meeting on Shock Compression of Condensed Matter (SCCM Shock 99), Snowbird, Utah (2) Hrbek, G. M., Invariant Functional Forms For K(r,P) Type Equations Of State For Hydrodynamically Driven Flows, Submitted to the 12th American Physical Society Topical Group Meeting on Shock Compression of Condensed Matter (SCCM Shock 01), Atlanta, Georgia
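
    For reference, the third-order Birch-Murnaghan isotherm that anchors this family of K(r,P) forms is (standard form, quoted here for orientation):

      P(V) = \frac{3 K_0}{2} \left[ \left( \frac{V_0}{V} \right)^{7/3} - \left( \frac{V_0}{V} \right)^{5/3} \right] \left\{ 1 + \frac{3}{4} \left( K_0' - 4 \right) \left[ \left( \frac{V_0}{V} \right)^{2/3} - 1 \right] \right\},

    with K_0 the bulk modulus, K_0' its pressure derivative, and V_0 the reference volume.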

  18. A new relativistic viscous hydrodynamics code and its application to the Kelvin-Helmholtz instability in high-energy heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Okamoto, Kazuhisa; Nonaka, Chiho

    2017-06-01

We construct a new relativistic viscous hydrodynamics code optimized in the Milne coordinates. We split the conservation equations into an ideal part and a viscous part using the Strang splitting method. In the code, a Riemann solver based on the two-shock approximation is utilized for the ideal part and the Piecewise Exact Solution (PES) method is applied for the viscous part. We check the validity of our numerical calculations by comparing against analytical solutions: the viscous Bjorken flow and the Israel-Stewart theory in the Gubser flow regime. Using the code, we discuss the possible development of the Kelvin-Helmholtz instability in high-energy heavy-ion collisions.
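
    The operator splitting named in the abstract has a simple generic shape; a minimal Python sketch (the step functions are placeholders for the Riemann-solver and PES updates described above) is:

      def strang_step(u, dt, step_ideal, step_viscous):
          """One Strang-split update for du/dt = I(u) + V(u): a half
          step of the ideal part, a full step of the viscous part, then
          another half step of the ideal part; the symmetric ordering
          keeps the scheme second-order accurate in time."""
          u = step_ideal(u, 0.5 * dt)
          u = step_viscous(u, dt)
          return step_ideal(u, 0.5 * dt)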

  19. Detection of the Compressed Primary Stellar Wind in eta Carinae

    NASA Technical Reports Server (NTRS)

    Teodoro, Mairan Macedo; Madura, Thomas I.; Gull, Theodore R.; Corcoran, Michael F.; Hamaguchi, K.

    2014-01-01

    A series of three HST/STIS spectroscopic mappings, spaced approximately one year apart, reveal three partial arcs in [Fe II] and [Ni II] emissions moving outward from eta Carinae. We identify these arcs with the shell-like structures, seen in the 3D hydrodynamical simulations, formed by compression of the primary wind by the secondary wind during periastron passages.

  20. The effect of shear strength on isentropic compression experiments

    NASA Astrophysics Data System (ADS)

    Thomson, Stuart; Howell, Peter; Ockendon, John; Ockendon, Hilary

    2015-06-01

Isentropic compression experiments (ICE) are a novel way of obtaining equation of state information for metals undergoing violent plastic deformation. In a typical experiment, millimetre-thick metal samples are subjected to pressures on the order of 10-100 GPa, while the yield strength of the material can be as low as 0.1 GPa. The analysis of such experiments has so far neglected the effect of shear strength, instead treating the highly plasticised metal as an inviscid compressible fluid. However, making this approximation belies the basic elastic nature of a solid object. A more accurate method should strive to incorporate the small but measurable effects of shear strength. Here we present a one-dimensional mathematical model for elastoplasticity at high stress which allows for both compressibility and the shear strength of the material. In the limit of zero yield stress this model reproduces the hydrodynamic models currently used to analyse ICEs. We will also show, using a systematic asymptotic analysis, that entropy changes are universally negligible in the absence of shocks. Numerical solutions of the governing equations will then be presented for problems relevant to ICEs in order to investigate the effects of shear strength over a model based purely on hydrodynamics.
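
    A schematic of the kind of one-dimensional model the abstract describes (our paraphrase, not the authors' exact equations): in uniaxial strain the longitudinal stress splits into a pressure and a yield-limited deviator,

      \rho \frac{Du}{Dt} = \frac{\partial \sigma_x}{\partial x}, \qquad \sigma_x = -p(\rho, e) + s_x, \qquad |s_x| \le \frac{2}{3} Y,

    so that the inviscid hydrodynamic description currently used for ICE analysis is recovered in the limit of vanishing yield stress Y.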

  1. Wavelet-based compression of pathological images for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

In this paper, we present a performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for use in an Internet-based telemedicine system. We first study how well suited wavelet-based coding is to the compression of pathological images, since these images often contain fine textures that are critical to the diagnosis of potential diseases. We compare wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies were performed in close collaboration with expert pathologists, who conducted the evaluation of the compressed pathological images, and with the communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which wavelet-based coding is adopted for compression to achieve bandwidth-efficient transmission and thereby speed up communication between the remote terminal and the central server of the telemedicine system.

  2. The Role of Viscosity in TATB Hot Spot Ignition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fried, L E; Zepeda-Ruis, L; Howard, W M

    2011-08-02

The role of dissipative effects, such as viscosity, in the ignition of high explosive pores is investigated using a coupled chemical, thermal, and hydrodynamic model. Chemical reactions are tracked with the Cheetah thermochemical code coupled to the ALE3D hydrodynamic code. We perform molecular dynamics simulations to determine the viscosity of liquid TATB. We also analyze shock wave experiments to obtain an estimate for the shock viscosity of TATB. Using the lower bound liquid-like viscosities, we find that the pore collapse is hydrodynamic in nature. Using the upper bound viscosity from shock wave experiments, we find that the pore collapse is closest to the viscous limit.

  3. Preliminary design of turbopumps and related machinery

    NASA Technical Reports Server (NTRS)

    Wislicenus, George F.

    1986-01-01

    Pumps used in large liquid-fuel rocket engines are examined. The term preliminary design denotes the initial, creative phases of design, where the general shape and characteristics of the machine are determined. This compendium is intended to provide the design engineer responsible for these initial phases with a physical understanding and background knowledge of the numerous special fields involved in the design process. Primary attention is directed to the pumping part of the turbopump and hence is concerned with essentially incompressible fluids. However, compressible flow principles are developed. As much as possible, the simplicity and reliability of incompressible flow considerations are retained by treating the mechanics of compressible fluids as a departure from the theory of incompressible fluids. Five areas are discussed: a survey of the field of turbomachinery in dimensionless form; the theoretical principles of the hydrodynamic design of turbomachinery; the hydrodynamic and gas dynamic design of axial flow turbomachinery; the hydrodynamic and gas dynamic design of radial and mixed flow turbomachinery; and some mechanical design considerations of turbomachinery. Theoretical considerations are presented with a relatively elementary mathematical treatment.

  4. Testing a one-dimensional prescription of dynamical shear mixing with a two-dimensional hydrodynamic simulation

    NASA Astrophysics Data System (ADS)

    Edelmann, P. V. F.; Röpke, F. K.; Hirschi, R.; Georgy, C.; Jones, S.

    2017-07-01

Context. The treatment of mixing processes is still one of the major uncertainties in 1D stellar evolution models. This is mostly due to the need to parametrize and approximate aspects of hydrodynamics in hydrostatic codes. In particular, the effect of hydrodynamic instabilities in rotating stars, for example, the dynamical shear instability, evades consistent description. Aims: We intend to study the accuracy of the diffusion approximation to dynamical shear in hydrostatic stellar evolution models by comparing 1D models to a first-principle hydrodynamics simulation starting from the same initial conditions. Methods: We chose an initial model calculated with the stellar evolution code GENEC that is just at the onset of a dynamical shear instability but does not show any other instabilities (e.g., convection). This was mapped to the hydrodynamics code SLH to perform a 2D simulation in the equatorial plane. We compare the resulting profiles in the two codes and compute an effective diffusion coefficient for the hydro simulation. Results: Shear instabilities develop in the 2D simulation in the regions predicted by linear theory to become unstable in the 1D stellar evolution model. Angular velocity and chemical composition are redistributed in the unstable region, thereby creating new unstable regions. After a period of time, the system settles into a symmetric, steady state, which is Richardson stable everywhere in the 2D simulation, whereas the instability remains for longer in the 1D model due to the limitations of the current implementation in the 1D code. A spatially resolved diffusion coefficient is extracted by comparing the initial and final profiles of mean atomic mass. Conclusions: The presented simulation gives a first insight into the hydrodynamics of shear instabilities in a real stellar environment and even allows us to directly extract an effective diffusion coefficient. We see evidence for a critical Richardson number of 0.25, as regions above this threshold remain stable for the course of the simulation. The movie of the simulation is available at http://www.aanda.org
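
    The stability criterion invoked here is the classical Richardson condition (standard linear theory, stated for orientation):

      Ri = \frac{N^2}{(\mathrm{d}U/\mathrm{d}z)^2} < Ri_c \approx \frac{1}{4}

    for dynamical shear instability, where N is the Brunt-Vaisala frequency and dU/dz the velocity shear; the finding that regions with Ri above this threshold stay stable supports the critical value of 0.25.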

  5. Molecular Dynamics implementation of BN2D or 'Mercedes Benz' water model

    NASA Astrophysics Data System (ADS)

    Scukins, Arturs; Bardik, Vitaliy; Pavlov, Evgen; Nerukh, Dmitry

    2015-05-01

The two-dimensional 'Mercedes Benz' (MB) or BN2D water model (Naim, 1971) is implemented in Molecular Dynamics. It is known that the MB model can capture abnormal properties of real water (high heat capacity, minima of pressure and isothermal compressibility, negative thermal expansion coefficient) (Silverstein et al., 1998). In this work, formulas for calculating the thermodynamic, structural and dynamic properties of the model in the microcanonical (NVE) and isothermal-isobaric (NPT) ensembles are derived from Molecular Dynamics simulation and verified against known Monte Carlo results. The convergence of the thermodynamic properties and the system's numerical stability are investigated. The results qualitatively reproduce the peculiarities of real water, making the model a visually convenient tool that also requires less computational resources, thus allowing simulations of large (hydrodynamic-scale) molecular systems. We provide the open source code written in C/C++ for the BN2D water model implementation using Molecular Dynamics.

  6. Spatiotemporal and spectral characteristics of X-ray radiation emitted by the Z-pinch during the current implosion of quasispherical multiwire arrays

    NASA Astrophysics Data System (ADS)

    Gritsuk, A. N.

    2017-12-01

For the first time, a quasi-spherical current implosion has been experimentally realized on a multimegaampere facility with a peak current of up to 4 MA, and a soft X-ray source has been created with a radiation power density on its surface of up to 3 TW/cm2. An increase in the energy density at the centre of the source of soft X-ray radiation (SXR) was experimentally observed upon compression of quasi-spherical arrays with linear-mass profiling. In this case, the average power density on the surface of the SXR source is three times higher than for implosions of cylindrical arrays of the same mass at close values of the discharge current. The experimental data are compared with the results of modelling the current implosion of multi-wire arrays performed with the help of a three-dimensional radiation-magnetohydrodynamic code.

  7. Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lomov, I; Pember, R; Greenough, J

    2005-10-18

We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single-grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single-grid algorithm uses a second-order Godunov scheme with an approximate single-fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified by the fact that highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion is solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.
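
    The recursive integration scheme described above can be sketched in a few lines of Python (the level object and its methods are hypothetical stand-ins for the Berger-Oliger machinery):

      def advance(level, dt):
          """Advance one AMR level by dt: step this grid, recursively
          take `ratio` substeps on each finer level so it reaches the
          same time, then synchronize coarse and fine data to remove
          conservation errors at the interfaces."""
          level.step(dt)
          for child in level.children:
              for _ in range(level.ratio):
                  advance(child, dt / level.ratio)
          level.synchronize()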

  8. Kinetic Equation for a Soliton Gas and Its Hydrodynamic Reductions

    NASA Astrophysics Data System (ADS)

    El, G. A.; Kamchatnov, A. M.; Pavlov, M. V.; Zykov, S. A.

    2011-04-01

We introduce and study a new class of kinetic equations, which arise in the description of nonequilibrium macroscopic dynamics of soliton gases with elastic collisions between solitons. These equations represent nonlinear integro-differential systems and have a novel structure, which we investigate by studying in detail the class of N-component 'cold-gas' hydrodynamic reductions. We prove that these reductions represent integrable linearly degenerate hydrodynamic type systems for arbitrary N, which is strong evidence in favour of the integrability of the full kinetic equation. We derive compact explicit representations for the Riemann invariants and characteristic velocities of the hydrodynamic reductions in terms of the 'cold-gas' component densities and construct a number of exact solutions having special properties (quasiperiodic, self-similar). Hydrodynamic symmetries are then derived and investigated. The obtained results shed light on the structure of a continuum limit for a large class of integrable systems of hydrodynamic type and are also relevant to the description of turbulent motion in conservative compressible flows.

  9. Skew resisting hydrodynamic seal

    DOEpatents

    Conroy, William T.; Dietle, Lannie L.; Gobeli, Jeffrey D.; Kalsi, Manmohan S.

    2001-01-01

A novel hydrodynamically lubricated compression-type rotary seal that is suitable for lubricant retention and environmental exclusion. In particular, the seal geometry ensures constraint of a hydrodynamic seal in a manner preventing skew-induced wear and provides adequate room within the seal gland to accommodate thermal expansion. The seal accommodates large as-manufactured variations in the coefficient of thermal expansion of the sealing material, provides a relatively stiff integral spring effect to minimize pressure-induced shuttling of the seal within the gland, and also maintains interfacial contact pressure within the dynamic sealing interface in an optimum range for efficient hydrodynamic lubrication and environment exclusion. The seal geometry also provides for complete support about the circumference of the seal to receive environmental pressure, as compared to the interrupted seal support set forth in U.S. Pat. Nos. 5,873,576 and 6,036,192, and provides a hydrodynamic seal which is suitable for use with non-Newtonian lubricants.

  10. Verification of the Hydrodynamic and Sediment Transport Hybrid Modeling System for Cumberland Sound and Kings Bay Navigation Channel, Georgia

    DTIC Science & Technology

    1989-07-01

Technical Report HL-89-14: Verification of the Hydrodynamic and Sediment Transport Hybrid Modeling System for Cumberland Sound and Kings Bay Navigation Channel, Georgia (author: Granat). Hydrodynamic results from RMA-2V were used in the numerical sediment transport code STUDH in modeling the interaction of the flow transport and ...

  11. Detection of the Compressed Primary Stellar Wind in eta Carinae*

    NASA Technical Reports Server (NTRS)

    Teodoro, M.; Madura, T. I.; Gull, T. R.; Corcoran, M. F.; Hamaguchi, K.

    2013-01-01

A series of three Hubble Space Telescope Space Telescope Imaging Spectrograph (HST/STIS) spectroscopic mappings, spaced approximately one year apart, reveal three partial arcs in [Fe II] and [Ni II] emissions moving outward from η Carinae. We identify these arcs with the shell-like structures, seen in the 3D hydrodynamical simulations, formed by compression of the primary wind by the secondary wind during periastron passages.

  12. Parallel processing a three-dimensional free-lagrange code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.A.; Trease, H.E.

    1989-01-01

A three-dimensional, time-dependent free-Lagrange hydrodynamics code has been multitasked and autotasked on a CRAY X-MP/416. The multitasking was done by using the Los Alamos Multitasking Control Library, which is a superset of the CRAY multitasking library. Autotasking is done by using constructs which are only comment cards if the source code is not run through a preprocessor. The three-dimensional algorithm has presented a number of problems that simpler algorithms, such as those for one-dimensional hydrodynamics, did not exhibit. Problems in converting the serial code, originally written for a CRAY-1, to a multitasking code are discussed. Autotasking of a rewritten version of the code is discussed. Timing results for subroutines and hot spots in the serial code are presented and suggestions for additional tools and debugging aids are given. Theoretical speedup results obtained from Amdahl's law and actual speedup results obtained on a dedicated machine are presented. Suggestions for designing large parallel codes are given.
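
    Since the abstract leans on Amdahl's law for its theoretical speedups, the relation is worth stating; a short Python version:

      def amdahl_speedup(parallel_fraction: float, n_cpus: int) -> float:
          """Amdahl's law: S = 1 / ((1 - p) + p / N) for a code whose
          parallelizable fraction is p running on N processors."""
          p = parallel_fraction
          return 1.0 / ((1.0 - p) + p / n_cpus)

      # e.g. a 95%-parallel code on the 4 CPUs of an X-MP/416:
      # amdahl_speedup(0.95, 4) -> about 3.48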

  13. Parallel processing a real code: A case history

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.A.; Trease, H.E.

    1988-01-01

A three-dimensional, time-dependent Free-Lagrange hydrodynamics code has been multitasked and autotasked on a Cray X-MP/416. The multitasking was done by using the Los Alamos Multitasking Control Library, which is a superset of the Cray multitasking library. Autotasking is done by using constructs which are only comment cards if the source code is not run through a preprocessor. The 3-D algorithm has presented a number of problems that simpler algorithms, such as 1-D hydrodynamics, did not exhibit. Problems in converting the serial code, originally written for a Cray 1, to a multitasking code are discussed, and autotasking of a rewritten version of the code is discussed. Timing results for subroutines and hot spots in the serial code are presented and suggestions for additional tools and debugging aids are given. Theoretical speedup results obtained from Amdahl's law and actual speedup results obtained on a dedicated machine are presented. Suggestions for designing large parallel codes are given. 8 refs., 13 figs.

  14. GANDALF - Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids

    NASA Astrophysics Data System (ADS)

    Hubber, D. A.; Rosotti, G. P.; Booth, R. A.

    2018-01-01

GANDALF is a new hydrodynamics and N-body dynamics code designed for investigating planet formation, star formation and star cluster problems. GANDALF is written in C++, parallelized with both OpenMP and MPI and contains a Python library for analysis and visualization. The code has been written with a fully object-oriented approach to easily allow user-defined implementations of physics modules or other algorithms. The code currently contains implementations of smoothed particle hydrodynamics, meshless finite-volume and collisional N-body schemes, but can easily be adapted to include additional particle schemes. We present in this paper the details of its implementation, results from the test suite, serial and parallel performance results and discuss the planned future development. The code is freely available as an open source project on the code-hosting website GitHub at https://github.com/gandalfcode/gandalf and is available under the GPLv2 license.
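
    As a reminder of what the smoothed-particle-hydrodynamics option in such a code computes, here is a deliberately naive O(N^2) density summation with the standard 3D cubic-spline kernel (illustrative only; production codes, GANDALF included, use tree-based neighbour search):

      import numpy as np

      def w_cubic(r, h):
          """Cubic-spline SPH kernel in 3D, normalization 1/(pi h^3)."""
          q = r / h
          w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
              np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
          return w / (np.pi * h**3)

      def sph_density(pos, mass, h):
          """rho_i = sum_j m_j W(|r_i - r_j|, h) over all particles,
          including the self-contribution at r = 0."""
          diff = pos[:, None, :] - pos[None, :, :]
          r = np.linalg.norm(diff, axis=-1)
          return (mass[None, :] * w_cubic(r, h)).sum(axis=1)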

  15. High-fidelity plasma codes for burn physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooley, James; Graziani, Frank; Marinak, Marty

Accurate predictions of equation of state (EOS) and of ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged-particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes and their future development, and the potential impact they could have in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  16. WEC3: Wave Energy Converter Code Comparison Project: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combourieu, Adrien; Lawson, Michael; Babarit, Aurelien

This paper describes the recently launched Wave Energy Converter Code Comparison (WEC3) project and presents preliminary results from this effort. The objectives of WEC3 are to verify and validate numerical modelling tools that have been developed specifically to simulate wave energy conversion devices and to inform the upcoming IEA OES Annex VI Ocean Energy Modelling Verification and Validation project. WEC3 is divided into two phases. Phase I consists of a code-to-code verification and Phase II entails code-to-experiment validation. WEC3 focuses on mid-fidelity codes that simulate WECs using time-domain multibody dynamics methods to model device motions and hydrodynamic coefficients to model hydrodynamic forces. Consequently, high-fidelity numerical modelling tools, such as Navier-Stokes computational fluid dynamics simulation, and simple frequency-domain modelling tools were not included in the WEC3 project.

  17. A new relativistic viscous hydrodynamics code and its application to the Kelvin–Helmholtz instability in high-energy heavy-ion collisions

    DOE PAGES

    Okamoto, Kazuhisa; Nonaka, Chiho

    2017-06-09

Here, we construct a new relativistic viscous hydrodynamics code optimized in the Milne coordinates. We split the conservation equations into an ideal part and a viscous part using the Strang splitting method. In the code, a Riemann solver based on the two-shock approximation is utilized for the ideal part and the Piecewise Exact Solution (PES) method is applied for the viscous part. Furthermore, we check the validity of our numerical calculations by comparing against analytical solutions: the viscous Bjorken flow and the Israel-Stewart theory in the Gubser flow regime. Using the code, we discuss the possible development of the Kelvin-Helmholtz instability in high-energy heavy-ion collisions.

  18. Yield degradation in inertial-confinement-fusion implosions due to shock-driven kinetic fuel-species stratification and viscous heating

    DOE PAGES

    Taitano, William T.; Simakov, Andrei N.; Chacon, Luis; ...

    2018-04-09

Anomalous thermonuclear yield degradation (i.e., that not describable by single-fluid radiation hydrodynamics) in Inertial Confinement Fusion (ICF) implosions is ubiquitously observed in both Omega and National Ignition experiments. Multiple experimental and theoretical studies have been carried out to investigate the origin of such a degradation. Relative concentration changes of fuel-ion species, as well as kinetically enhanced viscous heating, have been among possible explanations proposed for certain classes of ICF experiments. In this study, we investigate the role of such kinetic plasma effects in detail. To this end, we use the iFP code to perform multi-species ion Vlasov-Fokker-Planck simulations of ICF capsule implosions with the fuel comprising various hydrodynamically equivalent mixtures of deuterium (D) and helium-3 (3He), as in the original Rygg experiments. We employ the same computational setup as in O. Larroche, which was the first to simulate the experiments kinetically. However, unlike the Larroche study, and in partial agreement with experimental data, we find a systematic yield degradation in multi-species simulations versus averaged-ion simulations when the D-fuel fraction is decreased. This yield degradation originates in the fuel-ion species stratification induced by plasma shocks, which imprints the imploding system and results in the relocation of the D ions from the core of the capsule to its periphery, thereby reducing the yield relative to a non-separable averaged-ion case. By comparing yields from the averaged-ion kinetic simulations and from the hydrodynamic scaling, we also observe yield variations associated with ion kinetic effects other than fuel-ion stratification, such as ion viscous heating, which is typically neglected in hydrodynamic implosions' simulations. Since our kinetic simulations are driven by hydrodynamic boundary conditions at the fuel-ablator interface, they cannot capture the effects of ion viscosity on the capsule compression, or effects associated with the interface, which are expected to be important. As a result, studies of such effects are left for future work.

  19. Yield degradation in inertial-confinement-fusion implosions due to shock-driven kinetic fuel-species stratification and viscous heating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taitano, William T.; Simakov, Andrei N.; Chacon, Luis

Anomalous thermonuclear yield degradation (i.e., that not describable by single-fluid radiation hydrodynamics) in Inertial Confinement Fusion (ICF) implosions is ubiquitously observed in both Omega and National Ignition experiments. Multiple experimental and theoretical studies have been carried out to investigate the origin of such a degradation. Relative concentration changes of fuel-ion species, as well as kinetically enhanced viscous heating, have been among possible explanations proposed for certain classes of ICF experiments. In this study, we investigate the role of such kinetic plasma effects in detail. To this end, we use the iFP code to perform multi-species ion Vlasov-Fokker-Planck simulations of ICF capsule implosions with the fuel comprising various hydrodynamically equivalent mixtures of deuterium (D) and helium-3 (3He), as in the original Rygg experiments. We employ the same computational setup as in O. Larroche, which was the first to simulate the experiments kinetically. However, unlike the Larroche study, and in partial agreement with experimental data, we find a systematic yield degradation in multi-species simulations versus averaged-ion simulations when the D-fuel fraction is decreased. This yield degradation originates in the fuel-ion species stratification induced by plasma shocks, which imprints the imploding system and results in the relocation of the D ions from the core of the capsule to its periphery, thereby reducing the yield relative to a non-separable averaged-ion case. By comparing yields from the averaged-ion kinetic simulations and from the hydrodynamic scaling, we also observe yield variations associated with ion kinetic effects other than fuel-ion stratification, such as ion viscous heating, which is typically neglected in hydrodynamic implosions' simulations. Since our kinetic simulations are driven by hydrodynamic boundary conditions at the fuel-ablator interface, they cannot capture the effects of ion viscosity on the capsule compression, or effects associated with the interface, which are expected to be important. As a result, studies of such effects are left for future work.

  20. Yield degradation in inertial-confinement-fusion implosions due to shock-driven kinetic fuel-species stratification and viscous heating

    NASA Astrophysics Data System (ADS)

    Taitano, W. T.; Simakov, A. N.; Chacón, L.; Keenan, B.

    2018-05-01

    Anomalous thermonuclear yield degradation (i.e., that not describable by single-fluid radiation hydrodynamics) in Inertial Confinement Fusion (ICF) implosions is ubiquitously observed in both Omega and National Ignition experiments. Multiple experimental and theoretical studies have been carried out to investigate the origin of such a degradation. Relative concentration changes of fuel-ion species, as well as kinetically enhanced viscous heating, have been among possible explanations proposed for certain classes of ICF experiments. In this study, we investigate the role of such kinetic plasma effects in detail. To this end, we use the iFP code to perform multi-species ion Vlasov-Fokker-Planck simulations of ICF capsule implosions with the fuel comprising various hydrodynamically equivalent mixtures of deuterium (D) and helium-3 (3He), as in the original Rygg experiments [J. R. Rygg et al., Phys. Plasmas 13, 052702 (2006)]. We employ the same computational setup as in O. Larroche [Phys. Plasmas 19, 122706 (2012)], which was the first to simulate the experiments kinetically. However, unlike the Larroche study, and in partial agreement with experimental data, we find a systematic yield degradation in multi-species simulations versus averaged-ion simulations when the D-fuel fraction is decreased. This yield degradation originates in the fuel-ion species stratification induced by plasma shocks, which imprints the imploding system and results in the relocation of the D ions from the core of the capsule to its periphery, thereby reducing the yield relative to a non-separable averaged-ion case. By comparing yields from the averaged-ion kinetic simulations and from the hydrodynamic scaling, we also observe yield variations associated with ion kinetic effects other than fuel-ion stratification, such as ion viscous heating, which is typically neglected in hydrodynamic implosions' simulations. Since our kinetic simulations are driven by hydrodynamic boundary conditions at the fuel-ablator interface, they cannot capture the effects of ion viscosity on the capsule compression, or effects associated with the interface, which are expected to be important. Studies of such effects are left for future work.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlanderer, Stefan C., E-mail: stefan.schlanderer@unimelb.edu.au; Weymouth, Gabriel D., E-mail: G.D.Weymouth@soton.ac.uk; Sandberg, Richard D., E-mail: richard.sandberg@unimelb.edu.au

This paper introduces a virtual boundary method for compressible viscous fluid flow that is capable of accurately representing moving bodies in flow and aeroacoustic simulations. The method is the compressible extension of the boundary data immersion method (BDIM; Maertens & Weymouth, 2015). The BDIM equations for the compressible Navier-Stokes equations are derived, and the accuracy of the method for the hydrodynamic representation of solid bodies is demonstrated with challenging test cases, including a fully turbulent boundary layer flow and a supersonic instability wave. In addition, we show that the compressible BDIM is able to accurately represent noise radiation from moving bodies and flow-induced noise generation without any penalty in the allowable time step.

  2. Particle Hydrodynamics with Material Strength for Multi-Layer Orbital Debris Shield Design

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    1999-01-01

    Three dimensional simulation of oblique hypervelocity impact on orbital debris shielding places extreme demands on computer resources. Research to date has shown that particle models provide the most accurate and efficient means for computer simulation of shield design problems. In order to employ a particle based modeling approach to the wall plate impact portion of the shield design problem, it is essential that particle codes be augmented to represent strength effects. This report describes augmentation of a Lagrangian particle hydrodynamics code developed by the principal investigator, to include strength effects, allowing for the entire shield impact problem to be represented using a single computer code.

  3. BALANCING THE LOAD: A VORONOI BASED SCHEME FOR PARALLEL COMPUTATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinberg, Elad; Yalinewich, Almog; Sari, Re'em

    2015-01-01

One of the key issues when running a simulation on multiple CPUs is maintaining a proper load balance throughout the run and minimizing communications between CPUs. We propose a novel method of utilizing a Voronoi diagram to achieve a nearly perfect load balance without the need of any global redistributions of data. As a show case, we implement our method in RICH, a two-dimensional moving mesh hydrodynamical code, but it can be extended trivially to other codes in two or three dimensions. Our tests show that this method is indeed efficient and can be used in a large variety of existing hydrodynamical codes.

  4. Visually lossless compression of digital hologram sequences

    NASA Astrophysics Data System (ADS)

    Darakis, Emmanouil; Kowiel, Marcin; Näsänen, Risto; Naughton, Thomas J.

    2010-01-01

Digital hologram sequences have great potential for the recording of 3D scenes of moving macroscopic objects as their numerical reconstruction can yield a range of perspective views of the scene. Digital holograms inherently have large information content and lossless coding of holographic data is rather inefficient due to the speckled nature of the interference fringes they contain. Lossy coding of still holograms and hologram sequences has shown promising results. By definition, lossy compression introduces errors in the reconstruction. In all of the previous studies, numerical metrics were used to measure the compression error and, through it, the coding quality. Digital hologram reconstructions are highly speckled and the speckle pattern is very sensitive to data changes. Hence, numerical quality metrics can be misleading. For example, for low compression ratios, a numerically significant coding error can have visually negligible effects. Yet, in several cases, it is of high interest to know how much lossy compression can be achieved while maintaining the reconstruction quality at visually lossless levels. Using an experimental threshold estimation method, the staircase algorithm, we determined the highest compression ratio that was not perceptible to human observers for objects compressed with the Dirac and MPEG-4 compression methods. This level of compression can be regarded as the point below which compression is perceptually lossless although physically the compression is lossy. It was found that 4- to 7.5-fold compression can be obtained with the above methods without any perceptible change in the appearance of the video sequences.

  5. Coding For Compression Of Low-Entropy Data

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1994-01-01

An improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from a low-information-content source. The method of coding is implemented in relatively simple, high-speed arithmetic and logic circuits. It also increases coding efficiency beyond that of the established Huffman coding method in that the average number of bits per code symbol can be less than 1, which is the lower bound for a Huffman code.

  6. Simulations of electron transport and ignition for direct-drive fast-ignition targets

    NASA Astrophysics Data System (ADS)

    Solodov, A. A.; Anderson, K. S.; Betti, R.; Gotcheva, V.; Myatt, J.; Delettrez, J. A.; Skupsky, S.; Theobald, W.; Stoeckl, C.

    2008-11-01

    The performance of high-gain, fast-ignition fusion targets is investigated using one-dimensional hydrodynamic simulations of implosion and two-dimensional (2D) hybrid fluid-particle simulations of hot-electron transport, ignition, and burn. The 2D/3D hybrid-particle-in-cell code LSP [D. R. Welch et al., Nucl. Instrum. Methods Phys. Res. A 464, 134 (2001)] and the 2D fluid code DRACO [P. B. Radha et al., Phys. Plasmas 12, 056307 (2005)] are integrated to simulate the hot-electron transport and heating for direct-drive fast-ignition targets. LSP simulates the transport of hot electrons from the place where they are generated to the dense fuel core where their energy is absorbed. DRACO includes the physics required to simulate compression, ignition, and burn of fast-ignition targets. The self-generated resistive magnetic field is found to collimate the hot-electron beam, increase the coupling efficiency of hot electrons with the target, and reduce the minimum energy required for ignition. Resistive filamentation of the hot-electron beam is also observed. The minimum energy required for ignition is found for hot electrons with realistic angular spread and Maxwellian energy-distribution function.

  7. Numerical Study of High-Speed Droplet Impact on Surfaces and its Physical Cleaning Effects

    NASA Astrophysics Data System (ADS)

    Kondo, Tomoki; Ando, Keita

    2015-11-01

Spurred by the demand for cleaning techniques of low environmental impact, physical cleaning that does not rely on any chemicals is increasingly favored. One of the promising candidates is based on water jets that often undergo fission into droplet fragments and collide with target surfaces to which contaminant particles (often micron-sized or even smaller) stick. The hydrodynamic force (e.g., shearing and lifting) arising from the droplet impact will play a role in removing the particles, but its detailed mechanism is still unknown. To explore the role of high-speed droplet impact in physical cleaning, we solve the compressible Navier-Stokes equations with a finite volume method that is designed to capture both shocks and material interfaces in an accurate and robust manner. The water hammer and shear flow accompanying high-speed droplet impact at a rigid wall are simulated to evaluate the lifting force and rotating torque, which are relevant to the application of particle removal. For the simulation, we use the numerical code recently developed by the Computational Flow Group led by Tim Colonius at Caltech. The first author thanks Jomela Meng for her help in handling the code during his stay at Caltech.

  8. Space communication system for compressed data with a concatenated Reed-Solomon-Viterbi coding channel

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E. E. (Inventor)

    1976-01-01

    A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.

  9. Three-dimensional integral imaging displays using a quick-response encoded elemental image array: an overview

    NASA Astrophysics Data System (ADS)

    Markman, A.; Javidi, B.

    2016-06-01

Quick-response (QR) codes are barcodes that can store information such as numeric data and hyperlinks. The QR code can be scanned using a QR code reader, such as those built into smartphone devices, revealing the information stored in the code. Moreover, the QR code is robust to noise, rotation, and illumination when scanning due to the error correction built into the QR code design. Integral imaging is an imaging technique used to generate a three-dimensional (3D) scene by combining the information from two-dimensional (2D) elemental images (EIs), each with a different perspective of a scene. Transferring these 2D images in a secure manner can be difficult. In this work, we overview two methods to store and encrypt EIs in multiple QR codes. The first method uses run-length encoding with Huffman coding and the double-random-phase encryption (DRPE) to compress and encrypt an EI. This information is then stored in a QR code. An alternative compression scheme is to perform photon-counting on the EI prior to compression. Photon-counting is a non-linear transformation of data that creates redundant information, thus improving image compression. The compressed data is encrypted using the DRPE. Once information is stored in the QR codes, it is scanned using a smartphone device. The scanned information is decompressed and decrypted and an EI is recovered. Once all EIs have been recovered, a 3D optical reconstruction is generated.
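
    The run-length stage of the first method is simple enough to show directly (a generic sketch, not the authors' code):

      def rle_encode(data: bytes):
          """Collapse runs of equal bytes into (value, count) pairs; in
          the pipeline above the pairs would then be Huffman-coded and
          DRPE-encrypted before being stored in QR codes."""
          pairs, i = [], 0
          while i < len(data):
              j = i
              while j < len(data) and data[j] == data[i]:
                  j += 1
              pairs.append((data[i], j - i))
              i = j
          return pairs

      def rle_decode(pairs) -> bytes:
          return b"".join(bytes([v]) * count for v, count in pairs)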

  10. Code Verification of the HIGRAD Computational Fluid Dynamics Solver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.

    2012-05-04

    The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization, and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems and the somewhat limited verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
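
    The convergence-rate estimate used in such studies reduces to comparing error norms on successively refined meshes; a short sketch with made-up error values:

        import numpy as np

        # Observed order of accuracy from errors on meshes with spacings
        # h and h/r (refinement ratio r); the error values are hypothetical.
        def observed_order(e_coarse, e_fine, r=2.0):
            return np.log(e_coarse / e_fine) / np.log(r)

        errors = [1.6e-2, 4.1e-3, 1.0e-3]   # hypothetical L2 errors, r = 2
        orders = [observed_order(errors[i], errors[i + 1]) for i in range(2)]
        print(orders)   # values near 2.0 indicate second-order convergence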

  11. Correlation estimation and performance optimization for distributed image compression

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Cao, Lei; Cheng, Hui

    2006-01-01

    Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit-plane extraction. In this work, we find that the Gray code, which has been extensively used in digital modulation, significantly improves the correlation between the source data and its side information. Theoretically, we analyze the behavior of the Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over existing methods for distributed image compression.
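
    The Gray mapping itself is a one-line bit manipulation; a sketch of the forward and inverse maps (the rate-allocation machinery described above is separate):

        def binary_to_gray(n: int) -> int:
            # adjacent integers differ in exactly one bit of their Gray codes,
            # which is what improves bit-plane correlation with side information
            return n ^ (n >> 1)

        def gray_to_binary(g: int) -> int:
            n = 0
            while g:
                n ^= g
                g >>= 1
            return n

        assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(1024))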

  12. Code Team Training: Demonstrating Adherence to AHA Guidelines During Pediatric Code Blue Activations.

    PubMed

    Stewart, Claire; Shoemaker, Jamie; Keller-Smith, Rachel; Edmunds, Katherine; Davis, Andrew; Tegtmeyer, Ken

    2017-10-16

    Pediatric code blue activations are infrequent events with a high mortality rate despite the best effort of code teams. The best method for training these code teams is debatable; however, it is clear that training is needed to assure adherence to American Heart Association (AHA) Resuscitation Guidelines and to prevent the decay that invariably occurs after Pediatric Advanced Life Support training. The objectives of this project were to train a multidisciplinary, multidepartmental code team and to measure this team's adherence to AHA guidelines during code simulation. Multidisciplinary code team training sessions were held using high-fidelity, in situ simulation. Sessions were held several times per month. Each session was filmed and reviewed for adherence to 5 AHA guidelines: chest compression rate, ventilation rate, chest compression fraction, use of a backboard, and use of a team leader. After the first study period, modifications were made to the code team including implementation of just-in-time training and alteration of the compression team. Thirty-eight sessions were completed, with 31 eligible for video analysis. During the first study period, 1 session adhered to all AHA guidelines. During the second study period, after alteration of the code team and implementation of just-in-time training, no sessions adhered to all AHA guidelines; however, there was an improvement in percentage of sessions adhering to ventilation rate and chest compression rate and an improvement in median ventilation rate. We present a method for training a large code team drawn from multiple hospital departments and a method of assessing code team performance. Despite subjective improvement in code team positioning, communication, and role completion and some improvement in ventilation rate and chest compression rate, we failed to consistently demonstrate improvement in adherence to all guidelines.

  13. Modeling Laser-Driven Laboratory Astrophysics Experiments Using the CRASH Code

    NASA Astrophysics Data System (ADS)

    Grosskopf, Michael; Keiter, P.; Kuranz, C. C.; Malamud, G.; Trantham, M.; Drake, R.

    2013-06-01

    Laser-driven, laboratory astrophysics experiments can provide important insight into the physical processes relevant to astrophysical systems. The radiation hydrodynamics code developed by the Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan has been used to model experimental designs for high-energy-density laboratory astrophysics campaigns on OMEGA and other high-energy laser facilities. This code is an Eulerian, block-adaptive AMR hydrodynamics code with implicit multigroup radiation transport and electron heat conduction. The CRASH model has been used in many applications, including radiative shocks and Kelvin-Helmholtz and Rayleigh-Taylor experiments on the OMEGA laser, as well as laser-driven ablative plumes in experiments by the Astrophysical Collisionless Shocks Experiments with Lasers (ACSEL) collaboration. We report a series of results with the CRASH code in support of design work for upcoming high-energy-density physics experiments, as well as comparisons between existing experimental data and simulation results. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  14. Application of grammar-based codes for lossless compression of digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah

    2006-01-01

    A newly developed grammar-based lossless source coding theory and its implementation were proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and the limited number of single-character grammar G variables. For the first issue, we discovered a feature that simplifies the matching-subsequence search in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed, and the processing time of the grammar code can be significantly reduced. For the second issue, we propose using double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. Using the proposed methods, we show that the grammar code outperforms three other schemes, Lempel-Ziv-Welch (LZW), arithmetic, and Huffman coding, on compression ratio, and has error-tolerance capabilities similar to LZW coding under similar circumstances.
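
    The flavor of a grammar transform can be sketched with a Re-Pair-style pass that repeatedly replaces the most frequent adjacent pair by a new variable; this illustrates the idea only, is not the Yang-Kieffer algorithm itself, and omits the arithmetic-coding stage:

        from collections import Counter

        def grammar_transform(seq, min_count=2):
            # variables are integers above the byte range; each rule rewrites
            # one variable as the pair of symbols it replaced
            rules, next_var = {}, 256
            seq = list(seq)
            while len(seq) >= 2:
                (a, b), count = Counter(zip(seq, seq[1:])).most_common(1)[0]
                if count < min_count:
                    break
                rules[next_var] = (a, b)
                out, i = [], 0
                while i < len(seq):
                    if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                        out.append(next_var)
                        i += 2
                    else:
                        out.append(seq[i])
                        i += 1
                seq, next_var = out, next_var + 1
            return seq, rules

        seq, rules = grammar_transform(b"abcabcabcabc")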

  15. Modeling Hemispheric Detonation Experiments in 2-Dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Howard, W M; Fried, L E; Vitello, P A

    2006-06-22

    Experiments have been performed with LX-17 (92.5% TATB and 7.5% Kel-F 800 binder) to study scaling of detonation waves using dimensional scaling in a hemispherical divergent geometry. We model these experiments using an arbitrary Lagrange-Eulerian (ALE3D) hydrodynamics code, with reactive flow models based on the thermo-chemical code Cheetah. Cheetah provides a pressure-dependent kinetic rate law, along with an equation of state based on exponential-6 fluid potentials for individual detonation product species, calibrated to high pressures (~ a few Mbar) and high temperatures (20,000 K). The parameters for these potentials are fit to a wide variety of experimental data, including shock, compression, and sound speed data. For the un-reacted high explosive equation of state we use a modified Murnaghan form. We model the detonator (including the flyer plate) and initiation system in detail. The detonator is composed of LX-16, for which we use a program burn model. Steinberg-Guinan models are used for the metal components of the detonator. The booster and high explosive are LX-10 and LX-17, respectively. For both the LX-10 and LX-17, we use a pressure-dependent rate law, coupled with a chemical equilibrium equation of state based on Cheetah. For LX-17, the kinetic model includes carbon clustering on the nanometer size scale.
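
    For reference, the basic Murnaghan form mentioned for the unreacted explosive reduces to a one-line pressure-density relation; the constants below are placeholders, not calibrated LX-17 values:

        import numpy as np

        # Murnaghan cold-compression curve,
        # P(rho) = (K0 / K0p) * ((rho / rho0)**K0p - 1)
        def murnaghan_pressure(rho, rho0=1.9, K0=12.0e9, K0p=7.0):
            return (K0 / K0p) * ((rho / rho0) ** K0p - 1.0)

        rho = np.linspace(1.9, 3.0, 5)
        print(murnaghan_pressure(rho))   # Pa, increasing monotonically with rho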

  16. Numerical Simulations of Laser-Driven Microflyer Plates

    NASA Astrophysics Data System (ADS)

    Colvin, Jeffrey D.; Frank, Alan M.; Lee, Ronald S.; Remington, Bruce A.

    2000-10-01

    Experiments conducted in the US and France have accelerated few-micron-thick foils of aluminum to velocities of 3 - 5 km/s using 25 - 50 J/cm^2 of 1-μm laser light (1,2). These microflyer plates are not too dissimilar in size and velocity from interplanetary dust particles (3). We are performing numerical simulations of these experiments with the 2-D radiation-hydrodynamics code LASNEX (4), incorporating a model for low-fluence electromagnetic wave reflection and absorption in metals, with the objective of determining the physical processes important to optimizing the flyer design. We will discuss our preliminary findings, including the efficacy of a thermal insulation layer and the role played by the substrate on which the flyer is mounted. (1) W.M. Trott, R.E. Setchell, and A.V. Farnsworth, Jr., in Shock Compression of Condensed Matter-1999, ed. M.D. Furnish, L.C. Chhabildas, and R.S. Hixson, AIP, 2000, pp. 1203-06. (2) J. L. Labaste, M. Doucet, and P. Joubert, in Shock Compression of Condensed Matter-1995, ed. S.C. Schmidt and W.C. Tao, AIP, 1996, pp. 1221-24. (3) W.W. Anderson and T.J. Ahrens, J. Geophys. Res. 99, 2063 (1994). (4) G. B. Zimmerman and W. L. Kruer, Comments Plasma Phys. Controlled Fusion 2, 51 (1975).

  17. A Lossless hybrid wavelet-fractal compression for welding radiographic images.

    PubMed

    Mekhalfa, Faiza; Avanaki, Mohammad R N; Berkani, Daoud

    2016-01-01

    In this work a lossless hybrid wavelet-fractal image coder is proposed. The process starts by compressing and decompressing the original image using wavelet transformation and a fractal coding algorithm. The decompressed image is subtracted from the original to obtain a residual image, which is coded using the Huffman algorithm. Simulation results show that with the proposed scheme we achieve an infinite peak signal-to-noise ratio (PSNR) with a higher compression ratio than typical lossless methods. Moreover, the use of the wavelet transform speeds up the fractal compression algorithm by reducing the size of the domain pool. The compression results of several welding radiographic images using the proposed scheme are evaluated quantitatively and compared with the results of the Huffman coding algorithm.
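
    The lossless-by-construction structure (lossy layer plus losslessly coded residual) can be sketched as follows, with a coarse quantizer standing in for the wavelet-fractal coder and zlib standing in for the Huffman stage; both stand-ins are ours, for illustration only:

        import zlib
        import numpy as np

        def encode(img, step=16):
            lossy = (img // step) * step               # crude "lossy codec"
            residual = (img - lossy).astype(np.uint8)  # always in [0, step)
            payload = zlib.compress(residual.tobytes())
            return lossy, payload

        def decode(lossy, payload, shape):
            residual = np.frombuffer(zlib.decompress(payload), np.uint8)
            return lossy + residual.reshape(shape)     # exact reconstruction

        img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        lossy, payload = encode(img)
        assert np.array_equal(decode(lossy, payload, img.shape), img)  # infinite PSNR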

  18. SPAMCART: a code for smoothed particle Monte Carlo radiative transfer

    NASA Astrophysics Data System (ADS)

    Lomax, O.; Whitworth, A. P.

    2016-10-01

    We present a code for generating synthetic spectral energy distributions and intensity maps from smoothed particle hydrodynamics simulation snapshots. The code is based on the Lucy Monte Carlo radiative transfer method, i.e., it follows discrete luminosity packets as they propagate through a density field, and then uses their trajectories to compute the radiative equilibrium temperature of the ambient dust. The sources can be extended and/or embedded, and discrete and/or diffuse. The density is not mapped on to a grid, and therefore the calculation is performed at exactly the same resolution as the hydrodynamics. We present two example calculations using this method. First, we demonstrate that the code strictly adheres to Kirchhoff's law of radiation. Second, we present synthetic intensity maps and spectra of an embedded protostellar multiple system. The algorithm uses data structures that are already constructed for other purposes in modern particle codes. It is therefore relatively simple to implement.
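
    One transport step of a Lucy-type Monte Carlo scheme amounts to sampling an optical depth and converting it to a path length; a schematic for a uniform slab (SPAMCART performs the equivalent walk directly through the SPH density field):

        import numpy as np

        rng = np.random.default_rng(42)

        def packet_path(L, kappa_rho):
            # kappa * rho is the inverse mean free path of the ambient dust;
            # tau is drawn from an exponential distribution
            tau = -np.log(1.0 - rng.random())
            return min(tau / kappa_rho, L)   # packet escapes if the event lies beyond L

        # Lucy's estimator: energy absorbed in a cell scales with the summed
        # path lengths of all packets that traverse it.
        paths = [packet_path(L=1.0, kappa_rho=5.0) for _ in range(10000)]
        mean_path = np.mean(paths)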

  19. Wavelet-based compression of M-FISH images.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Wu, Qiang; Castleman, Kenneth R

    2005-05-01

    Multiplex fluorescence in situ hybridization (M-FISH) is a recently developed technology that enables multi-color chromosome karyotyping for molecular cytogenetic analysis. Each M-FISH image set consists of a number of aligned images of the same chromosome specimen captured at different optical wavelengths. This paper presents embedded M-FISH image coding (EMIC), where the foreground objects/chromosomes and the background objects/images are coded separately. We first apply critically sampled integer wavelet transforms to both the foreground and the background. We then use object-based bit-plane coding to compress each object and generate separate embedded bitstreams that allow continuous lossy-to-lossless compression of the foreground and the background. For efficient arithmetic coding of bit planes, we propose a method of designing an optimal context model that specifically exploits the statistical characteristics of M-FISH images in the wavelet domain. Our experiments show that EMIC achieves nearly twice as much compression as Lempel-Ziv-Welch coding. EMIC also performs much better than JPEG-LS and JPEG-2000 for lossless coding. The lossy performance of EMIC is significantly better than that of coding each M-FISH image with JPEG-2000.
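
    A critically sampled integer-to-integer wavelet can be built with lifting; a sketch of one level of the reversible S-transform, the simplest member of the family used for lossless coding (EMIC's actual filters may differ):

        import numpy as np

        def s_transform(x):
            # integer-to-integer Haar/S-transform via lifting
            x = np.asarray(x, dtype=np.int64)
            d = x[0::2] - x[1::2]            # detail coefficients
            s = x[1::2] + (d >> 1)           # averages (floor), carry the signal
            return s, d

        def inverse_s_transform(s, d):
            x1 = s - (d >> 1)
            x0 = d + x1
            out = np.empty(2 * len(s), dtype=np.int64)
            out[0::2], out[1::2] = x0, x1
            return out

        x = np.random.randint(0, 4096, 16)
        s, d = s_transform(x)
        assert np.array_equal(inverse_s_transform(s, d), x)   # perfectly reversible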

  20. nIFTY galaxy cluster simulations - III. The similarity and diversity of galaxies and subhaloes

    NASA Astrophysics Data System (ADS)

    Elahi, Pascal J.; Knebe, Alexander; Pearce, Frazer R.; Power, Chris; Yepes, Gustavo; Cui, Weiguang; Cunnama, Daniel; Kay, Scott T.; Sembolini, Federico; Beck, Alexander M.; Davé, Romeel; February, Sean; Huang, Shuiyao; Katz, Neal; McCarthy, Ian G.; Murante, Giuseppe; Perret, Valentin; Puchwein, Ewald; Saro, Alexandro; Teyssier, Romain

    2016-05-01

    We examine subhaloes and galaxies residing in a simulated Λ cold dark matter galaxy cluster (M_200^crit = 1.1 × 10^15 h^-1 M_⊙) produced by hydrodynamical codes ranging from classic smooth particle hydrodynamics (SPH) and newer SPH codes to adaptive and moving mesh codes. These codes use subgrid models to capture galaxy formation physics. We compare how well these codes reproduce the same subhaloes/galaxies in gravity-only, non-radiative hydrodynamics and full feedback physics runs by looking at the overall subhalo/galaxy distribution and on an individual object basis. We find that the subhalo population is reproduced to within ≲10 per cent for both dark matter only and non-radiative runs, with individual objects showing code-to-code scatter of ≲0.1 dex, although the gas in non-radiative simulations shows significant scatter. Including feedback physics significantly increases the diversity. Subhalo mass and Vmax distributions vary by ≈20 per cent. The galaxy populations also show striking code-to-code variations. Although the Tully-Fisher relation is similar in almost all codes, the number of galaxies with 10^9 h^-1 M_⊙ ≲ M* ≲ 10^12 h^-1 M_⊙ can differ by a factor of 4. Individual galaxies show code-to-code scatter of ~0.5 dex in stellar mass. Moreover, systematic differences exist, with some codes producing galaxies 70 per cent smaller than others. The diversity partially arises from the inclusion/absence of active galactic nucleus feedback. Our results, combined with our companion papers, demonstrate that subgrid physics is not just subject to fine-tuning, but that the complexity of building galaxies in all environments remains a challenge. We argue that even basic galaxy properties, such as stellar mass to halo mass, should be treated with error bars of ~0.2-0.4 dex.

  1. Multi-dimensional computer simulation of MHD combustor hydrodynamics

    NASA Astrophysics Data System (ADS)

    Berry, G. F.; Chang, S. L.; Lottes, S. A.; Rimkus, W. A.

    1991-04-01

    Argonne National Laboratory is investigating the nonreacting jet gas mixing patterns in an MHD second stage combustor by using a 2-D multiphase hydrodynamics computer program and a 3-D single phase hydrodynamics computer program. The computer simulations are intended to enhance the understanding of flow and mixing patterns in the combustor, which in turn may lead to improvement of the downstream MHD channel performance. A 2-D steady state computer model, based on mass and momentum conservation laws for multiple gas species, is used to simulate the hydrodynamics of the combustor in which a jet of oxidizer is injected into an unconfined cross stream gas flow. A 3-D code is used to examine the effects of the side walls and the distributed jet flows on the non-reacting jet gas mixing patterns. The code solves the conservation equations of mass, momentum, and energy, and a transport equation of a turbulence parameter and allows permeable surfaces to be specified for any computational cell.

  2. Study and simulation of low rate video coding schemes

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Kipp, G.

    1992-01-01

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.

  3. Quasi-isentropic compression using compressed water flow generated by underwater electrical explosion of a wire array

    NASA Astrophysics Data System (ADS)

    Gurovich, V.; Virozub, A.; Rososhek, A.; Bland, S.; Spielman, R. B.; Krasik, Ya. E.

    2018-05-01

    A major experimental research area in material equation-of-state studies today involves the use of off-Hugoniot measurements rather than shock experiments, which give only Hugoniot data. There is a wide range of applications of quasi-isentropic compression of matter, including the direct measurement of the complete isentrope of materials in a single experiment and minimizing the heating of flyer plates for high-velocity shock measurements. We propose a novel approach to generating quasi-isentropic compression of matter. Using analytical modeling and hydrodynamic simulations, we show that a working fluid composed of compressed water, generated by an underwater electrical explosion of a planar wire array, might be used to efficiently drive the quasi-isentropic compression of a copper target to pressures ~2 × 10^11 Pa without any complex target designs.

  4. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-block-size transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
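
    A sketch of the threshold-driven selection, with scipy's DCT and a ladder of quantization steps standing in for the mixture of coders; the steps and distortion target are illustrative values, not the paper's:

        import numpy as np
        from scipy.fft import dctn, idctn

        STEPS = [64, 32, 16, 8]          # coarse (cheap) -> fine (expensive)

        def code_block(block, max_mse=25.0):
            # try the cheapest coder first and escalate until the
            # distortion threshold is met
            for rate_idx, q in enumerate(STEPS):
                coeff = np.round(dctn(block, norm="ortho") / q)
                recon = idctn(coeff * q, norm="ortho")
                if np.mean((block - recon) ** 2) <= max_mse:
                    return rate_idx, recon
            return len(STEPS) - 1, recon  # fall back to the finest coder

        block = np.random.rand(8, 8) * 255
        rate_idx, recon = code_block(block)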

  5. Compressible viscous flows generated by oscillating flexible cylinders

    NASA Astrophysics Data System (ADS)

    Van Eysden, Cornelis A.; Sader, John E.

    2009-01-01

    The fluid dynamics of oscillating elastic beams underpin the operation of many modern technological devices ranging from micromechanical sensors to the atomic force microscope. While viscous effects are widely acknowledged to have a strong influence on these dynamics, fluid compressibility is commonly neglected. Here, we theoretically study the three-dimensional flow fields that are generated by the motion of flexible cylinders immersed in viscous compressible fluids and discuss the implications of compressibility in practice. We consider cylinders of circular cross section and flat blades of zero thickness that are executing flexural and torsional oscillations of arbitrary wave number. Exact analytical solutions are derived for these flow fields and their resulting hydrodynamic loads.

  6. Statistical Relations for Yield Degradation in Inertial Confinement Fusion

    NASA Astrophysics Data System (ADS)

    Woo, K. M.; Betti, R.; Patel, D.; Gopalaswamy, V.

    2017-10-01

    In inertial confinement fusion (ICF), the yield-over-clean (YOC) is a quantity commonly used to assess the performance of an implosion with respect to the degradation caused by asymmetries. The YOC also determines the Lawson parameter used to identify the onset of ignition and the level of alpha heating in ICF implosions. In this work, we show that the YOC is a unique function of the residual kinetic energy in the compressed shell (with respect to the 1-D case) regardless of the asymmetry spectrum. This result is derived using a simple model of the deceleration phase as well as through an extensive set of 3-D radiation-hydrodynamics simulations using the code DEC3D. The latter has been recently upgraded to include a 3-D spherical moving mesh, the HYPRE solver for 3-D radiation transport and piecewise-parabolic method for robust shock-capturing hydrodynamic simulations. DEC3D is used to build a synthetic single-mode database to study the behavior of yield degradation caused by Rayleigh-Taylor instabilities in the deceleration phase. The relation between YOC and residual kinetic energy is compared with the result in an adiabatic implosion model. The statistical expression of YOC is also applied to the ignition criterion in the presence of multidimensional nonuniformities. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  7. Survey Of Lossless Image Coding Techniques

    NASA Astrophysics Data System (ADS)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit-plane processing, and lossy-plus-residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to a greater removal of image redundancy.

  8. RADHOT: A Radiation Hydrodynamics Code for Weapon Effects Calculation.

    DTIC Science & Technology

    1981-03-01

    [Abstract not recoverable from the scanned report; only fragments of the nomenclature survive, defining the rate of change of internal energy due to radiation and the inward- and outward-going monochromatic fluxes at cell boundaries.]

  9. Terminal Ballistic Application of Hydrodynamic Computer Code Calculations.

    DTIC Science & Technology

    1977-04-01

    [Abstract largely not recoverable from the scanned report. Legible fragments indicate that, to address a shortcoming of the code, design solutions were tried using a combined calculational and empirical design procedure, and that in one calculation the explosive was confined on its periphery by a steel casing, with the calculated liner shape shown at 18 microseconds.]

  10. Compression of computer generated phase-shifting hologram sequence using AVC and HEVC

    NASA Astrophysics Data System (ADS)

    Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic

    2013-09-01

    With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) with similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading video coding technique. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and the stepwise phase-changed reference wave are generated as digital holograms. The hologram sequences are obtained from the movement of the virtual objects and compressed by AVC and HEVC. The experimental results show that AVC and HEVC are efficient at compressing PSDHS, with HEVC giving better performance. Good compression rate and reconstruction quality can be obtained at bitrates above 15,000 kbps.
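
    Generating the four phase-shifted frames and recovering the complex object field follows the textbook four-step relations; a numpy sketch with a synthetic object field standing in for the rendered 3D scene:

        import numpy as np

        # Four-step phase-shifting holography: the reference phase steps by
        # pi/2, giving I_k = |O + R exp(i k pi/2)|^2.
        N = 256
        rng = np.random.default_rng(0)
        O = rng.random((N, N)) * np.exp(2j * np.pi * rng.random((N, N)))
        R = 1.0                                   # unit-amplitude reference

        I = [np.abs(O + R * np.exp(1j * k * np.pi / 2)) ** 2 for k in range(4)]

        # standard four-step recovery of the complex object field
        O_rec = ((I[0] - I[2]) + 1j * (I[1] - I[3])) / 4
        assert np.allclose(O_rec, O)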

  11. Modeling hydrodynamics, water quality, and benthic processes to predict ecological effects in Narragansett Bay

    EPA Science Inventory

    The environmental fluid dynamics code (EFDC) was used to study the three-dimensional (3D) circulation, water quality, and ecology in Narragansett Bay, RI. Predictions of the Bay hydrodynamics included the behavior of the water surface elevation, currents, salinity, and temperature…

  12. Multi-Dimensional Full Boltzmann-Neutrino-Radiation Hydrodynamic Simulations and Their Detailed Comparisons with Monte-Carlo Methods in Core Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Nagakura, H.; Richers, S.; Ott, C. D.; Iwakami, W.; Furusawa, S.; Sumiyoshi, K.; Yamada, S.; Matsufuru, H.; Imakura, A.

    2016-10-01

    We have developed a 7-dimensional full Boltzmann-neutrino-radiation-hydrodynamics code and carried out ab initio axisymmetric core-collapse supernova (CCSN) simulations. I will talk about the main results of our simulations and also discuss current ongoing projects.

  13. Non-US data compression and coding research. FASAC Technical Assessment Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, R.M.; Cohn, M.; Craver, L.W.

    1993-11-01

    This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years, outside or inside the United States, there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.

  14. Production code control system for hydrodynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slone, D.M.

    1997-08-18

    We describe how the Production Code Control System (PCCS), written in Perl, has been used to control and monitor the execution of a large hydrodynamics simulation code in a production environment. We have been able to integrate new, disparate, and often independent applications into the PCCS framework without the need to modify any of our existing application codes. Both users and code developers see a consistent interface to the simulation code and associated applications regardless of the physical platform, whether an MPP, SMP, server, or desktop workstation. We also describe our use of Perl to develop a configuration management system for the simulation code, as well as a code usage database and report generator. We used Perl to write a backplane that allows us to plug in preprocessors, the hydrocode, postprocessors, visualization tools, persistent storage requests, and other codes. We need only teach PCCS a minimal amount about any new tool or code to essentially plug it in and make it usable to the hydrocode. PCCS has made it easier to link together disparate codes, since using Perl has removed the need to learn the idiosyncrasies of system or RPC programming. The text handling in Perl makes it easy to teach PCCS about new codes, or changes to existing codes.

  15. Temperature anomalies of shock and isentropic waves of quark-hadron phase transition

    NASA Astrophysics Data System (ADS)

    Konyukhov, A. V.; Iosilevskiy, I. L.; Levashov, P. R.; Likhachev, A. P.

    2018-01-01

    In this work, we consider a phenomenological equation of state which combines a statistical description of the hadron gas with a bag-model-based approach for the quark-gluon plasma. The equation of state is based on the excluded volume method in its thermodynamically consistent variant from Satarov et al [2009 Phys. At. Nucl. 72 1390]. The characteristic shape of the Taub adiabats and isentropes in the phase diagram is affected by the anomalous pressure-temperature dependence along the curve of phase equilibrium. The adiabats have kink points at the boundary of the two-phase region, inside which the temperature decreases with compression. The thermodynamic properties of matter in the quark-hadron phase transition region lead to hydrodynamic anomalies (in particular, to the appearance of composite compression and rarefaction waves). On the basis of the relativistic hydrodynamics equations we investigate and discuss the structure and anomalous temperature behavior of these waves.

  16. An adaptive technique to maximize lossless image data compression of satellite images

    NASA Technical Reports Server (NTRS)

    Stewart, Robert J.; Lure, Y. M. Fleming; Liou, C. S. Joe

    1994-01-01

    Data compression will play an increasingly important role in the storage and transmission of image data within NASA science programs as the Earth Observing System comes into operation. It is important that the science data be preserved at the fidelity the instrument and the satellite communication systems were designed to produce. Lossless compression must therefore be applied, at least, to archive the processed instrument data. In this paper, we present an analysis of the performance of lossless compression techniques and develop an adaptive approach which applies image remapping, feature-based image segmentation to determine regions of similar entropy, and high-order arithmetic coding to obtain significant improvements over the use of conventional compression techniques alone. Image remapping is used to transform the original image into a lower-entropy state. Several techniques were tested on satellite images, including differential pulse code modulation, bi-linear interpolation, and block-based linear predictive coding. The results of these experiments are discussed, and trade-offs between computation requirements and entropy reductions are used to identify the optimum approach for a variety of satellite images. Further entropy reduction can be achieved by segmenting the image based on local entropy properties and then applying a coding technique which maximizes compression for the region. Experimental results are presented showing the effect of different coding techniques on regions of different entropy. A rule base is developed through which the technique giving the best compression is selected. The paper concludes that maximum compression can be achieved cost-effectively and at acceptable performance rates with a combination of techniques selected on the basis of image contextual information.
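
    The entropy-driven region selection can be sketched directly: compute a first-order entropy per block and route each block to a coder. The block size and threshold below are illustrative, and the remapping stage is omitted:

        import numpy as np

        def block_entropy(block):
            # first-order (histogram) entropy in bits per pixel
            counts = np.bincount(block.ravel(), minlength=256)
            p = counts[counts > 0] / block.size
            return -np.sum(p * np.log2(p))

        img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
        B = 32
        for i in range(0, img.shape[0], B):
            for j in range(0, img.shape[1], B):
                h = block_entropy(img[i:i + B, j:j + B])
                coder = "arithmetic" if h > 6.0 else "predictive"  # rule-base stand-in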

  17. A nonlocal electron conduction model for multidimensional radiation hydrodynamics codes

    NASA Astrophysics Data System (ADS)

    Schurtz, G. P.; Nicolaï, Ph. D.; Busquet, M.

    2000-10-01

    Numerical simulation of laser-driven Inertial Confinement Fusion (ICF) related experiments requires the use of large multidimensional hydro codes. Though these codes include detailed physics for numerous phenomena, they deal poorly with electron conduction, which is the leading energy transport mechanism in these systems. Electron heat flow has been known, since the work of Luciani, Mora, and Virmont (LMV) [Phys. Rev. Lett. 51, 1664 (1983)], to be a nonlocal process, which the local Spitzer-Harm theory, even flux limited, is unable to account for. The present work aims at extending the original formula of LMV to two or three dimensions of space. This multidimensional extension leads to an equivalent transport equation suitable for easy implementation in a two-dimensional radiation-hydrodynamics code. Simulations are presented and compared to Fokker-Planck simulations in one and two dimensions of space.

  18. Subjective evaluation of compressed image quality

    NASA Astrophysics Data System (ADS)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we have subjectively evaluated the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better and at what compression ratio the quality of a compressed image is rated poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  19. Wavelet-based scalable L-infinity-oriented compression.

    PubMed

    Alecu, Alin; Munteanu, Adrian; Cornelis, Jan P H; Schelkens, Peter

    2006-09-01

    Among the different classes of coding techniques proposed in the literature, predictive schemes have proven their outstanding performance in near-lossless compression. However, these schemes are incapable of providing embedded L(infinity)-oriented compression, or, at most, provide a very limited number of potential L(infinity) bit-stream truncation points. We propose a new multidimensional wavelet-based L(infinity)-constrained scalable coding framework that generates a fully embedded L(infinity)-oriented bit stream and that retains the coding performance and all the scalability options of state-of-the-art L2-oriented wavelet codecs. Moreover, our codec instantiation of the proposed framework clearly outperforms JPEG2000 in the L(infinity) coding sense.

  20. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.

  1. Dynamic code block size for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Tsai, Ping-Sing; LeCornec, Yann

    2008-02-01

    Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.

  2. Channel coding and data compression system considerations for efficient communication of planetary imaging data

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1974-01-01

    End-to-end system considerations involving channel coding and data compression which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft are presented.

  3. Integrated modelling framework for short pulse high energy density physics experiments

    NASA Astrophysics Data System (ADS)

    Sircombe, N. J.; Hughes, S. J.; Ramsay, M. G.

    2016-03-01

    Modelling experimental campaigns on the Orion laser at AWE, and developing a viable point-design for fast ignition (FI), calls for a multi-scale approach; a complete description of the problem would require an extensive range of physics which cannot realistically be included in a single code. For modelling the laser-plasma interaction (LPI) we need a fine mesh which can capture the dispersion of electromagnetic waves, and a kinetic model for each plasma species. In the dense material of the bulk target, away from the LPI region, collisional physics dominates. The transport of hot particles generated by the action of the laser is dependent on their slowing and stopping in the dense material and their need to draw a return current. These effects will heat the target, which in turn influences transport. On longer timescales, the hydrodynamic response of the target will begin to play a role as the pressure generated from isochoric heating begins to take effect. Recent effort at AWE [1] has focussed on the development of an integrated code suite based on: the particle-in-cell code EPOCH, to model LPI; the Monte-Carlo electron transport code THOR, to model the onward transport of hot electrons; and the radiation hydrodynamics code CORVUS, to model the hydrodynamic response of the target. We outline the methodology adopted, elucidate the advantages of a robustly integrated code suite compared to a single-code approach, demonstrate the integrated code suite's application to modelling the heating of buried layers on Orion, and assess the potential of such experiments for the validation of modelling capability in advance of more ambitious HEDP experiments, as a step towards a predictive modelling capability for FI.

  4. A theoretical study of hydrodynamic cavitation.

    PubMed

    Arrojo, S; Benito, Y

    2008-03-01

    The optimization of hydrodynamic cavitation as an advanced oxidation process (AOP) requires identifying the key parameters and studying their effects on the process. Specific simulations of hydrodynamic bubbles reveal that time scales play a major role in the process. Rarefaction/compression periods generate a number of opposing effects which have been shown to be quantitatively different from those found in ultrasonic cavitation. Hydrodynamic cavitation can be upscaled and offers an energy-efficient way of generating cavitation. On the other hand, the large characteristic time scales hinder bubble collapse and generate a low number of cavitation cycles per unit time. By controlling the pressure pulse through a flexible cavitation chamber design, these limitations can be partially compensated. The chemical processes promoted by this technique are also different from those found in ultrasonic cavitation. Properties such as volatility or hydrophobicity determine the potential applicability of HC and therefore have to be taken into account.

  5. Onboard Image Processing System for Hyperspectral Sensor

    PubMed Central

    Hihara, Hiroki; Moritani, Kotaro; Inoue, Masao; Hoshi, Yoshihiro; Iwasaki, Akira; Takada, Jun; Inada, Hitomi; Suzuki, Makoto; Seki, Taeko; Ichikawa, Satoshi; Tanii, Jun

    2015-01-01

    Onboard image processing systems for a hyperspectral sensor have been developed in order to maximize image data transmission efficiency for large-volume and high-speed data downlink capacity. Since more than 100 channels are required for hyperspectral sensors on Earth observation satellites, fast and small-footprint lossless image compression capability is essential for reducing the size and weight of a sensor system. A fast lossless image compression algorithm has been developed and implemented in the onboard circuitry that corrects the sensitivity and linearity of Complementary Metal Oxide Semiconductor (CMOS) sensors, in order to maximize the compression ratio. The employed image compression method is based on the Fast, Efficient, Lossless Image compression System (FELICS), a hierarchical predictive coding method with resolution scaling. To improve FELICS's image decorrelation and entropy coding, we apply two-dimensional interpolation prediction and adaptive Golomb-Rice coding. It supports progressive decompression using resolution scaling while still maintaining superior performance in speed and complexity. Coding efficiency and compression speed enlarge the effective capacity of signal transmission channels, which allows onboard hardware to be reduced by multiplexing sensor signals into a smaller number of compression circuits. The circuitry is embedded into the data formatter of the sensor system without adding size, weight, power consumption, or fabrication cost. PMID:26404281
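
    The Golomb-Rice stage at the heart of such coders is compact enough to sketch in full; the context modeling that adaptively selects the parameter k is omitted here:

        def rice_encode(n: int, k: int) -> str:
            # unary-coded quotient followed by k raw remainder bits (k >= 1)
            q, r = n >> k, n & ((1 << k) - 1)
            return "1" * q + "0" + format(r, f"0{k}b")

        def rice_decode(bits: str, k: int) -> int:
            q = bits.index("0")                  # length of the unary prefix
            r = int(bits[q + 1 : q + 1 + k], 2)
            return (q << k) | r

        assert all(rice_decode(rice_encode(n, 3), 3) == n for n in range(100))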

  6. A Comparison of LBG and ADPCM Speech Compression Techniques

    NASA Astrophysics Data System (ADS)

    Bachu, Rajesh G.; Patel, Jignasa; Barkana, Buket D.

    Speech compression is the technology of converting human speech into an efficiently encoded representation that can later be decoded to produce a close approximation of the original signal. In all speech there is a degree of predictability, and speech coding techniques exploit this to reduce bit rates while still maintaining a suitable level of quality. This paper is a study and implementation of the Linde-Buzo-Gray (LBG) and Adaptive Differential Pulse Code Modulation (ADPCM) algorithms for compressing speech signals. We implemented the methods using MATLAB 7.0. The methods gave good results and performance in compressing speech, and listening tests showed that efficient and high-quality coding is achieved.
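
    The LBG splitting procedure translates to a few lines of numpy (the paper's implementation is in MATLAB; this Python sketch with random stand-in frame vectors just shows the structure):

        import numpy as np

        def lbg(vectors, size=8, eps=1e-3, iters=20):
            # start from the global mean, perturb each codeword into two,
            # then run nearest-neighbor / centroid (k-means) iterations
            codebook = vectors.mean(axis=0, keepdims=True)
            while len(codebook) < size:
                codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
                for _ in range(iters):
                    d = ((vectors[:, None, :] - codebook[None]) ** 2).sum(-1)
                    nearest = d.argmin(axis=1)
                    for c in range(len(codebook)):
                        members = vectors[nearest == c]
                        if len(members):
                            codebook[c] = members.mean(axis=0)
            return codebook

        frames = np.random.randn(1000, 10)   # stand-in for speech frame vectors
        cb = lbg(frames, size=16)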

  7. New methods and astrophysical applications of adaptive mesh fluid simulations

    NASA Astrophysics Data System (ADS)

    Wang, Peng

    The formation of stars, galaxies and supermassive black holes are among the most interesting unsolved problems in astrophysics. Those problems are highly nonlinear and involve enormous dynamical ranges. Thus numerical simulations with spatial adaptivity are crucial in understanding those processes. In this thesis, we discuss the development and application of adaptive mesh refinement (AMR) multi-physics fluid codes to simulate those nonlinear structure formation problems. To simulate the formation of star clusters, we have developed an AMR magnetohydrodynamics (MHD) code coupled with radiative cooling. We have also developed novel algorithms for sink particle creation, accretion, merging and outflows, all of which are coupled with the fluid algorithms using operator splitting. With this code, we have been able to perform the first AMR-MHD simulation of star cluster formation over several dynamical times, including sink particle and protostellar outflow feedback. The results demonstrated that protostellar outflows can drive supersonic turbulence in dense clumps and explain the observed slow and inefficient star formation. We also suggest that the global collapse rate is the most important factor in controlling massive star accretion rates. On the topic of galaxy formation, we discuss the results of three projects. In the first project, using cosmological AMR hydrodynamics simulations, we found that isolated massive stars still form in cosmic string wakes even though the megaparsec-scale structure has been perturbed significantly by the cosmic strings. In the second project, we calculated the dynamical heating rate in galaxy formation. We found that balancing this heating rate with the atomic cooling rate gives a critical halo mass which agrees with the result of numerical simulations. This demonstrates that the effect of dynamical heating should be incorporated into future semi-analytic work. In the third project, using our AMR-MHD code coupled with the radiative cooling module, we performed the first MHD simulations of disk galaxy formation. We find that the initial magnetic fields are quickly amplified to Milky Way strength in a self-regulated way, with an amplification rate of roughly one e-folding per orbit. This suggests that Milky Way strength magnetic fields might be common in high-redshift disk galaxies. We have also developed an AMR relativistic hydrodynamics code to simulate relativistic black hole jets. We discuss the coupling of the AMR framework with various relativistic solvers and conduct extensive algorithmic comparisons. Via various test problems, we emphasize the importance of resolution studies in relativistic flow simulations, because extremely high resolution is required especially when shear flows are present in the problem. We then present the results of 3D simulations of supermassive black hole jet propagation and gamma-ray burst jet breakout. Resolution studies of the two 3D jet simulations further highlight the need for high resolution to calculate relativistic flow problems accurately. Finally, to push forward the kind of simulations described above, we need faster codes with more physics included. We describe an implementation of compressible inviscid fluid solvers with AMR on Graphics Processing Units (GPU) using NVIDIA's CUDA. We show that the class of high-resolution shock-capturing schemes can be mapped naturally onto this architecture.
For both uniform and adaptive simulations, we achieve an overall speedup of approximately 10 times on one Quadro FX 5600 GPU as compared to a single 3 GHz Intel core on the host computer. Our framework can readily be applied to more general systems of conservation laws and extended to higher-order shock-capturing schemes. This is shown directly by an implementation of a magnetohydrodynamic solver and a comparison of its performance to the pure hydrodynamic case.

  8. Depth assisted compression of full parallax light fields

    NASA Astrophysics Data System (ADS)

    Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.

    2015-03-01

    Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only has an improved rate-distortion performance, it also preserves the structure of the perceived light fields better.

  9. Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1972-01-01

    The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.

  10. An evaluation of the effect of JPEG, JPEG2000, and H.264/AVC on CQR codes decoding process

    NASA Astrophysics Data System (ADS)

    Vizcarra Melgar, Max E.; Farias, Mylène C. Q.; Zaghetto, Alexandre

    2015-02-01

    This paper presents a binary matrix code based on the QR Code (Quick Response Code), denoted the CQR Code (Colored Quick Response Code), and evaluates the effect of JPEG, JPEG2000 and H.264/AVC compression on the decoding process. The proposed CQR Code has three additional colors (red, green and blue), which enables twice as much storage capacity compared to the traditional black-and-white QR Code. Using the Reed-Solomon error-correcting code, the CQR Code model has a theoretical correction capability of 38.41%. The goal of this paper is to evaluate the effect that degradations inserted by common image compression algorithms have on the decoding process. Results show that a successful decoding process can be achieved for compression rates up to 0.3877 bits/pixel, 0.1093 bits/pixel and 0.3808 bits/pixel for the JPEG, JPEG2000 and H.264/AVC formats, respectively. The algorithm that presents the best performance is H.264/AVC, followed by JPEG2000 and JPEG.

  11. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  12. Simulation studies of hydrodynamic aspects of magneto-inertial fusion and high order adaptive algorithms for Maxwell equations

    NASA Astrophysics Data System (ADS)

    Wu, Lingling

    Three-dimensional simulations of the formation and implosion of plasma liners for Plasma Jet Induced Magneto Inertial Fusion (PJMIF) have been performed using a multiscale simulation technique based on the FronTier code. In the PJMIF concept, a plasma liner, formed by the merging of a large number of radial, highly supersonic plasma jets, implodes on a target in the form of two compact plasma toroids and compresses it to the conditions of nuclear fusion ignition. The propagation of a single jet with Mach number 60 from the plasma gun to the merging point was studied using the FronTier code. The simulation result was used as input to the 3D jet merger problem. The merger of 144, 125, and 625 jets and the formation and heating of the plasma liner by compression waves have been studied and compared with recent theoretical predictions. The main result of the study is the prediction of the average Mach number reduction and the description of the liner structure and properties. We have also compared the effect of different merging radii. Spherically symmetric simulations of the implosion of plasma liners and the compression of plasma targets have also been performed using the method of front tracking. The cases of single deuterium and xenon liners and double-layer deuterium-xenon liners compressing various deuterium-tritium targets have been investigated, optimized for maximum fusion energy gain, and compared with theoretical predictions and the scaling laws of [P. Parks, On the efficacy of imploding plasma liners for magnetized fusion target compression, Phys. Plasmas 15, 062506 (2008)]. In agreement with the theory, the fusion gain was significantly below unity for deuterium-tritium targets compressed by Mach 60 deuterium liners. In the most favorable setup for a given chamber size, which contained a target with an initial radius of 20 cm compressed by a 10 cm thick, Mach 60 xenon liner, target ignition and a fusion energy gain of 10 were achieved. Simulations also showed that composite deuterium-xenon liners reduce the energy gain due to lower target compression rates. The effect of the heating of targets by alpha particles on the fusion energy gain has also been investigated. The study of the dependence of the ram pressure amplification on radial compressibility showed good agreement with the theory. The study concludes that a liner with a higher Mach number and a lower adiabatic index gamma (the ratio of specific heats) will generate higher ram pressure amplification and higher fusion energy gain. We implemented a second-order embedded boundary method for the Maxwell equations in geometrically complex domains. The numerical scheme is second order in both space and time. Compared to the first-order stair-step approximation of complex geometries within the FDTD method, this method avoids the spurious solutions introduced by the stair-step approximation. Unlike the finite element method and the FE-FD hybrid method, no triangulation is needed for this scheme. The method preserves the simplicity of the embedded boundary method and is easy to implement. We also propose a conservative (symplectic) fourth-order scheme for uniform geometry boundaries.

  13. High-resolution coded-aperture design for compressive X-ray tomography using low resolution detectors

    NASA Astrophysics Data System (ADS)

    Mojica, Edson; Pertuz, Said; Arguello, Henry

    2017-12-01

    One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing for reducing the number of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints on the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, numbers of shots, and super-resolution factors.
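
    The coherence criterion mentioned above can be made concrete with a toy sketch (my own construction, not the authors' algorithm: a random single-element flip search stands in for their constrained gradient descent, and the binary matrix stands in for the physical aperture-induced sensing matrix):

        import numpy as np

        def mutual_coherence(A):
            """Largest normalized inner product between distinct columns of A."""
            norms = np.linalg.norm(A, axis=0)
            norms[norms == 0.0] = 1.0          # guard against all-zero columns
            G = A / norms
            C = np.abs(G.T @ G)
            np.fill_diagonal(C, 0.0)
            return C.max()

        rng = np.random.default_rng(0)
        m, n = 32, 128                         # toy sizes: measurements x scene pixels
        best = rng.integers(0, 2, (m, n)).astype(float)   # random binary aperture code
        best_mu = mutual_coherence(best)

        for _ in range(500):                   # single-element flips instead of gradients
            cand = best.copy()
            i, j = rng.integers(m), rng.integers(n)
            cand[i, j] = 1.0 - cand[i, j]      # toggle one aperture element
            mu = mutual_coherence(cand)
            if mu < best_mu:                   # keep changes that lower coherence
                best, best_mu = cand, mu

    Lower mutual coherence of the sensing matrix is the standard compressed-sensing proxy for better reconstruction guarantees, which is why the paper optimizes the aperture against it.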

  14. Adaptive bit plane quadtree-based block truncation coding for image compression

    NASA Astrophysics Data System (ADS)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low-bit-rate compression, at the cost of lower decoded image quality, especially for images with rich texture. To solve this problem, this paper proposes a quadtree-based block truncation coding algorithm combined with adaptive bit-plane transmission. First, the edge direction in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss incurred when the block is encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
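
    As background, the AMBTC primitive that the quadtree scheme builds on reduces each block to two reconstruction levels plus a bit plane. A minimal sketch (standard AMBTC, my own illustration; the paper's quadtree partitioning and adaptive bit-plane selection are not shown):

        import numpy as np

        def ambtc_block(block):
            """Absolute Moment BTC for one block: returns (low, high, bit plane)."""
            mean = block.mean()
            bits = block >= mean                    # 1 bit per pixel
            high = block[bits].mean() if bits.any() else mean
            low = block[~bits].mean() if (~bits).any() else mean
            return low, high, bits

        def ambtc_decode(low, high, bits):
            """Reconstruct the block from the two levels and the bit plane."""
            return np.where(bits, high, low)

        # usage on one random 4x4 block
        rng = np.random.default_rng(1)
        blk = rng.integers(0, 256, (4, 4)).astype(float)
        low, high, bits = ambtc_block(blk)
        mse = ((blk - ambtc_decode(low, high, bits)) ** 2).mean()

    The per-block MSE computed this way is exactly the quantity the paper uses to decide when a block needs the richer adaptive bit-plane treatment.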

  15. New equation of state models for hydrodynamic applications

    NASA Astrophysics Data System (ADS)

    Young, David A.; Barbee, Troy W.; Rogers, Forrest J.

    1998-07-01

    Two new theoretical methods for computing the equation of state of hot, dense matter are discussed. The ab initio phonon theory gives a first-principles calculation of lattice frequencies, which can be used to compare theory and experiment for isothermal and shock compression of solids. The ACTEX dense plasma theory has been improved to allow it to be compared directly with ultrahigh pressure shock data on low-Z materials. The comparisons with experiment are good, suggesting that these models will be useful in generating global EOS tables for hydrodynamic simulations.

  16. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compression ratios (30-40:1) that exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies: coding and decoding operations are entirely linear with respect to image size and are 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding considered here exploits all the advantages of static VPIC while also reducing information along the additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  17. Eulerian and Lagrangian Plasma Jet Modeling for the Plasma Liner Experiment

    NASA Astrophysics Data System (ADS)

    Hatcher, Richard; Cassibry, Jason; Stanic, Milos; Loverich, John; Hakim, Ammar

    2011-10-01

    The Plasma Liner Experiment (PLX) aims to demonstrate the feasibility of using spherically convergent plasma jets to form an imploding plasma liner. Our group has modified two hydrodynamic simulation codes to include radiative loss, tabular equations of state (EOS), and thermal transport. Nautilus, created by TechX Corporation, is a finite-difference Eulerian code which solves the MHD equations formulated as systems of hyperbolic conservation laws. The other is SPHC, a smoothed particle hydrodynamics code produced by Stellingwerf Consulting. Use of the Lagrangian fluid-particle approach of SPH is motivated by its ability to accurately track jet interfaces, the plasma-vacuum boundary, and mixing of the various layers, while Eulerian codes have been in development for much longer and have better shock capturing. We validate these codes against experimental measurements of jet propagation, expansion, and the merging of two jets. Precursor jets are observed to form at the jet interface. Conditions that govern the evolution of two or more merging jets are explored.

  18. Computation of the Hydrodynamic Forces and Moments on a Body of Revolution with and without Appendages

    DTIC Science & Technology

    1991-08-01

    ...hydrodynamic forces and moments developed on the hull and appendages of a submerged vehicle is required for determining its stability, control, and... an approximate method has been developed to compute the hydrodynamic forces and moments for a submerged vehicle. As discussed in Reference 1, the...

  19. Py-SPHViewer: Cosmological simulations using Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Benítez-Llambay, Alejandro

    2017-12-01

    Py-SPHViewer visualizes and explores N-body + Hydrodynamics simulations. The code interpolates the underlying density field (or any other property) traced by a set of particles, using the Smoothed Particle Hydrodynamics (SPH) interpolation scheme, thus producing not only beautiful but also useful scientific images. Py-SPHViewer enables the user to explore simulated volumes using different projections. Py-SPHViewer also provides a natural way to visualize (in a self-consistent fashion) gas dynamical simulations, which use the same technique to compute the interactions between particles.
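
    The SPH interpolation idea that Py-SPHViewer implements can be sketched in a few lines (a brute-force illustration with the standard 2D cubic-spline kernel; the function names and sizes are mine, and the actual Py-SPHViewer API differs):

        import numpy as np

        def w_cubic2d(q, h):
            """Monaghan cubic-spline SPH kernel in 2D, q = r / h."""
            sigma = 10.0 / (7.0 * np.pi * h * h)   # 2D normalization
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                         np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
            return sigma * w

        def sph_image(pos, mass, h, ngrid=256, extent=1.0):
            """Project particle mass onto a grid with SPH smoothing (brute force)."""
            xs = np.linspace(0.0, extent, ngrid)
            X, Y = np.meshgrid(xs, xs)
            img = np.zeros_like(X)
            for (px, py), m in zip(pos, mass):
                q = np.hypot(X - px, Y - py) / h
                img += m * w_cubic2d(q, h)         # each particle adds a smooth blob
            return img

        # usage: density image of 500 random particles of equal mass
        rng = np.random.default_rng(2)
        img = sph_image(rng.random((500, 2)), np.full(500, 1.0 / 500), h=0.05)

    Production viewers replace the brute-force loop with per-particle adaptive smoothing lengths and spatial indexing, but the kernel sum above is the interpolation scheme the abstract refers to.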

  20. Comparing AMR and SPH Cosmological Simulations. I. Dark Matter and Adiabatic Simulations

    NASA Astrophysics Data System (ADS)

    O'Shea, Brian W.; Nagamine, Kentaro; Springel, Volker; Hernquist, Lars; Norman, Michael L.

    2005-09-01

    We compare two cosmological hydrodynamic simulation codes in the context of hierarchical galaxy formation: the Lagrangian smoothed particle hydrodynamics (SPH) code GADGET, and the Eulerian adaptive mesh refinement (AMR) code Enzo. Both codes represent dark matter with the N-body method but use different gravity solvers and fundamentally different approaches for baryonic hydrodynamics. The SPH method in GADGET uses a recently developed ``entropy conserving'' formulation of SPH, while for the mesh-based Enzo two different formulations of Eulerian hydrodynamics are employed: the piecewise parabolic method (PPM) extended with a dual energy formulation for cosmology, and the artificial viscosity-based scheme used in the magnetohydrodynamics code ZEUS. In this paper we focus on a comparison of cosmological simulations that follow either only dark matter, or also a nonradiative (``adiabatic'') hydrodynamic gaseous component. We perform multiple simulations using both codes with varying spatial and mass resolution with identical initial conditions. The dark matter-only runs agree generally quite well provided Enzo is run with a comparatively fine root grid and a low overdensity threshold for mesh refinement, otherwise the abundance of low-mass halos is suppressed. This can be readily understood as a consequence of the hierarchical particle-mesh algorithm used by Enzo to compute gravitational forces, which tends to deliver lower force resolution than the tree-algorithm of GADGET at early times before any adaptive mesh refinement takes place. At comparable force resolution we find that the latter offers substantially better performance and lower memory consumption than the present gravity solver in Enzo. In simulations that include adiabatic gasdynamics we find general agreement in the distribution functions of temperature, entropy, and density for gas of moderate to high overdensity, as found inside dark matter halos. However, there are also some significant differences in the same quantities for gas of lower overdensity. For example, at z=3 the fraction of cosmic gas that has temperature logT>0.5 is ~80% for both Enzo ZEUS and GADGET, while it is 40%-60% for Enzo PPM. We argue that these discrepancies are due to differences in the shock-capturing abilities of the different methods. In particular, we find that the ZEUS implementation of artificial viscosity in Enzo leads to some unphysical heating at early times in preshock regions. While this is apparently a significantly weaker effect in GADGET, its use of an artificial viscosity technique may also make it prone to some excess generation of entropy that should be absent in Enzo PPM. Overall, the hydrodynamical results for GADGET are bracketed by those for Enzo ZEUS and Enzo PPM but are closer to Enzo ZEUS.

  1. An Energy-Efficient Compressive Image Coding for Green Internet of Things (IoT).

    PubMed

    Li, Ran; Duan, Xiaomeng; Li, Xu; He, Wei; Li, Yanling

    2018-04-17

    Aimed at the low energy consumption required by the Green Internet of Things (IoT), this paper presents an energy-efficient compressive image coding scheme, which provides a compressive encoder and a real-time decoder according to Compressive Sensing (CS) theory. The compressive encoder adaptively measures each image block based on the block-based gradient field, which models the distribution of block sparsity, and the real-time decoder linearly reconstructs each image block through a projection matrix, which is learned by the Minimum Mean Square Error (MMSE) criterion. Both the encoder and decoder have low computational complexity, so that they consume only a small amount of energy. Experimental results show that the proposed scheme not only has low encoding and decoding complexity compared with traditional methods, but also provides good objective and subjective reconstruction quality. In particular, it presents better time-distortion performance than JPEG. The proposed compressive image coding is therefore a potential energy-efficient scheme for Green IoT.
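
    A stripped-down version of the measure-then-linearly-reconstruct pipeline looks as follows (my own sketch: a Gaussian random matrix and a pseudoinverse stand in for the paper's adaptive measurement allocation and MMSE-learned projection):

        import numpy as np

        rng = np.random.default_rng(3)
        B, m = 16, 64                        # 16x16 blocks, 64 measurements (4x subsampling)
        Phi = rng.standard_normal((m, B * B)) / np.sqrt(m)   # random measurement matrix

        # A linear decoder computed offline; the Moore-Penrose pseudoinverse here
        # stands in for the MMSE-trained projection matrix described in the paper.
        P = np.linalg.pinv(Phi)

        block = rng.random((B, B))           # toy image block
        y = Phi @ block.ravel()              # compressive measurement (encoder side)
        rec = (P @ y).reshape(B, B)          # one matrix multiply at the decoder

    The point of the design is visible even in the toy: both sides reduce to a single matrix-vector product, which is why the scheme is attractive for energy-constrained IoT nodes.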

  2. Comment on "Proposal of a critical test of the Navier-Stokes-Fourier paradigm for compressible fluid continua".

    PubMed

    Felderhof, B U

    2013-08-01

    Recently, a critical test of the Navier-Stokes-Fourier equations for compressible fluid continua was proposed [H. Brenner, Phys. Rev. E 87, 013014 (2013)]. It was shown that the equations of bivelocity hydrodynamics imply that a compressible fluid in an isolated rotating circular cylinder attains a nonequilibrium steady state with a nonuniform temperature increasing radially with distance from the axis. We demonstrate that statistical mechanical arguments, involving Hamiltonian dynamics and ergodicity due to irregularity of the wall, lead instead to a thermal equilibrium state with uniform temperature. This is the situation to be expected in experiment.

  3. DSMC Studies of the Richtmyer-Meshkov Instability

    NASA Astrophysics Data System (ADS)

    Gallis, M. A.; Koehler, T. P.; Torczynski, J. R.

    2014-11-01

    A new exascale-capable Direct Simulation Monte Carlo (DSMC) code, SPARTA, developed to be highly efficient on massively parallel computers, has extended the applicability of DSMC to challenging, transient three-dimensional problems in the continuum regime. Because DSMC inherently accounts for compressibility, viscosity, and diffusivity, it has the potential to improve the understanding of the mechanisms responsible for hydrodynamic instabilities. Here, the Richtmyer-Meshkov instability at the interface between two gases was studied parametrically using SPARTA. Simulations performed on Sequoia, an IBM Blue Gene/Q supercomputer at Lawrence Livermore National Laboratory, are used to investigate various Atwood numbers (0.33-0.94) and Mach numbers (1.2-12.0) for two-dimensional and three-dimensional perturbations. Comparisons with theoretical predictions demonstrate that DSMC accurately predicts the early-time growth of the instability. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  4. Deleterious effects of nonthermal electrons in shock ignition concept.

    PubMed

    Nicolaï, Ph; Feugeas, J-L; Touati, M; Ribeyre, X; Gus'kov, S; Tikhonchuk, V

    2014-03-01

    The shock ignition concept is a promising approach to inertial confinement fusion that may allow obtaining high fusion energy gains with existing laser technology. However, the laser intensities driving the spike, in the range of 1-10 PW/cm2, produce energetic electrons that may have a significant effect on target performance. Hybrid numerical simulations, consisting of a radiation hydrodynamic code coupled to a rapid Fokker-Planck module, are used to assess the role of hot electrons in the shock generation and the target preheat on a time scale of 100 ps and a spatial scale of 100 μm. It is shown that, depending on the electron energy distribution and the target density profile, the hot electrons can either increase the shock amplitude or preheat the imploding shell. In particular, an exponential electron energy spectrum corresponding to a temperature of 30 keV in the present HiPER target design preheats the deuterium-tritium shell and jeopardizes its compression. Ways of improving the target performance are suggested.

  5. High-Energy Space Propulsion Based on Magnetized Target Fusion

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. F.; Landrum, D. B.; Freeze, B.; Kirkpatrick, R. C.; Gerrish, H.; Schmidt, G. R.

    1999-01-01

    Magnetized target fusion is an approach in which a magnetized target plasma is compressed inertially by an imploding material wall. A high-energy plasma liner may be used to produce the required implosion. The plasma liner is formed by the merging of a number of high-momentum plasma jets converging towards the center of a sphere where two compact toroids have been introduced. Preliminary 3-D hydrodynamics modeling results using the SPHINX code of Los Alamos National Laboratory have been very encouraging and confirm earlier theoretical expectations. The concept appears ready for experimental exploration and plans for doing so are being pursued. In this talk, we explore conceptually how this innovative fusion approach could be packaged for space propulsion for interplanetary travel. We discuss the generic components of a baseline propulsion concept including the fusion engine, high-velocity plasma accelerators, generators of compact toroids using conical theta pinches, magnetic nozzle, neutron absorption blanket, tritium reprocessing system, shock absorber, magnetohydrodynamic generator, capacitor pulsed power system, thermal management system, and micrometeorite shields.

  6. Role of boundary conditions in helicoidal flow collimation: Consequences for the von Kármán sodium dynamo experiment.

    PubMed

    Varela, J; Brun, S; Dubrulle, B; Nore, C

    2015-12-01

    We present hydrodynamic and magnetohydrodynamic (MHD) simulations of liquid sodium flow performed with the PLUTO compressible MHD code to investigate the influence of magnetic boundary conditions on the collimation of helicoidal motions. We use a simplified Cartesian geometry to represent the flow dynamics in the vicinity of one cavity of a multi-blade impeller inspired by those used in the von Kármán sodium (VKS) experiment. We show that the impinging of the large-scale flow upon the impeller generates a coherent helicoidal vortex inside the blades, located at a distance from the upstream blade set by the incident angle of the flow. This vortex collimates any existing magnetic field lines, leading to an enhancement of the radial magnetic field that is stronger for ferromagnetic than for conducting blades. The induced magnetic field locally modifies the velocity fluctuations, resulting in an enhanced helicity. This process possibly explains why dynamo action is more easily triggered in the VKS experiment when using soft iron impellers.

  7. Use of zerotree coding in a high-speed pyramid image multiresolution decomposition

    NASA Astrophysics Data System (ADS)

    Vega-Pineda, Javier; Cabrera, Sergio D.; Lucero, Aldo

    1995-03-01

    A Zerotree (ZT) coding scheme is applied as a post-processing stage to avoid transmitting zero data in the High-Speed Pyramid (HSP) image compression algorithm. This algorithm has features that increase the capability of ZT coding to give very high compression rates. In this paper the impact of the ZT coding scheme is analyzed and quantified. The HSP algorithm creates a discrete-time multiresolution analysis based on a hierarchical decomposition technique that is a subsampling pyramid. The filters used to create the image residues and expansions can be related to wavelet representations. According to the pixel coordinates and the level in the pyramid, N^2 different wavelet basis functions of various sizes and rotations are linearly combined. The HSP algorithm is computationally efficient because of the simplicity of the required operations, and as a consequence, it can be easily implemented in VLSI hardware. This is the HSP's principal advantage over other compression schemes. The ZT coding technique transforms the quantized image residual levels created by the HSP algorithm into a bit stream. The use of ZTs compresses the already compressed image even further by taking advantage of parent-child relationships (trees) between the pixels of the residue images at different levels of the pyramid. Zerotree coding uses the links between zeros along the hierarchical structure of the pyramid to avoid transmission of those that form branches of all zeros. Compression performance and algorithm complexity of the combined HSP-ZT method are compared with those of the JPEG standard technique.

  8. Simulations of Fuel Assembly and Fast-Electron Transport in Integrated Fast-Ignition Experiments on OMEGA

    NASA Astrophysics Data System (ADS)

    Solodov, A. A.; Theobald, W.; Anderson, K. S.; Shvydky, A.; Epstein, R.; Betti, R.; Myatt, J. F.; Stoeckl, C.; Jarrott, L. C.; McGuffey, C.; Qiao, B.; Beg, F. N.; Wei, M. S.; Stephens, R. B.

    2013-10-01

    Integrated fast-ignition experiments on OMEGA benefit from improved performance of the OMEGA EP laser, including higher contrast, higher energy, and a smaller focus. Recent 8-keV, Cu-Kα flash radiography of cone-in-shell implosions and cone-tip breakout measurements showed good agreement with the 2-D radiation-hydrodynamic simulations using the code DRACO. DRACO simulations show that the fuel assembly can be further improved by optimizing the compression laser pulse, evacuating air from the shell, and by adjusting the material of the cone tip. This is found to delay the cone-tip breakout by ~220 ps and increase the core areal density from ~80 mg/cm2 in the current experiments to ~500 mg/cm2 at the time of the OMEGA EP beam arrival before the cone-tip breakout. Simulations using the code LSP of fast-electron transport in the recent integrated OMEGA experiments with Cu-doped shells will be presented. Cu-doping is added to probe the transport of fast electrons via their induced Cu K-shell fluorescent emission. This material is based upon work supported by the Department of Energy National Nuclear Security Administration DE-NA0001944 and the Office of Science under DE-FC02-04ER54789.

  9. Comparison of reversible methods for data compression

    NASA Astrophysics Data System (ADS)

    Heer, Volker K.; Reinfelder, Hans-Erich

    1990-07-01

    Widely differing methods for data compression described in the ACR-NEMA draft are used in medical imaging. In our contribution we briefly review various methods and discuss their relevant advantages and disadvantages. In detail, we evaluate first-order DPCM, pyramid transformation, and S transformation. As coding algorithms we compare both fixed and adaptive Huffman coding and Lempel-Ziv coding. Our comparison is performed on typical medical images from CT, MR, DSA, and DLR (Digital Luminescence Radiography). Apart from the achieved compression factors, we take into account the CPU time and main memory required, both for compression and for decompression. For a realistic comparison we have implemented the mentioned algorithms in the C programming language on a MicroVAX II and a SPARCstation 1.
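
    As an illustration of the first-order DPCM stage evaluated above: the predictor is simply the previous sample, and the entropy of the residuals bounds what any of the compared entropy coders (Huffman, Lempel-Ziv) can then achieve. A sketch (my own toy, not the authors' implementation):

        import numpy as np

        def dpcm_residual_entropy(samples):
            """First-order DPCM (predict by previous sample), then an ideal
            entropy estimate of the residuals in bits per sample."""
            residuals = np.diff(samples, prepend=samples[:1])
            _, counts = np.unique(residuals, return_counts=True)
            p = counts / counts.sum()
            return -(p * np.log2(p)).sum()

        # usage: a slowly varying 8-bit signal; residuals need far fewer bits
        rng = np.random.default_rng(4)
        x = (np.cumsum(rng.integers(-2, 3, 10000)) + 128).clip(0, 255)
        print(dpcm_residual_entropy(x))

    Because the residuals are stored exactly, the overall scheme remains reversible (lossless), which is the property the ACR-NEMA comparison is concerned with.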

  10. Hydrodynamic study of plasma amplifiers for soft-x-ray lasers: a transition in hydrodynamic behavior for plasma columns with widths ranging from 20 μm to 2 mm.

    PubMed

    Oliva, Eduardo; Zeitoun, Philippe; Velarde, Pedro; Fajardo, Marta; Cassou, Kevin; Ros, David; Sebban, Stephan; Portillo, David; le Pape, Sebastien

    2010-11-01

    Plasma-based seeded soft-x-ray lasers have the potential to generate high-energy and highly coherent short-pulse beams. Due to their high density, plasmas created by the interaction of an intense laser with a solid target should store the highest energy density among all plasma amplifiers. Our previous numerical work with a two-dimensional (2D) adaptive mesh refinement hydrodynamic code demonstrated that careful tailoring of plasma shapes leads to a dramatic enhancement of both soft-x-ray laser output energy and pumping efficiency. Benchmarking of our 2D hydrodynamic code against previous experiments demonstrated a high level of confidence, allowing us to perform a full study aimed at paving the way for 10-100 μJ seeded soft-x-ray lasers. In this paper, we describe in detail the mechanisms that drive the hydrodynamics of plasma columns. We observe a transition from narrow plasmas, where very strong two-dimensional flow prevents them from storing energy, to large plasmas that store a high amount of energy. Millimeter-sized plasmas are outstanding amplifiers, but they have the limitation of transverse lasing. In this paper, we provide a preliminary solution to this problem.

  11. The potential of imposed magnetic fields for enhancing ignition probability and fusion energy yield in indirect-drive inertial confinement fusion

    NASA Astrophysics Data System (ADS)

    Perkins, L. J.; Ho, D. D.-M.; Logan, B. G.; Zimmerman, G. B.; Rhodes, M. A.; Strozzi, D. J.; Blackfield, D. T.; Hawkins, S. A.

    2017-06-01

    We examine the potential that imposed magnetic fields of tens of Tesla that increase to greater than 10 kT (100 MGauss) under implosion compression may relax the conditions required for ignition and propagating burn in indirect-drive inertial confinement fusion (ICF) targets. This may allow the attainment of ignition, or at least significant fusion energy yields, in presently performing ICF targets on the National Ignition Facility (NIF) that today are sub-marginal for thermonuclear burn through adverse hydrodynamic conditions at stagnation [Doeppner et al., Phys. Rev. Lett. 115, 055001 (2015)]. Results of detailed two-dimensional radiation-hydrodynamic-burn simulations applied to NIF capsule implosions with low-mode shape perturbations and residual kinetic energy loss indicate that such compressed fields may increase the probability for ignition through range reduction of fusion alpha particles, suppression of electron heat conduction, and potential stabilization of higher-mode Rayleigh-Taylor instabilities. Optimum initial applied fields are found to be around 50 T. Given that the full plasma structure at capsule stagnation may be governed by three-dimensional resistive magneto-hydrodynamics, the formation of closed magnetic field lines might further augment ignition prospects. Experiments are now required to further assess the potential of applied magnetic fields to ICF ignition and burn on NIF.

  12. A GPL Relativistic Hydrodynamical Code

    NASA Astrophysics Data System (ADS)

    Olvera, D.; Mendoza, S.

    We are currently building a free (in the sense of a GNU GPL license) 2DRHD code in order to be used for different astrophysical situations. Our final target will be to include strong gravitational fields and magnetic fields. We intend to form a large group of developers as it is usually done for GPL codes.

  13. xRage Equation of State

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grove, John W.

    2016-08-16

    The xRage code supports a variety of hydrodynamic equation of state (EOS) models. In practice these are generally accessed in the executing code via a pressure-temperature based table look up. This document will describe the various models supported by these codes and provide details on the algorithms used to evaluate the equation of state.
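
    A table lookup of the kind described typically reduces to a clipped bilinear interpolation on the tabulated grid. A schematic sketch (my own illustration, here indexed by density and temperature with an ideal-gas toy table for concreteness; the actual xRage tables, units, and search logic are not described in this document, and production tables often interpolate in log space):

        import numpy as np

        def eos_lookup(rho, T, rho_grid, T_grid, p_table):
            """Bilinear interpolation of pressure from a (rho, T) table."""
            i = np.clip(np.searchsorted(rho_grid, rho) - 1, 0, len(rho_grid) - 2)
            j = np.clip(np.searchsorted(T_grid, T) - 1, 0, len(T_grid) - 2)
            fr = (rho - rho_grid[i]) / (rho_grid[i + 1] - rho_grid[i])
            ft = (T - T_grid[j]) / (T_grid[j + 1] - T_grid[j])
            return ((1 - fr) * (1 - ft) * p_table[i, j]
                    + fr * (1 - ft) * p_table[i + 1, j]
                    + (1 - fr) * ft * p_table[i, j + 1]
                    + fr * ft * p_table[i + 1, j + 1])

        # usage: ideal-gas-like toy table, p proportional to rho * T
        rho_grid = np.linspace(0.1, 10.0, 50)
        T_grid = np.linspace(100.0, 1.0e4, 50)
        p_table = np.outer(rho_grid, T_grid)
        print(eos_lookup(1.3, 2500.0, rho_grid, T_grid, p_table))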

  14. Novel approach to multispectral image compression on the Internet

    NASA Astrophysics Data System (ADS)

    Zhu, Yanqiu; Jin, Jesse S.

    2000-10-01

    Still-image coding techniques such as JPEG have always been applied to intra-plane images. Coding fidelity is commonly used to measure the performance of intra-plane coding methods. In many imaging applications it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes for further compression of the spectral planes. Moreover, a mechanism for incorporating the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlations among planes based on the human visual system. The scheme achieves a high degree of compactness in the data representation and compression.

  15. The Role of Molecular Motors in the Mechanics of Active Gels and the Effects of Inertia, Hydrodynamic Interaction and Compressibility in Passive Microrheology

    DTIC Science & Technology

    2014-07-01

    ...to use the two-point microrheology technique [88] to measure the complex compressibility of biopolymers and cell components such as F-actin and... loads [23, 115]. Several works have used a continuum-mechanics level of description to model self-organization [64, 2] and rheology [79, 12, 33] of... morphogenesis [94].

  16. Coupling hydrodynamics with comoving frame radiative transfer. I. A unified approach for OB and WR stars

    NASA Astrophysics Data System (ADS)

    Sander, A. A. C.; Hamann, W.-R.; Todt, H.; Hainich, R.; Shenar, T.

    2017-07-01

    Context. For more than two decades, stellar atmosphere codes have been used to derive the stellar and wind parameters of massive stars. Although they have become a powerful tool and sufficiently reproduce the observed spectral appearance, they can hardly be used for more than measuring parameters. One major obstacle is the inconsistency between the calculated radiation field and the wind stratification due to the use of prescribed mass-loss rates and wind-velocity fields. Aims: We present the concepts for a new generation of hydrodynamically consistent non-local thermodynamical equilibrium (non-LTE) stellar atmosphere models that allow for detailed studies of radiation-driven stellar winds. As a first demonstration, this new kind of model is applied to a massive O star. Methods: Based on earlier works, the PoWR code has been extended with the option to consistently solve the hydrodynamic equation together with the statistical equations and the radiative transfer in order to obtain a hydrodynamically consistent atmosphere stratification. In these models, the whole velocity field is iteratively updated together with an adjustment of the mass-loss rate. Results: The concepts for obtaining hydrodynamically consistent models using a comoving-frame radiative transfer are outlined. To provide a useful benchmark, we present a demonstration model motivated by the well-studied O4 supergiant ζ Pup. The obtained stellar and wind parameters are within the current range of literature values. Conclusions: For the first time, the PoWR code has been used to obtain a hydrodynamically consistent model for a massive O star. This has been achieved by a profound revision of earlier concepts used for Wolf-Rayet stars. The velocity field is shaped by various elements contributing to the radiative acceleration, especially in the outer wind. The results further indicate that for denser winds, deviations from a standard β-law occur.

  17. Communications and information research: Improved space link performance via concatenated forward error correction coding

    NASA Technical Reports Server (NTRS)

    Rao, T. R. N.; Seetharaman, G.; Feng, G. L.

    1996-01-01

    With the development of new advanced instruments for remote sensing applications, sensor data will be generated at a rate that not only requires increased onboard processing and storage capability, but also imposes demands on the space-to-ground communication link and the ground data management-communication system. Data compression and error control codes provide viable means to alleviate these demands. Two types of data compression have been studied by many researchers in the area of information theory: a lossless technique that guarantees full reconstruction of the data, and a lossy technique which generally gives a higher data compaction ratio but incurs some distortion in the reconstructed data. To satisfy the many science disciplines which NASA supports, lossless data compression becomes a primary focus for the technology development. While transmitting the data obtained by any lossless data compression, it is very important to use some error-control code. For a long time, convolutional codes have been widely used in satellite telecommunications. To more efficiently transmit the data obtained by the Rice algorithm, the a posteriori probability (APP) of each decoded bit is required. A relevant algorithm for this purpose has been proposed which minimizes the bit error probability in decoding linear block and convolutional codes and provides the APP for each decoded bit. However, recent results on iterative decoding of 'Turbo codes' turn conventional wisdom on its head and suggest fundamentally new techniques. During the past several months of this research, the following approaches have been developed: (1) a new lossless data compression algorithm, which is much better than the extended Rice algorithm for various types of sensor data, (2) a new approach to determine the generalized Hamming weights of the algebraic-geometric codes defined by a large class of curves in high-dimensional spaces, (3) some efficient improved geometric Goppa codes for disk memory systems and high-speed mass memory systems, and (4) a tree-based approach for data compression using dynamic programming.

  18. An efficient coding algorithm for the compression of ECG signals using the wavelet transform.

    PubMed

    Rajoub, Bashar A

    2002-04-01

    A wavelet-based electrocardiogram (ECG) data compression algorithm is proposed in this paper. The ECG signal is first preprocessed; the discrete wavelet transform (DWT) is then applied to the preprocessed signal. Preprocessing guarantees that the magnitudes of the wavelet coefficients are less than one, and reduces the reconstruction errors near both ends of the compressed signal. The DWT coefficients are divided into three groups, and each group is thresholded using a threshold based on a desired energy packing efficiency. A binary significance map is then generated by scanning the wavelet decomposition coefficients and outputting a binary one if the scanned coefficient is significant, and a binary zero if it is insignificant. Compression is achieved by 1) using a variable-length code based on run-length encoding to compress the significance map and 2) using direct binary representation for the significant coefficients. The ability of the coding algorithm to compress ECG signals is investigated; the results were obtained by compressing and decompressing the test signals. The proposed algorithm is compared with direct-based and wavelet-based compression algorithms and shows superior performance. A compression ratio of 24:1 was achieved for MIT-BIH record 117 with a percent root mean square difference as low as 1.08%.
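
    The pipeline described (DWT, energy-packing threshold, binary significance map, run-length coding) can be sketched compactly (my own toy: a plain Haar transform stands in for the paper's wavelet, the signal length is assumed a power of two, and the variable-length packing is reduced to run lengths):

        import numpy as np

        def haar_dwt(x, levels=3):
            """Plain Haar DWT (signal length must be divisible by 2**levels)."""
            coeffs, approx = [], np.asarray(x, dtype=float)
            for _ in range(levels):
                even, odd = approx[0::2], approx[1::2]
                coeffs.append((even - odd) / np.sqrt(2.0))   # detail band
                approx = (even + odd) / np.sqrt(2.0)
            coeffs.append(approx)                            # final approximation band
            return coeffs

        def threshold_epe(band, epe=0.99):
            """Zero out small coefficients, keeping `epe` of the band's energy."""
            mag = np.sort(np.abs(band))[::-1]
            cum = np.cumsum(mag ** 2) / (mag ** 2).sum()
            t = mag[min(np.searchsorted(cum, epe), len(mag) - 1)]
            return np.where(np.abs(band) >= t, band, 0.0)

        def run_lengths(bits):
            """Run-length encode a binary significance map."""
            runs, cur, count = [], bits[0], 0
            for b in bits:
                if b == cur:
                    count += 1
                else:
                    runs.append((cur, count))
                    cur, count = b, 1
            runs.append((cur, count))
            return runs

        # usage: threshold the first detail band of a toy signal
        x = np.sin(np.linspace(0.0, 8.0 * np.pi, 512))
        kept = threshold_epe(haar_dwt(x)[0])
        runs = run_lengths((kept != 0.0).astype(int))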

  19. Audiovisual focus of attention and its application to Ultra High Definition video compression

    NASA Astrophysics Data System (ADS)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to regions of interest, is more efficient than coding in which all regions are compressed in a similar way. However, widespread use of such foveated compression has been prevented by two main conflicting factors, namely the complexity and the efficiency of algorithms for FoA detection. One way around these is to use as much information as possible from the scene. Since most video sequences have an associated audio track, and since in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. The results of the audiovisual FoA detection algorithm are subsequently taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream that is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high- and ultra-high-definition audiovisual sequences is explored, and the amount of gain in compression efficiency is analyzed.

  20. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for 50% compression rate, optimal wavelet, mean+/-SD, 5.46+/-1.01%; worst wavelet 12.76+/-2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.

  1. Composeable Chat over Low-Bandwidth Intermittent Communication Links

    DTIC Science & Technology

    2007-04-01

    Small Text Compression (STC), introduced in this report, is a data compression algorithm intended to compress alphanumeric... Ziv-Lempel coding, the grandfather of most modern general-purpose file compression programs, watches for input symbol sequences that have previously... data. This section applies these techniques to create a new compression algorithm called Small Text Compression. Various sequence compression...

  2. Adaptive coding of MSS imagery. [Multi Spectral band Scanners

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Samulon, A. S.; Fultz, G. L.; Lumb, D.

    1977-01-01

    A number of adaptive data compression techniques are considered for reducing the bandwidth of multispectral data. They include adaptive transform coding, adaptive DPCM, adaptive cluster coding, and a hybrid method. The techniques are simulated and their performance in compressing the bandwidth of Landsat multispectral images is evaluated and compared using signal-to-noise ratio and classification consistency as fidelity criteria.

  3. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code has been developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic field results for a given drive-coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table-lookup approach. The commercial low-frequency electromagnetic field solver ANSYS Maxwell 3D is used to solve the magnetic field profile for a static liner at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with results from the commercial explicit dynamics solver ANSYS Explicit Dynamics and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gain.
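
    The per-time-step sequence described (circuit, then field, then liner dynamics) can be caricatured in a 0-D loop (entirely schematic and my own: all parameter values are invented, a z-pinch-style surface field stands in for the inductive drive geometry, and joule heating and the 2D/3D correction factors are omitted):

        import numpy as np

        # 0-D caricature of the coupled stepping: circuit -> field -> liner motion.
        mu0 = 4.0e-7 * np.pi
        C, L0, R = 1.0e-3, 5.0e-8, 1.0e-3    # bank capacitance (F), inductance (H), resistance (ohm)
        m_liner, length = 0.05, 0.3          # liner mass (kg) and axial length (m)
        r, v = 0.10, 0.0                     # liner radius (m) and radial velocity (m/s)
        V, I = 2.0e4, 0.0                    # capacitor voltage (V) and coil current (A)
        dt = 1.0e-8

        for step in range(200000):
            I += dt * (V - R * I) / L0                   # circuit update (fixed inductance)
            V -= dt * I / C                              # capacitor discharge
            B = mu0 * I / (2.0 * np.pi * r)              # surface field (toy model)
            force = -(B * B / (2.0 * mu0)) * (2.0 * np.pi * r * length)  # magnetic pressure
            v += dt * force / m_liner                    # liner momentum update
            r = max(r + dt * v, 1.0e-3)                  # clamp before the axis

    The real code closes the loop further by feeding the liner position back into the drive inductance and applying the Maxwell-3D-derived correction factors at the field step.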

  4. Hydrodynamic evolution of plasma waveguides for soft-x-ray amplifiers

    NASA Astrophysics Data System (ADS)

    Oliva, Eduardo; Depresseux, Adrien; Cotelo, Manuel; Lifschitz, Agustín; Tissandier, Fabien; Gautier, Julien; Maynard, Gilles; Velarde, Pedro; Sebban, Stéphane

    2018-02-01

    High-density, collisionally pumped plasma-based soft-x-ray lasers have recently delivered pulses of a few hundred femtoseconds, breaking the longstanding barrier of one picosecond. To pump these amplifiers, an intense infrared pulse must remain focused throughout the length of the amplifier, which spans several Rayleigh lengths. However, strong nonlinear effects hinder the propagation of the laser beam. The use of a plasma waveguide allows us to overcome these drawbacks, provided the hydrodynamic processes that dominate the creation and subsequent evolution of the waveguide are controlled and optimized. In this paper we present experimental measurements of the radial density profile and transmittance of such a waveguide, and we compare them with numerical calculations using hydrodynamic and particle-in-cell codes. Controlling the properties (electron density value and radial gradient) of the waveguide with the help of numerical codes promises the delivery of ultrashort (tens of femtoseconds), coherent soft-x-ray pulses.

  5. Hydrodynamic models of a cepheid atmosphere. Ph.D. Thesis - Maryland Univ., College Park

    NASA Technical Reports Server (NTRS)

    Karp, A. H.

    1974-01-01

    A method for including the solution of the transfer equation in a standard Henyey-type hydrodynamic code was developed. This modified Henyey method was used in an implicit hydrodynamic code to compute deep envelope models of a classical Cepheid with a period of 12 days, including radiative transfer effects in the optically thin zones. It was found that the velocity gradients in the atmosphere are not responsible for the large microturbulent velocities observed in Cepheids but may be responsible for the occurrence of supersonic microturbulence. The splitting of the cores of the strong lines is due to shock-induced temperature inversions in the line-forming region. The adopted light, color, and velocity curves were used to study three methods frequently used to determine the mean radii of Cepheids. It is concluded that an accuracy of 10% is possible only if high-quality observations are used.

  6. Validation of Hydrodynamic Load Models Using CFD for the OC4-DeepCwind Semisubmersible: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benitz, M. A.; Schmidt, D. P.; Lackner, M. A.

    Computational fluid dynamics (CFD) simulations were carried out on the OC4-DeepCwind semi-submersible to obtain a better understanding of how to set hydrodynamic coefficients for the structure when using an engineering tool such as FAST to model the system. The focus here was on the drag behavior and the effects of the free surface, free ends, and multi-member arrangement of the semi-submersible structure. These effects are investigated through code-to-code comparisons and flow visualizations. The implications for mean load predictions from engineering tools are addressed. The work presented here suggests that selection of drag coefficients should take into consideration a variety of geometric factors. Furthermore, CFD simulations demonstrate large time-varying loads due to vortex shedding, which FAST's hydrodynamic module, HydroDyn, does not model. The implications of these oscillatory loads on the fatigue life need to be addressed.

  7. COSAL: A black-box compressible stability analysis code for transition prediction in three-dimensional boundary layers

    NASA Technical Reports Server (NTRS)

    Malik, M. R.

    1982-01-01

    A fast computer code, COSAL, for transition prediction in three-dimensional boundary layers using compressible stability analysis is described. The compressible stability eigenvalue problem is solved using a finite-difference method, and the code is a black box in the sense that no guess of the eigenvalue is required from the user. Several optimization procedures were incorporated into COSAL to calculate integrated growth rates (N factors) for transition correlation for swept and tapered laminar flow control wings using the well-known e^N method. A user's guide to the program is provided.
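
    In the e^N method referenced above, the N factor is the integrated amplification of the most unstable disturbance from the neutral point; in standard linear stability notation (a textbook statement, not taken from the COSAL report itself):

        N(x) = \ln \frac{A(x)}{A(x_0)} = \int_{x_0}^{x} -\alpha_i(\xi) \, d\xi

    Here A is the disturbance amplitude, x_0 the neutral point where growth begins, and \alpha_i the imaginary part of the streamwise wavenumber; transition is correlated with N reaching an empirically calibrated critical value.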

  8. [A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].

    PubMed

    Zhao, An; Wu, Baoming

    2006-12-01

    This paper presents an ECG compression algorithm based on the wavelet transform and region of interest (ROI) coding. The algorithm realizes near-lossless coding in the ROI and quality-controllable lossy coding outside of the ROI. After mean removal of the original signal, a multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to find the position of the ROI. The coefficients related to the ROI are important coefficients and are kept. Otherwise, the energy loss in the transform domain is calculated according to the goal PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold for the coefficients outside of the ROI is then determined according to this energy loss. The important coefficients, which include the coefficients of the ROI and the coefficients larger than the threshold outside of the ROI, are put into a linear quantizer. The map, which records the positions of the important coefficients in the original wavelet coefficient vector, is compressed with a run-length encoder. Huffman coding is applied to improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database are tested, and satisfactory results in terms of clinical information preservation, quality, and compression ratio are obtained.

  9. A seismic data compression system using subband coding

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  10. First-principles equation-of-state table of beryllium based on density-functional theory calculations

    DOE PAGES

    Ding, Y. H.; Hu, S. X.

    2017-06-06

    Beryllium has been considered a superior ablator material for inertial confinement fusion (ICF) target designs. An accurate equation of state (EOS) of beryllium under extreme conditions is essential for reliable ICF designs. Based on density-functional theory (DFT) calculations, we have established a wide-range beryllium EOS table covering densities ρ = 0.001 to 500 g/cm^3 and temperatures T = 2000 to 10^8 K. Our first-principles equation-of-state (FPEOS) table is in better agreement with the widely used SESAME EOS table (SESAME 2023) than the average-atom INFERNO and Purgatorio models. For the principal Hugoniot, our FPEOS prediction is ~10% stiffer than the latter two models at maximum compression. Although the existing experimental data (only up to 17 Mbar) cannot distinguish these EOS models, we anticipate that high-pressure experiments in the maximum-compression region should differentiate our FPEOS from the INFERNO and Purgatorio models. Comparisons between FPEOS and SESAME EOS for off-Hugoniot conditions show that the differences in pressure and internal energy are within ~20%. By implementing the FPEOS table into the 1-D radiation-hydrodynamic code LILAC, we studied the EOS effects on beryllium-shell-target implosions. The FPEOS simulation predicts a higher neutron yield (~15%) compared to the simulation using the SESAME 2023 EOS table.

  11. First-principles equation-of-state table of beryllium based on density-functional theory calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Y. H.; Hu, S. X.

    Beryllium has been considered a superior ablator material for inertial confinement fusion (ICF) target designs. An accurate equation of state (EOS) of beryllium under extreme conditions is essential for reliable ICF designs. Based on density-functional theory (DFT) calculations, we have established a wide-range beryllium EOS table covering densities ρ = 0.001 to 500 g/cm^3 and temperatures T = 2000 to 10^8 K. Our first-principles equation-of-state (FPEOS) table is in better agreement with the widely used SESAME EOS table (SESAME 2023) than the average-atom INFERNO and Purgatorio models. For the principal Hugoniot, our FPEOS prediction is ~10% stiffer than the latter two models at maximum compression. Although the existing experimental data (only up to 17 Mbar) cannot distinguish these EOS models, we anticipate that high-pressure experiments in the maximum-compression region should differentiate our FPEOS from the INFERNO and Purgatorio models. Comparisons between FPEOS and SESAME EOS for off-Hugoniot conditions show that the differences in pressure and internal energy are within ~20%. By implementing the FPEOS table into the 1-D radiation-hydrodynamic code LILAC, we studied the EOS effects on beryllium-shell-target implosions. The FPEOS simulation predicts a higher neutron yield (~15%) compared to the simulation using the SESAME 2023 EOS table.

  12. Extreme Physics

    NASA Astrophysics Data System (ADS)

    Colvin, Jeff; Larsen, Jon

    2013-11-01

    Acknowledgements; 1. Extreme environments: what, where, how; 2. Properties of dense and classical plasmas; 3. Laser energy absorption in matter; 4. Hydrodynamic motion; 5. Shocks; 6. Equation of state; 7. Ionization; 8. Thermal energy transport; 9. Radiation energy transport; 10. Magnetohydrodynamics; 11. Considerations for constructing radiation-hydrodynamics computer codes; 12. Numerical simulations; Appendix: units and constants, glossary of symbols; References; Bibliography; Index.

  13. Code Compression for DSP

    DTIC Science & Technology

    1998-12-01

    ...Automation Conference, June 1998. [Liao95] S. Liao, S. Devadas, K. Keutzer, "Code Density Optimization for Embedded DSP Processors Using Data Compression"...

  14. Computational modeling of joint U.S.-Russian experiments relevant to magnetic compression/magnetized target fusion (MAGO/MTF)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheehey, P.T.; Faehl, R.J.; Kirkpatrick, R.C.

    1997-12-31

    Magnetized Target Fusion (MTF) experiments, in which a preheated and magnetized target plasma is hydrodynamically compressed to fusion conditions, present some challenging computational modeling problems. Recently, joint experiments relevant to MTF (Russian acronym MAGO, for Magnitnoye Obzhatiye, or magnetic compression) have been performed by Los Alamos National Laboratory and the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF). Modeling of target plasmas must accurately predict plasma densities, temperatures, fields, and lifetime; dense plasma interactions with wall materials must be characterized. Modeling of magnetically driven imploding solid liners, for compression of target plasmas, must address issues such as Rayleigh-Taylor instability growth in the presence of material strength, and glide plane-liner interactions. Proposed experiments involving liner-on-plasma compressions to fusion conditions will require integrated target plasma and liner calculations. Detailed comparison of the modeling results with experiment will be presented.

  15. Locally adaptive vector quantization: Data compression with feature preservation

    NASA Technical Reports Server (NTRS)

    Cheung, K. M.; Sayano, M.

    1992-01-01

    A study of a locally adaptive vector quantization (LAVQ) algorithm for data compression is presented. This algorithm provides high-speed one-pass compression, is fully adaptable to any data source, and does not require a priori knowledge of the source statistics; LAVQ is therefore a universal data compression algorithm. The basic algorithm and several modifications to improve performance are discussed, namely nonlinear quantization, coarse quantization of the codebook, and lossless compression of the output. Performance of LAVQ on various images using irreversible (lossy) coding is comparable to that of the Linde-Buzo-Gray algorithm, but LAVQ has a much higher speed; this algorithm therefore has potential for real-time video compression. Unlike most other image compression algorithms, LAVQ preserves fine detail in images. LAVQ's performance as a lossless data compression algorithm is comparable to that of Lempel-Ziv-based algorithms, but LAVQ uses far less memory during the coding process.
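
    The one-pass, source-adaptive behavior described above can be illustrated with a small sketch (my own toy construction, not the article's algorithm: the published LAVQ additionally includes the nonlinear quantization, codebook coarsening, and lossless output coding mentioned above):

        import numpy as np

        def lavq_encode(vectors, dist_thresh):
            """One-pass adaptive VQ: emit the nearest codeword index, or grow
            the codebook when nothing is close enough; matched codewords drift
            toward the data (the 'locally adaptive' part)."""
            codebook, indices = [], []
            for v in vectors:
                if codebook:
                    d = np.linalg.norm(np.asarray(codebook) - v, axis=1)
                    k = int(d.argmin())
                    if d[k] <= dist_thresh:
                        codebook[k] = 0.9 * codebook[k] + 0.1 * v  # adapt codeword
                        indices.append(k)
                        continue
                codebook.append(np.asarray(v, dtype=float))        # new codeword
                indices.append(len(codebook) - 1)
            return np.asarray(codebook), indices

        # usage: 4-dimensional vectors from a toy source
        rng = np.random.default_rng(6)
        cb, idx = lavq_encode(rng.random((1000, 4)), dist_thresh=0.4)

    Because the codebook is built on the fly from the data itself, no training pass or prior source statistics are needed, which is the sense in which the algorithm is universal.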

  16. Unified approach for incompressible flows

    NASA Astrophysics Data System (ADS)

    Chang, Tyne-Hsien

    1993-12-01

    A unified approach for solving both compressible and incompressible flows was investigated in this study. The difference in CFD code development between incompressible and compressible flows is due to their mathematical characteristics. However, if one modifies the continuity equation for incompressible flows by introducing pseudocompressibility, the governing equations for incompressible flows have the same mathematical character as those for compressible flows. The application of a compressible flow code to solve incompressible flows then becomes feasible. Among numerical algorithms developed for compressible flows, the Centered Total Variation Diminishing (CTVD) schemes possess better mathematical properties to damp out spurious oscillations while providing high-order accuracy for high-speed flows. This leads us to believe that CTVD schemes can equally well solve incompressible flows. In this study, the governing equations for incompressible flows include the continuity equation and the momentum equations. The continuity equation is modified by adding a time derivative of the pressure term containing the artificial compressibility; the modified continuity equation together with the unsteady momentum equations forms a hyperbolic-parabolic type of time-dependent system of equations, so the CTVD schemes can be implemented. In addition, the boundary conditions, including physical and numerical boundary conditions, must be properly specified to obtain accurate solutions. The CFD code for this research is currently in progress. Flow past a circular cylinder will be used for numerical experiments to determine the accuracy and efficiency of the code before applying it to more specific applications.
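
    For reference, the pseudocompressibility modification described above takes the standard Chorin artificial-compressibility form (a textbook statement, not quoted from this abstract), with β the artificial compressibility parameter:

        \frac{1}{\beta} \frac{\partial p}{\partial t} + \nabla \cdot \mathbf{u} = 0, \qquad
        \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} = -\nabla p + \nu \nabla^2 \mathbf{u}

    Here p is the pressure divided by the constant density; the modified system is hyperbolic with pseudo-sound speed \sqrt{\beta}, which is what allows a compressible-flow scheme such as CTVD to be applied.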

  17. Self consistent hydrodynamic description of the plasma wake field excitation induced by a relativistic charged-particle beam in an unmagnetized plasma

    NASA Astrophysics Data System (ADS)

    Jovanović, Dušan; Fedele, Renato; De Nicola, Sergio; Akhter, Tamina; Belić, Milivoj

    2017-12-01

    A self-consistent nonlinear hydrodynamic theory is presented of the propagation of a long and thin relativistic electron beam, for a typical plasma wake field acceleration configuration in an unmagnetized and overdense plasma. The random component of the trajectories of the beam particles, as well as of their velocity spread, is modelled by an anisotropic temperature, allowing the beam dynamics to be approximated as a 3D adiabatic expansion/compression. It is shown that even in the absence of the nonlinear plasma wake force, the localisation of the beam in the transverse direction can be achieved owing to the nonlinearity associated with the adiabatic compression/rarefaction, and a coherent stationary state is constructed. Numerical calculations reveal the possibility of the beam focussing and defocussing, but the lifetime of the beam can be significantly extended by appropriate adjustments, so that transverse oscillations are observed, similar to those predicted within the thermal wave and Vlasov kinetic models.

  18. Gas stripping and mixing in galaxy clusters: a numerical comparison study

    NASA Astrophysics Data System (ADS)

    Heß, Steffen; Springel, Volker

    2012-11-01

    The ambient hot intrahalo gas in clusters of galaxies is constantly fed and stirred by infalling galaxies, a process that can be studied in detail with cosmological hydrodynamical simulations. However, different numerical methods yield discrepant predictions for crucial hydrodynamical processes, leading for example to different entropy profiles in clusters of galaxies. In particular, the widely used Lagrangian smoothed particle hydrodynamics (SPH) scheme is suspected to strongly damp fluid instabilities and turbulence, which are both crucial to establishing the thermodynamic structure of clusters. In this study, we test to what extent our recently developed Voronoi particle hydrodynamics (VPH) scheme yields different results for the stripping of gas out of infalling galaxies and for the bulk gas properties of clusters. We consider both the evolution of isolated galaxy models that are exposed to a stream of intracluster medium or are dropped into cluster models, as well as non-radiative cosmological simulations of cluster formation. We also compare our particle-based method with results obtained with a fundamentally different discretization approach as implemented in the moving-mesh code AREPO. We find that VPH leads to noticeably faster stripping of gas out of galaxies than SPH, in better agreement with the mesh code than with SPH. We show that despite the fact that VPH in its present form is not as accurate as the moving-mesh code in our investigated cases, its improved accuracy of gradient estimates makes VPH an attractive alternative to SPH.

  19. A comparison of cosmological hydrodynamic codes

    NASA Technical Reports Server (NTRS)

    Kang, Hyesung; Ostriker, Jeremiah P.; Cen, Renyue; Ryu, Dongsu; Hernquist, Lars; Evrard, August E.; Bryan, Greg L.; Norman, Michael L.

    1994-01-01

    We present a detailed comparison of the simulation results of various hydrodynamic codes. Starting with identical initial conditions based on the cold dark matter scenario for the growth of structure, with parameters h = 0.5, Omega = Omega(sub b) = 1, and sigma(sub 8) = 1, we integrate from redshift z = 20 to z = 0 to determine the physical state within a representative volume of size L(exp 3) where L = 64 h(exp -1) Mpc. Five independent codes are compared: three of them Eulerian mesh-based and two variants of the smooth particle hydrodynamics 'SPH' Lagrangian approach. The Eulerian codes were run at N(exp 3) = (32(exp 3), 64(exp 3), 128(exp 3), and 256(exp 3)) cells, the SPH codes at N(exp 3) = 32(exp 3) and 64(exp 3) particles. Results were then rebinned to a 16(exp 3) grid with the expectation that the rebinned data should converge, by all techniques, to a common and correct result as N approaches infinity. We find that global averages of various physical quantities do, as expected, tend to converge in the rebinned model, but that uncertainties in even primitive quantities such as (T) and (rho(exp 2))(exp 1/2) persist at the 3%-17% level. All five codes achieve comparable and satisfactory accuracy for comparable computer time in their treatment of the high-density, high-temperature regions as measured in the rebinned data; the variance among the five codes (at highest resolution) for the mean temperature (as weighted by rho(exp 2)) is only 4.5%. Examined at high resolution, we suspect that the density resolution is better in the SPH codes and the thermal accuracy in low-density regions better in the Eulerian codes. In the low-density, low-temperature regions the SPH codes have poor accuracy due to statistical effects, and the Jameson code gives temperatures which are too high, due to overuse of artificial viscosity in these high Mach number regions. Overall the comparison allows us to better estimate errors; it points to ways of improving this current generation of hydrodynamic codes and of suiting their use to problems which exploit their best individual features.
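
    The rebinning step described here is simple block averaging; a minimal sketch with hypothetical array sizes, including a rho^2-weighted mean temperature of the kind compared in the paper:

    ```python
    import numpy as np

    def rebin_mean(field, out=16):
        """Block-average a cubic grid (n^3, n divisible by out) to out^3."""
        n = field.shape[0]
        f = n // out
        return field.reshape(out, f, out, f, out, f).mean(axis=(1, 3, 5))

    # rho^2-weighted mean temperature on the coarse comparison grid
    rho = np.random.rand(64, 64, 64) + 0.1
    T = np.random.rand(64, 64, 64) + 0.5
    T_weighted = rebin_mean(rho**2 * T) / rebin_mean(rho**2)
    ```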

  20. 4800 B/S speech compression techniques for mobile satellite systems

    NASA Technical Reports Server (NTRS)

    Townes, S. A.; Barnwell, T. P., III; Rose, R. C.; Gersho, A.; Davidson, G.

    1986-01-01

    This paper discusses three 4800 bps digital speech compression techniques currently being investigated for application in the mobile satellite service. These three techniques, vector adaptive predictive coding, vector excitation coding, and the self-excited vocoder, are the most promising among a number of techniques being developed to provide near-toll-quality speech compression while still keeping the bit rate low enough for a power- and bandwidth-limited satellite service.

  1. Joint Services Electronics Program Annual Progress Report.

    DTIC Science & Technology

    1985-11-01

    Experiments with (one-symbol-memory) adaptive Huffman codes were performed, and the compression achieved was compared with that of Ziv-Lempel coding, as was expected... Report topics include real-time statistical data processing (T. Kailath) and data compression for computer data structures (J. Gill).

  2. Nanoparticle Brownian motion and hydrodynamic interactions in the presence of flow fields

    PubMed Central

    Uma, B.; Swaminathan, T. N.; Radhakrishnan, R.; Eckmann, D. M.; Ayyaswamy, P. S.

    2011-01-01

    We consider the Brownian motion of a nanoparticle in an incompressible Newtonian fluid medium (quiescent or fully developed Poiseuille flow) with the fluctuating hydrodynamics approach. The formalism considers situations where both the Brownian motion and the hydrodynamic interactions are important. The flow results have been modified to account for compressibility effects. Different nanoparticle sizes and nearly neutrally buoyant particle densities are also considered. Tracked particles are initially located at various distances from the bounding wall to delineate wall effects. The results for thermal equilibrium are validated by comparing the predicted particle temperatures with those obtained from the equipartition theorem. The nature of the hydrodynamic interactions is verified by comparing the velocity autocorrelation functions and mean square displacements with analytical and experimental results where available. The equipartition theorem for a Brownian particle in Poiseuille flow is verified for a range of low Reynolds numbers. Numerical predictions of wall interactions with the particle, in terms of particle diffusivities, are consistent with published results where available. PMID:21918592
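
    The equipartition check mentioned above is simple to illustrate with a reduced model: an underdamped Langevin equation for a single particle (illustrative SI parameters for a roughly 60 nm particle; the paper instead resolves the fluctuating fluid itself), comparing the measured mean-square velocity with 3kT/m:

    ```python
    import numpy as np

    kB, T = 1.380649e-23, 300.0
    m, gamma = 1.0e-18, 1.1e-9          # mass [kg]; Stokes drag 6*pi*mu*a [kg/s]
    dt, nsteps = 1.0e-11, 100_000       # dt well below m/gamma ~ 1 ns
    sigma = np.sqrt(2.0 * gamma * kB * T / m**2 * dt)   # fluctuation-dissipation
    rng = np.random.default_rng(1)
    v = np.zeros(3)
    v2 = 0.0
    for _ in range(nsteps):
        v += -(gamma / m) * v * dt + sigma * rng.standard_normal(3)
        v2 += v @ v
    print("<v^2> =", v2 / nsteps, "  3kT/m =", 3.0 * kB * T / m)
    ```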

  3. Inferring Strength of Tantalum from Hydrodynamic Instability Recovery Experiments

    NASA Astrophysics Data System (ADS)

    Sternberger, Z.; Maddox, B.; Opachich, Y.; Wehrenberg, C.; Kraus, R.; Remington, B.; Randall, G.; Farrell, M.; Ravichandran, G.

    2018-05-01

    Hydrodynamic instability experiments allow access to material properties at extreme conditions, where strain rates exceed 10^5 s^-1 and pressures reach 100 GPa. Current hydrodynamic instability experimental methods require in-flight radiography to image the instability growth at high pressure and high strain rate, limiting the facilities where these experiments can be performed. An alternate approach, recovering the sample after loading, allows measurement of the instability growth with profilometry. Tantalum samples were manufactured with different 2D and 3D initial perturbation patterns and dynamically compressed by a blast wave generated by laser ablation. The samples were recovered from peak pressures between 30 and 120 GPa and strain rates on the order of 10^7 s^-1, providing a record of the growth of the perturbations due to hydrodynamic instability. These records are useful validation points for hydrocode simulations using models of material strength at high strain rate. Recovered tantalum samples were analyzed, providing an estimate of the strength of the material at high pressure and strain rate.

  4. Binary image encryption in a joint transform correlator scheme by aid of run-length encoding and QR code

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Wang, Zhipeng; Wang, Hongjuan; Gong, Qiong

    2018-07-01

    We propose a binary image encryption method in a joint transform correlator (JTC) aided by run-length encoding (RLE) and Quick Response (QR) codes, which enables lossless retrieval of the primary image. The binary image is encoded with RLE to obtain highly compressed data, and the compressed binary image is then scrambled using a chaos-based method. The compressed and scrambled binary image is transformed into one QR code that is finally encrypted in the JTC. The proposed method successfully, for the first time to our best knowledge, encodes a binary image into a QR code of identical size, and may therefore open a new way to extend the application of QR codes in optical security. Moreover, the preprocessing operations, including RLE, chaos scrambling and the QR code translation, add an additional security level to the JTC. We present digital results that confirm our approach.

  5. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-09

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique to two other methods proposed in the literature, i.e., Fresnel domain information authentication based on the classical DRPE with holographic technique and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of optical information encryption and authentication systems.

  6. File compression and encryption based on LLS and arithmetic coding

    NASA Astrophysics Data System (ADS)

    Yu, Changzhi; Li, Hengjian; Wang, Xiyu

    2018-03-01

    We propose a file compression model based on arithmetic coding. First, the original symbols to be encoded are input to the encoder one by one; we produce a set of chaotic sequences using the logistic and sine chaos system (LLS), and the values of these chaotic sequences randomly modify the upper and lower limits of the current symbol's probability interval. To achieve encryption, we modify the upper and lower limits of all character probabilities when encoding each symbol. Experimental results show that the proposed model achieves data encryption while attaining almost the same compression efficiency as standard arithmetic coding.
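
    A toy sketch of the idea: a floating-point arithmetic coder in which a key-seeded chaotic sequence deterministically reorders the symbol intervals at every step, so decoding requires the key. The logistic-sine map, probabilities, and key below are hypothetical stand-ins for the paper's LLS construction, and a float coder is only practical for short messages:

    ```python
    import math

    def lls_keystream(x0, mu, n):
        """Hypothetical logistic-sine ('LLS') keystream, one value per symbol."""
        xs, x = [], x0
        for _ in range(n):
            x = (mu * x * (1.0 - x) + (4.0 - mu) * math.sin(math.pi * x) / 4.0) % 1.0
            xs.append(x)
        return xs

    def intervals(symbols, probs, shift):
        """Cumulative intervals after a key-driven rotation of symbol order."""
        order = symbols[shift:] + symbols[:shift]
        lo, table = 0.0, {}
        for s in order:
            table[s] = (lo, lo + probs[s])
            lo += probs[s]
        return table

    def encode(msg, probs, key=(0.37, 3.99)):
        ks = lls_keystream(*key, len(msg))
        symbols = sorted(probs)
        lo, hi = 0.0, 1.0
        for ch, x in zip(msg, ks):
            a, b = intervals(symbols, probs, int(x * len(symbols)))[ch]
            lo, hi = lo + (hi - lo) * a, lo + (hi - lo) * b
        return (lo + hi) / 2.0        # any number inside the final interval

    def decode(code, n, probs, key=(0.37, 3.99)):
        ks = lls_keystream(*key, n)
        symbols = sorted(probs)
        out, lo, hi = [], 0.0, 1.0
        for x in ks:
            for s, (a, b) in intervals(symbols, probs, int(x * len(symbols))).items():
                nlo, nhi = lo + (hi - lo) * a, lo + (hi - lo) * b
                if nlo <= code < nhi:
                    out.append(s)
                    lo, hi = nlo, nhi
                    break
        return "".join(out)

    probs = {"a": 0.5, "b": 0.3, "c": 0.2}
    msg = "abacabca"
    assert decode(encode(msg, probs), len(msg), probs) == msg  # key must match
    ```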

  7. Sensitivity analysis of hydrodynamic stability operators

    NASA Technical Reports Server (NTRS)

    Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.

    1992-01-01

    The eigenvalue sensitivity of hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette flow, trailing-line vortex flow, and compressible Blasius boundary-layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs, and may be associated with large transient growth of the corresponding initial value problem.
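
    The epsilon-pseudoeigenvalue idea is easy to compute directly: z belongs to the ε-pseudospectrum when the smallest singular value of zI − A is at most ε. A minimal numpy sketch on a toy non-normal matrix (a stand-in, not an actual discretized stability operator):

    ```python
    import numpy as np

    # sigma_eps(A) = { z in C : s_min(z*I - A) <= eps }
    n = 32
    A = -2.0 * np.eye(n) + 4.0 * np.eye(n, k=1)   # strongly non-normal toy matrix
    xs = np.linspace(-8.0, 4.0, 61)
    ys = np.linspace(-6.0, 6.0, 61)
    smin = np.empty((ys.size, xs.size))
    for i, yi in enumerate(ys):
        for j, xj in enumerate(xs):
            M = (xj + 1j * yi) * np.eye(n) - A
            smin[i, j] = np.linalg.svd(M, compute_uv=False)[-1]
    # contours of smin at eps = 1e-1, 1e-2, ... bound the eps-pseudospectra;
    # for a normal operator they would hug the eigenvalues tightly.
    print(smin.min())
    ```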

  8. RICH: OPEN-SOURCE HYDRODYNAMIC SIMULATION ON A MOVING VORONOI MESH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yalinewich, Almog; Steinberg, Elad; Sari, Re’em

    2015-02-01

    We present here RICH, a state-of-the-art two-dimensional hydrodynamic code based on Godunov’s method, on an unstructured moving mesh (the acronym stands for Racah Institute Computational Hydrodynamics). This code is largely based on the code AREPO. It differs from AREPO in the interpolation and time-advancement schemes as well as in a novel parallelization scheme based on Voronoi tessellation. Using our code, we study the pros and cons of a moving mesh (in comparison to a static mesh). We also compare its accuracy to other codes. Specifically, we show that our implementation of external sources and our time-advancement scheme are more accurate and robust than AREPO's when the mesh is allowed to move. We performed a parameter study of the cell rounding mechanism (Lloyd iterations) and its effects. We find that in most cases a moving mesh gives better results than a static mesh, but this is not universally true. In the case where matter moves in one direction and a sound wave travels in the other (such that relative to the grid the wave is not moving), a static mesh gives better results than a moving mesh. We perform an analytic analysis for finite difference schemes which reveals that a Lagrangian simulation is better than an Eulerian simulation in the case of a highly supersonic flow. Moreover, we show that Voronoi-based moving mesh schemes suffer from an error, which is resolution independent, due to inconsistencies between the flux calculation and the change in the area of a cell. Our code is publicly available as open source and designed in an object-oriented, user-friendly way that facilitates incorporation of new algorithms and physical processes.

  9. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.

  10. A test data compression scheme based on irrational numbers stored coding.

    PubMed

    Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan

    2014-01-01

    Test data volume has become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers into irrational numbers is given. Experimental results for some ISCAS 89 benchmarks show that the compression effect of the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.

  11. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.

  12. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  13. EFDC1D - A ONE DIMENSIONAL HYDRODYNAMIC AND SEDIMENT TRANSPORT MODEL FOR RIVER AND STREAM NETWORKS: MODEL THEORY AND USERS GUIDE

    EPA Science Inventory

    This technical report describes the new one-dimensional (1D) hydrodynamic and sediment transport model EFDC1D, which can be applied to stream networks. The model code and two sample data sets are included on the distribution CD. EFDC1D can simulate bi-directional unstea...

  14. An Exact Integration Scheme for Radiative Cooling in Hydrodynamical Simulations

    NASA Astrophysics Data System (ADS)

    Townsend, R. H. D.

    2009-04-01

    A new scheme for incorporating radiative cooling in hydrodynamical codes is presented, centered around exact integration of the governing semidiscrete cooling equation. Using benchmark calculations based on the cooling downstream of a radiative shock, I demonstrate that the new scheme outperforms traditional explicit and implicit approaches in terms of accuracy, while remaining competitive in terms of execution speed.
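
    For a single power-law cooling function the semidiscrete cooling equation has a closed-form solution, so the update is exact for any step size; the sketch below (illustrative constants, alpha != 1, and not the paper's general piecewise power-law machinery) contrasts it with explicit Euler:

    ```python
    # Exact integration of dT/dt = -L * T**alpha: since
    # d(T**(1-alpha))/dt = -(1-alpha)*L, the update below is exact
    # for any time step.
    L, alpha = 2.0, 0.5

    def exact_step(T, dt):
        base = T**(1.0 - alpha) - (1.0 - alpha) * L * dt
        return max(base, 0.0) ** (1.0 / (1.0 - alpha))

    def euler_step(T, dt):
        return T - L * T**alpha * dt

    T_exact = T_euler = 10.0
    for _ in range(10):                  # deliberately coarse steps
        T_exact = exact_step(T_exact, 0.1)
        T_euler = euler_step(T_euler, 0.1)
    print(T_exact, T_euler)              # exact is step-size independent
    ```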

  15. Convolutional coding techniques for data protection

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  16. Improved EOS for describing high-temperature off-Hugoniot states in epoxy

    NASA Astrophysics Data System (ADS)

    Mulford, R. N.; Lanier, N. E.; Swift, D.; Workman, J.; Graham, Peter; Moore, Alastair

    2007-06-01

    Modeling of off-hugoniot states in an expanding interface subjected to a shock reveals the importance of a chemically complete description of the materials. Hydrodynamic experiments typically rely on pre-shot target characterization to predict how initial perturbations will affect the late-time hydrodynamic mixing. However, it is the condition of these perturbations at the time of shock arrival that dominates their eventual late-time evolution. In some cases these perturbations are heated prior to the arrival of the main shock. Correctly modeling how temperature and density gradients will develop in the pre-heated material requires an understanding of the equation-of-state. In the experiment modelled, an epoxy/foam layered package was subjected to tin L-shell radiation, producing an expanding assembly at a well-defined temperature. This assembly was then subjected to a controlled shock, and the evolution of the epoxy-foam interface imaged with x-ray radiography. Modeling of the data with the hydrodynamics code RAGE is unsuccessful under certain shock conditions, unless condensation of chemical species from the plasma is explicitly included. The EOS code CHEETAH was used to prepare suitable EOS for input into the hydrodynamics modeling.

  17. Improved EOS for Describing High-Temperature Off-Hugoniot States in Epoxy

    NASA Astrophysics Data System (ADS)

    Mulford, R. N.; Swift, D. C.; Lanier, N. E.; Workman, J.; Holmes, R. L.; Graham, P.; Moore, A.

    2007-12-01

    Modelling of off-Hugoniot states in an expanding interface subjected to a shock reveals the importance of a chemically complete description of the materials. Hydrodynamic experiments typically rely on pre-shot target characterization to predict how initial perturbations will affect the late-time hydrodynamic mixing. However, it is the condition of these perturbations at the time of shock arrival that dominates their eventual late-time evolution. In some cases these perturbations are heated prior to the arrival of the main shock. Correctly modelling how temperature and density gradients will develop in the pre-heated material requires an understanding of the equation-of-state. In the experiment modelled, an epoxy/foam layered package was subjected to tin L-shell radiation, producing an expanding assembly at a well-defined temperature. This assembly was then subjected to a controlled shock, and the evolution of the epoxy-foam interface imaged with x-ray radiography. Modelling of the data with the hydrodynamics code RAGE was unsuccessful under certain shock conditions, unless condensation of chemical species from the plasma is explicitly included. The EOS code Cheetah was used to prepare suitable EOS for input into the hydrodynamics modelling.

  18. Evaluation of Subgrid-Scale Models for Large Eddy Simulation of Compressible Flows

    NASA Technical Reports Server (NTRS)

    Blaisdell, Gregory A.

    1996-01-01

    The objective of this project was to evaluate and develop subgrid-scale (SGS) turbulence models for large eddy simulations (LES) of compressible flows. During the first phase of the project results from LES using the dynamic SGS model were compared to those of direct numerical simulations (DNS) of compressible homogeneous turbulence. The second phase of the project involved implementing the dynamic SGS model in a NASA code for simulating supersonic flow over a flat-plate. The model has been successfully coded and a series of simulations has been completed. One of the major findings of the work is that numerical errors associated with the finite differencing scheme used in the code can overwhelm the SGS model and adversely affect the LES results. Attached to this overview are three submitted papers: 'Evaluation of the Dynamic Model for Simulations of Compressible Decaying Isotropic Turbulence'; 'The effect of the formulation of nonlinear terms on aliasing errors in spectral methods'; and 'Large-Eddy Simulation of a Spatially Evolving Compressible Boundary Layer Flow'.

  19. Environmental Fluid Dynamics Code

    EPA Science Inventory

    The Environmental Fluid Dynamics Code (EFDC)is a state-of-the-art hydrodynamic model that can be used to simulate aquatic systems in one, two, and three dimensions. It has evolved over the past two decades to become one of the most widely used and technically defensible hydrodyn...

  20. Effect of compressibility on the hypervelocity penetration

    NASA Astrophysics Data System (ADS)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

    We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., penetration by a more compressible rod into a less compressible target, by a rod into an analogously compressible target, and by a less compressible rod into a more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. The analysis indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod or target has larger volumetric strain and higher internal energy. Both larger volumetric strain and higher strength enhance the penetration or anti-penetration ability, whereas higher internal energy weakens it. The two trends conflict, but the volumetric strain dominates the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.
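
    For orientation, the incompressible baseline that compressible models extend is the modified Bernoulli (Alekseevskii-Tate) balance at the rod/target interface, with v the rod velocity, u the penetration velocity, Y_p the rod strength, and R_t the target resistance; when the strength terms are negligible, the penetration efficiency tends to the hydrodynamic limit:

    ```latex
    % modified Bernoulli (Alekseevskii-Tate) interface balance:
    \tfrac{1}{2}\,\rho_p (v - u)^2 + Y_p = \tfrac{1}{2}\,\rho_t u^2 + R_t
    % strengthless (hydrodynamic) limit of the penetration efficiency:
    P/L = \sqrt{\rho_p / \rho_t}
    ```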

  1. Numerical Investigation of Magnetically Driven Isentropic Compression of Solid Aluminum Cylinders with a Semi-Analytical Code

    NASA Astrophysics Data System (ADS)

    Largent, Billy T.

    The state of matter at extremely high pressures and densities is of fundamental interest to many branches of research, including planetary science, material science, condensed matter physics, and plasma physics. Matter with pressures, or energy densities, above 1 megabar (100 gigapascal) is defined as a High Energy Density (HED) plasma. Such plasmas are directly relevant to the interiors of planets such as Earth and Jupiter and to the dense fuels in Inertial Confinement Fusion (ICF) experiments. To create HED plasma conditions in laboratories, a sample may be compressed by a smoothly varying pressure ramp with minimal temperature increase, following an isentropic thermodynamic process. Isentropic compression of aluminum targets has been performed using magnetic pressure produced by megaampere, pulsed power currents having 100 ns rise times. In this research project, magnetically driven, cylindrical isentropic compression was studied numerically. In cylindrical geometry, material compression and pressure become higher than in planar geometry due to geometrical effects. Based on a semi-analytical model for the Magnetized Liner Inertial Fusion (MagLIF) concept, a code called "SA" was written to design cylindrical compression experiments on the 1.0 MA Zebra pulsed power generator at the Nevada Terawatt Facility (NTF). To test the physics models in the code, the temporal progress of rod compression and pressure was calculated with SA and compared with 1D magnetohydrodynamic (MHD) codes. The MHD codes incorporated SESAME tables, for equation of state and resistivity, or the classical Spitzer model. A series of simulations was also run to find optimum rod diameters for 1.0 MA and 1.8 MA Zebra current pulses. For a 1.0 MA current peak and 95 ns rise time, a maximum compression of 2.35 (6.3 g/cm^3) and a pressure of 900 GPa within a 100 μm radius were found for an initial diameter of 1.05 mm. For 1.8 MA peak simulations with the same rise time, an initial diameter of 1.3 mm was optimal, giving a compression of 3.32 (9.0 g/cm^3).
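
    A quick scale check of the magnetic drive described here (a back-of-the-envelope estimate, not the SA code itself): the surface field of a rod carrying current I is B = μ0·I/(2πr), and the associated magnetic pressure is P = B²/(2μ0); geometric convergence then raises the on-axis pressure well above this surface value during compression.

    ```python
    import math

    mu0 = 4.0e-7 * math.pi
    I = 1.0e6            # Zebra peak current [A]
    r = 0.525e-3         # initial rod radius for the 1.05 mm case [m]
    B = mu0 * I / (2.0 * math.pi * r)     # surface field [T]
    P = B**2 / (2.0 * mu0)                # magnetic drive pressure [Pa]
    print(f"B = {B:.0f} T, drive pressure = {P / 1e9:.0f} GPa")  # ~381 T, ~58 GPa
    ```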

  2. Shaped Charge Jet Penetration of Discontinuous Media

    DTIC Science & Technology

    1977-07-01

    operational at the Ballistic Research Laboratory. These codes are OIL, TOIL, DORF, and HELP, which are Eulerian formulated, and HEMP, which... HELP (Hydrodynamic ELastic Plastic) is a FORTRAN code developed by Systems, Science and Software, Inc. It evolved from three major hydrodynamic codes previously developed... introduced into the treatment of moving surfaces. The HELP code, using the von Mises yield condition, treats materials as being elastic-plastic. The input for

  3. Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, S R; Bihari, B L; Salari, K

    As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the test problems: Su Olson non-equilibrium radiation diffusion, Sod shock tube, Sedov point blast modeled with shock hydrodynamics, and Noh implosion.

  4. Microsecond ramp compression of a metallic liner driven by a 5 MA current on the SPHINX machine using a dynamic load current multiplier pulse shaping

    NASA Astrophysics Data System (ADS)

    d'Almeida, T.; Lassalle, F.; Morell, A.; Grunenwald, J.; Zucchini, F.; Loyen, A.; Maysonnave, T.; Chuvatin, A. S.

    2013-09-01

    SPHINX is a 6 MA, 1-μs Linear Transformer Driver (LTD) operated by the CEA Gramat (France) and primarily used for imploding Z-pinch loads for radiation effects studies. Among the options currently being evaluated to improve the generator performance are an upgrade to a 20 MA, 1-μs LTD machine and various power amplification schemes, including a compact Dynamic Load Current Multiplier (DLCM). A method for performing magnetic ramp compression experiments, without modifying the generator operation scheme, was developed using the DLCM to shape the initial current pulse in order to obtain the desired load current profile. In this paper, we discuss the overall configuration that was selected for these experiments, including the choice of a coaxial cylindrical geometry for the load and its return current electrode. We present both 3D magnetohydrodynamic and 1D Lagrangian hydrodynamic simulations which helped guide the design of the experimental configuration. Initial results obtained over a set of experiments on an aluminium cylindrical liner, ramp-compressed to a peak pressure of 23 GPa, are presented and analyzed. Details of the electrical and laser Doppler interferometer setups used to monitor and diagnose the ramp compression experiments are provided. In particular, the configuration used to field both homodyne and heterodyne velocimetry diagnostics in the reduced access available within the liner's interior is described. Current profiles measured at various critical locations across the system, particularly the load current, enabled comprehensive tracking of the current circulation and demonstrate adequate pulse shaping by the DLCM. The liner inner free surface velocity measurements obtained from the heterodyne velocimeter agree with the hydrocode results obtained using the measured load current as the input. An extensive hydrodynamic analysis is carried out to examine quantities such as pressure and particle velocity history profiles and magnetic diffusion across the liner. The potential of the technique, in terms of applications and achievable ramp pressure levels, lies in the prospects for improving the DLCM efficiency through the use of a closing switch (currently under development), reducing the load dimensions and optimizing the diagnostics.

  5. A review of lossless audio compression standards and algorithms

    NASA Astrophysics Data System (ADS)

    Muin, Fathiah Abdul; Gunawan, Teddy Surya; Kartiwi, Mira; Elsheikh, Elsheikh M. A.

    2017-09-01

    Over the years, lossless audio compression has gained popularity as researchers and businesses have become more aware of the need for better quality and rising storage demand. This paper analyses the various lossless audio coding algorithms and standards that are used and available in the market, focusing on Linear Predictive Coding (LPC) due to its popularity and robustness in audio compression; other prediction methods are also compared to verify this. Advanced representations of LPC such as LSP decomposition techniques are also discussed within this paper.

  6. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    PubMed

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
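
    The "lossy plus residual" principle is compact enough to sketch: code a lossy layer, then uniformly quantize the residual so the reconstruction error is bounded. Truncated SVD below stands in for the paper's matrix/tensor decompositions, and all sizes are illustrative:

    ```python
    import numpy as np

    def near_lossless(X, rank=4, eps=2.0):
        """Lossy layer (truncated SVD) plus uniformly quantized residual;
        the quantizer step 2*eps bounds the reconstruction error by eps."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        lossy = (U[:, :rank] * s[:rank]) @ Vt[:rank]      # lossy layer
        resid_codes = np.round((X - lossy) / (2 * eps))   # entropy-code these
        return lossy + resid_codes * 2 * eps

    X = np.cumsum(np.random.randn(32, 1024), axis=1)      # toy 32-channel EEG
    assert np.max(np.abs(X - near_lossless(X))) <= 2.0 + 1e-9
    ```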

  7. DIAPHANE: A portable radiation transport library for astrophysical applications

    NASA Astrophysics Data System (ADS)

    Reed, Darren S.; Dykes, Tim; Cabezón, Rubén; Gheller, Claudio; Mayer, Lucio

    2018-05-01

    One of the most computationally demanding aspects of the hydrodynamical modeling of astrophysical phenomena is the transport of energy by radiation or relativistic particles. Physical processes involving energy transport are ubiquitous and of capital importance in many scenarios, ranging from planet formation to cosmic structure evolution, and including explosive events like core-collapse supernovae or gamma-ray bursts. Moreover, the ability to model, and hence understand, these processes has often been limited by the approximations and incompleteness in the treatment of radiation and relativistic particles. The DIAPHANE project has focused on developing a portable and scalable library that handles the transport of radiation and particles (in particular neutrinos) independently of the underlying hydrodynamic code. In this work, we present the computational framework and the functionalities of the first version of the DIAPHANE library, which has been successfully ported to three different smoothed-particle hydrodynamics codes: GADGET2, GASOLINE and SPHYNX. We also present validation of different modules solving the equations of radiation and neutrino transport using different numerical schemes.

  8. Relativistic low angular momentum accretion: long time evolution of hydrodynamical inviscid flows

    NASA Astrophysics Data System (ADS)

    Mach, Patryk; Piróg, Michał; Font, José A.

    2018-05-01

    We investigate relativistic low angular momentum accretion of an inviscid perfect fluid onto a Schwarzschild black hole. The simulations are performed with a general-relativistic, high-resolution (second-order), shock-capturing, hydrodynamical numerical code. We use horizon-penetrating Eddington–Finkelstein coordinates to remove inaccuracies in regions of strong gravity near the black hole horizon, and show the expected convergence of the code with the Michel solution and stationary Fishbone–Moncrief toroids. We recover, in the framework of relativistic hydrodynamics, the qualitative behavior known from previous Newtonian studies that used a Bondi background flow in a pseudo-relativistic gravitational potential with a latitude-dependent angular momentum at the outer boundary. Our models exhibit characteristic ‘turbulent’ behavior, and the attained accretion rates are lower than those of the Bondi–Michel radial flow. For sufficiently low values of the asymptotic sound speed, geometrically thick tori form in the equatorial plane surrounding the black hole horizon, while accretion takes place mainly through the poles.

  9. A Study of Fan Stage/Casing Interaction Models

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Carney, Kelly; Gallardo, Vicente

    2003-01-01

    The purpose of the present study is to investigate the performance of several existing and new blade-case interaction modeling capabilities that are compatible with the large system simulations used to capture structural response during blade-out events. Three contact models are examined for simulating the interactions between a rotor bladed disk and a case: a radial gap element, a linear gap element, and a new element based on a hydrodynamic formulation. The first two models are currently available in commercial finite element codes such as NASTRAN and have been shown to perform adequately for simulating rotor-case interactions. The hydrodynamic model, although not readily available in commercial codes, may prove better able to characterize rotor-case interactions.

  10. Modeling of video traffic in packet networks, low rate video compression, and the development of a lossy+lossless image compression algorithm

    NASA Technical Reports Server (NTRS)

    Sayood, K.; Chen, Y. C.; Wang, X.

    1992-01-01

    During this reporting period we have worked on three somewhat different problems. These are modeling of video traffic in packet networks, low rate video compression, and the development of a lossy + lossless image compression algorithm, which might have some application in browsing algorithms. The lossy + lossless scheme is an extension of work previously done under this grant. It provides a simple technique for incorporating browsing capability. The low rate coding scheme is also a simple variation on the standard discrete cosine transform (DCT) coding approach. In spite of its simplicity, the approach provides surprisingly high quality reconstructions. The modeling approach is borrowed from the speech recognition literature, and seems to be promising in that it provides a simple way of obtaining an idea about the second order behavior of a particular coding scheme. Details about these are presented.

  11. Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique

    NASA Astrophysics Data System (ADS)

    Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi

    Reducing power dissipation in LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low-power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message-compression technique reduces the required memory capacity and write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory architecture lets the decoder decompress the compressed messages in a single clock cycle while reducing read power dissipation. The combination of these two techniques reduces power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid-convergence schedule, respectively, without the proposed techniques.

  12. Real-time transmission of digital video using variable-length coding

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1993-01-01

    Huffman coding is a variable-length lossless compression technique where data with a high probability of occurrence is represented with short codewords, while 'not-so-likely' data is assigned longer codewords. Compression is achieved when the high-probability levels occur so frequently that their benefit outweighs any penalty paid when a less likely input occurs. One instance where Huffman coding is extremely effective occurs when data is highly predictable and differential coding can be applied (as with a digital video signal). For that reason, it is desirable to apply this compression technique to digital video transmission; however, special care must be taken in order to implement a communication protocol utilizing Huffman coding. This paper addresses several of the issues relating to the real-time transmission of Huffman-coded digital video over a constant-rate serial channel. Topics discussed include data rate conversion (from variable to a fixed rate), efficient data buffering, channel coding, recovery from communication errors, decoder synchronization, and decoder architectures. A description of the hardware developed to execute Huffman coding and serial transmission is also included. Although this paper focuses on matters relating to Huffman-coded digital video, the techniques discussed can easily be generalized for a variety of applications which require transmission of variable-length data.
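
    A textbook construction of such a code, pairing the two lightest subtrees until one remains (an illustration of the principle, not the hardware codec described in the paper):

    ```python
    import heapq
    from collections import Counter

    def huffman_code(data):
        """Map symbols to bit strings; frequent symbols get short codewords."""
        heap = [[w, [sym, ""]] for sym, w in Counter(data).items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            lo, hi = heapq.heappop(heap), heapq.heappop(heap)
            for pair in lo[1:]:
                pair[1] = "0" + pair[1]   # left branch
            for pair in hi[1:]:
                pair[1] = "1" + pair[1]   # right branch
            heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
        return dict(heap[0][1:])

    samples = "aaaaaabbbcdd"              # e.g. differential pixel values
    codes = huffman_code(samples)
    bits = "".join(codes[s] for s in samples)
    ```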

  13. Data compression for the microgravity experiments

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Whyte, Wayne A., Jr.; Anderson, Karen S.; Shalkhauser, Mary JO; Summers, Anne M.

    1989-01-01

    Researchers present the environment and conditions under which data compression is to be performed for the microgravity experiment. Also presented are some coding techniques that would be useful for coding in this environment. It should be emphasized that researchers are currently at the beginning of this program and the toolkit mentioned is far from complete.

  14. Compression of Index Term Dictionary in an Inverted-File-Oriented Database: Some Effective Algorithms.

    ERIC Educational Resources Information Center

    Wisniewski, Janusz L.

    1986-01-01

    Discussion of a new method of index term dictionary compression in an inverted-file-oriented database highlights a technique of word coding, which generates short fixed-length codes obtained from the index terms themselves by analysis of monogram and bigram statistical distributions. Substantial savings in communication channel utilization are…

  15. Neural network for image compression

    NASA Astrophysics Data System (ADS)

    Panchanathan, Sethuraman; Yeap, Tet H.; Pilache, B.

    1992-09-01

    In this paper, we propose a new scheme for image compression using neural networks. Image data compression deals with minimizing the amount of data required to represent an image while maintaining an acceptable quality. Several image compression techniques have been developed in recent years, and we note that their coding performance may be improved by employing adaptivity. Over the last few years the neural network has emerged as an effective tool for solving a wide range of problems involving adaptivity and learning. A multilayer feed-forward neural network trained using the backward error propagation algorithm is used in many applications. However, this model is not suitable for image compression because of its poor coding performance. Recently, a self-organizing feature map (SOFM) algorithm has been proposed which yields a good coding performance. However, this algorithm requires a long training time because the network starts with random initial weights. In this paper we use the backward error propagation (BEP) algorithm to quickly obtain initial weights, which are then used to speed up the training required by the SOFM algorithm. The proposed approach (BEP-SOFM) combines the advantages of the two techniques and, hence, achieves good coding performance in a shorter training time. Our simulation results demonstrate the potential gains of the proposed technique.

  16. First-principles thermal conductivity of warm-dense deuterium plasmas for inertial confinement fusion applications.

    PubMed

    Hu, S X; Collins, L A; Boehly, T R; Kress, J D; Goncharov, V N; Skupsky, S

    2014-04-01

    Thermal conductivity (κ) of both the ablator materials and deuterium-tritium (DT) fuel plays an important role in understanding and designing inertial confinement fusion (ICF) implosions. The extensively used Spitzer model for thermal conduction in ideal plasmas breaks down for high-density, low-temperature shells that are compressed by shocks and spherical convergence in imploding targets. A variety of thermal-conductivity models have been proposed for ICF hydrodynamic simulations of such coupled and degenerate plasmas. The accuracy of these κ models for DT plasmas has recently been tested against first-principles calculations using the quantum molecular-dynamics (QMD) method; although mainly for high densities (ρ > 100 g/cm^3), large discrepancies in κ have been identified for the peak-compression conditions in ICF. To cover the wide range of density-temperature conditions undergone by ICF imploding fuel shells, we have performed QMD calculations of κ for a variety of deuterium densities of ρ = 1.0 to 673.518 g/cm^3, at temperatures varying from T = 5 × 10^3 K to T = 8 × 10^6 K. The resulting κQMD of deuterium is fitted with a polynomial function of the coupling and degeneracy parameters Γ and θ, which can then be used in hydrodynamic simulation codes. Compared with the "hybrid" Spitzer-Lee-More model currently adopted in our hydrocode lilac, the hydrosimulations using the fitted κQMD have shown up to ∼20% variations in predicting target performance for different ICF implosions on OMEGA and direct-drive-ignition designs for the National Ignition Facility (NIF). The lower the adiabat of an imploding shell, the larger the variations in predicted target performance using κQMD. Moreover, the use of κQMD also modifies the shock conditions and the density-temperature profiles of the imploding shell at the early implosion stage, which predominantly affects the final target performance. This is in contrast to the previous speculation that κQMD changes mainly the inside ablation process during the hot-spot formation of an ICF implosion.
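
    The fit variables named here are the standard ion coupling parameter Γ and electron degeneracy parameter θ; a sketch of evaluating them for deuterium in SI units (the actual κQMD polynomial coefficients are in the paper and not reproduced here):

    ```python
    import numpy as np
    from scipy import constants as c

    def gamma_theta(rho_gcc, T_K):
        """Ion coupling Gamma and electron degeneracy theta for deuterium."""
        m_d = 2.014 * c.atomic_mass                  # deuteron mass [kg]
        n = rho_gcc * 1e3 / m_d                      # ion = electron density [1/m^3]
        a = (3.0 / (4.0 * np.pi * n)) ** (1.0 / 3.0)  # Wigner-Seitz radius
        gamma = c.e**2 / (4.0 * np.pi * c.epsilon_0 * a * c.k * T_K)
        e_fermi = c.hbar**2 * (3.0 * np.pi**2 * n) ** (2.0 / 3.0) / (2.0 * c.m_e)
        theta = c.k * T_K / e_fermi
        return gamma, theta

    print(gamma_theta(100.0, 1.0e6))   # a peak-compression-like state
    ```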

  17. First-principles thermal conductivity of warm-dense deuterium plasmas for inertial confinement fusion applications

    NASA Astrophysics Data System (ADS)

    Hu, S. X.; Collins, L. A.; Boehly, T. R.; Kress, J. D.; Goncharov, V. N.; Skupsky, S.

    2014-04-01

    Thermal conductivity (κ) of both the ablator materials and deuterium-tritium (DT) fuel plays an important role in understanding and designing inertial confinement fusion (ICF) implosions. The extensively used Spitzer model for thermal conduction in ideal plasmas breaks down for high-density, low-temperature shells that are compressed by shocks and spherical convergence in imploding targets. A variety of thermal-conductivity models have been proposed for ICF hydrodynamic simulations of such coupled and degenerate plasmas. The accuracy of these κ models for DT plasmas has recently been tested against first-principles calculations using the quantum molecular-dynamics (QMD) method; although mainly for high densities (ρ > 100 g/cm^3), large discrepancies in κ have been identified for the peak-compression conditions in ICF. To cover the wide range of density-temperature conditions undergone by ICF imploding fuel shells, we have performed QMD calculations of κ for a variety of deuterium densities of ρ = 1.0 to 673.518 g/cm^3, at temperatures varying from T = 5 × 10^3 K to T = 8 × 10^6 K. The resulting κQMD of deuterium is fitted with a polynomial function of the coupling and degeneracy parameters Γ and θ, which can then be used in hydrodynamic simulation codes. Compared with the "hybrid" Spitzer-Lee-More model currently adopted in our hydrocode lilac, the hydrosimulations using the fitted κQMD have shown up to ~20% variations in predicting target performance for different ICF implosions on OMEGA and direct-drive-ignition designs for the National Ignition Facility (NIF). The lower the adiabat of an imploding shell, the larger the variations in predicted target performance using κQMD. Moreover, the use of κQMD also modifies the shock conditions and the density-temperature profiles of the imploding shell at the early implosion stage, which predominantly affects the final target performance. This is in contrast to the previous speculation that κQMD changes mainly the inside ablation process during the hot-spot formation of an ICF implosion.

  18. CHOLLA: A New Massively Parallel Hydrodynamics Code for Astrophysical Simulation

    NASA Astrophysics Data System (ADS)

    Schneider, Evan E.; Robertson, Brant E.

    2015-04-01

    We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256^3) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.

  19. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction

    PubMed Central

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems. PMID:27814367

  20. Medical Ultrasound Video Coding with H.265/HEVC Based on ROI Extraction.

    PubMed

    Wu, Yueying; Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2016-01-01

    High-efficiency video compression technology is of primary importance to the storage and transmission of digital medical video in modern medical communication systems. To further improve the compression performance of medical ultrasound video, two innovative technologies based on diagnostic region-of-interest (ROI) extraction using the high efficiency video coding (H.265/HEVC) standard are presented in this paper. First, an effective ROI extraction algorithm based on image textural features is proposed to strengthen the applicability of ROI detection results in the H.265/HEVC quad-tree coding structure. Second, a hierarchical coding method based on transform coefficient adjustment and a quantization parameter (QP) selection process is designed to implement the otherness encoding for ROIs and non-ROIs. Experimental results demonstrate that the proposed optimization strategy significantly improves the coding performance by achieving a BD-BR reduction of 13.52% and a BD-PSNR gain of 1.16 dB on average compared to H.265/HEVC (HM15.0). The proposed medical video coding algorithm is expected to satisfy low bit-rate compression requirements for modern medical communication systems.

  1. Magnetic resonance image compression using scalar-vector quantization

    NASA Astrophysics Data System (ADS)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation, which is typical of coding schemes that use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the originals when displayed on a monitor. This makes our SVQ-based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all-digital radiology environment in hospitals, where reliable transmission, storage, and high-fidelity reconstruction of images are desired.

  2. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    NASA Astrophysics Data System (ADS)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gas, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maɪlʌf]. We do not support the use of the code for military purposes.
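
    For readers unfamiliar with the machinery such codes parallelize, the core SPH density estimate with the standard cubic-spline kernel looks roughly as follows. This is a pedagogical O(N^2) sketch with invented particle data; miluphCUDA itself uses GPU neighbour search and a far richer equation set, which this does not attempt:

```python
import numpy as np

def w_cubic(q, h):
    """Standard M4 cubic-spline SPH kernel in 3D (compact support radius 2h)."""
    sigma = 1.0 / (np.pi * h**3)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def density(pos, mass, h):
    """Brute-force density sum rho_i = sum_j m_j W(|r_i - r_j|, h).
    Production codes replace this all-pairs loop with a tree or GPU search."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    return (mass[None, :] * w_cubic(d / h, h)).sum(axis=1)

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(200, 3))  # random particle positions
mass = np.full(200, 1.0 / 200)              # equal-mass particles
print(density(pos, mass, h=0.1)[:5])
```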

  3. A simple model for molecular hydrogen chemistry coupled to radiation hydrodynamics

    NASA Astrophysics Data System (ADS)

    Nickerson, Sarah; Teyssier, Romain; Rosdahl, Joakim

    2018-06-01

    We introduce non-equilibrium molecular hydrogen chemistry into the radiation-hydrodynamics code RAMSES-RT. This is an adaptive mesh refinement grid code with radiation hydrodynamics that couples the thermal chemistry of hydrogen and helium to moment-based radiative transfer with the Eddington tensor closure model. The H2 physics that we include are formation on dust grains, gas phase formation, formation by three-body collisions, collisional destruction, photodissociation, photoionisation, cosmic ray ionisation and self-shielding. In particular, we implement the first model for H2 self-shielding that is tied locally to moment-based radiative transfer by enhancing photo-destruction. This self-shielding from Lyman-Werner line overlap is critical to H2 formation and gas cooling. We can now track the non-equilibrium evolution of molecular, atomic, and ionised hydrogen species with their corresponding dissociating and ionising photon groups. Over a series of tests we show that our model works well compared to specialised photodissociation region codes. We successfully reproduce the transition depth between molecular and atomic hydrogen, molecular cooling of the gas, and a realistic Strömgren sphere embedded in a molecular medium. In this paper we focus on test cases to demonstrate the validity of our model on small scales. Our ultimate goal is to implement this in large-scale galactic simulations.
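
    As a schematic of what "non-equilibrium chemistry" means operationally, a single-cell H2 abundance update balances formation against photodissociation. The rate coefficients below are generic textbook-scale values, not the RAMSES-RT ones, and the `shield` factor is only a crude stand-in for the paper's locally coupled self-shielding model:

```python
# Hypothetical rates; the real code derives these from dust properties and the
# local Lyman-Werner flux carried by the radiative-transfer solver.
k_form = 3.0e-17   # cm^3 s^-1, H2 formation on dust grains
k_diss = 4.0e-11   # s^-1, photodissociation in an unshielded radiation field

def step_h2(n_h, n_h2, n_tot, dt, shield=1.0):
    """Explicit-Euler update of H2 number density in one cell.
    shield < 1 mimics Lyman-Werner self-shielding suppressing destruction."""
    dn = k_form * n_h * n_tot * dt - shield * k_diss * n_h2 * dt
    dn = max(dn, -n_h2)                # never destroy more H2 than exists
    return n_h - 2.0 * dn, n_h2 + dn   # two H atoms consumed per H2 formed

n_h, n_h2, n_tot = 100.0, 0.0, 100.0   # number densities in cm^-3
for _ in range(1000):
    n_h, n_h2 = step_h2(n_h, n_h2, n_tot, dt=3.15e7)  # ~1 yr time steps
print(f"H: {n_h:.4f}  H2: {n_h2:.4f} cm^-3")
```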

  4. Autosophy information theory provides lossless data and video compression based on the data content

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus E.; Holtz, Eric S.; Holtz, Diana

    1996-09-01

    A new autosophy information theory provides an alternative to the classical Shannon information theory. Using the new theory in communication networks provides both a high degree of lossless compression and virtually unbreakable encryption codes for network security. The bandwidth in a conventional Shannon communication is determined only by the data volume and the hardware parameters, such as image size, resolution, or frame rates in television. The data content, or what is shown on the screen, is irrelevant. In contrast, the bandwidth in autosophy communication is determined only by data content, such as novelty and movement in television images. It is the data volume and hardware parameters that become irrelevant. Basically, the new communication methods use prior 'knowledge' of the data, stored in a library, to encode subsequent transmissions. The more 'knowledge' stored in the libraries, the higher the potential compression ratio. 'Information' is redefined as that which is not already known by the receiver. Everything already known is redundant and need not be re-transmitted. In a perfect communication each transmission code, called a 'tip,' creates a new 'engram' of knowledge in the library, where each tip transmission can represent any amount of data. Autosophy theories provide six separate learning modes, or omni-dimensional networks, all of which can be used for data compression. The new information theory reveals the theoretical flaws of other data compression methods, including the Huffman, Lempel-Ziv, and LZW codes, as well as commercial compression codes such as V.42bis and MPEG-2.
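
    The "tip creates an engram" mechanism is closely analogous to LZ78-style dictionary growth, which can be sketched in a few lines. This is an analogy only, not an implementation of autosophy itself:

```python
def lz78_encode(data):
    """LZ78-style incremental parsing: each emitted (index, symbol) 'tip'
    adds one new phrase ('engram') to the shared library, so longer and
    longer strings are covered by a single transmitted code."""
    library = {"": 0}          # phrase -> index; starts with the empty phrase
    out, phrase = [], ""
    for ch in data:
        if phrase + ch in library:
            phrase += ch                          # already known: extend
        else:
            out.append((library[phrase], ch))     # transmit the tip
            library[phrase + ch] = len(library)   # receiver learns same engram
            phrase = ""
    if phrase:
        out.append((library[phrase], ""))         # flush the final phrase
    return out

print(lz78_encode("abababababa"))
```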

  5. Coupling hydrodynamic and wave propagation modeling for waveform modeling of SPE.

    NASA Astrophysics Data System (ADS)

    Larmat, C. S.; Steedman, D. W.; Rougier, E.; Delorey, A.; Bradley, C. R.

    2015-12-01

    The goal of the Source Physics Experiment (SPE) is to bring empirical and theoretical advances to the problem of detection and identification of underground nuclear explosions. This paper presents efforts to improve, through numerical modeling, knowledge of the processes that affect seismic wave propagation from the hydrodynamic/plastic source region to the elastic/anelastic far field. The challenge is to couple the prompt processes that take place in the near-source region to the ones taking place later in time due to wave propagation in complex 3D geologic environments. In this paper, we report on results of first-principles simulations coupling hydrodynamic simulation codes (Abaqus and CASH) with a 3D full-waveform propagation code, SPECFEM3D. Abaqus and CASH model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming, and jointed/weathered granite. LANL has recently been employing a Coupled Euler-Lagrange (CEL) modeling capability. This has allowed the testing of a new phenomenological model for stored shear energy in jointed material. This unique modeling capability has enabled high-fidelity modeling of the explosive, the weak grout-filled borehole, and the surrounding jointed rock. SPECFEM3D is based on the Spectral Element Method, a direct numerical method for full-waveform modeling with mathematical accuracy (e.g., Komatitsch, 1998, 2002), owing to its use of the weak formulation of the wave equation and of high-order polynomial functions. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. Displacement time series at these points are computed from the output of CASH or Abaqus (by interpolation if needed) and fed into the time-marching scheme of SPECFEM3D. We will present validation tests and waveforms modeled for several SPE tests conducted so far, with a special focus on the effect of local topography.
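
    The hand-off described in the last sentences amounts to resampling the hydrodynamic displacement histories onto the SEM time axis. A minimal sketch with invented signals and time steps (the real coupling handles full 3D vector fields at many interface points):

```python
import numpy as np

# Hypothetical output of the hydrodynamic code at one coupling grid point.
t_hydro = np.linspace(0.0, 0.5, 251)            # s, coarse hydro time samples
u_hydro = np.exp(-80.0 * (t_hydro - 0.1)**2)    # toy displacement pulse (m)

# The wave-propagation solver marches with its own (usually smaller) step,
# so the recorded series is interpolated onto the SEM time axis before
# being injected as a boundary condition.
dt_sem = 5.0e-4
t_sem = np.arange(0.0, 0.5, dt_sem)
u_sem = np.interp(t_sem, t_hydro, u_hydro)

print(f"{len(t_hydro)} hydro samples -> {len(t_sem)} SEM samples")
```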

  6. Computation of Thermally Perfect Compressible Flow Properties

    NASA Technical Reports Server (NTRS)

    Witte, David W.; Tatum, Kenneth E.; Williams, S. Blake

    1996-01-01

    A set of compressible flow relations for a thermally perfect, calorically imperfect gas is derived for a value of c_p (specific heat at constant pressure) expressed as a polynomial function of temperature, and developed into a computer program referred to as the Thermally Perfect Gas (TPG) code. The code is available free from the NASA Langley Software Server at URL http://www.larc.nasa.gov/LSS. The code produces tables of compressible flow properties similar to those found in NACA Report 1135. Unlike the NACA Report 1135 tables, which are valid only in the calorically perfect temperature regime, the TPG code results are also valid in the thermally perfect, calorically imperfect temperature regime, giving the TPG code a considerably larger range of temperature application. Accuracy of the TPG code in the calorically perfect and in the thermally perfect, calorically imperfect temperature regimes is verified by comparisons with the methods of NACA Report 1135. The advantages of the TPG code compared to the thermally perfect, calorically imperfect method of NACA Report 1135 are its applicability to any type of gas (monatomic, diatomic, triatomic, or polyatomic) or any specified mixture of gases, its ease of use, and its tabulated results.
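
    To make the thermally perfect, calorically imperfect idea concrete: once c_p is a polynomial in T, quantities such as gamma and the speed of sound inherit the temperature dependence, since c_v = c_p - R for a thermally perfect gas. A sketch with made-up polynomial coefficients (the TPG code uses fitted data for each gas or mixture):

```python
import numpy as np

R = 287.05  # J/(kg K), specific gas constant for air

# Hypothetical c_p(T) polynomial coefficients, lowest order first; these are
# illustrative only, not the fitted coefficients a TPG-style table would use.
cp_coef = [1002.5, -1.785e-1, 5.0e-4, -2.0e-7]

def cp(T):
    return sum(c * T**k for k, c in enumerate(cp_coef))

def gamma(T):
    """Temperature-dependent ratio of specific heats, gamma = cp / (cp - R)."""
    return cp(T) / (cp(T) - R)

for T in (300.0, 1000.0, 2000.0):
    a = np.sqrt(gamma(T) * R * T)  # local speed of sound
    print(f"T={T:6.0f} K  cp={cp(T):7.1f}  gamma={gamma(T):.4f}  a={a:6.1f} m/s")
```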

  7. Al 1s-2p absorption spectroscopy of shock-wave heating and compression in laser-driven planar foil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawada, H.; Regan, S. P.; Radha, P. B.

    Time-resolved Al 1s-2p absorption spectroscopy is used to diagnose direct-drive, shock-wave heating and compression of planar targets having nearly Fermi-degenerate plasma conditions (Te ≈ 10-40 eV, ρ ≈ 3-11 g/cm^3) on the OMEGA Laser System [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)]. A planar plastic foil with a buried Al tracer layer was irradiated with peak intensities of 10^14-10^15 W/cm^2 and probed with the pseudocontinuum M-band emission from a point-source Sm backlighter in the range of 1.4-1.7 keV. The laser ablation process launches 10-70 Mbar shock waves into the CH/Al/CH target. The Al 1s-2p absorption spectra were analyzed using the atomic physics code PRISMSPECT to infer Te and ρ in the Al layer, assuming uniform plasma conditions during shock-wave heating, and to determine when the heat front penetrated the Al layer. The drive foils were simulated with the one-dimensional hydrodynamics code LILAC using a flux-limited (f=0.06 and f=0.1) and nonlocal thermal-transport model [V. N. Goncharov et al., Phys. Plasmas 13, 012702 (2006)]. The predictions of simulated shock-wave heating and the timing of heat-front penetration are compared to the observations. The experimental results for a wide variety of laser-drive conditions and buried depths have shown that the LILAC predictions using f=0.06 and the nonlocal model accurately model the shock-wave heating and timing of the heat-front penetration while the shock is transiting the target. The observed discrepancy between the measured and simulated shock-wave heating at late times of the drive can be explained by the reduced radiative heating due to lateral heat flow in the corona.

  8. Al 1s-2p Absorption Spectroscopy of Shock-Wave Heating and Compression in Laser-Driven Planar Foil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawada, H.; Regan, S.P.; Radha, P.B.

    Time-resolved Al 1s-2p absorption spectroscopy is used to diagnose direct-drive, shock-wave heating and compression of planar targets having nearly Fermi-degenerate plasma conditions (Te ~ 10–40 eV, rho ~ 3–11 g/cm^3) on the OMEGA Laser System [T. R. Boehly et al., Opt. Commun. 133, 495 (1997)]. A planar plastic foil with a buried Al tracer layer was irradiated with peak intensities of 10^14–10^15 W/cm^2 and probed with the pseudocontinuum M-band emission from a point-source Sm backlighter in the range of 1.4–1.7 keV. The laser ablation process launches 10–70 Mbar shock waves into the CH/Al/CH target. The Al 1s-2p absorption spectra were analyzed using the atomic physics code PRISMSPECT to infer Te and rho in the Al layer, assuming uniform plasma conditions during shock-wave heating, and to determine when the heat front penetrated the Al layer. The drive foils were simulated with the one-dimensional hydrodynamics code LILAC using a flux-limited (f = 0.06 and f = 0.1) and nonlocal thermal-transport model [V. N. Goncharov et al., Phys. Plasmas 13, 012702 (2006)]. The predictions of simulated shock-wave heating and the timing of heat-front penetration are compared to the observations. The experimental results for a wide variety of laser-drive conditions and buried depths have shown that the LILAC predictions using f = 0.06 and the nonlocal model accurately model the shock-wave heating and timing of the heat-front penetration while the shock is transiting the target. The observed discrepancy between the measured and simulated shock-wave heating at late times of the drive can be explained by the reduced radiative heating due to lateral heat flow in the corona.

  9. Human Motion Capture Data Tailored Transform Coding.

    PubMed

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
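
    A minimal sketch of the data-dependent-basis idea, using an SVD as the orthogonal basis computation and uniform quantization in place of the paper's full quantization plus entropy-coding stage; the clip data and parameter values are synthetic:

```python
import numpy as np

def mocap_transform_code(clip, k=8, qstep=0.05):
    """Data-dependent transform coding sketch: an SVD of the frames-by-markers
    matrix supplies orthogonal bases; keeping and quantizing k coefficient
    columns stands in for the coefficient coding stage."""
    U, s, Vt = np.linalg.svd(clip, full_matrices=False)
    coef = U[:, :k] * s[:k]                   # transform coefficients
    coef_q = np.round(coef / qstep) * qstep   # uniform quantization
    return coef_q @ Vt[:k]                    # decoder-side reconstruction

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0 * np.pi, 120)
# Toy "clip": 120 frames of 60 strongly correlated marker trajectories.
clip = np.stack([np.sin(3.0 * t + p) for p in rng.uniform(0, 2, 60)], axis=1)
rec = mocap_transform_code(clip, k=4)
print("RMS error:", np.sqrt(np.mean((clip - rec) ** 2)))
```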

  10. Distributed Coding of Compressively Sensed Sources

    NASA Astrophysics Data System (ADS)

    Goukhshtein, Maxim

    In this work we propose a new method for compressing multiple correlated sources with a very low-complexity encoder in the presence of side information. Our approach uses ideas from compressed sensing and distributed source coding. At the encoder, syndromes of the quantized compressively sensed sources are generated and transmitted. The decoder uses side information to predict the compressed sources. The predictions are then used to recover the quantized measurements via a two-stage decoding process consisting of bitplane prediction and syndrome decoding. Finally, guided by the structure of the sources and the side information, the sources are reconstructed from the recovered measurements. As a motivating example, we consider the compression of multispectral images acquired on board satellites, where resources, such as computational power and memory, are scarce. Our experimental results exhibit a significant improvement in the rate-distortion trade-off when compared against approaches with similar encoder complexity.

  11. Onset of hydrodynamic mix in high-velocity, highly compressed inertial confinement fusion implosions.

    PubMed

    Ma, T; Patel, P K; Izumi, N; Springer, P T; Key, M H; Atherton, L J; Benedetti, L R; Bradley, D K; Callahan, D A; Celliers, P M; Cerjan, C J; Clark, D S; Dewald, E L; Dixit, S N; Döppner, T; Edgell, D H; Epstein, R; Glenn, S; Grim, G; Haan, S W; Hammel, B A; Hicks, D; Hsing, W W; Jones, O S; Khan, S F; Kilkenny, J D; Kline, J L; Kyrala, G A; Landen, O L; Le Pape, S; MacGowan, B J; Mackinnon, A J; MacPhee, A G; Meezan, N B; Moody, J D; Pak, A; Parham, T; Park, H-S; Ralph, J E; Regan, S P; Remington, B A; Robey, H F; Ross, J S; Spears, B K; Smalyuk, V; Suter, L J; Tommasini, R; Town, R P; Weber, S V; Lindl, J D; Edwards, M J; Glenzer, S H; Moses, E I

    2013-08-23

    Deuterium-tritium inertial confinement fusion implosion experiments on the National Ignition Facility have demonstrated yields ranging from 0.8 to 7×10^14, and record fuel areal densities of 0.7 to 1.3 g/cm^2. These implosions use hohlraums irradiated with shaped laser pulses of 1.5-1.9 MJ energy. The laser peak power and duration at peak power were varied, as were the capsule ablator dopant concentrations and shell thicknesses. We quantify the level of hydrodynamic instability mix of the ablator into the hot spot from the measured elevated absolute x-ray emission of the hot spot. We observe that DT neutron yield and ion temperature decrease abruptly as the hot spot mix mass increases above several hundred ng. The comparison with radiation-hydrodynamic modeling indicates that low mode asymmetries and increased ablator surface perturbations may be responsible for the current performance.

  12. Theory of strong turbulence by renormalization

    NASA Technical Reports Server (NTRS)

    Tchen, C. M.

    1981-01-01

    The hydrodynamical equations of turbulent motions are inhomogeneous and nonlinear in their inertia and force terms and will generate a hierarchy. A kinetic method was developed to transform the hydrodynamic equations into a master equation governing the velocity distribution as a function of time, position, and velocity as an independent variable. The master equation presents the advantage of being homogeneous and having fewer nonlinear terms, and is therefore simpler for the investigation of closure. After closure by means of a cascade scaling procedure, the kinetic equation is derived and possesses a memory which represents the non-Markovian character of turbulence. The kinetic equation is transformed back to the hydrodynamical form to yield an energy balance in cascade form. Normal and anomalous transports are analyzed. The theory is described for incompressible, compressible, and plasma turbulence. Applications of the method to problems relating to sound generation and the propagation of light in a nonfrozen turbulence are considered.

  13. Stanley Corrsin Award Talk: The role of singularities in hydrodynamics

    NASA Astrophysics Data System (ADS)

    Eggers, Jens

    2017-11-01

    If a tap is opened slowly, a drop will form. The separation of the drop is described by a singularity of the Navier-Stokes equation with a free surface. Shock waves are singular solutions of the equations of ideal, compressible hydrodynamics. These examples show that singularities are characteristic for the tendency of the hydrodynamic equations to develop small scale features spontaneously, starting from smooth initial conditions. As a result, new structures are created, which form the building blocks of more complicated flows. The mathematical structure of singularities is self-similar, and their characteristics are fixed by universal properties. This will be illustrated by physical examples, as well as by applications to engineering problems such as printing, coating, or air entrainment. Finally, more recent developments will be discussed: the increasing complexity underlying the self-similar behavior of some singularities, and the spatial structure of shock waves.

  14. Code Development of Three-Dimensional General Relativistic Hydrodynamics with AMR (Adaptive-Mesh Refinement) and Results from Special and General Relativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dönmez, Orhan

    2004-09-01

    In this paper, the general procedure to solve the general relativistic hydrodynamical (GRH) equations with adaptive-mesh refinement (AMR) is presented. To achieve this, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the GRH equations are obtained by high-resolution shock-capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two, and three dimensions. Results from uniform and AMR grids are compared. It is found that the adaptive grid does a better job when the resolution is increased. Second, the GRH equations are tested using two different test problems, geodesic flow and circular motion of a particle. To do this, the flux part of the GRH equations is coupled with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time.
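
    The HRSC ingredients named above (slope-limited MUSCL reconstruction plus an approximate Riemann flux) can be illustrated on the classical, non-relativistic Euler equations. The sketch below uses an HLL flux in place of the Marquina flux and forward-Euler time stepping for brevity, so it is a simplification of the paper's GRH scheme, not a reproduction of it; all grid parameters are invented:

```python
import numpy as np

gam = 1.4  # ideal-gas adiabatic index

def prim(U):
    """Conserved (rho, rho*v, E) -> primitive (rho, v, p)."""
    rho, mom, E = U
    v = mom / rho
    p = (gam - 1.0) * (E - 0.5 * rho * v**2)
    return rho, v, p

def flux(U):
    rho, v, p = prim(U)
    return np.array([rho * v, rho * v**2 + p, (U[2] + p) * v])

def hll(UL, UR):
    """HLL approximate Riemann flux (a simpler stand-in for Marquina's flux)."""
    rL, vL, pL = prim(UL); rR, vR, pR = prim(UR)
    cL = np.sqrt(gam * pL / rL); cR = np.sqrt(gam * pR / rR)
    sL = np.minimum(vL - cL, vR - cR)
    sR = np.maximum(vL + cL, vR + cR)
    FL, FR = flux(UL), flux(UR)
    Fstar = (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)
    return np.where(sL >= 0, FL, np.where(sR <= 0, FR, Fstar))

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

# Sod shock tube: MUSCL reconstruction + HLL flux, forward-Euler in time.
N, dx, dt = 200, 1.0 / 200, 1.0e-4
x = (np.arange(N) + 0.5) * dx
rho = np.where(x < 0.5, 1.0, 0.125)
p = np.where(x < 0.5, 1.0, 0.1)
U = np.array([rho, np.zeros(N), p / (gam - 1.0)])

for _ in range(2000):  # advance to t = 0.2
    s = minmod(np.diff(U, axis=1, prepend=U[:, :1]),
               np.diff(U, axis=1, append=U[:, -1:]))
    UL = (U + 0.5 * s)[:, :-1]   # left state at each cell interface
    UR = (U - 0.5 * s)[:, 1:]    # right state at each cell interface
    F = hll(UL, UR)
    U[:, 1:-1] -= dt / dx * (F[:, 1:] - F[:, :-1])

print("density range:", U[0].min(), U[0].max())
```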

  15. Hybrid-drive implosion system for ICF targets

    DOEpatents

    Mark, James W.

    1988-08-02

    Hybrid-drive implosion systems (20,40) for ICF targets (10,22,42) are described which permit a significant increase in target gain at fixed total driver energy. The ICF target is compressed in two phases, an initial compression phase and a final peak power phase, with each phase driven by a separate, optimized driver. The targets comprise a hollow spherical ablator (12) surroundingly disposed around fusion fuel (14). The ablator is first compressed to higher density by a laser system (24), or by an ion beam system (44), that in each case is optimized for this initial phase of compression of the target. Then, following compression of the ablator, energy is directly delivered into the compressed ablator by an ion beam driver system (30,48) that is optimized for this second phase of operation of the target. The fusion fuel (14) is driven, at high gain, to conditions wherein fusion reactions occur. This phase separation allows hydrodynamic efficiency and energy deposition uniformity to be individually optimized, thereby securing significant advantages in energy gain. In additional embodiments, the same or separate drivers supply energy for ICF target implosion.

  16. Hybrid-drive implosion system for ICF targets

    DOEpatents

    Mark, James W.

    1988-01-01

    Hybrid-drive implosion systems (20,40) for ICF targets (10,22,42) are described which permit a significant increase in target gain at fixed total driver energy. The ICF target is compressed in two phases, an initial compression phase and a final peak power phase, with each phase driven by a separate, optimized driver. The targets comprise a hollow spherical ablator (12) surroundingly disposed around fusion fuel (14). The ablator is first compressed to higher density by a laser system (24), or by an ion beam system (44), that in each case is optimized for this initial phase of compression of the target. Then, following compression of the ablator, energy is directly delivered into the compressed ablator by an ion beam driver system (30,48) that is optimized for this second phase of operation of the target. The fusion fuel (14) is driven, at high gain, to conditions wherein fusion reactions occur. This phase separation allows hydrodynamic efficiency and energy deposition uniformity to be individually optimized, thereby securing significant advantages in energy gain. In additional embodiments, the same or separate drivers supply energy for ICF target implosion.

  17. Hybrid-drive implosion system for ICF targets

    DOEpatents

    Mark, J.W.K.

    1987-10-14

    Hybrid-drive implosion systems for ICF targets are described which permit a significant increase in target gain at fixed total driver energy. The ICF target is compressed in two phases, an initial compression phase and a final peak power phase, with each phase driven by a separate, optimized driver. The targets comprise a hollow spherical ablator surroundingly disposed around fusion fuel. The ablator is first compressed to higher density by a laser system, or by an ion beam system, that in each case is optimized for this initial phase of compression of the target. Then, following compression of the ablator, energy is directly delivered into the compressed ablator by an ion beam driver system that is optimized for this second phase of operation of the target. The fusion fuel is driven, at high gain, to conditions wherein fusion reactions occur. This phase separation allows hydrodynamic efficiency and energy deposition uniformity to be individually optimized, thereby securing significant advantages in energy gain. In additional embodiments, the same or separate drivers supply energy for ICF target implosion. 3 figs.

  18. Use of hydrodynamic forces to engineer cartilaginous tissues resembling the non-uniform structure and function of meniscus.

    PubMed

    Marsano, Anna; Wendt, David; Raiteri, Roberto; Gottardi, Riccardo; Stolz, Martin; Wirz, Dieter; Daniels, Alma U; Salter, Donald; Jakob, Marcel; Quinn, Thomas M; Martin, Ivan

    2006-12-01

    The aim of this study was to demonstrate that differences in the local composition of bi-zonal fibrocartilaginous tissues result in different local biomechanical properties in compression and tension. Bovine articular chondrocytes were loaded into hyaluronan-based meshes (HYAFF-11) and cultured for 4 weeks in mixed flask, a rotary Cell Culture System (RCCS), or statically. Resulting tissues were assessed histologically, immunohistochemically, by scanning electron microscopy and mechanically in different regions. Local mechanical analyses in compression and tension were performed by indentation-type scanning force microscopy and by tensile tests on punched out concentric rings, respectively. Tissues cultured in mixed flask or RCCS displayed an outer region positively stained for versican and type I collagen, and an inner region positively stained for glycosaminoglycans and types I and II collagen. The outer fibrocartilaginous capsule included bundles (up to 2 microm diameter) of collagen fibers and was stiffer in tension (up to 3.6-fold higher elastic modulus), whereas the inner region was stiffer in compression (up to 3.8-fold higher elastic modulus). Instead, molecule distribution and mechanical properties were similar in the outer and inner regions of statically grown tissues. In conclusion, exposure of articular chondrocyte-based constructs to hydrodynamic flow generated tissues with locally different composition and mechanical properties, resembling some aspects of the complex structure and function of the outer and inner zones of native meniscus.

  19. Symmetry tuning of a near one-dimensional 2-shock platform for code validation at the National Ignition Facility

    DOE PAGES

    Khan, S. F.; MacLaren, S. A.; Salmonson, J. D.; ...

    2016-04-27

    Here, we introduce a new quasi 1-D implosion experimental platform at the National Ignition Facility designed to validate physics models as well as to study various Inertial Confinement Fusion aspects such as implosion symmetry, convergence, hydrodynamic instabilities, and shock timing. The platform has been developed to maintain shell sphericity throughout the compression phase and produce a round hot core at stagnation. This platform utilizes a 2-shock 1 MJ pulse with 340 TW peak power in a near-vacuum Au hohlraum and a CH ablator capsule uniformly doped with 1% Si. We also performed several in-flight radiography, symmetry capsule, and shock timing experiments in order to tune the symmetry of the capsule to near round throughout several epochs of the implosion. Finally, adjusting the relative powers of the inner and outer cones of beams has allowed us to control the drive at the poles and equator of the capsule, thus providing the mechanism to achieve a spherical capsule convergence. Details and results of the tuning experiments are described.

  20. Galaxies and gas in a cold dark matter universe

    NASA Technical Reports Server (NTRS)

    Katz, Neal; Hernquist, Lars; Weinberg, David H.

    1992-01-01

    We use a combined gravity/hydrodynamics code to simulate the formation of structure in a random 22 Mpc cube of a cold dark matter universe. Adiabatic compression and shocks heat much of the gas to temperatures of 10^6-10^7 K, but a fraction of the gas cools radiatively to about 10^4 K and condenses into discrete, highly overdense lumps. We identify these lumps with galaxies. The high-mass end of their baryonic mass function fits the form of the observed galaxy luminosity function. They retain independent identities after their dark halos merge, so gravitational clustering produces groups of galaxies embedded in relatively smooth envelopes of hot gas and dark matter. The galaxy correlation function is approximately an r^-2.1 power law from separations of 35 kpc to 7 Mpc. Galaxy fluctuations are biased relative to dark matter fluctuations by a factor b of about 1.5. We find no significant 'velocity bias' between galaxies and dark matter particles. However, virial analysis of the simulation's richest group leads to an estimated Omega of about 0.3, even though the simulation adopts Omega = 1.

  1. A Computational Study of a Circular Interface Richtmyer-Meshkov Instability in MHD

    NASA Astrophysics Data System (ADS)

    Maxon, William; Black, Wolfgang; Denissen, Nicholas; McFarland, Jacob; Los Alamos National Laboratory Collaboration; University of Missouri Shock Tube Laboratory Team

    2017-11-01

    The Richtmyer-Meshkov instability (RMI) is a hydrodynamic instability that appears in several high energy density applications such as inertial confinement fusion (ICF). In ICF, as the thermonuclear fuel is being compressed it begins to mix due to fluid instabilities including the RMI. This mixing greatly decreases the energy output. The RMI occurs when two fluids of different densities are impulsively accelerated and the pressure and density gradients are misaligned. In magnetohydrodynamics (MHD), the RMI may be suppressed by introducing a magnetic field in an electrically conducting fluid, such as a plasma. This suppression has been studied as a possible mechanism for improving confinement in ICF targets. In this study, ideal MHD simulations are performed with a circular interface impulsively accelerated by a shock wave in the presence of a magnetic field. These simulations are executed with the research code FLAG, a multiphysics, arbitrary Lagrangian/Eulerian hydrocode developed and utilized at Los Alamos National Laboratory. The simulation results will be assessed both quantitatively and qualitatively to examine the stabilization mechanism. These simulations will guide ongoing MHD experiments at the University of Missouri Shock Tube Facility.

  2. The Scylla Multi-Code Comparison Project

    NASA Astrophysics Data System (ADS)

    Maller, Ariyeh; Stewart, Kyle; Bullock, James; Oñorbe, Jose; Scylla Team

    2016-01-01

    Cosmological hydrodynamical simulations are one of the main techniques used to understand galaxy formation and evolution. However, it is far from clear to what extent different numerical techniques and different implementations of feedback yield different results. The Scylla Multi-Code Comparison Project seeks to address this issue by running identical initial-condition simulations with different popular hydrodynamic galaxy formation codes. Here we compare simulations of a Milky Way mass halo using the codes enzo, ramses, art, arepo and gizmo-psph. The different runs produce galaxies with a variety of properties. There are many differences, but also many similarities. For example, we find that in all runs cold flow disks exist; extended gas structures, far beyond the galactic disk, that show signs of rotation. Also, the angular momentum of warm gas in the halo is much larger than the angular momentum of the dark matter. We also find notable differences between runs. The temperature and density distribution of hot gas can differ by over an order of magnitude between codes, and the stellar mass to halo mass relation also varies widely. These results suggest that observations of galaxy gas halos and the stellar mass to halo mass relation can be used to constrain the correct model of feedback.

  3. Compressed domain indexing of losslessly compressed images

    NASA Astrophysics Data System (ADS)

    Schaefer, Gerald

    2001-12-01

    Image retrieval and image compression have been pursued separately in the past. Only little research has been done on a synthesis of the two by allowing image retrieval to be performed directly in the compressed domain of images without the need to uncompress them first. In this paper methods for image retrieval in the compressed domain of losslessly compressed images are introduced. While most image compression techniques are lossy, i.e. discard visually less significant information, lossless techniques are still required in fields like medical imaging or in situations where images must not be changed due to legal reasons. The algorithms in this paper are based on predictive coding methods where a pixel is encoded based on the pixel values of its (already encoded) neighborhood. The first method is based on an understanding that predictively coded data is itself indexable and represents a textural description of the image. The second method operates directly on the entropy encoded data by comparing codebooks of images. Experiments show good image retrieval results for both approaches.
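
    A toy version of the first method, in which predictively coded residues double as a texture descriptor. This sketch uses a one-neighbour predictor and an L1 histogram distance; the paper's predictors and matching are richer, and all data here are synthetic:

```python
import numpy as np

def residual_signature(img, bins=64):
    """Histogram of left-neighbour prediction residuals: exactly the data a
    lossless predictive coder would entropy-code, reused as a texture index."""
    res = img[:, 1:].astype(float) - img[:, :-1].astype(float)
    hist, _ = np.histogram(res, bins=bins, range=(-255, 255), density=True)
    return hist

rng = np.random.default_rng(3)
smooth = rng.normal(128, 2, (64, 64)).clip(0, 255)   # low-activity texture
busy = rng.normal(128, 40, (64, 64)).clip(0, 255)    # high-activity texture
d = lambda a, b: np.abs(residual_signature(a) - residual_signature(b)).sum()
print("smooth vs smooth:", d(smooth, smooth))
print("smooth vs busy  :", d(smooth, busy))
```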

  4. Multidimensional incremental parsing for universal source coding.

    PubMed

    Bae, Soo Hyun; Juang, Biing-Hwang

    2008-10-01

    A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes, maximum decimation matching, hierarchical structure of multidimensional source coding, and dictionary augmentation. As a counterpart of the longest match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. Also, an underlying behavior of the dictionary augmentation scheme for estimating the source statistics is examined. For an m-dimensional source, m augmentative patches are appended into the dictionary at each coding epoch, thus requiring the transmission of a substantial amount of information to the decoder. The property of the hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower dimensional coding procedures in the scheme. In regard to universal lossy source coders, we propose two distortion functions, the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP; one is lossless and the others are lossy. The lossless image compression algorithm does not perform better than the Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, but the images by the one with the local minimax distortion have a good perceptual fidelity among other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.

  5. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-coding modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.

  6. PelePhysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-05-17

    PelePhysics is a suite of physics packages that provides functionality of use to reacting hydrodynamics CFD codes. The initial release includes an interface to reaction rate mechanism evaluation, transport coefficient evaluation, and a generalized equation of state (EOS) facility. Both generic evaluators and interfaces to code from externally available tools (Fuego for chemical rates, EGLib for transport coefficients) are provided.

  7. HUFF, a One-Dimensional Hydrodynamics Code for Strong Shocks

    DTIC Science & Technology

    1978-12-01

    results for two sample problems. The first problem discussed is a one-kiloton nuclear burst in infinite sea-level air. The second problem is the one...of HUFF as an effective first-order hydrodynamic computer code. 1 KT Explosion: The one-kiloton nuclear explosion in infinite sea-level air was

  8. A soft X-ray source based on a low divergence, high repetition rate ultraviolet laser

    NASA Astrophysics Data System (ADS)

    Crawford, E. A.; Hoffman, A. L.; Milroy, R. D.; Quimby, D. C.; Albrecht, G. F.

    The CORK code is utilized to evaluate the applicability of low divergence ultraviolet lasers for efficient production of soft X-rays. The use of the axial hydrodynamic code with a one-zone radial expansion to estimate radial motion and laser energy is examined. The calculation of ionization levels of the plasma and radiation rates by employing the atomic physics and radiation model included in the CORK code is described. Computations using the hydrodynamic code to determine the effect of laser intensity, spot size, and wavelength on plasma electron temperature are provided. The X-ray conversion efficiencies of the lasers are analyzed. It is observed that for a 1 GW laser power the X-ray conversion efficiency is a function of spot size, only weakly dependent on pulse length for time scales exceeding 100 psec, and better conversion efficiencies are obtained at shorter wavelengths. It is concluded that these small lasers focused to 30 micron spot sizes and 10^14 W/cm^2 intensities are useful sources of 1-2 keV radiation.

  9. Simulation and Analysis of Converging Shock Wave Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramsey, Scott D.; Shashkov, Mikhail J.

    2012-06-21

    Results and analysis pertaining to the simulation of the Guderley converging shock wave test problem (and associated code verification hydrodynamics test problems involving converging shock waves) in the LANL ASC radiation-hydrodynamics code xRAGE are presented. One-dimensional (1D) spherical and two-dimensional (2D) axi-symmetric geometric setups are utilized and evaluated in this study, as is an instantiation of the xRAGE adaptive mesh refinement capability. For the 2D simulations, a 'Surrogate Guderley' test problem is developed and used to obviate subtleties inherent to the true Guderley solution's initialization on a square grid, while still maintaining a high degree of fidelity to the original problem, and minimally straining the general credibility of associated analysis and conclusions.

  10. Progenitors of Core-Collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Hirschi, R.; Arnett, D.; Cristini, A.; Georgy, C.; Meakin, C.; Walkington, I.

    2017-02-01

    Massive stars have a strong impact on their surroundings, in particular when they produce a core-collapse supernova at the end of their evolution. In these proceedings, we review the general evolution of massive stars and their properties at collapse as well as the transition between massive and intermediate-mass stars. We also summarise the effects of metallicity and rotation. We then discuss some of the major uncertainties in the modelling of massive stars, with a particular emphasis on the treatment of convection in 1D stellar evolution codes. Finally, we present new 3D hydrodynamic simulations of convection in carbon burning and list key points to take from 3D hydrodynamic studies for the development of new prescriptions for convective boundary mixing in 1D stellar evolution codes.

  11. Analysis of direct-drive capsule compression experiments on the Iskra-5 laser facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gus'kov, S. Yu.; Demchenko, N. N.; Zhidkov, N. V.

    2010-09-15

    We have analyzed and numerically simulated our experiments on the compression of DT-gas-filled glass capsules under irradiation by a small number of beams on the Iskra-5 facility (12 beams) at the second harmonic of an iodine laser (λ = 0.66 μm) for a laser pulse energy of 2 kJ and duration of 0.5 ns in the case of asymmetric irradiation and compression. Our simulations include the construction of a target illumination map and a histogram of the target surface illumination distribution; 1D capsule compression simulations based on the DIANA code corresponding to various target surface regions; and 2D compression simulations based on the NUTCY code corresponding to the illumination conditions. We have succeeded in reproducing the shape of the compressed region at the time of maximum compression and the reduction in neutron yield (compared to the 1D simulations) to the experimentally observed values. For the Iskra-5 conditions, we have considered targets that can provide a more symmetric compression and a higher neutron yield.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, Andrew F.; Marzari, Francesco

    Here, we present two-dimensional hydrodynamic simulations using the Smoothed Particle Hydrodynamics code VINE to model a self-gravitating binary system. We model configurations in which a circumbinary torus+disk surrounds a pair of stars in orbit around each other and a circumstellar disk surrounds each star, similar to that observed for the GG Tau A system. We assume that the disks cool as blackbodies, using rates determined independently at each location in the disk by the time-dependent temperature of the photosphere there. We assume heating due to hydrodynamical processes and to radiation from the two stars, using rates approximated from a measure of the radiation intercepted by the disk at its photosphere.

  13. Transform coding for space applications

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Data compression coding requirements for aerospace applications differ somewhat from the compression requirements for entertainment systems. On the one hand, entertainment applications are bit rate driven with the goal of getting the best quality possible with a given bandwidth. Science applications are quality driven with the goal of getting the lowest bit rate for a given level of reconstruction quality. In the past, the required quality level has been nothing less than perfect, allowing only the use of lossless compression methods (if that). With the advent of better, faster, cheaper missions, an opportunity has arisen for lossy data compression methods to find a use in science applications as requirements for perfect-quality reconstruction run into cost constraints. This paper presents a review of the data compression problem from the space application perspective. Transform coding techniques are described and some simple, integer transforms are presented. The application of these transforms to space-based data compression problems is discussed. Integer transforms have an advantage over conventional transforms in computational complexity. Space applications are different from broadcast or entertainment in that it is desirable to have a simple encoder (in space) and tolerate a more complicated decoder (on the ground) rather than vice versa. Energy compaction of the new transforms is compared with that of the Walsh-Hadamard (WHT), Discrete Cosine (DCT), and Integer Cosine (ICT) transforms.
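
    The computational advantage of integer transforms is easy to see with the Walsh-Hadamard transform, whose matrix contains only +1 and -1, so the forward transform needs no multiplications. A sketch measuring energy compaction on an invented smooth block:

```python
import numpy as np

def wht(n):
    """n x n Walsh-Hadamard matrix (n a power of two); entries are +/-1, so
    applying it needs only integer additions and subtractions."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def compaction(block, H):
    """Fraction of energy held by the largest quarter of transform coefficients."""
    C = H @ block @ H.T
    e = np.sort((C ** 2).ravel())[::-1]
    return e[: e.size // 4].sum() / e.sum()

rng = np.random.default_rng(4)
# A smooth 8x8 block (a gradient plus mild noise), typical of natural imagery.
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 4.0 + rng.normal(0, 1, (8, 8))
print(f"energy in top 25% of WHT coefficients: {compaction(block, wht(8)):.3f}")
```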

  14. Joint image encryption and compression scheme based on a new hyperchaotic system and curvelet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Miao; Tong, Xiaojun

    2017-07-01

    This paper proposes a joint image encryption and compression scheme based on a new hyperchaotic system and the curvelet transform. A new five-dimensional hyperchaotic system based on the Rabinovich system is presented. By means of the proposed hyperchaotic system, a new pseudorandom key stream generator is constructed. The algorithm adopts a diffusion and confusion structure to perform encryption, based on the key stream generator and the proposed hyperchaotic system. The key sequence used for image encryption is related to the plain text. By means of the second-generation curvelet transform, run-length coding, and Huffman coding, the image data are compressed. The joint operation of compression and encryption in a single process is performed. The security test results indicate that the proposed method has high security and a good compression effect.

  15. Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining

    NASA Astrophysics Data System (ADS)

    Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio

    2013-12-01

    Nowadays, networks and terminals with diverse characteristics of bandwidth and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video contents are compressed to save storage capacity and to reduce the bandwidth required for its transmission. Therefore, if these compressed video streams were compressed using scalable video coding schemes, they would be able to adapt to those heterogeneous networks and a wide range of terminals. Since the majority of the multimedia contents are compressed using H.264/AVC, they cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability to scalable bitstreams with temporal scalability in baseline and main profiles by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while maintaining coding efficiency.

  16. Hydrodynamic Flow Control in Marine Mammals

    DTIC Science & Technology

    2008-05-06

    body-bound vorticity (Wolfgang et al. 1999). The vorticity is smoothly propagated along the flexing body toward the tail. This vorticity is eventually...and Reichley 1985; Dolphin 1988; Pauly et al. 1998). Whales lunge toward their prey at 2.6 m/s (Jurasz and Jurasz 1979; Hain et al. 1982). The...unsteady RANS CFD code for ship hydrodynamics. IIHR Hydroscience and Engineering Report 531. Iowa City (IA): The University of Iowa. Pauly D, Trites

  17. Distributed Joint Source-Channel Coding in Wireless Sensor Networks

    PubMed Central

    Zhu, Xuqi; Liu, Yu; Zhang, Lin

    2009-01-01

    Considering that sensors are energy-limited and that wireless channels in sensor networks are noisy, there is an urgent need for a low-complexity coding method with a high compression ratio and noise-resistant features. This paper reviews the progress made in distributed joint source-channel coding, which can address this issue. The main existing deployments, from theory to practice, of distributed joint source-channel coding over independent channels, multiple access channels, and broadcast channels are introduced. To this end, we also present a practical scheme for compressing multiple correlated sources over independent channels. The simulation results demonstrate the desired efficiency. PMID:22408560

  18. Coding visual features extracted from video sequences.

    PubMed

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
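
    The rate-distortion mode decision mentioned above reduces to minimizing a Lagrangian cost J = D + lambda * R over the candidate modes. A minimal sketch; the mode names, distortion/rate numbers, and lambda value are illustrative, not measured:

```python
# Toy rate-distortion optimized mode decision for one coding unit.
candidates = {
    "intra": {"distortion": 12.0, "rate_bits": 410},  # self-contained, costly
    "inter": {"distortion": 15.5, "rate_bits": 150},  # predicted from past frame
    "skip":  {"distortion": 40.0, "rate_bits": 8},    # nearly free, lossy
}
lam = 0.02  # Lagrange multiplier trading distortion against rate

best = min(candidates,
           key=lambda m: candidates[m]["distortion"]
                         + lam * candidates[m]["rate_bits"])
for m, c in candidates.items():
    print(m, "cost =", c["distortion"] + lam * c["rate_bits"])
print("chosen mode:", best)
```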

  19. Application of CHAD hydrodynamics to shock-wave problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trease, H.E.; O`Rourke, P.J.; Sahota, M.S.

    1997-12-31

    CHAD is the latest in a sequence of continually evolving computer codes written to effectively utilize massively parallel computer architectures and the latest grid generators for unstructured meshes. Its applications range from automotive design issues such as in-cylinder and manifold flows of internal combustion engines, vehicle aerodynamics, underhood cooling and passenger compartment heating, ventilation, and air conditioning to shock hydrodynamics and materials modeling. CHAD solves the full unsteady Navier-Stokes equations with the k-epsilon turbulence model in three space dimensions. The code has four major features that distinguish it from the earlier KIVA code, also developed at Los Alamos. First, it is based on a node-centered, finite-volume method in which, like finite element methods, all fluid variables are located at computational nodes. The computational mesh efficiently and accurately handles all element shapes ranging from tetrahedra to hexahedra. Second, it is written in standard Fortran 90 and relies on automatic domain decomposition and a universal communication library written in standard C and MPI for unstructured grids to effectively exploit distributed-memory parallel architectures. Thus the code is fully portable to a variety of computing platforms such as uniprocessor workstations, symmetric multiprocessors, clusters of workstations, and massively parallel platforms. Third, CHAD utilizes a variable explicit/implicit upwind method for convection that improves computational efficiency in flows that have large velocity Courant number variations due to velocity or mesh-size variations. Fourth, CHAD is designed to also simulate shock hydrodynamics involving multimaterial anisotropic behavior under high shear. The authors discuss CHAD capabilities and show several sample calculations illustrating the strengths and weaknesses of CHAD.

  20. Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation.

    PubMed

    Lan, Cuiling; Shi, Guangming; Wu, Feng

    2010-04-01

    Compound images are a combination of text, graphics, and natural images. They present strong anisotropic features, especially in the text and graphics parts. These anisotropic features often render conventional compression inefficient. Thus, this paper proposes a novel coding scheme based on H.264 intraframe coding. In the scheme, two new intramodes are developed to better exploit spatial correlation in compound images. The first is the residual scalar quantization (RSQ) mode, where intrapredicted residues are directly quantized and coded without transform. The second is the base colors and index map (BCIM) mode, which can be viewed as an adaptive color quantization. In this mode, an image block is represented by several representative colors, referred to as base colors, and an index map, which are then compressed. Every block selects its coding mode from the two new modes and the previous intramodes in H.264 by rate-distortion optimization (RDO). Experimental results show that the proposed scheme improves coding efficiency by more than 10 dB at most bit rates for compound images, while keeping performance comparable to H.264 for natural images.
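
    The BCIM mode is essentially an adaptive palettization of a block. A toy sketch with a tiny k-means standing in for whatever clustering the codec actually uses; the block data, colour count, and iteration count are invented:

```python
import numpy as np

def bcim(block, n_colors=4, iters=10):
    """Toy base-colors-and-index-map (BCIM) coding: a small k-means
    palettization. Returns the palette ('base colors') and the index map."""
    pix = block.reshape(-1, block.shape[-1]).astype(float)
    uniq = np.unique(pix, axis=0)
    # Seed the palette with distinct colours (k-means++ would be better).
    palette = uniq[np.linspace(0, len(uniq) - 1, n_colors).astype(int)].copy()
    for _ in range(iters):
        idx = np.argmin(((pix[:, None] - palette[None]) ** 2).sum(-1), axis=1)
        for k in range(n_colors):
            if np.any(idx == k):
                palette[k] = pix[idx == k].mean(axis=0)
    return palette.round().astype(int), idx.reshape(block.shape[:-1])

rng = np.random.default_rng(5)
glyphs = rng.random((16, 16, 1)) < 0.2                 # fake text pixels
block = np.where(glyphs, 0, 255) * np.ones((1, 1, 3))  # black on white, RGB
palette, index_map = bcim(block, n_colors=2)
print("base colors:\n", palette)
print("index map bits/pixel:", int(np.ceil(np.log2(len(palette)))))
```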

  1. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  2. Combining image-processing and image compression schemes

    NASA Technical Reports Server (NTRS)

    Greenspan, H.; Lee, M.-C.

    1995-01-01

    An investigation into the combining of image-processing schemes, specifically an image enhancement scheme, with existing compression schemes is discussed. Results are presented for the pyramid coding scheme, the subband coding scheme, and progressive transmission. Encouraging results are demonstrated for the combination of image enhancement and pyramid image coding schemes, especially at low bit rates. Adding the enhancement scheme to progressive image transmission allows enhanced visual perception at low resolutions. In addition, further processing of the transmitted images, such as edge detection, can gain from the added image resolution via the enhancement.

  3. Parallel design of JPEG-LS encoder on graphics processing units

    NASA Astrophysics Data System (ADS)

    Duan, Hao; Fang, Yong; Huang, Bormin

    2012-01-01

    With recent technical advances in graphics processing units (GPUs), GPUs have outperformed CPUs in terms of compute capability and memory bandwidth. Many successful GPU applications to high performance computing have been reported. JPEG-LS is an ISO/IEC standard for lossless image compression which utilizes adaptive context modeling and run-length coding to improve compression ratio. However, adaptive context modeling causes data dependency among adjacent pixels and the run-length coding has to be performed in a sequential way. Hence, using JPEG-LS to compress large-volume hyperspectral image data is quite time-consuming. We implement an efficient parallel JPEG-LS encoder for lossless hyperspectral compression on an NVIDIA GPU using the compute unified device architecture (CUDA) programming technology. We use the block parallel strategy, as well as such CUDA techniques as coalesced global memory access, parallel prefix sum, and asynchronous data transfer. We also show the relation between GPU speedup and AVIRIS block size, as well as the relation between compression ratio and AVIRIS block size. When AVIRIS images are divided into blocks, each with 64×64 pixels, we gain the best GPU performance with 26.3x speedup over the original CPU code.

  4. General Equation Set Solver for Compressible and Incompressible Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Sondak, Douglas L.; Dorney, Daniel J.

    2002-01-01

    Turbomachines for propulsion applications operate with many different working fluids and flow conditions. The flow may be incompressible, such as in the liquid hydrogen pump in a rocket engine, or supersonic, such as in the turbine which may drive the hydrogen pump. Separate codes have traditionally been used for incompressible and compressible flow solvers. The General Equation Set (GES) method can be used to solve both incompressible and compressible flows, and it is not restricted to perfect gases, as are many compressible-flow turbomachinery solvers. An unsteady GES turbomachinery flow solver has been developed and applied to both air and water flows through turbines. It has been shown to be an excellent alternative to maintaining two separate codes.

  5. Unsteady non-Newtonian hydrodynamics in granular gases.

    PubMed

    Astillero, Antonio; Santos, Andrés

    2012-02-01

    The temporal evolution of a dilute granular gas, both in a compressible flow (uniform longitudinal flow) and in an incompressible flow (uniform shear flow), is investigated by means of the direct simulation Monte Carlo method to solve the Boltzmann equation. Emphasis is laid on the identification of a first "kinetic" stage (where the physical properties are strongly dependent on the initial state) subsequently followed by an unsteady "hydrodynamic" stage (where the momentum fluxes are well-defined non-Newtonian functions of the rate of strain). The simulation data are seen to support this two-stage scenario. Furthermore, the rheological functions obtained from simulation are well described by an approximate analytical solution of a model kinetic equation. © 2012 American Physical Society

  6. Experimental measurements of hydrodynamic instabilities on NOVA of relevance to astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budil, K S; Cherfils, C; Drake, R P

    1998-09-11

    Large lasers such as Nova allow the possibility of achieving regimes of high energy densities in plasmas of millimeter spatial scales and nanosecond time scales. In those plasmas where thermal conductivity and viscosity do not play a significant role, the hydrodynamic evolution is suitable for benchmarking hydrodynamics modeling in astrophysical codes. Several experiments on Nova examine hydrodynamically unstable interfaces. A typical Nova experiment uses a gold millimeter-scale hohlraum to convert the laser energy to a 200 eV blackbody source lasting about a nanosecond. The x-rays ablate a planar target, generating a series of shocks and accelerating the target. The evolving areal density is diagnosed by time-resolved radiography, using a second x-ray source. Data from several experiments are presented and diagnostic techniques are discussed.

  7. Nonlinear pulse compression in pulse-inversion fundamental imaging.

    PubMed

    Cheng, Yun-Chien; Shen, Che-Chou; Li, Pai-Chi

    2007-04-01

    Coded excitation can be applied in ultrasound contrast agent imaging to enhance the signal-to-noise ratio with minimal destruction of the microbubbles. Although the axial resolution is usually compromised by the requirement for a long coded transmit waveform, it can be restored by using a compression filter to compress the received echo. However, nonlinear responses from microbubbles may cause difficulties in pulse compression and result in severe range side-lobe artifacts, particularly in pulse-inversion-based (PI) fundamental imaging. The efficacy of pulse compression in nonlinear contrast imaging was evaluated by investigating several factors relevant to PI fundamental generation using both in-vitro experiments and simulations. The results indicate that the acoustic pressure and the bubble size can alter the nonlinear characteristics of microbubbles and change the performance of the compression filter. When nonlinear responses from contrast agents are enhanced by using a higher acoustic pressure, or when more microbubbles are near the resonance size of the transmit frequency, higher range side lobes are produced in both linear imaging and PI fundamental imaging. On the other hand, contrast detection in PI fundamental imaging depends significantly on the magnitude of the nonlinear responses of the bubbles, and thus the resultant contrast-to-tissue ratio (CTR) still increases with acoustic pressure and the nonlinear resonance of microbubbles. It should be noted, however, that the CTR in PI fundamental imaging after compression is consistently lower than that before compression due to obvious side-lobe artifacts. Therefore, the use of coded excitation is not beneficial in PI fundamental contrast detection.
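
    As background on the pulse-compression step discussed above, the toy example below builds a linear FM chirp, delays and attenuates it to imitate an echo, and compresses it with a matched (correlation) filter. All parameter values are illustrative assumptions, not the paper's experimental settings.

        import numpy as np

        fs = 50e6                            # sampling rate [Hz]
        t = np.arange(0, 10e-6, 1 / fs)      # 10-us coded transmit window
        f0, f1 = 2e6, 4e6                    # linear chirp, 2 -> 4 MHz

        # Long coded transmit waveform: a linear FM chirp.
        chirp = np.sin(2 * np.pi * (f0 + 0.5 * (f1 - f0) / t[-1] * t) * t)

        # Toy echo: the chirp returns delayed and attenuated.
        delay = 20e-6
        echo = np.zeros(int(40e-6 * fs))
        i0 = int(delay * fs)
        echo[i0:i0 + t.size] = 0.3 * chirp

        # Pulse compression: correlate the echo with the transmit replica.
        compressed = np.correlate(echo, chirp, mode='same')

        # The compressed peak sits near the echo centre, i.e. at roughly
        # delay plus half the pulse duration (~25 us here).
        t_peak = np.argmax(np.abs(compressed)) / fs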

  8. Scan-Line Methods in Spatial Data Systems

    DTIC Science & Technology

    1990-09-04

    algorithms in detail to show some of the implementation issues. Data Compression: Storage and transmission times can be reduced by using compression ... goes through the data. Luckily, there are good one-directional compression algorithms, such as run-length coding, in which each scan line can be ... independently compressed. These are the algorithms to use in a parallel scan-line system. Data compression is usually only used for long-term storage of
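
    A minimal version of the one-directional run-length coding mentioned in this record, applied independently per scan line (so lines can be processed in parallel), could look like this sketch; the function names are ours.

        def rle_encode(scan_line):
            """Run-length encode one scan line as (value, run) pairs; each
            line compresses independently, enabling parallel processing."""
            runs = []
            value, run = scan_line[0], 1
            for pixel in scan_line[1:]:
                if pixel == value:
                    run += 1
                else:
                    runs.append((value, run))
                    value, run = pixel, 1
            runs.append((value, run))
            return runs

        def rle_decode(runs):
            out = []
            for value, run in runs:
                out.extend([value] * run)
            return out

        line = [0, 0, 0, 7, 7, 1, 1, 1, 1]
        assert rle_decode(rle_encode(line)) == line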

  9. Data Compression Using the Dictionary Approach Algorithm

    DTIC Science & Technology

    1990-12-01

    Compression Technique: The LZ77 is an OPM/L data compression scheme suggested by Ziv and Lempel. A slightly modified ... June 1984. 12. Witten I. H., Neal R. M. and Cleary J. G., Arithmetic Coding for Data Compression, Communications of the ACM, June 1987. 13. Ziv J. and Lempel A. ... AD-A242 539, Naval Postgraduate School, Monterey, California, November 1991. Thesis: Data Compression Using the Dictionary Approach Algorithm.
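
    For reference, a minimal (and deliberately unoptimized) version of the LZ77 scheme named in this record, emitting (offset, length, next-symbol) triples over a sliding window, might be sketched as follows; window sizes and names are illustrative.

        def lz77_encode(data, window=255, lookahead=15):
            """Minimal LZ77: emit (offset, length, next_char) triples,
            where offset/length point back into the sliding window."""
            i, out = 0, []
            while i < len(data):
                best_off, best_len = 0, 0
                for j in range(max(0, i - window), i):
                    length = 0
                    while (length < lookahead and i + length < len(data)
                           and data[j + length] == data[i + length]):
                        length += 1
                    if length > best_len:
                        best_off, best_len = i - j, length
                nxt = data[i + best_len] if i + best_len < len(data) else ''
                out.append((best_off, best_len, nxt))
                i += best_len + 1
            return out

        def lz77_decode(triples):
            out = []
            for off, length, nxt in triples:
                for _ in range(length):       # handles overlapping copies
                    out.append(out[-off])
                if nxt:
                    out.append(nxt)
            return ''.join(out)

        msg = "abracadabra abracadabra"
        assert lz77_decode(lz77_encode(msg)) == msg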

  10. BEARCLAW: Boundary Embedded Adaptive Refinement Conservation LAW package

    NASA Astrophysics Data System (ADS)

    Mitran, Sorin

    2011-04-01

    The BEARCLAW package is a multidimensional, Eulerian AMR-capable computational code written in Fortran to solve hyperbolic systems for astrophysical applications. It is part of AstroBEAR, a hydrodynamic & magnetohydrodynamic code environment designed for a variety of astrophysical applications which allows simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either cartesian or curvilinear coordinates.

  11. Blast Fragmentation Modeling and Analysis

    DTIC Science & Technology

    2010-10-31

    weapons device containing a multiphase blast explosive (MBX). 1. INTRODUCTION: The ARL Survivability, Lethality and Analysis Directorate (SLAD) is ... velocity. In order to simulate this highly complex phenomenon, the exploding cylinder is modeled with the hydrodynamics code ALE3D, an arbitrary Lagrangian-Eulerian multiphysics code developed at Lawrence Livermore National Laboratory. ALE3D includes physical properties, constitutive models for

  12. Wavelet-based image compression using shuffling and bit plane correlation

    NASA Astrophysics Data System (ADS)

    Kim, Seungjong; Jeong, Jechang

    2000-12-01

    In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on the quantized coefficients, and (2) choosing the arithmetic coding context according to the direction of maximum correlation. The experimental results are comparable, and for some images with low correlation superior, to those of existing coders.

  13. Turbulence modeling for hypersonic flight

    NASA Technical Reports Server (NTRS)

    Bardina, Jorge E.

    1992-01-01

    The objective of the present work is to develop, verify, and incorporate two-equation turbulence models which account for the effect of compressibility at high speeds into a three-dimensional Reynolds-averaged Navier-Stokes code, and to provide documented model descriptions and numerical procedures so that they can be implemented into the National Aerospace Plane (NASP) codes. A summary of accomplishments follows: (1) four codes were tested and evaluated against a flat-plate boundary-layer flow and an external supersonic flow; (2) a code named RANS was chosen because of its speed, accuracy, and versatility; (3) the code was extended from thin boundary layer to full Navier-Stokes; (4) the k-omega two-equation turbulence model was implemented into the base code; (5) a 24-degree laminar compression-corner flow was simulated and compared to other numerical simulations; and (6) work is in progress on writing up the numerical method of the base code, including the turbulence model.

  14. Prediction of material strength and fracture of glass using the SPHINX smooth particle hydrodynamics code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.A.; Wingate, C.A.

    1994-08-01

    The design of many military devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics, that are used in armor packages; glass that is used in truck and jeep windshields and in helicopters; and rock and concrete that are used in underground bunkers. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass, and data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, the authors did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  15. Magneto-hydrodynamic simulations of Heavy Ion Collisions with ECHO-QGP

    NASA Astrophysics Data System (ADS)

    Inghirami, G.; Del Zanna, L.; Beraudo, A.; Haddadi Moghaddam, M.; Becattini, F.; Bleicher, M.

    2018-05-01

    It is believed that very strong magnetic fields may induce many interesting physical effects in the Quark Gluon Plasma, like the Chiral Magnetic Effect, the Chiral Separation Effect, a modification of the critical temperature or changes in the collective flow of the emitted particles. However, in the hydrodynamic numerical simulations of Heavy Ion Collisions the magnetic fields have been either neglected or considered as external fields which evolve independently from the dynamics of the fluid. To address this issue, we recently modified the ECHO-QGP code, including for the first time the effects of electromagnetic fields in a consistent way, although in the limit of an infinite electrical conductivity of the plasma (ideal magnetohydrodynamics). In this proceedings paper we illustrate the underlying 3+1 formalisms of the current version of the code and we present the results of its basic preliminary application in a simple case. We conclude with a brief discussion of the possible further developments and future uses of the code, from RHIC to FAIR collision energies.

  16. 2D Implosion Simulations with a Kinetic Particle Code

    NASA Astrophysics Data System (ADS)

    Sagert, Irina; Even, Wesley; Strother, Terrance

    2017-10-01

    Many problems in laboratory and plasma physics are subject to flows that move between the continuum and the kinetic regime. We discuss two-dimensional (2D) implosion simulations that were performed using a Monte Carlo kinetic particle code. The application of kinetic transport theory is motivated, in part, by the occurrence of non-equilibrium effects in inertial confinement fusion (ICF) capsule implosions, which cannot be fully captured by hydrodynamics simulations. Kinetic methods, on the other hand, are able to describe both continuum and rarefied flows. We perform simple 2D disk implosion simulations using one particle species and compare the results to simulations with the hydrodynamics code RAGE. The impact of the particle mean free path on the implosion is also explored. In a second study, we focus on the formation of fluid instabilities from induced perturbations. I.S. acknowledges support through the Director's fellowship from Los Alamos National Laboratory. This research used resources provided by the LANL Institutional Computing Program.

  17. An efficient system for reliably transmitting image and video data over low bit rate noisy channels

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.

    1994-01-01

    This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.

  18. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1997-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
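
    A compact sketch of the double-difference idea described in this patent record, taking a cross-delta between two correlated data sets followed by an adjacent-delta, together with the inverse post-decoding step, is given below; the names and the convention of keeping the first sample verbatim are our assumptions, not the patent's exact formulation.

        import numpy as np

        def double_difference(band_a, band_b):
            """Cross-delta between two correlated data sets, then an
            adjacent-delta along the result (order is interchangeable)."""
            cross = band_b - band_a                    # cross-delta
            return np.diff(cross, prepend=cross[:1])   # adjacent-delta

        def recover(band_a, dd):
            """Inverse post-decoding: undo the adjacent-delta, then add
            back the first data set."""
            return band_a + np.cumsum(dd)

        a = np.array([10, 12, 15, 19], dtype=np.int64)   # e.g. band 1
        b = np.array([11, 14, 18, 23], dtype=np.int64)   # e.g. band 2
        assert np.array_equal(recover(a, double_difference(a, b)), b)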

  19. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.

  20. Microsecond ramp compression of a metallic liner driven by a 5 MA current on the SPHINX machine using a dynamic load current multiplier pulse shaping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Almeida, T.; Lassalle, F.; Morell, A.

    SPHINX is a 6 MA, 1-μs Linear Transformer Driver (LTD) operated by the CEA Gramat (France) and primarily used for imploding Z-pinch loads for radiation effects studies. Among the options currently being evaluated to improve the generator's performance are an upgrade to a 20 MA, 1-μs LTD machine and various power amplification schemes, including a compact Dynamic Load Current Multiplier (DLCM). A method for performing magnetic ramp compression experiments, without modifying the generator operation scheme, was developed using the DLCM to shape the initial current pulse in order to obtain the desired load current profile. In this paper, we discuss the overall configuration that was selected for these experiments, including the choice of a coaxial cylindrical geometry for the load and its return current electrode. We present both 3-D magneto-hydrodynamic and 1-D Lagrangian hydrodynamic simulations which helped guide the design of the experimental configuration. Initial results obtained over a set of experiments on an aluminium cylindrical liner, ramp-compressed to a peak pressure of 23 GPa, are presented and analyzed. Details of the electrical and laser Doppler interferometer setups used to monitor and diagnose the ramp compression experiments are provided. In particular, the configuration used to field both homodyne and heterodyne velocimetry diagnostics in the reduced access available within the liner's interior is described. Current profiles measured at various critical locations across the system, particularly the load current, enabled comprehensive tracking of the current circulation and demonstrate adequate pulse shaping by the DLCM. The liner inner free-surface velocity measurements obtained from the heterodyne velocimeter agree with the hydrocode results obtained using the measured load current as the input. An extensive hydrodynamic analysis is carried out to examine information such as pressure and particle velocity history profiles and magnetic diffusion across the liner. The potential of the technique in terms of applications and achievable ramp pressure levels lies in the prospects for improving the DLCM efficiency through the use of a closing switch (currently under development), reducing the load dimensions, and optimizing the diagnostics.

  1. JPEG2000 still image coding quality.

    PubMed

    Chen, Tzong-Jer; Lin, Sheng-Chieh; Lin, You-Chen; Cheng, Ren-Gui; Lin, Li-Hui; Wu, Wei

    2013-10-01

    This work compares the image quality delivered by two popular JPEG2000 programs. Two medical image compression algorithms are both coded using JPEG2000, but they differ in interface, convenience, speed of computation, and in characteristic options influenced by the encoder, quantization, tiling, etc. The differences in image quality and compression ratio are also affected by the modality and the compression algorithm implementation. Do they provide the same quality? The qualities of compressed medical images from two image compression programs named Apollo and JJ2000 were evaluated extensively using objective metrics. These algorithms were applied to three medical image modalities at various compression ratios ranging from 10:1 to 100:1. Following that, the quality of the reconstructed images was evaluated using five objective metrics. The Spearman rank correlation coefficients were measured under every metric in the two programs. We found that JJ2000 and Apollo exhibited indistinguishable image quality for all images evaluated using the above five metrics (r > 0.98, p < 0.001). It can be concluded that the image quality of the JJ2000 and Apollo algorithms is statistically equivalent for medical image compression.
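
    The Spearman rank correlation analysis described above can be reproduced in a few lines; the scores below are invented placeholders, not the paper's data.

        import numpy as np
        from scipy.stats import spearmanr

        # Hypothetical per-image quality scores from the two codecs under
        # one metric (e.g. PSNR at a fixed compression ratio).
        apollo_scores = np.array([38.1, 35.4, 33.0, 30.2, 27.9, 25.6])
        jj2000_scores = np.array([38.0, 35.6, 32.8, 30.1, 28.0, 25.5])

        rho, p = spearmanr(apollo_scores, jj2000_scores)
        print(f"Spearman r = {rho:.3f}, p = {p:.4f}")  # r near 1: same ranking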

  2. Comparing Split and Unsplit Numerical Methods for Simulating Low and High Mach Number Turbulent Flows in Xrage

    NASA Astrophysics Data System (ADS)

    Saenz, Juan; Grinstein, Fernando; Dolence, Joshua; Rauenzahn, Rick; Masser, Thomas; Francois, Marianne; LANL Team

    2017-11-01

    We report progress in evaluating an unsplit hydrodynamic solver being implemented in the radiation adaptive grid Eulerian (xRAGE) code, and compare it to a split scheme. xRAGE is an Eulerian hydrodynamics code used for implicit large eddy simulations (ILES) of multi-material, multi-physics flows where low and high Mach number (Ma) processes and instabilities interact and co-exist. The hydrodynamic solver in xRAGE uses a directionally split, second-order Godunov, finite volume (FV) scheme. However, a standard, unsplit, Godunov-type FV scheme with 2nd- and 3rd-order reconstruction options, low-Ma correction and a variety of Riemann solvers has recently become available. To evaluate the hydrodynamic solvers for turbulent low-Ma flows, we use simulations of the Taylor-Green Vortex (TGV), where there is a transition to turbulence via vortex stretching and production of small-scale eddies. We also simulate a high-low Ma shock-tube flow, where a shock passing over a perturbed surface generates a baroclinic Richtmyer-Meshkov instability (RMI); after the shock has passed, the turbulence in the accelerated interface region resembles Rayleigh-Taylor (RT) instability. We compare turbulence spectra and decay in simulated TGV flows, and we present progress in simulating the high-low Ma RMI-RT flow. LANL is operated by LANS LLC for the U.S. DOE NNSA under Contract No. DE-AC52-06NA25396.

  3. Experiences and results multitasking a hydrodynamics code on global and local memory machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandell, D.

    1987-01-01

    A one-dimensional, time-dependent Lagrangian hydrodynamics code using a Godunov solution method has been multitasked for the Cray X-MP/48, the Intel iPSC hypercube, the Alliant FX series and the IBM RP3 computers. Actual multitasking results have been obtained for the Cray, Intel and Alliant computers, and simulated results were obtained for the Cray and RP3 machines. The differences in the methods required to multitask on each of the machines are discussed. Results are presented for a sample problem involving a shock wave moving down a channel. Comparisons are made between theoretical speedups, predicted by Amdahl's law, and the actual speedups obtained. The problems of debugging on the different machines are also described.

  4. [Lossless ECG compression algorithm with anti-electromagnetic interference].

    PubMed

    Guan, Shu-An

    2005-03-01

    Based on a study of ECG signal features, a new lossless ECG compression algorithm is put forward here. We apply a second-order difference operation with anti-electromagnetic-interference properties to the original ECG signals and then compress the result with an escape-based coding model. In spite of serious 50 Hz interference, the algorithm is still capable of obtaining a high compression ratio.
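
    A lossless second-order difference transform of the kind described above, with its exact inverse, can be sketched as follows; keeping the first two samples verbatim is our convention for making the transform invertible, not necessarily the paper's.

        import numpy as np

        def second_order_difference(ecg):
            """Second-order difference: d[n] = x[n] - 2x[n-1] + x[n-2].
            The first two samples pass through verbatim, so the transform
            is exactly invertible (lossless)."""
            d = np.empty_like(ecg)
            d[:2] = ecg[:2]
            d[2:] = ecg[2:] - 2 * ecg[1:-1] + ecg[:-2]
            return d

        def inverse(d):
            x = np.empty_like(d)
            x[:2] = d[:2]
            for n in range(2, d.size):
                x[n] = d[n] + 2 * x[n - 1] - x[n - 2]
            return x

        ecg = np.array([512, 515, 521, 530, 544, 560, 575], dtype=np.int64)
        assert np.array_equal(inverse(second_order_difference(ecg)), ecg)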

  5. Solutions of conformal Israel-Stewart relativistic viscous fluid dynamics

    NASA Astrophysics Data System (ADS)

    Marrochio, Hugo; Noronha, Jorge; Denicol, Gabriel S.; Luzum, Matthew; Jeon, Sangyong; Gale, Charles

    2015-01-01

    We use symmetry arguments developed by Gubser to construct the first radially expanding explicit solutions of the Israel-Stewart formulation of hydrodynamics. Along with a general semi-analytical solution, an exact analytical solution is given which is valid in the cold plasma limit, where viscous effects from shear viscosity and the relaxation time coefficient are important. The radially expanding solutions presented in this paper can be used as nontrivial checks of numerical algorithms employed in hydrodynamic simulations of the quark-gluon plasma formed in ultrarelativistic heavy ion collisions. We show this explicitly by comparing such analytic and semi-analytic solutions with the corresponding numerical solutions obtained using the MUSIC viscous hydrodynamics simulation code.

  6. Interplay of Laser-Plasma Interactions and Inertial Fusion Hydrodynamics.

    PubMed

    Strozzi, D J; Bailey, D S; Michel, P; Divol, L; Sepke, S M; Kerbel, G D; Thomas, C A; Ralph, J E; Moody, J D; Schneider, M B

    2017-01-13

    The effects of laser-plasma interactions (LPI) on the dynamics of inertial confinement fusion hohlraums are investigated via a new approach that self-consistently couples reduced LPI models into radiation-hydrodynamics numerical codes. The interplay between hydrodynamics and LPI-specifically stimulated Raman scatter and crossed-beam energy transfer (CBET)-mostly occurs via momentum and energy deposition into Langmuir and ion acoustic waves. This spatially redistributes energy coupling to the target, which affects the background plasma conditions and thus, modifies laser propagation. This model shows reduced CBET and significant laser energy depletion by Langmuir waves, which reduce the discrepancy between modeling and data from hohlraum experiments on wall x-ray emission and capsule implosion shape.

  7. Highly Efficient Compression Algorithms for Multichannel EEG.

    PubMed

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
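
    The parametric measures named above are standard; for concreteness, a sketch of percent-root-mean-square distortion (PRD), RMSE, and PSNR on one channel follows, with an invented test signal.

        import numpy as np

        def prd(original, reconstructed):
            """Percent root-mean-square distortion."""
            return 100 * np.sqrt(np.sum((original - reconstructed) ** 2)
                                 / np.sum(original ** 2))

        def rmse(original, reconstructed):
            return np.sqrt(np.mean((original - reconstructed) ** 2))

        def psnr(original, reconstructed):
            peak = np.max(np.abs(original))
            return 20 * np.log10(peak / rmse(original, reconstructed))

        x = np.sin(np.linspace(0, 8 * np.pi, 1000))       # stand-in channel
        y = x + np.random.normal(0, 0.01, x.size)         # "reconstruction"
        print(prd(x, y), rmse(x, y), psnr(x, y))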

  8. Compton scattering measurements from dense plasmas

    DOE PAGES

    Glenzer, S. H.; Neumayer, P.; Doppner, T.; ...

    2008-06-12

    Here, Compton scattering techniques have been developed for accurate measurements of densities and temperatures in dense plasmas. One future challenge is the application of this technique to characterize compressed matter on the National Ignition Facility, where hydrogen and beryllium will approach extremely dense states of matter of up to 1000 g/cc. In this regime, the density, compressibility, and capsule fuel adiabat may be directly measured from the Compton scattered spectrum of a high-energy x-ray line source. Specifically, the scattered spectra directly reflect the electron velocity distribution. In non-degenerate plasmas, the width provides an accurate measure of the electron temperatures, while in partially Fermi degenerate systems that occur in laser-compressed matter it provides the Fermi energy and hence the electron density. Both of these regimes have been accessed in experiments at the Omega laser by employing isochorically heated solid-density beryllium and moderately compressed beryllium foil targets. In the latter experiment, compressions by a factor of 3 at pressures of 40 Mbar have been measured in excellent agreement with radiation hydrodynamic modeling.

  9. Solwnd: A 3D Compressible MHD Code for Solar Wind Studies. Version 1.0: Cartesian Coordinates

    NASA Technical Reports Server (NTRS)

    Deane, Anil E.

    1996-01-01

    Solwnd 1.0 is a three-dimensional compressible MHD code written in Fortran for studying the solar wind. Time-dependent boundary conditions are available. The computational algorithm is based on Flux Corrected Transport and the code is based on the existing code of Zalesak and Spicer. The flow considered is that of shear flow with incoming flow that perturbs this base flow. Several test cases corresponding to pressure-balanced magnetic structures with velocity shear flow and various inflows, including Alfven waves, are presented. Version 1.0 of solwnd considers a rectangular Cartesian geometry. Future versions of solwnd will consider a spherical geometry. Some discussion of this issue is presented.

  10. Low bit rate coding of Earth science images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1993-01-01

    In this paper, the authors discuss compression based on some new ideas in vector quantization and their incorporation in a sub-band coding framework. Several variations are considered, which collectively address many of the individual compression needs within the earth science community. The approach taken in this work is based on some recent advances in the area of variable-rate residual vector quantization (RVQ). This new RVQ method is considered separately and in conjunction with sub-band image decomposition. Very good results are achieved in coding a variety of earth science images. The last section of the paper provides some comparisons that illustrate the improvement in performance attributable to this approach relative to the JPEG coding standard.
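
    Residual vector quantization as referenced above can be sketched in a few lines: each stage quantizes the residual left by the previous stages. The codebooks below are random placeholders; a real coder would train them and, as in the paper's variable-rate variant, allocate different rates per stage.

        import numpy as np

        def nearest(codebook, vecs):
            """Index of the nearest codeword for each vector."""
            d = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            return np.argmin(d, axis=1)

        def rvq_encode(vecs, codebooks):
            """Residual VQ: each stage quantizes what the previous
            stages left behind."""
            residual, indices = vecs.copy(), []
            for cb in codebooks:
                idx = nearest(cb, residual)
                indices.append(idx)
                residual = residual - cb[idx]
            return indices

        def rvq_decode(indices, codebooks):
            return sum(cb[idx] for cb, idx in zip(codebooks, indices))

        rng = np.random.default_rng(0)
        vecs = rng.normal(size=(100, 4))
        codebooks = [rng.normal(size=(8, 4)), 0.3 * rng.normal(size=(8, 4))]
        approx = rvq_decode(rvq_encode(vecs, codebooks), codebooks)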

  11. Applications of wavelet-based compression to multidimensional Earth science data

    NASA Technical Reports Server (NTRS)

    Bradley, Jonathan N.; Brislawn, Christopher M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.

  12. Moving-mesh cosmology: characteristics of galaxies and haloes

    NASA Astrophysics Data System (ADS)

    Kereš, Dušan; Vogelsberger, Mark; Sijacki, Debora; Springel, Volker; Hernquist, Lars

    2012-09-01

    We discuss cosmological hydrodynamic simulations of galaxy formation performed with the new moving-mesh code AREPO, which promises higher accuracy compared with the traditional smoothed particle hydrodynamics (SPH) technique that has been widely employed for this problem. In this exploratory study, we deliberately limit the complexity of the physical processes followed by the code for ease of comparison with previous calculations, and include only cooling of gas with a primordial composition, heating by a spatially uniform ultraviolet background, and a simple subresolution model for regulating star formation in the dense interstellar medium. We use an identical set of physics in corresponding simulations carried out with the well-tested SPH code GADGET, adopting also the same high-resolution gravity solver. We are thus able to compare both simulation sets on an object-by-object basis, allowing us to cleanly isolate the impact of different hydrodynamical methods on galaxy and halo properties. In accompanying papers, Vogelsberger et al. and Sijacki et al., we focus on an analysis of the global baryonic statistics predicted by the simulation codes, and complementary idealized simulations that highlight the differences between the hydrodynamical schemes. Here we investigate their influence on the baryonic properties of simulated galaxies and their surrounding haloes. We find that AREPO leads to significantly higher star formation rates for galaxies in massive haloes and to more extended gaseous discs in galaxies, which also feature a thinner and smoother morphology than their GADGET counterparts. Consequently, galaxies formed in AREPO have larger sizes and higher specific angular momentum than their SPH counterparts. Interestingly, the more efficient cooling flows in AREPO yield higher densities and lower entropies in halo centres compared to GADGET, whereas the opposite trend is found in halo outskirts. The cooling differences leading to higher star formation rates of massive galaxies in AREPO also slightly increase the baryon content within the virial radius of massive haloes. We show that these differences persist as a function of numerical resolution. While both codes agree to acceptable accuracy on a number of baryonic properties of cosmic structures, our results thus clearly demonstrate that galaxy formation simulations greatly benefit from the use of more accurate hydrodynamical techniques such as AREPO and call into question the reliability of galaxy formation studies in a cosmological context using traditional standard formulations of SPH, such as the one implemented in GADGET. Our new moving-mesh simulations demonstrate that a population of extended gaseous discs of galaxies in large volume cosmological simulations can be formed even without energetic feedback in the form of galactic winds, although such outflows appear required to obtain realistic stellar masses.

  13. Parabolized Navier-Stokes Code for Computing Magneto-Hydrodynamic Flowfields

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B. (Technical Monitor); Tannehill, J. C.

    2003-01-01

    This report consists of two published papers, 'Computation of Magnetohydrodynamic Flows Using an Iterative PNS Algorithm' and 'Numerical Simulation of Turbulent MHD Flows Using an Iterative PNS Algorithm'.

  14. Channel coding and data compression system considerations for efficient communication of planetary imaging data

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1974-01-01

    End-to-end system considerations involving channel coding and data compression are reported which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft. In addition to presenting new and potentially significant system considerations, this report attempts to fill a need for a comprehensive tutorial which makes much of this very subject accessible to readers whose disciplines lie outside of communication theory.

  15. Nada: A new code for studying self-gravitating tori around black holes

    NASA Astrophysics Data System (ADS)

    Montero, Pedro J.; Font, José A.; Shibata, Masaru

    2008-09-01

    We present a new two-dimensional numerical code called Nada designed to solve the full Einstein equations coupled to the general relativistic hydrodynamics equations. The code is mainly intended for studies of self-gravitating accretion disks (or tori) around black holes, although it is also suitable for regular spacetimes. Concerning technical aspects, the Einstein equations are formulated and solved using a formulation of the standard 3+1 Arnowitt-Deser-Misner canonical formalism, the so-called Baumgarte-Shapiro-Shibata-Nakamura approach. A key feature of the code is that derivative terms in the spacetime evolution equations are computed using a fourth-order centered finite difference approximation, in conjunction with the Cartoon method to impose the axisymmetry condition under Cartesian coordinates (the choice in Nada), and the puncture/moving puncture approach to carry out black hole evolutions. Correspondingly, the general relativistic hydrodynamics equations are written in flux-conservative form and solved with high-resolution shock-capturing schemes. We perform and discuss a number of tests to assess the accuracy and expected convergence of the code, namely (single) black hole evolutions, shock tubes, evolutions of both spherical and rotating relativistic stars in equilibrium, and the gravitational collapse of a spherical relativistic star leading to the formation of a black hole. In addition, paving the way for specific applications of the code, we also present results from fully general relativistic numerical simulations of a system formed by a black hole surrounded by a self-gravitating torus in equilibrium.

  16. A point-centered arbitrary Lagrangian Eulerian hydrodynamic approach for tetrahedral meshes

    DOE PAGES

    Morgan, Nathaniel R.; Waltz, Jacob I.; Burton, Donald E.; ...

    2015-02-24

    We present a three dimensional (3D) arbitrary Lagrangian Eulerian (ALE) hydrodynamic scheme suitable for modeling complex compressible flows on tetrahedral meshes. The new approach stores the conserved variables (mass, momentum, and total energy) at the nodes of the mesh and solves the conservation equations on a control volume surrounding the point. This type of an approach is termed a point-centered hydrodynamic (PCH) method. The conservation equations are discretized using an edge-based finite element (FE) approach with linear basis functions. All fluxes in the new approach are calculated at the center of each tetrahedron. A multidirectional Riemann-like problem is solved at the center of the tetrahedron. The advective fluxes are calculated by solving a 1D Riemann problem on each face of the nodal control volume. A 2-stage Runge–Kutta method is used to evolve the solution forward in time, where the advective fluxes are part of the temporal integration. The mesh velocity is smoothed by solving a Laplacian equation. The details of the new ALE hydrodynamic scheme are discussed. Results from a range of numerical test problems are presented.
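
    The paper's multidirectional Riemann solver is beyond a short example, but the 2-stage Runge-Kutta time integration it mentions is standard; a minimal sketch on a 1D periodic advection problem with upwind fluxes follows (our own toy setup, not the PCH scheme itself).

        import numpy as np

        def upwind_flux(u, vel):
            """First-order upwind numerical flux at the left face of each
            cell (periodic grid, constant velocity)."""
            return vel * np.where(vel > 0, np.roll(u, 1), u)

        def rhs(u, vel, dx):
            f = upwind_flux(u, vel)
            return -(np.roll(f, -1) - f) / dx

        def rk2_step(u, vel, dx, dt):
            """Two-stage (midpoint) Runge-Kutta step: the same family of
            time integrator named in the abstract."""
            u_half = u + 0.5 * dt * rhs(u, vel, dx)
            return u + dt * rhs(u_half, vel, dx)

        nx, vel = 200, 1.0
        x = np.linspace(0, 1, nx, endpoint=False)
        u = np.exp(-200 * (x - 0.3) ** 2)   # initial Gaussian pulse
        dx = x[1] - x[0]
        dt = 0.4 * dx / vel                  # CFL-limited time step
        for _ in range(100):
            u = rk2_step(u, vel, dx, dt)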

  17. Coding of cognitive magnitude: compressed scaling of numerical information in the primate prefrontal cortex.

    PubMed

    Nieder, Andreas; Miller, Earl K

    2003-01-09

    Whether cognitive representations are better conceived as language-based, symbolic representations or perceptually related, analog representations is a subject of debate. If cognitive processes parallel perceptual processes, then fundamental psychophysical laws should hold for each. To test this, we analyzed both behavioral and neuronal representations of numerosity in the prefrontal cortex of rhesus monkeys. The data were best described by a nonlinearly compressed scaling of numerical information, as postulated by the Weber-Fechner law or Stevens' law for psychophysical/sensory magnitudes. This nonlinear compression was observed on the neural level during the acquisition phase of the task and maintained through the memory phase with no further compression. These results suggest that certain cognitive and perceptual/sensory representations share the same fundamental mechanisms and neural coding schemes.
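
    As a numerical illustration of the compressed scaling discussed above, the Weber-Fechner law maps numerosity onto a logarithmic internal scale, and Weber's law makes the just-noticeable difference grow linearly with magnitude; the Weber fraction below is an illustrative value, not a figure from the paper.

        import numpy as np

        # Weber-Fechner: the internal representation of numerosity n is
        # compressed logarithmically, so equal ratios look equally spaced.
        n = np.arange(1, 31)            # presented numerosities
        perceived = np.log(n)           # compressed internal scale

        # Weber's law: just-noticeable difference grows linearly with n.
        weber_fraction = 0.15           # illustrative value
        jnd = weber_fraction * n
        print(jnd[n == 4], jnd[n == 16])   # ~0.6 items at n=4, ~2.4 at n=16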

  18. Toward Sodium X-Ray Diffraction in the High-Pressure Regime

    NASA Astrophysics Data System (ADS)

    Gong, X.; Polsin, D. N.; Rygg, J. R.; Boehly, T. R.; Crandall, L.; Henderson, B. J.; Hu, S. X.; Huff, M.; Saha, R.; Collins, G. W.; Smith, R.; Eggert, J.; Lazicki, A. E.; McMahon, M.

    2017-10-01

    We are working to quasi-isentropically compress sodium into the terapascal regime to test theoretical predictions that sodium transforms to an electride. A series of hydrodynamic simulations have been performed to design experiments to investigate the structure and optical properties of sodium at pressures up to 500 GPa. We show preliminary results where sodium samples, sandwiched between diamond plates and lithium-fluoride windows, are ramp compressed by a gradual increase in the drive-laser intensity. The low sound speed in sodium makes it particularly susceptible to forming a shock; therefore, it is difficult to compress without melting the sample. Powder x-ray diffraction is used to provide information on the structure of sodium at these high pressures. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  19. Video coding for 3D-HEVC based on saliency information

    NASA Astrophysics Data System (ADS)

    Yu, Fang; An, Ping; Yang, Chao; You, Zhixiang; Shen, Liquan

    2016-11-01

    As an extension of High Efficiency Video Coding ( HEVC), 3D-HEVC has been widely researched under the impetus of the new generation coding standard in recent years. Compared with H.264/AVC, its compression efficiency is doubled while keeping the same video quality. However, its higher encoding complexity and longer encoding time are not negligible. To reduce the computational complexity and guarantee the subjective quality of virtual views, this paper presents a novel video coding method for 3D-HEVC based on the saliency informat ion which is an important part of Human Visual System (HVS). First of all, the relationship between the current coding unit and its adjacent units is used to adjust the maximum depth of each largest coding unit (LCU) and determine the SKIP mode reasonably. Then, according to the saliency informat ion of each frame image, the texture and its corresponding depth map will be divided into three regions, that is, salient area, middle area and non-salient area. Afterwards, d ifferent quantization parameters will be assigned to different regions to conduct low complexity coding. Finally, the compressed video will generate new view point videos through the renderer tool. As shown in our experiments, the proposed method saves more bit rate than other approaches and achieves up to highest 38% encoding time reduction without subjective quality loss in compression or rendering.

  20. VAC: Versatile Advection Code

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; Keppens, Rony

    2012-07-01

    The Versatile Advection Code (VAC) is a freely available general hydrodynamic and magnetohydrodynamic simulation software that works in 1, 2 or 3 dimensions on Cartesian and logically Cartesian grids. VAC runs on any Unix/Linux system with a Fortran 90 (or 77) compiler and Perl interpreter. VAC can run on parallel machines using either the Message Passing Interface (MPI) library or a High Performance Fortran (HPF) compiler.

  1. Anomalous-hydrodynamic analysis of charge-dependent elliptic flow in heavy-ion collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hongo, Masaru; Hirono, Yuji; Hirano, Tetsufumi

    Anomalous hydrodynamics is a low-energy effective theory that captures effects of quantum anomalies. We develop a numerical code of anomalous hydrodynamics and apply it to dynamics of heavy-ion collisions, where anomalous transports are expected to occur. This is the first attempt to perform fully non-linear numerical simulations of anomalous hydrodynamics. We discuss implications of the simulations for possible experimental observations of anomalous transport effects. From analyses of the charge-dependent elliptic flow parameters v2^± as a function of the net charge asymmetry A±, we find that the linear dependence of Δv2^± ≡ v2^- − v2^+ on the net charge asymmetry A± cannot be regarded as a robust signal of anomalous transports, contrary to previous studies. We find, however, that the intercept Δv2^±(A± = 0) is sensitive to anomalous transport effects.

  2. Anomalous-hydrodynamic analysis of charge-dependent elliptic flow in heavy-ion collisions

    DOE PAGES

    Hongo, Masaru; Hirono, Yuji; Hirano, Tetsufumi

    2017-12-10

    Anomalous hydrodynamics is a low-energy effective theory that captures effects of quantum anomalies. We develop a numerical code of anomalous hydrodynamics and apply it to dynamics of heavy-ion collisions, where anomalous transports are expected to occur. This is the first attempt to perform fully non-linear numerical simulations of anomalous hydrodynamics. We discuss implications of the simulations for possible experimental observations of anomalous transport effects. From analyses of the charge-dependent elliptic flow parameters v2^± as a function of the net charge asymmetry A±, we find that the linear dependence of Δv2^± ≡ v2^- − v2^+ on the net charge asymmetry A± cannot be regarded as a robust signal of anomalous transports, contrary to previous studies. We find, however, that the intercept Δv2^±(A± = 0) is sensitive to anomalous transport effects.
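
    The intercept-versus-slope analysis described in this record reduces, at its simplest, to a linear fit of the flow splitting against the charge asymmetry; the numbers below are invented placeholders, not simulation output.

        import numpy as np

        # Hypothetical measurements of the elliptic-flow splitting versus
        # net charge asymmetry (values are illustrative only).
        A = np.array([-0.04, -0.02, 0.0, 0.02, 0.04])
        dv2 = np.array([0.0008, 0.0011, 0.0014, 0.0017, 0.0020])

        slope, intercept = np.polyfit(A, dv2, 1)
        # Per the record, the intercept dv2(A = 0), not the slope,
        # is the quantity sensitive to anomalous transport.
        print(f"slope = {slope:.4f}, intercept = {intercept:.5f}")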

  3. StarSmasher: Smoothed Particle Hydrodynamics code for smashing stars and planets

    NASA Astrophysics Data System (ADS)

    Gaburov, Evghenii; Lombardi, James C., Jr.; Portegies Zwart, Simon; Rasio, F. A.

    2018-05-01

    Smoothed Particle Hydrodynamics (SPH) is a Lagrangian particle method that approximates a continuous fluid as discrete nodes, each carrying various parameters such as mass, position, velocity, pressure, and temperature. In an SPH simulation the resolution scales with the particle density; StarSmasher is able to handle both equal-mass and equal number-density particle models. StarSmasher solves for hydro forces by calculating the pressure for each particle as a function of the particle's properties - density, internal energy, and internal properties (e.g. temperature and mean molecular weight). The code implements variational equations of motion and libraries to calculate the gravitational forces between particles using direct summation on NVIDIA graphics cards. Using a direct summation instead of a tree-based algorithm for gravity increases the accuracy of the gravity calculations at the cost of speed. The code uses a cubic spline for the smoothing kernel and an artificial viscosity prescription coupled with a Balsara Switch to prevent unphysical interparticle penetration. The code also implements an artificial relaxation force to the equations of motion to add a drag term to the calculated accelerations during relaxation integrations. Initially called StarCrash, StarSmasher was developed originally by Rasio.
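
    The cubic spline smoothing kernel mentioned above is standard in SPH; a sketch of the 3D form (compact support at r = 2h) and a kernel-weighted density estimate follows. The density helper is our own illustration, not StarSmasher source code.

        import numpy as np

        def cubic_spline_kernel(r, h):
            """Standard SPH cubic spline kernel W(r, h) in 3D, with
            normalization 1/(pi h^3) and compact support at r = 2h."""
            q = np.asarray(r) / h
            sigma = 1.0 / (np.pi * h ** 3)
            w = np.where(
                q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
            return sigma * w

        def density(i, pos, m, h):
            """Kernel-weighted density estimate at particle i, for
            particle positions pos (N, 3) and masses m."""
            r = np.linalg.norm(pos - pos[i], axis=1)
            return np.sum(m * cubic_spline_kernel(r, h))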

  4. Influence of audio triggered emotional attention on video perception

    NASA Astrophysics Data System (ADS)

    Torres, Freddy; Kalva, Hari

    2014-02-01

    Perceptual video coding methods attempt to improve compression efficiency by discarding visual information not perceived by end users. Most current approaches to perceptual video coding use only visual features, ignoring the auditory component. Many psychophysical studies have demonstrated that auditory stimuli affect our visual perception. In this paper we present our study of audio-triggered emotional attention and its applicability to perceptual video coding. Experiments with movie clips show that the reaction time to detect video compression artifacts was longer when video was presented with the audio information. The results reported are statistically significant with p=0.024.

  5. Mix and hydrodynamic instabilities on NIF

    NASA Astrophysics Data System (ADS)

    Smalyuk, V. A.; Robey, H. F.; Casey, D. T.; Clark, D. S.; Döppner, T.; Haan, S. W.; Hammel, B. A.; MacPhee, A. G.; Martinez, D.; Milovich, J. L.; Peterson, J. L.; Pickworth, L.; Pino, J. E.; Raman, K.; Tipton, R.; Weber, C. R.; Baker, K. L.; Bachmann, B.; Berzak Hopkins, L. F.; Bond, E.; Caggiano, J. A.; Callahan, D. A.; Celliers, P. M.; Cerjan, C.; Dixit, S. N.; Edwards, M. J.; Felker, S.; Field, J. E.; Fittinghoff, D. N.; Gharibyan, N.; Grim, G. P.; Hamza, A. V.; Hatarik, R.; Hohenberger, M.; Hsing, W. W.; Hurricane, O. A.; Jancaitis, K. S.; Jones, O. S.; Khan, S.; Kroll, J. J.; Lafortune, K. N.; Landen, O. L.; Ma, T.; MacGowan, B. J.; Masse, L.; Moore, A. S.; Nagel, S. R.; Nikroo, A.; Pak, A.; Patel, P. K.; Remington, B. A.; Sayre, D. B.; Spears, B. K.; Stadermann, M.; Tommasini, R.; Widmayer, C. C.; Yeamans, C. B.; Crippen, J.; Farrell, M.; Giraldez, E.; Rice, N.; Wilde, C. H.; Volegov, P. L.; Gatu Johnson, M.

    2017-06-01

    Several new platforms have been developed to experimentally measure hydrodynamic instabilities in all phases of indirect-drive, inertial confinement fusion implosions on National Ignition Facility. At the ablation front, instability growth of pre-imposed modulations was measured with a face-on, x-ray radiography platform in the linear regime using the Hydrodynamic Growth Radiography (HGR) platform. Modulation growth of "native roughness" modulations and engineering features (fill tubes and capsule support membranes) were measured in conditions relevant to layered DT implosions. A new experimental platform was developed to measure instability growth at the ablator-ice interface. In the deceleration phase of implosions, several experimental platforms were developed to measure both low-mode asymmetries and high-mode perturbations near peak compression with x-ray and nuclear techniques. In one innovative technique, the self-emission from the hot spot was enhanced with argon dopant to "self-backlight" the shell in-flight. To stabilize instability growth, new "adiabat-shaping" techniques were developed using the HGR platform and applied in layered DT implosions.

  6. The point explosion with radiation transport

    NASA Astrophysics Data System (ADS)

    Lin, Zhiwei; Zhang, Lu; Kuang, Longyu; Jiang, Shaoen

    2017-10-01

    Some amount of energy is released instantaneously at the origin, generating simultaneously a spherical radiative heat wave and a spherical shock wave; the point explosion with radiation transport is a complicated problem due to the competition between these two waves. The point explosion problem possesses self-similar solutions when only hydrodynamic motion or only heat conduction is considered: the Sedov and Barenblatt solutions, respectively. The point explosion problem wherein both physical mechanisms of hydrodynamic motion and heat conduction are included has been studied by P. Reinicke and A. I. Shestakov. In this talk we numerically investigate the point explosion problem wherein both hydrodynamic motion and radiation transport are taken into account. The radiation transport equation in one-dimensional spherical geometry has to be solved for this problem, since the ambient medium is optically thin with respect to the initially extremely high temperature at the origin. The numerical results reveal a high compression of the medium and a bi-peak structure of the density, which are further analyzed theoretically at the end.
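
    For the pure-hydrodynamics limit mentioned above, the Sedov solution gives a closed-form blast radius; a one-line implementation of the scaling R(t) = ξ (E t² / ρ)^(1/5) follows, with an illustrative value for the dimensionless constant ξ (roughly 1.15 for an ideal gas with γ = 1.4).

        import numpy as np

        def sedov_radius(E, rho, t, xi=1.15):
            """Sedov-Taylor blast-wave radius R(t) = xi*(E t^2/rho)^(1/5);
            xi ~ 1.15 for an ideal gas with gamma = 1.4."""
            return xi * (E * t ** 2 / rho) ** 0.2

        # Example: 1e14 J released in a medium of density 1 kg/m^3.
        t = np.logspace(-6, 0, 7)          # times from 1 us to 1 s
        print(sedov_radius(1e14, 1.0, t))  # radii in metres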

  7. Mix and hydrodynamic instabilities on NIF

    DOE PAGES

    Smalyuk, V. A.; Robey, H. F.; Casey, D. T.; ...

    2017-06-01

    Several new platforms have been developed to experimentally measure hydrodynamic instabilities in all phases of indirect-drive, inertial confinement fusion implosions on the National Ignition Facility. At the ablation front, instability growth of pre-imposed modulations was measured with a face-on, x-ray radiography platform in the linear regime using the Hydrodynamic Growth Radiography (HGR) platform. Modulation growth of "native roughness" modulations and engineering features (fill tubes and capsule support membranes) were measured in conditions relevant to layered DT implosions. A new experimental platform was developed to measure instability growth at the ablator-ice interface. In the deceleration phase of implosions, several experimental platforms were developed to measure both low-mode asymmetries and high-mode perturbations near peak compression with x-ray and nuclear techniques. In one innovative technique, the self-emission from the hot spot was enhanced with argon dopant to "self-backlight" the shell in-flight. To stabilize instability growth, new "adiabat-shaping" techniques were developed using the HGR platform and applied in layered DT implosions.

  8. Calculation of three-dimensional compressible laminar and turbulent boundary flows. Three-dimensional compressible boundary layers of reacting gases over realistic configurations

    NASA Technical Reports Server (NTRS)

    Kendall, R. M.; Bonnett, W. S.; Nardo, C. T.; Abbett, M. J.

    1975-01-01

    A three-dimensional boundary-layer code was developed for particular application to realistic hypersonic aircraft. It is very general and can be applied to a wide variety of boundary-layer flows. Laminar, transitional, and fully turbulent flows of compressible, reacting gases are efficiently calculated by use of the code. A body-oriented orthogonal coordinate system is used for the calculation and the user has complete freedom in specifying the coordinate system within the restrictions that one coordinate must be normal to the surface and the three coordinates must be mutually orthogonal.

  9. The physics of long- and intermediate-wavelength asymmetries of the hot spot: Compression hydrodynamics and energetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, A.; Betti, R.; Shvarts, D.

    To achieve ignition with inertial confinement fusion (ICF), it is important to understand the effect of asymmetries on the hydrodynamics and energetics of the compression. This paper describes a theoretical model for the compression of distorted hot spots, and quantitative estimates using hydrodynamic simulations. The asymmetries are categorized into low (ℓ < 6) and intermediate (6 < ℓ < 40) modes by comparison of the wavelength with the thermal-diffusion scale length. Long-wavelength modes introduce substantial nonradial motion, whereas intermediate-wavelength modes involve more cooling by thermal ablation. We discover that for distorted hot spots, the measured neutron-averaged properties can be very different from the real hydrodynamic conditions. This is because mass ablation driven by thermal conduction introduces flows in the Rayleigh–Taylor bubbles; this results in pressure variation, in addition to temperature variation, between the bubbles and the neutron-producing region (~1 keV for intermediate modes). The differences are less pronounced for long-wavelength asymmetries since the bubbles are relatively hot and sustain fusion reactions. The yield degradation, with respect to the symmetric case, results primarily from a reduction in the hot-spot pressure for low modes and from a reduction in burn volume for intermediate modes. It is shown that the degradation in internal energy of the hot spot is equivalent for both categories, and is equal to the total residual energy in the shell including the bubbles. This quantity is correlated with the shell residual kinetic energy for low modes, and includes the kinetic energy in the bubbles for intermediate modes.

  10. The physics of long- and intermediate-wavelength asymmetries of the hot spot: Compression hydrodynamics and energetics

    DOE PAGES

    Bose, A.; Betti, R.; Shvarts, D.; ...

    2017-10-03

    To achieve ignition with inertial confinement fusion (ICF), it is important to understand the effect of asymmetries on the hydrodynamics and energetics of the compression. This paper describes a theoretical model for the compression of distorted hot spots, and quantitative estimates using hydrodynamic simulations. The asymmetries are categorized into low (ℓ < 6) and intermediate (6 < ℓ < 40) modes by comparison of the wavelength with the thermal-diffusion scale length. Long-wavelength modes introduce substantial nonradial motion, whereas intermediate-wavelength modes involve more cooling by thermal ablation. We discover that for distorted hot spots, the measured neutron-averaged properties can be very different from the real hydrodynamic conditions. This is because mass ablation driven by thermal conduction introduces flows in the Rayleigh–Taylor bubbles, which results in pressure variation, in addition to temperature variation, between the bubbles and the neutron-producing region (~1 keV for intermediate modes). The differences are less pronounced for long-wavelength asymmetries since the bubbles are relatively hot and sustain fusion reactions. The yield degradation, with respect to the symmetric implosion, results primarily from a reduction in the hot-spot pressure for low modes and from a reduction in burn volume for intermediate modes. It is shown that the degradation in internal energy of the hot spot is equivalent for both categories, and is equal to the total residual energy in the shell including the bubbles. This quantity is correlated with the shell residual kinetic energy for low modes, and includes the kinetic energy in the bubbles for intermediate modes.

  11. Modelling multi-phase liquid-sediment scour and resuspension induced by rapid flows using Smoothed Particle Hydrodynamics (SPH) accelerated with a Graphics Processing Unit (GPU)

    NASA Astrophysics Data System (ADS)

    Fourtakas, G.; Rogers, B. D.

    2016-06-01

    A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is applied to two-phase liquid-sediment flows. The absence of a mesh in SPH is ideal for interfacial and highly non-linear flows with changing fragmentation of the interface, mixing and resuspension. The rheology of sediment induced under rapid flows undergoes several states which are only partially described by previous research in SPH. This paper attempts to bridge the gap between geotechnics, non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layers which are needed to accurately predict the global erosion phenomena, from a hydrodynamics perspective. The numerical SPH scheme is based on the explicit treatment of both phases using a Newtonian and the non-Newtonian Bingham-type Herschel-Bulkley-Papanastasiou constitutive model. This is supplemented by the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been compared with experimental and 2-D reference numerical models for scour following a dry-bed dam break, yielding satisfactory results and improvements over well-known SPH multi-phase models. With 3-D simulations requiring a large number of particles, the code is accelerated with a graphics processing unit (GPU) in the open-source DualSPHysics code. The implementation and optimisation of the code achieved a speed-up of 58× over an optimised single-thread serial code. A 3-D dam break over a non-cohesive erodible bed simulation with over 4 million particles yields close agreement with experimental scour and water surface profiles.
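
    For illustration, a minimal sketch of the regularized Bingham-type Herschel-Bulkley-Papanastasiou rheology named above, written as an effective viscosity as a function of shear rate. The parameter values are illustrative assumptions, not those used in the paper:

      import numpy as np

      def hbp_effective_viscosity(gamma_dot, k=1.0, n=0.8, tau_y=10.0, m=100.0):
          """Effective viscosity of a Herschel-Bulkley-Papanastasiou fluid.

          gamma_dot : shear-rate magnitude(s), 1/s
          k         : consistency index, Pa.s^n
          n         : flow-behaviour index (n < 1 -> shear thinning)
          tau_y     : yield stress, Pa
          m         : Papanastasiou regularization parameter, s
          """
          gd = np.maximum(np.asarray(gamma_dot, dtype=float), 1e-12)  # avoid /0
          return k * gd**(n - 1.0) + tau_y * (1.0 - np.exp(-m * gd)) / gd

      # Example: the apparent viscosity drops sharply once the sediment yields
      print(hbp_effective_viscosity([1e-3, 1.0, 100.0]))

    The Papanastasiou exponential regularizes the yield-stress term so the same expression holds below and above yield, which is what makes the model usable in an explicit SPH update.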

  12. Nonlinear Multiscale Transformations: From Synchronization to Error Control

    DTIC Science & Technology

    2001-07-01

    transformation (plus the quantization step) has taken place, a lossless Lempel-Ziv compression algorithm is applied to reduce the size of the transformed... compressed data are all very close; however, the visual quality of the reconstructed image is significantly better for the EC compression algorithm... used in recent times in the first step of transform coding algorithms for image compression. Ideally, a multiscale transformation allows for an

  13. Fractal-Based Image Compression, II

    DTIC Science & Technology

    1990-06-01

    The need for data compression is not new. With humble beginnings such as... the use of acronyms and abbreviations in spoken and written word, the methods for data compression became more advanced as the need for information... grew. The Morse code, developed because of the need for faster telegraphy, was an early example of a data compression technique. Largely because of the

  14. Equalizing resolution in smoothed-particle hydrodynamics calculations using self-adaptive sinc kernels

    NASA Astrophysics Data System (ADS)

    García-Senz, Domingo; Cabezón, Rubén M.; Escartín, José A.; Ebinger, Kevin

    2014-10-01

    Context. The smoothed-particle hydrodynamics (SPH) technique is a numerical method for solving gas-dynamical problems. It has been applied to simulate the evolution of a wide variety of astrophysical systems. The method has second-order accuracy, with a resolution that is usually much higher in the compressed regions than in the diluted zones of the fluid. Aims: We propose and check a method to balance and equalize the resolution of SPH between high- and low-density regions. This method relies on the versatility of a family of interpolators called sinc kernels, which allows increasing the interpolation quality by varying only a single parameter (the exponent of the sinc function). Methods: The proposed method was checked and validated through a number of numerical tests, from standard one-dimensional Riemann problems in shock tubes, to multidimensional simulations of explosions, hydrodynamic instabilities, and the collapse of a Sun-like polytrope. Results: The analysis of the hydrodynamical simulations suggests that the scheme devised to equalize the accuracy improves the treatment of the post-shock regions and, in general, of the rarefied zones of fluids while causing no harm to the growth of hydrodynamic instabilities. The method is robust and easy to implement with a low computational overhead. It conserves mass, energy, and momentum and reduces to the standard SPH scheme in regions of the fluid that have smooth density gradients.
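
    A minimal sketch of the sinc kernel family referred to above, in which the kernel is a power of the sinc function and the exponent n is the single quality parameter. The 3-D normalization below is computed numerically for illustration; the paper's analytic fits are not reproduced:

      import numpy as np
      from scipy.integrate import quad

      def sinc_kernel(q, n=5.0):
          """Un-normalized sinc kernel S_n(q) = [sinc(pi*q/2)]^n on 0 <= q <= 2.
          numpy's sinc(x) is sin(pi*x)/(pi*x), so sinc(pi*q/2) == np.sinc(q/2)."""
          q = np.asarray(q, dtype=float)
          return np.where(q < 2.0, np.sinc(q / 2.0)**n, 0.0)

      def normalization_3d(n):
          """B_n such that the kernel integrates to one over a 3-D sphere of
          radius 2h (lengths in units of the smoothing length h)."""
          integral, _ = quad(lambda q: 4.0 * np.pi * q**2 * sinc_kernel(q, n),
                             0.0, 2.0)
          return 1.0 / integral

      for n in (3, 5, 7):   # raising n sharpens the kernel's central peak
          print(n, normalization_3d(n))

    Varying n in place of swapping kernel families is what lets the scheme tune interpolation quality particle by particle.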

  15. Free-Lagrange methods for compressible hydrodynamics in two space dimensions

    NASA Astrophysics Data System (ADS)

    Crowley, W. E.

    1985-03-01

    Since 1970 a research and development program in Free-Lagrange methods has been active at Livermore. The initial steps were taken with incompressible flows for simplicity. Since then the effort has been concentrated on compressible flows with shocks in two space dimensions and time. In general, the line integral method has been used to evaluate derivatives and the artificial viscosity method has been used to deal with shocks. Basically, two Free-Lagrange formulations for compressible flows in two space dimensions and time have been tested and both will be described. In method one, all prognostic quantities were node centered and staggered in time. The artificial viscosity was zone centered. One mesh reconnection philosophy was that the mesh should be optimized so that nearest neighbors were connected together. Another was that vertex angles should tend toward equality. In method one, all mesh elements were triangles. In method two, both quadrilateral and triangular mesh elements are permitted. The mesh variables are staggered in space and time as suggested originally by Richtmyer and von Neumann. The mesh reconnection strategy is entirely different in method two. In contrast to the global strategy of nearest neighbors, we now have a more local strategy that reconnects in order to keep the integration time step above a user-chosen threshold. An additional strategy reconnects in the vicinity of large relative fluid motions. Mesh reconnection consists of two parts: (1) the tools that permit nodes to be merged, quads to be split into triangles, and so on; and (2) the strategy that dictates how and when to use the tools. Both tools and strategies change with time in a continuing effort to expand the capabilities of the method. New ideas are continually being tried and evaluated.

  16. Indirect drive ignition at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meezan, N. B.; Edwards, M. J.; Hurricane, O. A.

    This article reviews scientific results from the pursuit of indirect drive ignition on the National Ignition Facility (NIF) and describes the program's forward-looking research directions. In indirect drive on the NIF, laser beams heat an x-ray enclosure called a hohlraum that surrounds a spherical pellet. X-ray radiation ablates the surface of the pellet, imploding a thin shell of deuterium/tritium (DT) that must accelerate to high velocity (v > 350 km/s) and compress by a factor of several thousand. Since 2009, substantial progress has been made in understanding the major challenges to ignition: Rayleigh–Taylor (RT) instability seeded by target imperfections; and low-mode asymmetries in the hohlraum x-ray drive, exacerbated by laser-plasma instabilities (LPI). Requirements on velocity, symmetry, and compression have been demonstrated separately on the NIF but have not been achieved simultaneously. We now know that the RT instability, seeded mainly by the capsule support tent, severely degraded DT implosions from 2009–2012. Experiments using a 'high-foot' drive with demonstrated lower RT growth improved the thermonuclear yield by a factor of 10, resulting in yield amplification due to alpha-particle heating by more than a factor of 2. However, large time-dependent drive asymmetry in the LPI-dominated hohlraums remains unchanged, preventing further improvements. High-fidelity 3D hydrodynamic calculations explain these results. Future research efforts focus on improved capsule mounting techniques and on hohlraums with little LPI and controllable symmetry. In parallel, we are pursuing improvements to the basic physics models used in the design codes through focused physics experiments.

  17. Indirect drive ignition at the National Ignition Facility

    DOE PAGES

    Meezan, N. B.; Edwards, M. J.; Hurricane, O. A.; ...

    2016-10-27

    This article reviews scientific results from the pursuit of indirect drive ignition on the National Ignition Facility (NIF) and describes the program's forward-looking research directions. In indirect drive on the NIF, laser beams heat an x-ray enclosure called a hohlraum that surrounds a spherical pellet. X-ray radiation ablates the surface of the pellet, imploding a thin shell of deuterium/tritium (DT) that must accelerate to high velocity (v > 350 km/s) and compress by a factor of several thousand. Since 2009, substantial progress has been made in understanding the major challenges to ignition: Rayleigh–Taylor (RT) instability seeded by target imperfections; and low-mode asymmetries in the hohlraum x-ray drive, exacerbated by laser-plasma instabilities (LPI). Requirements on velocity, symmetry, and compression have been demonstrated separately on the NIF but have not been achieved simultaneously. We now know that the RT instability, seeded mainly by the capsule support tent, severely degraded DT implosions from 2009–2012. Experiments using a 'high-foot' drive with demonstrated lower RT growth improved the thermonuclear yield by a factor of 10, resulting in yield amplification due to alpha-particle heating by more than a factor of 2. However, large time-dependent drive asymmetry in the LPI-dominated hohlraums remains unchanged, preventing further improvements. High-fidelity 3D hydrodynamic calculations explain these results. Future research efforts focus on improved capsule mounting techniques and on hohlraums with little LPI and controllable symmetry. In parallel, we are pursuing improvements to the basic physics models used in the design codes through focused physics experiments.

  18. CONVECTION THEORY AND SUB-PHOTOSPHERIC STRATIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnett, David; Meakin, Casey; Young, Patrick A., E-mail: darnett@as.arizona.ed, E-mail: casey.meakin@gmail.co, E-mail: patrick.young.1@asu.ed

    2010-02-20

    As a preliminary step toward a complete theoretical integration of three-dimensional compressible hydrodynamic simulations into stellar evolution, convection at the surface and sub-surface layers of the Sun is re-examined, from a restricted point of view, in the language of mixing-length theory (MLT). Requiring that MLT use a hydrodynamically realistic dissipation length gives a new constraint on solar models. While the stellar structure which results is similar to that obtained by the Yale Rotational Evolution Code (Guenther et al.; Bahcall and Pinsonneault) and Garching models (Schlattl et al.), the theoretical picture differs. A new quantitative connection is made between macro-turbulence, micro-turbulence, and the convective velocity scale at the photosphere, which has finite values. The 'geometric parameter' in MLT is found to correspond more reasonably with the thickness of the superadiabatic region (SAR), as it must for consistency in MLT, and its integrated effect may correspond to that of the strong downward plumes which drive convection (Stein and Nordlund), and thus has a physical interpretation even in MLT. If we crudely require the thickness of the SAR to be consistent with the 'geometric factor' used in MLT, there is no longer a free parameter, at least in principle. Use of three-dimensional simulations of both adiabatic convection and stellar atmospheres will allow the determination of the dissipation length and the geometric parameter (i.e., the entropy jump) more realistically, and with no astronomical calibration. A physically realistic treatment of convection in stellar evolution will require substantial additional modifications beyond MLT, including nonlocal effects of kinetic energy flux, entrainment (the most dramatic difference from MLT found by Meakin and Arnett), rotation, and magnetic fields.

  19. Hot-spot mix in ignition-scale implosions on the NIF [Hot-spot mix in ignition-scale implosions on the National Ignition Facility (NIF)]

    DOE PAGES

    Regan, S. P.; Epstein, R.; Hammel, B. A.; ...

    2012-03-30

    Ignition of an inertial confinement fusion (ICF) target depends on the formation of a central hot spot with sufficient temperature and areal density. Radiative and conductive losses from the hot spot can be enhanced by hydrodynamic instabilities. The concentric spherical layers of current National Ignition Facility (NIF) ignition targets consist of a plastic ablator surrounding a thin shell of cryogenic thermonuclear fuel (i.e., hydrogen isotopes), with fuel vapor filling the interior volume. The Rev. 5 ablator is doped with Ge to minimize preheat of the ablator closest to the DT ice caused by Au M-band emission from the hohlraum x-ray drive. Richtmyer–Meshkov and Rayleigh–Taylor hydrodynamic instabilities seeded by high-mode (50 < ℓ < 200) ablator-surface perturbations can cause Ge-doped ablator to mix into the interior of the shell at the end of the acceleration phase. As the shell decelerates, it compresses the fuel vapor, forming a hot spot. K-shell line emission from the ionized Ge that has penetrated into the hot spot provides an experimental signature of hot-spot mix. The Ge emission from tritium–hydrogen–deuterium (THD) and DT cryogenic targets and gas-filled plastic-shell capsules, which replace the THD layer with a mass-equivalent CH layer, was examined. The inferred amount of hot-spot mix mass, estimated from the Ge K-shell line brightness using a detailed atomic physics code, is typically below the 75 ng allowance for hot-spot mix. Furthermore, predictions of a simple mix model, based on linear growth of the measured surface-mass modulations, are consistent with the experimental results.

  20. Hot-spot mix in ignition-scale implosions on the NIF [Hot-spot mix in ignition-scale implosions on the National Ignition Facility (NIF)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Regan, S. P.; Epstein, R.; Hammel, B. A.

    Ignition of an inertial confinement fusion (ICF) target depends on the formation of a central hot spot with sufficient temperature and areal density. Radiative and conductive losses from the hot spot can be enhanced by hydrodynamic instabilities. The concentric spherical layers of current National Ignition Facility (NIF) ignition targets consist of a plastic ablator surrounding a thin shell of cryogenic thermonuclear fuel (i.e., hydrogen isotopes), with fuel vapor filling the interior volume. The Rev. 5 ablator is doped with Ge to minimize preheat of the ablator closest to the DT ice caused by Au M-band emission from the hohlraum x-ray drive. Richtmyer–Meshkov and Rayleigh–Taylor hydrodynamic instabilities seeded by high-mode (50 < ℓ < 200) ablator-surface perturbations can cause Ge-doped ablator to mix into the interior of the shell at the end of the acceleration phase. As the shell decelerates, it compresses the fuel vapor, forming a hot spot. K-shell line emission from the ionized Ge that has penetrated into the hot spot provides an experimental signature of hot-spot mix. The Ge emission from tritium–hydrogen–deuterium (THD) and DT cryogenic targets and gas-filled plastic-shell capsules, which replace the THD layer with a mass-equivalent CH layer, was examined. The inferred amount of hot-spot mix mass, estimated from the Ge K-shell line brightness using a detailed atomic physics code, is typically below the 75 ng allowance for hot-spot mix. Furthermore, predictions of a simple mix model, based on linear growth of the measured surface-mass modulations, are consistent with the experimental results.

  1. 75 FR 32519 - Small Business Size Standards: Waiver of the Nonmanufacturer Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-08

    ... (Compressed and Liquefied Gases), under NAICS code 325120 (Industrial Gases Manufacturing). On March 23, 2010...), under NAICS code 325120 (Industrial Gases Manufacturing). Dated: June 1, 2010. Karen Hontz, Director... Propane Gas (LPG), North American Industry Classification System (NAICS) code 325120, Product Service Code...

  2. Subband/transform functions for image processing

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
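
    As a minimal sketch of the idea (not the MATLAB functions themselves), the following applies a 2×2 Walsh-Hadamard block transform to a grayscale image and regroups the coefficients into four subbands; cascading it on the low-frequency subband would give the octave structure described above:

      import numpy as np

      def wht2x2_subbands(img):
          """2x2 Walsh-Hadamard block transform; the four coefficients of
          every block are regrouped into four subbands (low-pass, horizontal,
          vertical, diagonal). Assumes even image dimensions."""
          a = img[0::2, 0::2].astype(float)
          b = img[0::2, 1::2].astype(float)
          c = img[1::2, 0::2].astype(float)
          d = img[1::2, 1::2].astype(float)
          ll = (a + b + c + d) / 2.0   # low-resolution version of the image
          lh = (a - b + c - d) / 2.0   # horizontal detail (edges)
          hl = (a + b - c - d) / 2.0   # vertical detail
          hh = (a - b - c + d) / 2.0   # diagonal detail
          return ll, lh, hl, hh

      img = np.random.randint(0, 256, (8, 8))
      ll, lh, hl, hh = wht2x2_subbands(img)   # e.g. recurse on ll for octaves

    As the abstract notes, the transform itself compresses nothing; it concentrates energy in ll so that the detail subbands quantize and entropy-code cheaply.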

  3. Vertical Object Layout and Compression for Fixed Heaps

    NASA Astrophysics Data System (ADS)

    Titzer, Ben L.; Palsberg, Jens

    Research into embedded sensor networks has placed increased focus on the problem of developing reliable and flexible software for microcontroller-class devices. Languages such as nesC [10] and Virgil [20] have brought higher-level programming idioms to this lowest layer of software, thereby adding expressiveness. Both languages are marked by the absence of dynamic memory allocation, which removes the need for a runtime system to manage memory. While nesC offers code modules with statically allocated fields, arrays and structs, Virgil allows the application to allocate and initialize arbitrary objects during compilation, producing a fixed object heap for runtime. This paper explores techniques for compressing fixed object heaps with the goal of reducing the RAM footprint of a program. We explore table-based compression and introduce a novel form of object layout called vertical object layout. We provide experimental results that measure the impact on RAM size, code size, and execution time for a set of Virgil programs. Our results show that compressed vertical layout has better execution time and code size than table-based compression while achieving more than 20% heap reduction on 6 of 12 benchmark programs and 2-17% heap reduction on the remaining 6. We also present a formalization of vertical object layout and prove tight relationships between three styles of object layout.
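
    A conceptual sketch of vertical object layout, in Python for brevity even though the paper targets compiled microcontroller heaps: each field of all objects is stored contiguously, so an object reference reduces to an index, and same-typed field values sit together, which is what makes table-based compression of the fixed heap effective:

      # Horizontal (object-wise) layout: one record per object
      objects = [{"x": 1, "y": 10}, {"x": 2, "y": 20}, {"x": 3, "y": 30}]

      # Vertical layout: one array per field; an object is just an index
      vertical = {"x": [1, 2, 3], "y": [10, 20, 30]}

      def get_field(heap, field, oid):
          """Field access becomes an indexed load into that field's array."""
          return heap[field][oid]

      assert get_field(vertical, "y", 1) == objects[1]["y"]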

  4. Transform coding for hardware-accelerated volume rendering.

    PubMed

    Fout, Nathaniel; Ma, Kwan-Liu

    2007-01-01

    Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.

  5. Three-Dimensional Hydrodynamic Simulations of OMEGA Implosions

    NASA Astrophysics Data System (ADS)

    Igumenshchev, I. V.

    2016-10-01

    The effects of large-scale (Legendre modes less than 30) asymmetries in OMEGA direct-drive implosions caused by laser illumination nonuniformities (beam-power imbalance and beam mispointing and mistiming) and target offset, mount, and layer nonuniformities were investigated using three-dimensional (3-D) hydrodynamic simulations. Simulations indicate that the performance degradation in cryogenic implosions is caused mainly by target offsets (~10 to 20 μm), beam-power imbalance (σrms ~10%), and initial target asymmetry (~5% ρR variation), which distort implosion cores, resulting in reduced hot-spot confinement and increased residual kinetic energy of the stagnated target. The ion temperatures inferred from the widths of simulated neutron spectra are influenced by bulk fuel motion in the distorted hot spot and can show an apparent temperature increase of up to 2 keV. Similar temperature variations along different lines of sight are observed. Simulated x-ray images of implosion cores in the 4- to 8-keV energy range show good agreement with experiments. Demonstrating hydrodynamic equivalence to ignition designs on OMEGA requires reducing large-scale target and laser-imposed nonuniformities, minimizing target offset, and employing highly efficient mid-adiabat (α = 4) implosion designs that mitigate cross-beam energy transfer (CBET) and suppress short-wavelength Rayleigh-Taylor growth. These simulations use a new low-noise 3-D Eulerian hydrodynamic code, ASTER. Existing 3-D hydrodynamic codes for direct-drive implosions currently lack CBET models and noise-free ray-trace laser-deposition algorithms. ASTER overcomes these limitations using a simplified 3-D laser-deposition model, which includes CBET and is capable of simulating the effects of beam-power imbalance, beam mispointing, mistiming, and target offset. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  6. General relativistic hydrodynamics with Adaptive-Mesh Refinement (AMR) and modeling of accretion disks

    NASA Astrophysics Data System (ADS)

    Donmez, Orhan

    We present a general procedure to solve the General Relativistic Hydrodynamical (GRH) equations with Adaptive-Mesh Refinement (AMR) and model an accretion disk around a black hole. To do this, the GRH equations are written in a conservative form to exploit their hyperbolic character. The numerical solution of the general relativistic hydrodynamic equations is carried out with High Resolution Shock Capturing (HRSC) schemes, specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. We use Marquina fluxes with MUSCL left and right states to solve the GRH equations. First, we carry out different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations to verify the second-order convergence of the code in 1D, 2D, and 3D. Second, we solve the GRH equations and use the general relativistic test problems to compare the numerical solutions with analytic ones. To do this, we couple the flux part of the general relativistic hydrodynamic equations with a source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time. The test problems examined include shock tubes, geodesic flows, and circular motion of a particle around the black hole. Finally, we apply this code to accretion disk problems around a black hole, using the Schwarzschild metric as the background of the computational domain. We find spiral shocks on the accretion disk; they are observationally expected results. We also examine the star-disk interaction near a massive black hole. We find that when stars are ground down or a hole is punched in the accretion disk, they create shock waves which destroy the accretion disk.

  7. Study of Two-Dimensional Compressible Non-Acoustic Modeling of Stirling Machine Type Components

    NASA Technical Reports Server (NTRS)

    Tew, Roy C., Jr.; Ibrahim, Mounir B.

    2001-01-01

    A two-dimensional (2-D) computer code was developed for modeling enclosed volumes of gas with oscillating boundaries, such as Stirling machine components. An existing 2-D incompressible flow computer code, CAST, was used as the starting point for the project. CAST was modified to use the compressible non-acoustic Navier-Stokes equations to model an enclosed volume including an oscillating piston. The devices modeled have low Mach numbers and are sufficiently small that the time required for acoustics to propagate across them is negligible. Therefore, acoustics were excluded to enable more time efficient computation. Background information about the project is presented. The compressible non-acoustic flow assumptions are discussed. The governing equations used in the model are presented in transport equation format. A brief description is given of the numerical methods used. Comparisons of code predictions with experimental data are then discussed.

  8. End-to-end imaging information rate advantages of various alternative communication systems

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1982-01-01

    The efficiency of various deep space communication systems which are required to transmit both imaging and a typically error sensitive class of data called general science and engineering (gse) are compared. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. Actual system comparisons include an advanced imaging communication system (AICS) which exhibits the rather significant advantages of sophisticated data compression coupled with powerful yet practical channel coding. For example, under certain conditions the improved AICS efficiency could provide as much as two orders of magnitude increase in imaging information rate compared to a single channel uncoded, uncompressed system while maintaining the same gse data rate in both systems. Additional details describing AICS compression and coding concepts as well as efforts to apply them are provided in support of the system analysis.

  9. Performance evaluation of the intra compression in the video coding standards

    NASA Astrophysics Data System (ADS)

    Abramowski, Andrzej

    2015-09-01

    The article presents a comparison of the Intra prediction algorithms in the current state-of-the-art video coding standards, including MJPEG 2000, VP8, VP9, H.264/AVC and H.265/HEVC. The effectiveness of techniques employed by each standard is evaluated in terms of compression efficiency and average encoding time. The compression efficiency is measured using BD-PSNR and BD-RATE metrics with H.265/HEVC results as the anchor. Tests are performed on a set of video sequences composed of sequences gathered by the Joint Collaborative Team on Video Coding during the development of the H.265/HEVC standard and 4K sequences provided by the Ultra Video Group. According to the results, H.265/HEVC provides significant bit-rate savings at the expense of computational complexity, while VP9 may be regarded as a compromise between efficiency and required encoding time.
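
    A minimal sketch of the Bjontegaard-delta bit-rate computation used in such comparisons: fit a cubic of log-rate against PSNR for each codec and average the difference over the overlapping quality interval. The rate/PSNR points below are illustrative, not from the paper:

      import numpy as np

      def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
          """Average % bit-rate difference of the test codec vs. the anchor
          at equal PSNR (negative means the test codec saves rate)."""
          p_a = np.polyfit(psnr_anchor, np.log10(rate_anchor), 3)
          p_t = np.polyfit(psnr_test, np.log10(rate_test), 3)
          lo = max(min(psnr_anchor), min(psnr_test))
          hi = min(max(psnr_anchor), max(psnr_test))
          ia, it = np.polyint(p_a), np.polyint(p_t)
          avg_a = (np.polyval(ia, hi) - np.polyval(ia, lo)) / (hi - lo)
          avg_t = (np.polyval(it, hi) - np.polyval(it, lo)) / (hi - lo)
          return (10.0**(avg_t - avg_a) - 1.0) * 100.0

      # Four rate/PSNR points per codec, e.g. from a QP sweep (made-up numbers)
      print(bd_rate([100, 200, 400, 800], [30, 33, 36, 39],
                    [ 60, 120, 240, 480], [30, 33, 36, 39]))  # approx. -40.0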

  10. Lossless Compression of JPEG Coded Photo Collections.

    PubMed

    Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng

    2016-04-06

    The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.

  11. Impact of ablator thickness and laser drive duration on a platform for supersonic, shockwave-driven hydrodynamic instability experiments

    DOE PAGES

    Wan, W. C.; Malamud, Guy; Shimony, A.; ...

    2016-12-07

    Here, we discuss changes to a target design that improved the quality and consistency of data obtained through a novel experimental platform that enables the study of hydrodynamic instabilities in a compressible regime. The experiment uses a laser to drive a steady, supersonic shockwave over well-characterized initial perturbations. Early experiments were adversely affected by inadequate experimental timescales and, potentially, an unintended secondary shockwave. These issues were addressed by extending the 4 × 10^13 W/cm^2 laser pulse from 19 ns to 28 ns, and by increasing the ablator thickness from 185 µm to 500 µm. We present data demonstrating the performance of the platform.

  12. Impact of ablator thickness and laser drive duration on a platform for supersonic, shockwave-driven hydrodynamic instability experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, W. C.; Malamud, Guy; Shimony, A.

    Here, we discuss changes to a target design that improved the quality and consistency of data obtained through a novel experimental platform that enables the study of hydrodynamic instabilities in a compressible regime. The experiment uses a laser to drive a steady, supersonic shockwave over well-characterized initial perturbations. Early experiments were adversely affected by inadequate experimental timescales and, potentially, an unintended secondary shockwave. These issues were addressed by extending the 4 × 10^13 W/cm^2 laser pulse from 19 ns to 28 ns, and by increasing the ablator thickness from 185 µm to 500 µm. We present data demonstrating the performance of the platform.

  13. Computer Simulation of the VASIMR Engine

    NASA Technical Reports Server (NTRS)

    Garrison, David

    2005-01-01

    The goal of this project is to develop a magnetohydrodynamic (MHD) computer code for simulation of the VASIMR engine. This code is designed to be easy to modify and use. We achieve this using the Cactus framework, a system originally developed for research in numerical relativity. Since its release, Cactus has become an extremely powerful and flexible open-source framework. The development of the code will be done in stages, starting with a basic fluid dynamic simulation and working towards a more complex MHD code. Once developed, this code can be used by students and researchers in order to further test and improve the VASIMR engine.

  14. On-chip frame memory reduction using a high-compression-ratio codec in the overdrives of liquid-crystal displays

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Min, Kyeong-Yuk; Chong, Jong-Wha

    2010-11-01

    Overdrive is commonly used to reduce the liquid-crystal response time and motion blur in liquid-crystal displays (LCDs). However, overdrive requires a large frame memory in order to store the previous frame for reference. In this paper, a high-compression-ratio codec is presented to compress the image data stored in the on-chip frame memory so that only 1 Mbit of on-chip memory is required in the LCD overdrives of mobile devices. The proposed algorithm further compresses the color bitmaps and representative values (RVs) resulting from block truncation coding (BTC). The color bitmaps are represented by a luminance bitmap, which is further reduced and reconstructed using median-filter interpolation in the decoder, while the RVs are compressed using adaptive quantization coding (AQC). Interpolation and AQC can provide three-level compression, which leads to 16 combinations. Using a rate-distortion analysis, we select the three optimal schemes to compress the image data for video graphics array (VGA), wide-VGA LCD, and standard-definition TV applications. Our simulation results demonstrate that the proposed schemes outperform interpolation BTC both in PSNR (by 1.479 to 2.205 dB) and in subjective visual quality.
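
    For context, a minimal sketch of the classic block truncation coding step that the proposed codec builds on: each block is reduced to a bitmap plus two representative values chosen to preserve the block mean and variance. The paper's further compression of the bitmaps (luminance reduction plus median-filter interpolation) and of the RVs (AQC) is not reproduced here:

      import numpy as np

      def btc_encode_block(block):
          """Encode one 4x4 block as a 16-bit bitmap plus two representative
          values that preserve the block's mean and variance."""
          m = block.size
          mu, sigma = block.mean(), block.std()
          bitmap = block >= mu
          q = int(bitmap.sum())
          if q in (0, m):                      # flat block: one level suffices
              return bitmap, mu, mu
          lo = mu - sigma * np.sqrt(q / (m - q))
          hi = mu + sigma * np.sqrt((m - q) / q)
          return bitmap, lo, hi

      def btc_decode_block(bitmap, lo, hi):
          return np.where(bitmap, hi, lo)

      block = np.random.randint(0, 256, (4, 4)).astype(float)
      print(btc_decode_block(*btc_encode_block(block)))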

  15. Adaptive Encoding for Numerical Data Compression.

    ERIC Educational Resources Information Center

    Yokoo, Hidetoshi

    1994-01-01

    Discusses the adaptive compression of computer files of numerical data whose statistical properties are not given in advance. A new lossless coding method for this purpose, which utilizes Adelson-Velskii and Landis (AVL) trees, is proposed. The method is applicable to any word length. Its application to the lossless compression of gray-scale images…

  16. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
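
    A minimal sketch of one member of this family of codes, a Golomb-Rice code applied to mapped sample-to-sample prediction residuals. The block-adaptive selection among codes and the line-to-line mode switching described above are omitted, and sending the first sample as a residual is a simplification for illustration:

      def rice_encode(n, k):
          """Golomb-Rice code of a non-negative integer:
          unary quotient (n >> k), a terminating 0, then the k low bits."""
          q, r = n >> k, n & ((1 << k) - 1)
          return "1" * q + "0" + format(r, f"0{k}b")

      def zigzag(d):
          """Map a signed prediction residual to a non-negative integer."""
          return 2 * d if d >= 0 else -2 * d - 1

      pixels = [12, 13, 13, 15, 14, 10]
      residuals = [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]
      bits = "".join(rice_encode(zigzag(d), k=1) for d in residuals)
      print(bits)

    Small residuals cost only a few bits each, which is why such codes approach the difference entropy when the predictor is good.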

  17. Dynamics of circumstellar disks. III. The case of GG Tau A

    DOE PAGES

    Nelson, Andrew F.; Marzari, Francesco

    2016-08-11

    Here, we present two-dimensional hydrodynamic simulations using the Smoothed Particle Hydrodynamic code, VINE, to model a self-gravitating binary system. We model configurations in which a circumbinary torus+disk surrounds a pair of stars in orbit around each other and a circumstellar disk surrounds each star, similar to that observed for the GG Tau A system. We assume that the disks cool as blackbodies, using rates determined independently at each location in the disk by the time-dependent temperature of the photosphere there. We assume heating due to hydrodynamical processes and to radiation from the two stars, using rates approximated from a measure of the radiation intercepted by the disk at its photosphere.

  18. Interplay of Laser-Plasma Interactions and Inertial Fusion Hydrodynamics

    DOE PAGES

    Strozzi, D. J.; Bailey, D. S.; Michel, P.; ...

    2017-01-12

    The effects of laser-plasma interactions (LPI) on the dynamics of inertial confinement fusion hohlraums are investigated in this work via a new approach that self-consistently couples reduced LPI models into radiation-hydrodynamics numerical codes. The interplay between hydrodynamics and LPI—specifically stimulated Raman scatter and crossed-beam energy transfer (CBET)—mostly occurs via momentum and energy deposition into Langmuir and ion acoustic waves. This spatially redistributes energy coupling to the target, which affects the background plasma conditions and thus modifies laser propagation. In conclusion, this model shows reduced CBET and significant laser energy depletion by Langmuir waves, which reduce the discrepancy between modeling and data from hohlraum experiments on wall x-ray emission and capsule implosion shape.

  19. Coherent dynamic structure factors of strongly coupled plasmas: A generalized hydrodynamic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Di; Hu, GuangYue; Gong, Tao

    2016-05-15

    A generalized hydrodynamic fluctuation model is proposed to simplify the calculation of the dynamic structure factor S(ω, k) of non-ideal plasmas using the fluctuation-dissipation theorem. In this model, the kinetic and correlation effects are both included in hydrodynamic coefficients, which are considered as functions of the coupling strength (Γ) and collision parameter (kλ_ei), where λ_ei is the electron-ion mean free path. A particle-particle particle-mesh molecular dynamics simulation code is also developed to simulate the dynamic structure factors, which are used to benchmark the calculation of our model. A good agreement between the two different approaches confirms the reliability of our model.

  20. Wireless Visual Sensor Network Resource Allocation using Cross-Layer Optimization

    DTIC Science & Technology

    2009-01-01

    Rate Compatible Punctured Convolutional (RCPC) codes for channel... vol. 44, pp. 2943–2959, November 1998. [22] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE... coding rate for H.264/AVC video compression is determined. At the data link layer, the Rate-Compatible Punctured Convolutional (RCPC) channel coding

  1. Research on compressive sensing reconstruction algorithm based on total variation model

    NASA Astrophysics Data System (ADS)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the limit of the Nyquist sampling theorem, provides a strong theoretical foundation for carrying out the sampling and compression of image signals simultaneously. Applying compressed sensing theory to traditional imaging procedures not only reduces the storage space but also greatly reduces the demand on detector resolution. Using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images, and better edge information can be obtained. To verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes to verify the stability of the algorithm, and typical reconstruction algorithms are compared in the same coding mode. On the basis of the minimum-total-variation algorithm, an augmented-Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages: at low measurement rates the target image can be recovered quickly and accurately.
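
    A minimal sketch of TV-regularized compressive-sensing recovery, using plain gradient descent on a least-squares data term plus a smoothed 1-D total variation. The paper's augmented-Lagrangian/alternating-direction solver and 2-D imaging setup are not reproduced; all parameters below are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)

      # Piecewise-constant test signal, compressively sampled as y = A @ x
      n, m = 200, 60
      x_true = np.concatenate([np.zeros(80), np.ones(70), 0.3 * np.ones(50)])
      A = rng.standard_normal((m, n)) / np.sqrt(m)
      y = A @ x_true

      lam, eps, step = 0.05, 1e-3, 0.05   # TV weight, smoothing, step size
      x = np.zeros(n)
      for _ in range(5000):
          grad_fid = A.T @ (A @ x - y)           # gradient of 0.5*||Ax - y||^2
          d = np.diff(x)
          w = d / np.sqrt(d**2 + eps)            # gradient of smoothed |d|
          grad_tv = np.concatenate([[-w[0]], w[:-1] - w[1:], [w[-1]]])
          x -= step * (grad_fid + lam * grad_tv)

      # Relative reconstruction error; edges should be recovered closely
      print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

    The TV term penalizes the total magnitude of jumps, so the recovered signal stays piecewise-constant even though only 60 of 200 samples' worth of measurements are taken.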

  2. Low-rate image coding using vector quantization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makur, A.

    1990-01-01

    This thesis deals with the development and analysis of a computationally simple vector quantization image compression system for coding monochrome images at low bit rate. Vector quantization has been known to be an effective compression scheme when a low bit rate is desirable, but the intensive computation required in a vector quantization encoder has been a handicap in using it for low-rate image coding. The present work shows that, without substantially increasing the coder complexity, it is indeed possible to achieve acceptable picture quality while attaining a high compression ratio. Several modifications to the conventional vector quantization coder are proposed in the thesis. These modifications are shown to offer better subjective quality when compared to the basic coder. Distributed blocks are used instead of spatial blocks to construct the input vectors. A class of input-dependent weighted distortion functions is used to incorporate psychovisual characteristics in the distortion measure. Computationally simple filtering techniques are applied to further improve the decoded image quality. Finally, unique designs of the vector quantization coder using electronic neural networks are described, so that the coding delay is reduced considerably.
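
    A minimal sketch of the vector quantization encoder/decoder pair underlying such a system. A random codebook stands in for a trained one (e.g. from the LBG algorithm), and the thesis's distributed blocks and weighted distortion functions are omitted:

      import numpy as np

      def vq_encode(vectors, codebook):
          """Map each input vector to the index of its nearest codeword
          (squared-Euclidean distortion)."""
          d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
          return d2.argmin(axis=1)

      rng = np.random.default_rng(1)
      blocks = rng.random((1000, 16))       # e.g. 4x4 image blocks as vectors
      codebook = rng.random((256, 16))      # 256 codewords -> 8 bits per block
      idx = vq_encode(blocks, codebook)
      decoded = codebook[idx]               # the decoder is a table lookup
      print(f"rate: {8 / 16:.2f} bits/pixel")

    The exhaustive nearest-codeword search is exactly the encoder cost the thesis attacks; the decoder, a table lookup, is already trivial.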

  3. Generation of interior cavity noise due to window vibration excited by turbulent flows past a generic side-view mirror

    NASA Astrophysics Data System (ADS)

    Yao, Hua-Dong; Davidson, Lars

    2018-03-01

    We investigate the interior noise caused by turbulent flows past a generic side-view mirror. A rectangular glass window is placed downstream of the mirror. The window vibration is excited by the surface pressure fluctuations and emits the interior noise in a cuboid cavity. The turbulent flows are simulated using a compressible large eddy simulation method. The window vibration and interior noise are predicted with a finite element method. The wavenumber-frequency spectra of the surface pressure fluctuations are analyzed. The spectra exhibit new features that cannot be explained by the Chase model for turbulent boundary layers: they contain a minor hydrodynamic domain in addition to the hydrodynamic domain caused by the main convection of the turbulent boundary layer. The minor domain results from the local convection of the recirculating flow. These domains are formed in bent elliptic shapes, and the spanwise expansion of the wake is found to cause the bending. Based on the wavenumber-frequency relationships in the spectra, the surface pressure fluctuations are decomposed into hydrodynamic and acoustic components. The acoustic component is more efficient in the generation of the interior noise than the hydrodynamic component. However, the hydrodynamic component is still dominant at low frequencies below approximately 250 Hz since it has low transmission losses near the hydrodynamic critical frequency of the window. The structural modes of the window determine the low-frequency interior tonal noise. The combination of the mode shapes of the window and cavity greatly affects the magnitude distribution of the interior noise.

  4. Can MR measurement of intracranial hydrodynamics and compliance differentiate which patient with idiopathic normal pressure hydrocephalus will improve following shunt insertion?

    PubMed

    Bateman, G A; Loiselle, A M

    2007-01-01

    Between 10 and 90% of patients with normal pressure hydrocephalus (NPH) treated with a shunt will improve, but they risk significant morbidity/mortality from this procedure. NPH is treated hydrodynamically, and it has been assumed that a hydrodynamic difference must exist to differentiate which patients will respond. The purpose of this study is to see whether MRI hydrodynamics can differentiate which patients will improve after shunting. Thirty-two patients with NPH underwent MRI with flow quantification measuring the degree of ventricular enlargement, sulcal compression, white matter disease, total blood inflow, sagittal sinus outflow, aqueduct stroke volume, relative compliance ratio, and arteriovenous delay. Patients were followed up after shunt insertion to gauge the degree of improvement and were compared with 12 age-matched controls and 12 patients with Alzheimer's disease. 63% of patients improved with shunt insertion. The responders were identical to the non-responders in all variables. The NPH patients were significantly different from the controls (e.g., total blood inflow reduced 20%, sagittal sinus outflow reduced 35%, aqueduct stroke volume increased 210%, relative compliance ratio reduced 60%, and arteriovenous delay reduced 57%, with p = 0.007, 0.03, 0.04, 0.0002 and 0.0003, respectively). The values for the patients with Alzheimer's disease were midway between those of the NPH patients and the controls. Significant hydrodynamic differences were noted between NPH patients and controls, but these were unable to differentiate the responders from the non-responders. The hydrodynamics of Alzheimer's disease makes exclusion of comorbidity from this disease difficult.

  5. Sequential neural text compression.

    PubMed

    Schmidhuber, J; Heil, S

    1996-01-01

    The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which build the basis of the UNIX functions "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.

  6. Verification testing of the compression performance of the HEVC screen content coding extensions

    NASA Astrophysics Data System (ADS)

    Sullivan, Gary J.; Baroncini, Vittorio A.; Yu, Haoping; Joshi, Rajan L.; Liu, Shan; Xiu, Xiaoyu; Xu, Jizheng

    2017-09-01

    This paper reports on verification testing of the coding performance of the screen content coding (SCC) extensions of the High Efficiency Video Coding (HEVC) standard (Rec. ITU-T H.265 | ISO/IEC 23008-2 MPEG-H Part 2). The coding performance of the HEVC screen content model (SCM) reference software is compared with that of the HEVC test model (HM) without the SCC extensions, as well as with the Advanced Video Coding (AVC) joint model (JM) reference software, for both lossy and mathematically lossless compression using All-Intra (AI), Random Access (RA), and Low-delay B (LB) encoding structures and using similar encoding techniques. Video test sequences in 1920×1080 RGB 4:4:4, YCbCr 4:4:4, and YCbCr 4:2:0 colour sampling formats with 8 bits per sample are tested in two categories: "text and graphics with motion" (TGM) and "mixed" content. For lossless coding, the encodings are evaluated in terms of relative bit-rate savings. For lossy compression, subjective testing was conducted at 4 quality levels for each coding case, and the test results are presented through mean opinion score (MOS) curves. The relative coding performance is also evaluated in terms of Bjøntegaard-delta (BD) bit-rate savings for equal PSNR quality. The perceptual tests and objective metric measurements show a very substantial benefit in coding efficiency for the SCC extensions, and provided consistent results with a high degree of confidence. For TGM video, the estimated bit-rate savings ranged from 60-90% relative to the JM and 40-80% relative to the HM, depending on the AI/RA/LB configuration category and colour sampling format.

  7. Two-dimensional compression of surface electromyographic signals using column-correlation sorting and image encoders.

    PubMed

    Costa, Marcus V C; Carvalho, Joao L A; Berger, Pedro A; Zaghetto, Alexandre; da Rocha, Adson F; Nascimento, Francisco A O

    2009-01-01

    We present a new preprocessing technique for two-dimensional compression of surface electromyographic (S-EMG) signals, based on correlation sorting. We show that the JPEG2000 coding system (originally designed for compression of still images) and the H.264/AVC encoder (video compression algorithm operating in intraframe mode) can be used for compression of S-EMG signals. We compare the performance of these two off-the-shelf image compression algorithms for S-EMG compression, with and without the proposed preprocessing step. Compression of both isotonic and isometric contraction S-EMG signals is evaluated. The proposed methods were compared with other S-EMG compression algorithms from the literature.
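
    A minimal sketch of the preprocessing idea: fold the 1-D S-EMG record into a 2-D matrix of segments, then reorder the columns so that neighbours are highly correlated before handing the resulting "image" to JPEG2000 or H.264/AVC intra coding. The greedy ordering rule below is an illustrative assumption, not necessarily the paper's exact criterion:

      import numpy as np

      def correlation_sort(mat):
          """Greedy column reordering: start from column 0 and repeatedly
          append the unused column most correlated with the previous one,
          so adjacent columns of the 2-D image are similar."""
          n = mat.shape[1]
          c = np.corrcoef(mat.T)               # column-by-column correlations
          order, used = [0], {0}
          while len(order) < n:
              last = order[-1]
              cand = [j for j in range(n) if j not in used]
              nxt = max(cand, key=lambda j: c[last, j])
              order.append(nxt)
              used.add(nxt)
          return mat[:, order], order

      # 1-D record folded into a matrix: one 512-sample segment per column
      rng = np.random.default_rng(2)
      emg = rng.standard_normal(512 * 64)
      image, order = correlation_sort(emg.reshape(512, 64, order="F"))

    The order permutation must be stored as side information so the decoder can undo the sorting after image decompression.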

  8. The numerical modelling of MHD astrophysical flows with chemistry

    NASA Astrophysics Data System (ADS)

    Kulikov, I.; Chernykh, I.; Protasov, V.

    2017-10-01

    A new code for the numerical simulation of magnetohydrodynamical astrophysical flows with chemical reactions is presented in the paper. At the heart of the code is a new, low-dissipation numerical method based on a combination of an operator-splitting approach and a piecewise-parabolic method on a local stencil. The chemodynamics of hydrogen during the turbulent formation of molecular clouds is modeled.

  9. VizieR Online Data Catalog: FARGO_THORIN 1.0 hydrodynamic code (Chrenko+, 2017)

    NASA Astrophysics Data System (ADS)

    Chrenko, O.; Broz, M.; Lambrechts, M.

    2017-07-01

    This archive contains the source files, documentation and example simulation setups of the FARGO_THORIN 1.0 hydrodynamic code. The program was introduced, described and used for simulations in the paper. It is built on top of the FARGO code (Masset, 2000A&AS..141..165M, Baruteau & Masset, 2008ApJ...672.1054B) and it is also interfaced with the REBOUND integrator package (Rein & Liu, 2012A&A...537A.128R). THORIN stands for Two-fluid HydrOdynamics, the Rebound integrator Interface and Non-isothermal gas physics. The program is designed for self-consistent investigations of protoplanetary systems consisting of a gas disk, a disk of small solid particles (pebbles) and embedded protoplanets. Code features: I) Non-isothermal gas disk with implicit numerical solution of the energy equation. The implemented energy source terms are: compressional heating, viscous heating, stellar irradiation, vertical escape of radiation, radiative diffusion in the midplane, and radiative feedback to accretion heating of protoplanets. II) Planets evolved in 3D, with close encounters allowed. The orbits are integrated using the IAS15 integrator (Rein & Spiegel, 2015MNRAS.446.1424R). The code detects collisions among planets and resolves them as mergers. III) Refined treatment of the planet-disk gravitational interaction. The code uses a vertical averaging of the gravitational potential, as outlined in Muller & Kley (2012A&A...539A..18M). IV) Pebble disk represented by an Eulerian, pressureless and inviscid fluid. The pebble dynamics is affected by the Epstein gas drag and optionally by diffusive effects. We also implemented the drag back-reaction term in the Navier-Stokes equation for the gas. Archive summary: /in_relax contains the setup of the first example simulation; /in_wplanet contains the setup of the second example simulation; /srcmain contains the source files of FARGO_THORIN; /src_reb contains the source files of the REBOUND integrator package to be linked with THORIN; GUNGPL3 is the GNU General Public License, version 3; LICENSE is the license agreement; README is a simple user's guide; UserGuide.pdf is the extended user's guide; refman.pdf is the programmer's guide. (1 data file).

  10. Buoyancy instability of homologous implosions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, B. M.

    2015-06-15

    With this study, I consider the hydrodynamic stability of imploding ideal gases as an idealized model for inertial confinement fusion capsules, sonoluminescent bubbles and the gravitational collapse of astrophysical gases. For oblate modes (short-wavelength incompressive modes elongated in the direction of the mean flow), a second-order ordinary differential equation is derived that can be used to assess the stability of any time-dependent flow with planar, cylindrical or spherical symmetry. Upon further restricting the analysis to homologous flows, it is shown that a monatomic gas is governed by the Schwarzschild criterion for buoyant stability. Under buoyantly unstable conditions, both entropy and vorticity fluctuations experience power-law growth in time, with a growth rate that depends upon mean flow gradients and, in the absence of dissipative effects, is independent of mode number. If the flow accelerates throughout the implosion, oblate modes amplify by a factor (2C)^(|N0| t_i), where C is the convergence ratio of the implosion, N0 is the initial buoyancy frequency and t_i is the implosion time scale. If, instead, the implosion consists of a coasting phase followed by stagnation, oblate modes amplify by a factor exp(π |N0| t_s), where N0 is the buoyancy frequency at stagnation and t_s is the stagnation time scale. Even under stable conditions, vorticity fluctuations grow due to the conservation of angular momentum as the gas is compressed. For non-monatomic gases, this additional growth due to compression results in weak oscillatory growth under conditions that would otherwise be buoyantly stable; this over-stability is consistent with the conservation of wave action in the fluid frame. The above analytical results are verified by evolving the complete set of linear equations as an initial value problem, and it is demonstrated that oblate modes are the fastest-growing modes and that high mode numbers are required to reach this limit (Legendre mode ℓ ≳ 100 for spherical flows). Finally, comparisons are made with a Lagrangian hydrodynamics code, and it is found that a numerical resolution of ~30 zones per wavelength is required to capture these solutions accurately. This translates to an angular resolution of ~(12/ℓ)°, or ≲ 0.1° to resolve the fastest-growing modes.

  11. The human genome contracts again.

    PubMed

    Pavlichin, Dmitri S; Weissman, Tsachy; Yona, Golan

    2013-09-01

    The number of human genomes that have been sequenced completely for different individuals has increased rapidly in recent years. Storing and transferring complete genomes between computers for the purpose of running various applications and analysis tools will soon become a major hurdle, hindering the analysis phase. Therefore, there is a growing need to compress these data efficiently. Here, we describe a technique to compress human genomes based on entropy coding, using a reference genome and known Single Nucleotide Polymorphisms (SNPs). Furthermore, we explore several intrinsic features of genomes and information in other genomic databases to further improve the compression attained. Using these methods, we compress James Watson's genome to 2.5 megabytes (MB), improving on recent work by 37%. Similar compression is obtained for most genomes available from the 1000 Genomes Project. Our biologically inspired techniques promise even greater gains for genomes of lower organisms and for human genomes as more genomic data become available. Code is available at sourceforge.net/projects/genomezip/
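    The core idea, coding only the differences between an individual genome and a reference, can be illustrated with a toy Python sketch; zlib stands in for the SNP-aware entropy coder of the paper, and all names here are hypothetical.

        import zlib

        def compress_against_reference(genome, reference):
            """Record only positions (gap-encoded) and alleles where the
            genome differs from the reference, then entropy-code the result
            (zlib is a crude stand-in for the paper's entropy coder)."""
            assert len(genome) == len(reference)
            diffs, last = [], 0
            for i, (g, r) in enumerate(zip(genome, reference)):
                if g != r:
                    diffs.append(f"{i - last}{g}")   # gap since previous SNP
                    last = i
            return zlib.compress(",".join(diffs).encode(), 9)

        reference = "ACGT" * 250_000
        individual = reference[:1000] + "G" + reference[1001:]  # one SNP
        print(len(compress_against_reference(individual, reference)), "bytes")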

  12. Analysis of tractable distortion metrics for EEG compression applications.

    PubMed

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cárdenas-Barrera, Julián; Cruz-Roldán, Fernando

    2012-07-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as the target parameter in EEG compression allows both clinicians and scientists to infer whether the coding error is clinically acceptable or not, at no cost to the compression ratio.
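    Both criteria are straightforward to compute from the original and reconstructed signals; a minimal sketch using their standard definitions (microvolt-scaled signals assumed) follows.

        import numpy as np

        def prd(original, reconstructed):
            """Percentage root-mean-square difference: relative, unitless."""
            return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                                   / np.sum(original ** 2))

        def rmse(original, reconstructed):
            """Root-mean-square error: absolute distortion in signal units."""
            return np.sqrt(np.mean((original - reconstructed) ** 2))

        x = np.random.randn(1024) * 50.0     # synthetic EEG-like trace, in uV
        y = x + np.random.randn(1024) * 2.0  # reconstruction with coding noise
        print(f"PRD  = {prd(x, y):.2f} %")
        print(f"RMSE = {rmse(x, y):.2f} uV")  # comparable to clinical noise limits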

  13. Physics design point for a 1 MW fusion neutron source

    NASA Astrophysics Data System (ADS)

    Woodruff, Simon; Melnik, Paul; Sieck, Paul; Stuber, James; Romero-Talamas, Carlos; O'Bryan, John; Miller, Ronald

    2016-10-01

    We are developing a design point for a spheromak experiment heated by adiabatic compression for use as a compact neutron source. We utilize the CORSICA and NIMROD MHD codes as well as analytic modeling to assess a concept with target parameters R0 = 0.5 m, Rf = 0.17 m, T0 = 1 keV, Tf = 8 keV, n0 = 2×10²⁰ m⁻³ and nf = 5×10²¹ m⁻³, with radial convergence C = R0/Rf = 3. We present results from CORSICA showing the placement of coils and passive structure to ensure stability during compression. We specify target parameters for the compression in terms of plasma beta, formation efficiency and energy confinement. We present results of simulations of magnetic compression using the NIMROD code to examine the role of rotation on the stability and confinement of the spheromak as it is compressed. Supported by DARPA Grant N66001-14-1-4044 and the IAEA CRP on Compact Fusion Neutron Sources.
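    As a quick sanity check, the quoted targets are roughly consistent with ideal adiabatic compression (my assumption: gamma = 5/3, so T scales as n^(2/3)).

        n0, nf = 2e20, 5e21     # m^-3, target densities quoted above
        T0 = 1.0                # keV
        Tf = T0 * (nf / n0) ** (2.0 / 3.0)   # adiabatic scaling, gamma = 5/3
        print(f"predicted Tf = {Tf:.1f} keV (stated target: 8 keV)")
        print(f"radial convergence C = {0.5 / 0.17:.2f} (stated target: 3)")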

  14. A new approach of objective quality evaluation on JPEG2000 lossy-compressed lung cancer CT images

    NASA Astrophysics Data System (ADS)

    Cai, Weihua; Tan, Yongqiang; Zhang, Jianguo

    2007-03-01

    Image compression has been used to increase communication efficiency and storage capacity. JPEG 2000 compression, based on the wavelet transform, has advantages compared to other compression methods, such as ROI coding, error resilience, adaptive binary arithmetic coding and an embedded bit-stream. However, it is still difficult to find an objective method to evaluate the image quality of lossy-compressed medical images. In this paper, we present an approach to evaluate image quality by using a computer-aided diagnosis (CAD) system. We selected 77 cases of CT images, bearing benign and malignant lung nodules with confirmed pathology, from our clinical Picture Archiving and Communication System (PACS). We developed a prototype CAD system to classify these images into benign and malignant ones, the performance of which was evaluated by receiver operating characteristic (ROC) curves. We first used JPEG 2000 to compress these images at different compression ratios from lossless to lossy, used the CAD system to classify the cases at each compression ratio, and then compared the ROC curves from the CAD classification results. Support vector machines (SVM) and neural networks (NN) were used to classify the malignancy of input nodules. In each approach, we found that the area under the ROC curve (AUC) decreases with increasing compression ratio, with small fluctuations.
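    The evaluation protocol, re-scoring the compressed images with the CAD classifier and tracking the area under the ROC curve at each compression ratio, can be sketched as follows; compress and cad_score are hypothetical stand-ins for the JPEG 2000 codec and the SVM/NN nodule classifier.

        from sklearn.metrics import roc_auc_score

        def auc_vs_compression(images, labels, ratios, compress, cad_score):
            """For each compression ratio, compress every CT image, re-run
            the CAD malignancy scorer, and record the resulting AUC."""
            results = {}
            for r in ratios:
                scores = [cad_score(compress(img, r)) for img in images]
                results[r] = roc_auc_score(labels, scores)
            return results  # expect AUC to drift downward as r increases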

  15. A systematic comparison of two-equation Reynolds-averaged Navier-Stokes turbulence models applied to shock-cloud interactions

    NASA Astrophysics Data System (ADS)

    Goodson, Matthew D.; Heitsch, Fabian; Eklund, Karl; Williams, Virginia A.

    2017-07-01

    Turbulence models attempt to account for unresolved dynamics and diffusion in hydrodynamical simulations. We develop a common framework for two-equation Reynolds-averaged Navier-Stokes turbulence models, and we implement six models in the athena code. We verify each implementation with the standard subsonic mixing layer, although the level of agreement depends on the definition of the mixing layer width. We then test the validity of each model in the supersonic regime, showing that compressibility corrections can improve agreement with experiment. For models with buoyancy effects, we also verify our implementation via the growth of the Rayleigh-Taylor instability in a stratified medium. The models are then applied to the ubiquitous astrophysical shock-cloud interaction in three dimensions. We focus on the mixing of shock and cloud material, comparing results from turbulence models to high-resolution simulations (up to 200 cells per cloud radius) and ensemble-averaged simulations. We find that the turbulence models lead to increased spreading and mixing of the cloud, although no two models predict the same result. Increased mixing is also observed in inviscid simulations at resolutions greater than 100 cells per radius, which suggests that the turbulent mixing begins to be resolved.
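    As a reminder of what a two-equation model evolves, here is a minimal k-epsilon-style source-term update with the standard model constants; the six implemented models differ precisely in these closures and in their compressibility corrections, so this is a generic sketch rather than the athena implementation.

        # Standard k-epsilon model constants.
        C_MU, C_E1, C_E2 = 0.09, 1.44, 1.92

        def two_equation_sources(k, eps, production, dt):
            """Advance turbulent kinetic energy k and dissipation rate eps
            by their local production/dissipation source terms."""
            nu_t = C_MU * k ** 2 / eps                 # eddy viscosity
            k_new = k + dt * (production - eps)
            eps_new = eps + dt * (eps / k) * (C_E1 * production - C_E2 * eps)
            # Floors keep the model realizable in near-laminar regions.
            return max(k_new, 1e-12), max(eps_new, 1e-12), nu_t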

  16. Density-Functional-Theory-Based Equation-of-State Table of Beryllium for Inertial Confinement Fusion Applications

    NASA Astrophysics Data System (ADS)

    Ding, Y. H.; Hu, S. X.

    2017-10-01

    Beryllium has been considered a superior ablator material for inertial confinement fusion target designs. Based on density-functional-theory calculations, we have established a wide-range beryllium equation-of-state (EOS) table for densities ρ = 0.001 to 500 g/cm³ and temperatures T = 2000 to 10⁸ K. Our first-principles equation-of-state (FPEOS) table is in better agreement with the widely used SESAME EOS table (SESAME 2023) than the average-atom INFERNO model and the Purgatorio model. For the principal Hugoniot, our FPEOS prediction shows 10% stiffer behavior than the latter two models at maximum compression. Comparisons between FPEOS and SESAME for off-Hugoniot conditions show that both the pressure and internal energy differences are within 20% between the two EOS tables. By implementing the FPEOS table into the 1-D radiation-hydrodynamics code LILAC, we studied the EOS effects on beryllium target-shell implosions. The FPEOS simulation predicts up to a 15% higher neutron yield compared to the simulation using the SESAME 2023 EOS table. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
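    In use, a hydrodynamics code samples such a table by interpolating in (log ρ, log T); a generic sketch with placeholder values (not the FPEOS data) is:

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # Placeholder pressure table on the quoted density/temperature range.
        log_rho = np.linspace(-3.0, np.log10(500.0), 64)   # g/cm^3
        log_T = np.linspace(np.log10(2000.0), 8.0, 64)     # K
        P_table = 10.0 ** (log_rho[:, None] + log_T[None, :])  # fake data

        pressure = RegularGridInterpolator((log_rho, log_T), P_table)
        # Sample at solid beryllium density (1.85 g/cm^3) and T = 1e6 K.
        print(pressure([[np.log10(1.85), 6.0]]))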

  17. A real-time chirp-coded imaging system with tissue attenuation compensation.

    PubMed

    Ramalli, A; Guidi, F; Boni, E; Tortoli, P

    2015-07-01

    In ultrasound imaging, pulse compression methods based on the transmission (TX) of long coded pulses and matched receive filtering can be used to improve the penetration depth while preserving the axial resolution (coded imaging). The performance of most of these methods is affected by the frequency-dependent attenuation of tissue, which causes mismatch of the receiver filter. This, together with the additional computational load involved, has probably so far limited the implementation of pulse compression methods in real-time imaging systems. In this paper, a real-time low-computational-cost coded-imaging system operating on the beamformed and demodulated data received by a linear array probe is presented. The system has been implemented by extending the firmware and the software of the ULA-OP research platform. In particular, pulse compression is performed by exploiting the computational resources of a single digital signal processor. Each image line is produced in less than 20 μs, so that, e.g., 192-line frames can be generated at up to 200 fps. Although the system may work with a large class of codes, this paper focuses on the test of linear frequency-modulated chirps. The new system has been used to experimentally investigate the effects of tissue attenuation so that the design of the receive compression filter can be guided accordingly. Tests made with different chirp signals confirm that, although the attainable compression gain in attenuating media is lower than the theoretical value expected for a given TX time-bandwidth product (BT), good SNR gains can be obtained. For example, by using a chirp signal having BT = 19, a 13 dB compression gain has been measured. By adapting the frequency band of the receiver to the band of the received echo, the signal-to-noise ratio and the penetration depth have been further increased, as shown by real-time tests conducted on phantoms and in vivo. In particular, a 2.7 dB SNR increase has been measured through a novel attenuation compensation scheme, which only requires shifting the demodulation frequency by 1 MHz. The proposed method is characterized by its simplicity and easy implementation.
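    In its simplest form, the transmit/compress chain described above amounts to correlating the received echo with the transmitted chirp; a minimal sketch with illustrative parameters (not the ULA-OP configuration) is:

        import numpy as np
        from scipy.signal import chirp

        fs = 50e6                             # sample rate, Hz
        T = 10e-6                             # chirp duration, 10 us
        t = np.arange(0, T, 1.0 / fs)
        tx = chirp(t, f0=3e6, t1=T, f1=7e6)   # B = 4 MHz, so BT = 40

        # Weak, delayed echo buried in receiver noise.
        echo = np.concatenate([np.zeros(2000), 0.05 * tx, np.zeros(2000)])
        echo += 0.01 * np.random.randn(echo.size)

        compressed = np.correlate(echo, tx, mode="same")   # matched filter
        print("peak at sample", np.argmax(np.abs(compressed)))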

  18. Power-on performance predictions for a complete generic hypersonic vehicle configuration

    NASA Technical Reports Server (NTRS)

    Bennett, Bradford C.

    1991-01-01

    The Compressible Navier-Stokes (CNS) code was developed to compute external hypersonic flow fields. It has been applied to various hypersonic external flow applications. Here, the CNS code was modified to compute hypersonic internal flow fields. Calculations were performed on a Mach 18 sidewall compression inlet and on the Lewis Mach 5 inlet. The use of the ARC3D diagonal algorithm was evaluated for internal flows on the Mach 5 inlet flow. The initial modifications to the CNS code involved generalization of the boundary conditions and the addition of viscous terms in the second crossflow direction and modifications to the Baldwin-Lomax turbulence model for corner flows.

  1. Stabilization of high-compression, indirect-drive inertial confinement fusion implosions using a 4-shock adiabat-shaped drive

    DOE PAGES

    MacPhee, A. G.; Peterson, J. L.; Casey, D. T.; ...

    2015-08-01

    Hydrodynamic instabilities and poor fuel compression are major factors for capsule performance degradation in ignition experiments on the National Ignition Facility. Using a recently developed laser drive profile with a decaying first shock to tune the ablative Richtmyer-Meshkov (ARM) instability and subsequent in-flight Rayleigh-Taylor growth, we have demonstrated reduced growth compared to the standard ignition pulse whilst maintaining conditions for a low fuel adiabat needed for increased compression. Here, using in-flight x-ray radiography of pre-machined modulations, the first growth measurements using this new ARM-tuned drive have demonstrated instability growth reduction of ~4× compared to the original design at a convergence ratio of ~2. Corresponding simulations give a fuel adiabat of ~1.6, similar to the original goal and consistent with ignition requirements.

  2. Shear waves in inhomogeneous, compressible fluids in a gravity field.

    PubMed

    Godin, Oleg A

    2014-03-01

    While elastic solids support compressional and shear waves, waves in ideal compressible fluids are usually thought of as compressional waves. Here, a class of acoustic-gravity waves is studied in which the dilatation is identically zero, and the pressure and density remain constant in each fluid particle. These shear waves are described by an exact analytic solution of linearized hydrodynamics equations in inhomogeneous, quiescent, inviscid, compressible fluids with piecewise continuous parameters in a uniform gravity field. It is demonstrated that these shear acoustic-gravity waves can also be supported by moving fluids, as well as by quiescent, viscous fluids with and without thermal conductivity. Excitation of a shear-wave normal mode by a point source and the normal mode distortion in realistic environmental models are considered. The shear acoustic-gravity waves are likely to play a significant role in coupling wave processes in the ocean and atmosphere.

  3. AFRESh: an adaptive framework for compression of reads and assembled sequences with random access functionality.

    PubMed

    Paridaens, Tom; Van Wallendael, Glenn; De Neve, Wesley; Lambert, Peter

    2017-05-15

    The past decade has seen the introduction of new technologies that have steadily lowered the cost of genomic sequencing. We can even observe that the cost of sequencing is dropping significantly faster than the cost of storage and transmission. The latter motivates a need for continuous improvements in the area of genomic data compression, not only at the level of effectiveness (compression rate), but also at the level of functionality (e.g. random access), configurability (effectiveness versus complexity, coding tool set …) and versatility (support for both sequenced reads and assembled sequences). In that regard, we can point out that current approaches mostly do not support random access, requiring full files to be transmitted, and that current approaches are restricted to either read or sequence compression. We propose AFRESh, an adaptive framework for no-reference compression of genomic data with random access functionality, targeting the effective representation of the raw genomic symbol streams of both reads and assembled sequences. AFRESh makes use of a configurable set of prediction and encoding tools, extended by a Context-Adaptive Binary Arithmetic Coding (CABAC) scheme, to compress raw genetic codes. To the best of our knowledge, our paper is the first to describe an effective implementation of CABAC outside of its original application. By applying CABAC, the compression effectiveness improves by up to 19% for assembled sequences and up to 62% for reads. By applying AFRESh to the genomic symbols of the MPEG genomic compression test set for reads, a compression gain is achieved of up to 51% compared to SCALCE, 42% compared to LFQC and 44% compared to ORCOM. When comparing to generic compression approaches, a compression gain is achieved of up to 41% compared to GNU Gzip and 22% compared to 7-Zip at the Ultra setting. Additionally, when compressing assembled sequences of the Human Genome, a compression gain is achieved of up to 34% compared to GNU Gzip and 16% compared to 7-Zip at the Ultra setting. A Windows executable version can be downloaded at https://github.com/tparidae/AFresh .

  4. Impact Ignition of Liquid Propellants

    DTIC Science & Technology

    1992-04-30

    attributed the initiation to a hydrodynamic phenomenon: the impact of a high-speed microjet formed by the collapsing cavity, and suggested that the jet was...heated by shock compression. Recent work has demonstrated hot-spots formed at absorbing centres after laser irradiation of secondary explosives (Ng...detonator containing a secondary explosive initiated by a laser pulse. Cavity collapse has been studied for many years to explain the cavitation

  5. A modified JPEG-LS lossless compression method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Deng, Lihua; Huang, Zhenghua

    2015-12-01

    Like many variable-length source coders, JPEG-LS is highly vulnerable to channel errors, which occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affect its robustness. A common method of improving the error resilience of JPEG-LS is dividing the image into many strips or blocks and then coding each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to the image, and the compression efficiency is close to that of conventional JPEG-LS.

  6. Impact on a Compressible Fluid

    NASA Technical Reports Server (NTRS)

    Egorov, L. T.

    1958-01-01

    Upon impact of a solid body on the plane surface of a fluid, there occurs on the wetted surface of the body an abrupt pressure rise which propagates into both media with the speed of sound. Below, we assume the case where the speed of propagation of sound in the body which falls on the surface of the fluid may be regarded as infinitely large in comparison with the speed of propagation of sound in the fluid; that is, we shall assume that the falling body is absolutely rigid. In this case, the entire relative speed of the motion which takes place at the beginning of the impact is absorbed by the fluid. The hydrodynamic pressures arising thereby are propagated from the contact surface within the fluid with the speed of sound in the form of compression and expansion waves and are gradually damped. After this, they are dispersed like impact pressures, reach ever larger regions of the fluid remote from the body and become equal to zero; in the fluid there remain hydrodynamic pressures corresponding to the motion of the body after the impact. Neglecting the forces of viscosity and taking into account, furthermore, that the motion of the fluid begins from a state of rest, according to Thomson's theorem, we may consider the motion of an ideal compressible fluid in the process of impact to be potential. We examine the case of impact upon the surface of a compressible fluid of a flat plate of infinite extent or of a body, the immersed part of the surface of which may be regarded as approximately flat. In this report we discuss the first phase of the impact pressure on the surface of a fluid, prior to the appearance of a cavity, since at this stage the hydrodynamic pressures reach their maximum values. Observations, after the fall of bodies on the surface of a fluid, show that the free surface of the fluid at this stage is almost completely at rest if one does not take into account the small rise in the neighborhood of the boundaries of the impact surface.
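    The peak pressure in this first, compressible phase is set by the classical acoustic (water-hammer) estimate p ≈ ρcv; the numbers below are illustrative.

        rho = 1000.0   # water density, kg/m^3
        c = 1450.0     # speed of sound in water, m/s
        v = 10.0       # impact speed, m/s
        p_peak = rho * c * v
        print(f"peak acoustic impact pressure ~ {p_peak / 1e6:.1f} MPa")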

  7. Multi-D Full Boltzmann Neutrino Hydrodynamic Simulations in Core Collapse Supernovae and their detailed comparison with Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Nagakura, Hiroki; Richers, Sherwood; Ott, Christian; Iwakami, Wakana; Furusawa, Shun; Sumiyoshi, Kohsuke; Yamada, Shoichi

    2017-01-01

    We have developed a multi-d radiation-hydrodynamic code which solves the first-principles Boltzmann equation for neutrino transport. It is currently applicable specifically to core-collapse supernovae (CCSNe), but we will extend its applicability to further extreme phenomena such as black hole formation and the coalescence of double neutron stars. In this meeting, I will discuss two things: (1) a detailed comparison with a Monte Carlo neutrino transport code and (2) axisymmetric CCSNe simulations. Project (1) gives us confidence in our code. The Monte Carlo code has been developed by the Caltech group and is specialized to obtain a steady state. Within the CCSNe community, this is the first attempt to compare two different methods for multi-d neutrino transport. I will show the results of these comparisons. For project (2), I particularly focus on the properties of the neutrino distribution function in the semi-transparent region, where only a first-principles Boltzmann solver can appropriately handle the neutrino transport. In addition to these analyses, I will also discuss the "explodability" by the neutrino heating mechanism.

  8. Plasma viscosity with mass transport in spherical inertial confinement fusion implosion simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vold, E. L.; Molvig, K.; Joglekar, A. S.

    2015-11-15

    The effects of viscosity and small-scale atomic-level mixing on plasmas in inertial confinement fusion (ICF) currently represent challenges in ICF research. Many current ICF hydrodynamic codes ignore the effects of viscosity though recent research indicates viscosity and mixing by classical transport processes may have a substantial impact on implosion dynamics. We have implemented a Lagrangian hydrodynamic code in one-dimensional spherical geometry with plasma viscosity and mass transport and including a three temperature model for ions, electrons, and radiation treated in a gray radiation diffusion approximation. The code is used to study ICF implosion differences with and without plasma viscosity and to determine the impacts of viscosity on temperature histories and neutron yield. It was found that plasma viscosity has substantial impacts on ICF shock dynamics characterized by shock burn timing, maximum burn temperatures, convergence ratio, and time history of neutron production rates. Plasma viscosity reduces the need for artificial viscosity to maintain numerical stability in the Lagrangian formulation and also modifies the flux-limiting needed for electron thermal conduction.

  10. GPUs, a New Tool of Acceleration in CFD: Efficiency and Reliability on Smoothed Particle Hydrodynamics Methods

    PubMed Central

    Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.

    2011-01-01

    Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185
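    The computational core that such GPU ports parallelise is the per-particle kernel summation; a naive O(N^2) Python sketch of the SPH density estimate (generic, not the paper's code) is:

        import numpy as np

        def w_cubic_spline(r, h):
            """Standard 3D cubic spline SPH kernel with support radius 2h."""
            q = r / h
            sigma = 1.0 / (np.pi * h ** 3)
            return sigma * np.where(
                q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

        def sph_density(pos, mass, h):
            """Density at each particle: rho_i = sum_j m_j W(|r_i - r_j|, h).
            This is the O(N^2) loop that GPU implementations parallelise."""
            diff = pos[:, None, :] - pos[None, :, :]
            r = np.sqrt((diff ** 2).sum(axis=-1))
            return (mass[None, :] * w_cubic_spline(r, h)).sum(axis=1)

        pos = np.random.rand(500, 3)
        rho = sph_density(pos, np.full(500, 1.0 / 500), h=0.1)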

  11. A Gamma-Ray Burst Model Via Compressional Heating of Binary Neutron Stars

    NASA Astrophysics Data System (ADS)

    Salmonson, J. D.; Wilson, J. R.; Mathews, G. J.

    1998-12-01

    We present a model for gamma-ray bursts based on the compression of neutron stars in close binary systems. General relativistic (GR) simulations of close neutron star binaries have found compression of the neutron stars estimated to produce 10⁵³ ergs of thermal neutrinos on a timescale of seconds. The hot neutron stars will emit neutrino pairs which will partially recombine to form 10⁵¹ to 10⁵² ergs of electron-positron (e^-e^+) pair plasma. GR hydrodynamic computational modeling of the e^-e^+ plasma flow and recombination yields a gamma-ray burst in good agreement with the general characteristics (duration ~10 seconds, spectrum peak energy ~100 keV, total energy ~10⁵¹ ergs) of many observed gamma-ray bursts.

  12. Efficient Prediction Structures for H.264 Multi View Coding Using Temporal Scalability

    NASA Astrophysics Data System (ADS)

    Guruvareddiar, Palanivel; Joseph, Biju K.

    2014-03-01

    Prediction structures with "disposable view components based" hierarchical coding have been proven to be efficient for H.264 multi-view coding. Though these prediction structures along with QP cascading schemes provide superior compression efficiency when compared to the traditional IBBP coding scheme, the temporal scalability requirements of the bit stream could not be fully met. On the other hand, a fully scalable bit stream, obtained by "temporal identifier based" hierarchical coding, provides a number of advantages including bit rate adaptation and improved error resilience, but lacks compression efficiency when compared to the former scheme. In this paper it is proposed to combine the two approaches such that a fully scalable bit stream can be realized with minimal reduction in compression efficiency when compared to state-of-the-art "disposable view components based" hierarchical coding. Simulation results show that the proposed method enables full temporal scalability with a maximum BD-PSNR reduction of only 0.34 dB. A novel method has also been proposed for the identification of the temporal identifier for the legacy H.264/AVC base layer packets. Simulation results also show that this enables a scenario where the enhancement views can be extracted at a lower frame rate (1/2 or 1/4 of the base view) with an average extraction time per view component of only 0.38 ms.

  13. Universal Noiseless Coding Subroutines

    NASA Technical Reports Server (NTRS)

    Schlutsmeyer, A. P.; Rice, R. F.

    1986-01-01

    The software package consists of FORTRAN subroutines that perform universal noiseless coding and decoding of integer and binary data strings. The purpose of this type of coding is to achieve data compression in the sense that the coded data represent the original data perfectly (noiselessly) while taking fewer bits to do so. The routines are universal because they apply to virtually any "real-world" data source.
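    The heart of the Rice technique is splitting each integer into a unary-coded quotient and a k-bit remainder; the adaptive part of the algorithm (choosing the best k per block) is omitted from this minimal sketch.

        def rice_encode(values, k):
            """Rice-code non-negative integers: unary quotient, k-bit remainder."""
            bits = []
            for v in values:
                q, r = v >> k, v & ((1 << k) - 1)
                bits.append("1" * q + "0")           # quotient in unary
                bits.append(format(r, f"0{k}b"))     # remainder in k bits
            return "".join(bits)

        print(rice_encode([0, 3, 9], k=2))   # -> 00001111001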

  14. MR Elastography Can Be Used to Measure Brain Stiffness Changes as a Result of Altered Cranial Venous Drainage During Jugular Compression.

    PubMed

    Hatt, A; Cheng, S; Tan, K; Sinkus, R; Bilston, L E

    2015-10-01

    Compressing the internal jugular veins can reverse ventriculomegaly in the syndrome of inappropriately low pressure acute hydrocephalus, and it has been suggested that this works by "stiffening" the brain tissue. Jugular compression may also alter blood and CSF flow in other conditions. We aimed to understand the effect of jugular compression on brain tissue stiffness and CSF flow. The head and neck of 9 healthy volunteers were studied with and without jugular compression. Brain stiffness (shear modulus) was measured by using MR elastography. Phase-contrast MR imaging was used to measure CSF flow in the cerebral aqueduct and blood flow in the neck. The shear moduli of the brain tissue increased with the percentage of blood draining through the internal jugular veins during venous compression. Peak velocity of caudally directed CSF in the aqueduct increased significantly with jugular compression (P < .001). The mean jugular venous flow rate, amplitude, and vessel area were significantly reduced with jugular compression, while cranial arterial flow parameters were unaffected. Jugular compression influences cerebral CSF hydrodynamics in healthy subjects and can increase brain tissue stiffness, but the magnitude of the stiffening depends on the percentage of cranial blood draining through the internal jugular veins during compression—that is, subjects who maintain venous drainage through the internal jugular veins during jugular compression have stiffer brains than those who divert venous blood through alternative pathways. These methods may be useful for studying this phenomenon in patients with the syndrome of inappropriately low-pressure acute hydrocephalus and other conditions.

  15. A hybrid-drive nonisobaric-ignition scheme for inertial confinement fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, X. T., E-mail: xthe@iapcm.ac.cn; Center for Applied Physics and Technology, HEDPS, Peking University, Beijing 100871; IFSA Collaborative Innovation Center of MoE, Shanghai Jiao-Tong University, Shanghai 200240

    A new hybrid-drive (HD) nonisobaric ignition scheme of inertial confinement fusion (ICF) is proposed, in which a HD pressure to drive implosion dynamics increases via increasing density rather than temperature as in the conventional indirect-drive (ID) and direct-drive (DD) approaches. In this HD (combination of ID and DD) scheme, an assembled target of a spherical hohlraum and a layered deuterium-tritium capsule inside is used. The ID lasers first drive the shock to perform a spherically symmetric implosion and produce a large-scale corona plasma. Then, the DD lasers, whose critical surface in the ID corona plasma is far from the radiation ablation front, drive a supersonic electron thermal wave, which slows down to a high-pressure electron compression wave, like a snowplow, piling up the corona plasma into high density and forming a HD pressurized plateau with a large width. The HD pressure is several times the conventional ID and DD ablation pressure and launches an enhanced precursor shock and a continuous compression wave, which give rise to the HD capsule implosion dynamics with a large implosion velocity. The hydrodynamic instabilities at the imploding capsule interfaces are suppressed, and the continuous HD compression wave provides the main pdV work to the hotspot, large enough for the HD nonisobaric ignition. The ignition condition and target design based on this scheme are given theoretically and by numerical simulations. It is shown that the novel scheme can significantly suppress the implosion asymmetry and hydrodynamic instabilities of the current isobaric hotspot ignition design, and a high-gain ICF is promising.

  17. Compression of stereoscopic video using MPEG-2

    NASA Astrophysics Data System (ADS)

    Puri, Atul; Kollarits, Richard V.; Haskell, Barry G.

    1995-12-01

    Many current as well as emerging applications in areas of entertainment, remote operations, manufacturing industry and medicine can benefit from the depth perception offered by stereoscopic video systems which employ two views of a scene imaged under the constraints imposed by human visual system. Among the many challenges to be overcome for practical realization and widespread use of 3D/stereoscopic systems are good 3D displays and efficient techniques for digital compression of enormous amounts of data while maintaining compatibility with normal video decoding and display systems. After a brief introduction to the basics of 3D/stereo including issues of depth perception, stereoscopic 3D displays and terminology in stereoscopic imaging and display, we present an overview of tools in the MPEG-2 video standard that are relevant to our discussion on compression of stereoscopic video, which is the main topic of this paper. Next, we outline the various approaches for compression of stereoscopic video and then focus on compatible stereoscopic video coding using MPEG-2 Temporal scalability concepts. Compatible coding employing two different types of prediction structures become potentially possible, disparity compensated prediction and combined disparity and motion compensated predictions. To further improve coding performance and display quality, preprocessing for reducing mismatch between the two views forming stereoscopic video is considered. Results of simulations performed on stereoscopic video of normal TV resolution are then reported comparing the performance of two prediction structures with the simulcast solution. It is found that combined disparity and motion compensated prediction offers the best performance. Results indicate that compression of both views of stereoscopic video of normal TV resolution appears feasible in a total of 6 to 8 Mbit/s. We then discuss regarding multi-viewpoint video, a generalization of stereoscopic video. Finally, we describe ongoing efforts within MPEG-2 to define a profile for stereoscopic video coding, as well as, the promise of MPEG-4 in addressing coding of multi-viewpoint video.

  18. CoGI: Towards Compressing Genomes as an Image.

    PubMed

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transfer. It is desirable to compress data to reduce storage and transfer costs, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, a state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. Thus, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.

  19. FBCOT: a fast block coding option for JPEG 2000

    NASA Astrophysics Data System (ADS)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, achieving much higher encoding and decoding throughputs, with only modest loss in coding efficiency (typically < 0.5dB). The algorithm provides only limited quality/SNR scalability, but offers truly reversible transcoding to/from any standard JPEG 2000 block bit-stream. The proposed FAST block coder can be used with EBCOT's post-compression RD-optimization methodology, allowing a target compressed bit-rate to be achieved even at low latencies, leading to the name FBCOT (Fast Block Coding with Optimized Truncation).

  20. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies may improve sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Using engineering efforts to accomplish the same task usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials in compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes, and distinguishing mixed conversations from independent sources with a high audio recognition rate.
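    The reconstruction step common to these compressive systems is sparse recovery from multiplexed measurements y = Ax; a generic ISTA sketch (not the dissertation's algorithms) is:

        import numpy as np

        def ista(A, y, lam=0.01, n_iter=200):
            """Iterative shrinkage-thresholding for
            min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = x - A.T @ (A @ x - y) / L        # gradient step
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
            return x

        # Recover a 10-sparse signal from 4x-fewer random measurements.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((64, 256)) / 8.0
        x_true = np.zeros(256)
        x_true[rng.choice(256, 10, replace=False)] = 1.0
        x_hat = ista(A, A @ x_true)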

  1. 3D video coding: an overview of present and upcoming standards

    NASA Astrophysics Data System (ADS)

    Merkle, Philipp; Müller, Karsten; Wiegand, Thomas

    2010-07-01

    An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.

  2. HEVC for high dynamic range services

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Hwan; Zhao, Jie; Misra, Kiran; Segall, Andrew

    2015-09-01

    Displays capable of showing a greater range of luminance values can render content containing high dynamic range information in a way such that viewers have a more immersive experience. This paper introduces the design aspects of a high dynamic range (HDR) system and examines the performance of the HDR processing chain in terms of compression efficiency. Specifically, it examines the relation between the recently introduced Society of Motion Picture and Television Engineers (SMPTE) ST 2084 transfer function and the High Efficiency Video Coding (HEVC) standard. SMPTE ST 2084 is designed to cover the full range of an HDR signal from 0 to 10,000 nits; however, in many situations the valid signal range of actual video may be smaller than the range supported by SMPTE ST 2084. This restricted signal range results in a restricted range of code values for the input video data and adversely impacts compression efficiency. In this paper, we propose a code-value remapping method that extends the restricted-range code values into full-range code values so that existing standards such as HEVC may better compress the video content. The paper also identifies the related non-normative, encoder-only changes that are required for the remapping method, for a fair comparison with the anchor. Results are presented comparing the efficiency of the current approach versus the proposed remapping method for HM-16.2.
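    A minimal sketch of the remapping idea, stretching the occupied code-value range onto the full range before encoding (the exact mapping and signalling are the paper's; everything below is an assumption), is:

        import numpy as np

        def remap_to_full_range(code, v_min, v_max, bit_depth=10):
            """Linearly stretch code values in [v_min, v_max] onto the full
            [0, 2**bit_depth - 1] range; the inverse map is applied after
            decoding to restore the original signal range."""
            full = (1 << bit_depth) - 1
            out = (code.astype(np.float64) - v_min) / (v_max - v_min) * full
            return np.clip(np.round(out), 0, full).astype(np.uint16)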

  3. A new DWT/MC/DPCM video compression framework based on EBCOT

    NASA Astrophysics Data System (ADS)

    Mei, L. M.; Wu, H. R.; Tan, D. M.

    2005-07-01

    A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although the Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed in the literature for more than a decade and is still under investigation. The contribution of this work is twofold. Firstly, Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This is an extension of the EBCOT application from still images to videos. Secondly, this framework offers a good interface for the Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can easily be replaced with the PDM in the R-D optimization. Some preliminary results are reported here. They are also compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate vs. distortion.

  4. Research on compression performance of ultrahigh-definition videos

    NASA Astrophysics Data System (ADS)

    Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di

    2017-11-01

    With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now even 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the increasing data volume. The storage and transmission problems cannot be solved merely by expanding hard-disk capacity and upgrading transmission devices. Making full use of the high-efficiency video coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and inter-prediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance for a single image and for frame I. Then, using the above approach together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Last, with super-resolution reconstruction technology, the reconstructed video quality is further improved. Experiments show that the proposed compression method for a single image (frame I) and for video sequences is superior in performance to HEVC in a low-bit-rate environment.

  5. Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition

    NASA Astrophysics Data System (ADS)

    Li, Jin; Liu, Zilong

    2017-12-01

    Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., 2D-DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectral bands and also exploit the spatial correlations of each band. However, the use of an NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images. This method is based on a pair-wise multilevel grouping approach for the NTD to overcome its high computational cost. The proposed method has low complexity with only a slight decrease in coding performance compared to the conventional NTD. We experimentally confirm this method, showing that it requires less processing time and keeps a better coding performance than the case in which the NTD is not used. The proposed approach has a potential application in the lossy compression of hyper-spectral or multi-spectral images.

  6. Evaluating nuclear physics inputs in core-collapse supernova models

    NASA Astrophysics Data System (ADS)

    Lentz, E.; Hix, W. R.; Baird, M. L.; Messer, O. E. B.; Mezzacappa, A.

    Core-collapse supernova models depend on the details of the nuclear and weak interaction physics inputs just as they depend on the details of the macroscopic physics (transport, hydrodynamics, etc.), numerical methods, and progenitors. We present preliminary results from our ongoing comparison studies of nuclear and weak interaction physics inputs to core collapse supernova models using the spherically-symmetric, general relativistic, neutrino radiation hydrodynamics code Agile-Boltztran. We focus on comparisons of the effects of the nuclear EoS and the effects of improving the opacities, particularly neutrino-nucleon interactions.

  7. FESTR: Finite-Element Spectral Transfer of Radiation spectroscopic modeling and analysis code

    DOE PAGES

    Hakel, Peter

    2016-10-01

    Here we report on the development of a new spectral postprocessor of hydrodynamic simulations of hot, dense plasmas. Based on given time histories of one-, two-, and three-dimensional spatial distributions of materials, and their local temperature and density conditions, spectroscopically-resolved signals are computed. The effects of radiation emission and absorption by the plasma on the emergent spectra are simultaneously taken into account. This program can also be used independently of hydrodynamic calculations to analyze available experimental data with the goal of inferring plasma conditions.
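    The per-ray work of such a postprocessor is the formal solution of the steady transfer equation through successive material segments; a generic sketch (not FESTR's actual discretisation) is:

        import numpy as np

        def transport_along_ray(I0, source_fn, optical_depth):
            """March a specific intensity through uniform segments using
            I_out = I_in * exp(-tau) + S * (1 - exp(-tau)), per frequency."""
            I = I0
            for S, tau in zip(source_fn, optical_depth):
                att = np.exp(-tau)
                I = I * att + S * (1.0 - att)
            return I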

  9. A contourlet transform based algorithm for real-time video encoding

    NASA Astrophysics Data System (ADS)

    Katsigiannis, Stamos; Papaioannou, Georgios; Maroulis, Dimitris

    2012-06-01

    In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the Contourlet Transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by just dropping the encoded information referring to higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable bit rate encoding schemes. Furthermore, due to the transformation utilized, it does not suffer from blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by low-quality sensors usually encountered in web-cameras, due to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video-conferencing that demand real-time performance, along with the highest visual quality possible for each user. Through the presented performance and quality evaluation of the algorithm, experimental results show that the proposed algorithm achieves better or comparable visual quality relative to other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it provides more human-eye friendly images compared to algorithms utilizing block-based coding, like the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.

  10. Transient hydrodynamic finite-size effects in simulations under periodic boundary conditions

    NASA Astrophysics Data System (ADS)

    Asta, Adelchi J.; Levesque, Maximilien; Vuilleumier, Rodolphe; Rotenberg, Benjamin

    2017-06-01

    We use lattice-Boltzmann and analytical calculations to investigate transient hydrodynamic finite-size effects induced by the use of periodic boundary conditions. These effects are inevitable in simulations at the molecular, mesoscopic, or continuum levels of description. We analyze the transient response to a local perturbation in the fluid and obtain the local velocity correlation function via linear response theory. This approach is validated by comparing the finite-size effects on the steady-state velocity with the known results for the diffusion coefficient. We next investigate the full time dependence of the local velocity autocorrelation function. We find at long times a crossover between the expected t^(-3/2) hydrodynamic tail and an oscillatory exponential decay, and study the scaling with the system size of the crossover time, exponential rate and amplitude, and oscillation frequency. We interpret these results from the analytic solution of the compressible Navier-Stokes equation for the slowest modes, which are set by the system size. The present work not only provides a comprehensive analysis of hydrodynamic finite-size effects in bulk fluids, which arise regardless of the level of description and simulation algorithm, but also establishes the lattice-Boltzmann method as a suitable tool to investigate such effects in general.
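    As a rough illustration of the quantity at the center of this study, the sketch below estimates a velocity autocorrelation function from a time series and overlays a reference t^(-3/2) decay. It uses synthetic placeholder data and none of the paper's lattice-Boltzmann machinery; the function name and parameters are illustrative assumptions.

        # Minimal sketch: estimate a velocity autocorrelation function (VACF)
        # from a time series and build a reference t^(-3/2) curve like the
        # hydrodynamic tail discussed in the abstract. The data are synthetic.
        import numpy as np

        def vacf(v, max_lag):
            # v: array of shape (n_steps,), a single velocity component
            return np.array([np.mean(v[:len(v) - lag] * v[lag:]) for lag in range(max_lag)])

        rng = np.random.default_rng(0)
        v = rng.standard_normal(100_000)       # placeholder velocity record
        c = vacf(v, max_lag=200)

        t = np.arange(1, 200)
        tail = c[1] * t.astype(float) ** -1.5  # reference t^(-3/2) decay
        print(c[:5], tail[:5])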

  11. Biomechanics of Tetrahymena escaping from a dead end

    PubMed Central

    Kikuchi, Kenji

    2018-01-01

    Understanding the behaviours of swimming microorganisms in various environments is important for understanding cell distribution and growth in nature and industry. However, cell behaviour in complex geometries is largely unknown. In this study, we used Tetrahymena thermophila as a model microorganism and experimentally investigated cell behaviour between two flat plates with a small angle. In this configuration, the geometry provided a 'dead end' line where the two flat plates made contact. The results showed that cells tended to escape from the dead end line more by hydrodynamics than by a biological reaction. In the case of hydrodynamic escape, the cell trajectories were symmetric as they swam to and from the dead end line. Near the dead end line, T. thermophila cells were compressed between the two flat plates while cilia kept beating with reduced frequency; those cells again showed symmetric trajectories, although the swimming velocity decreased. These behaviours were well reproduced by our computational model based on biomechanics. The mechanism of hydrodynamic escape can be understood in terms of the torque balance induced by lubrication flow. We therefore conclude that a cell's escape from the dead end was assisted by hydrodynamics. These findings pave the way for understanding cell behaviour and distribution in complex geometries. PMID:29491169

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, M.K.; Kershaw, D.S.; Shaw, M.J.

    The authors present detailed features of the ICF3D hydrodynamics code used for inertial fusion simulations. This code is intended to be a state-of-the-art upgrade of the well-known fluid code, LASNEX. ICF3D employs discontinuous finite elements on a discrete unstructured mesh consisting of a variety of 3D polyhedra including tetrahedra, prisms, and hexahedra. The authors discuss how Roe-averaged second-order convection is applied on the discrete elements, and how the C++ coding interface has helped to simplify the implementation of the many physics and numerics modules within the code package. They emphasize the virtues of object-oriented design in large-scale projects such as ICF3D.

  13. Potential end-to-end imaging information rate advantages of various alternative communication systems

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1978-01-01

    Various communication systems were considered which are required to transmit both imaging data and a typically error-sensitive class of data called general science/engineering (gse) over a Gaussian channel. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. Actual system comparisons include an Advanced Imaging Communication System (AICS) which exhibits the rather significant potential advantages of sophisticated data compression coupled with powerful yet practical channel coding.

  14. Progress Towards a Rad-Hydro Code for Modern Computing Architectures LA-UR-10-02825

    NASA Astrophysics Data System (ADS)

    Wohlbier, J. G.; Lowrie, R. B.; Bergen, B.; Calef, M.

    2010-11-01

    We are entering an era of high performance computing where data movement is the overwhelming bottleneck to scalable performance, as opposed to the speed of floating-point operations per processor. All multi-core hardware paradigms, whether heterogeneous or homogeneous, be it the Cell processor, GPGPU, or multi-core x86, share this common trait. In multi-physics applications such as inertial confinement fusion or astrophysics, one may be solving multi-material hydrodynamics with tabular equation of state data lookups, radiation transport, nuclear reactions, and charged particle transport in a single time cycle. The algorithms are intensely data dependent (e.g., EOS, opacity, and nuclear data), and multi-core hardware memory restrictions are forcing code developers to rethink code and algorithm design. For the past two years LANL has been funding a small effort referred to as Multi-Physics on Multi-Core to explore ideas for code design as pertaining to inertial confinement fusion and astrophysics applications. The near term goals of this project are to have a multi-material radiation hydrodynamics capability, with tabular equation of state lookups, on Cartesian and curvilinear block structured meshes. In the longer term we plan to add fully implicit multi-group radiation diffusion and material heat conduction, and block structured AMR. We will report on our progress to date.

  15. Vectorization, threading, and cache-blocking considerations for hydrocodes on emerging architectures

    DOE PAGES

    Fung, J.; Aulwes, R. T.; Bement, M. T.; ...

    2015-07-14

    This work reports on considerations for improving computational performance in preparation for current and expected changes to computer architecture. The algorithms studied include increasingly complex prototypes for radiation hydrodynamics codes, such as gradient routines and diffusion matrix assembly (e.g., in [1-6]). The algorithms are considered on structured and unstructured meshes. The considerations applied for performance improvements are meant to be general in terms of architecture (not specific to graphics processing units (GPUs) or multi-core machines, for example) and include techniques for vectorization, threading, tiling, and cache blocking. From a survey of optimization techniques on applications such as diffusion and hydrodynamics, we make general recommendations with a view toward making these techniques conceptually accessible to the applications code developer.
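    Of the techniques surveyed, tiling (cache blocking) is the easiest to show compactly. The sketch below is a schematic only: it walks a 2D stencil update tile by tile so each tile's working set stays cache-resident. Production hydrocodes would do this in compiled code; Python with NumPy is used here just to make the traversal order explicit, and the tile size is an arbitrary assumption.

        # Schematic illustration of loop tiling (cache blocking) for a 2D
        # stencil update. Production codes do this in compiled languages;
        # Python is used here only to make the traversal order explicit.
        import numpy as np

        def blocked_stencil(u, tile=64):
            out = u.copy()
            n, m = u.shape
            for i0 in range(1, n - 1, tile):        # walk the grid tile by tile
                for j0 in range(1, m - 1, tile):    # so each tile stays in cache
                    i1 = min(i0 + tile, n - 1)
                    j1 = min(j0 + tile, m - 1)
                    out[i0:i1, j0:j1] = 0.25 * (
                        u[i0 - 1:i1 - 1, j0:j1] + u[i0 + 1:i1 + 1, j0:j1] +
                        u[i0:i1, j0 - 1:j1 - 1] + u[i0:i1, j0 + 1:j1 + 1])
            return out

        u = np.random.rand(1024, 1024)
        u_new = blocked_stencil(u)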

  16. Numerical Tests and Properties of Waves in Radiating Fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, B M; Klein, R I

    2009-09-03

    We discuss the properties of an analytical solution for waves in radiating fluids, with a view towards its implementation as a quantitative test of radiation hydrodynamics codes. A homogeneous radiating fluid in local thermodynamic equilibrium is periodically driven at the boundary of a one-dimensional domain, and the solution describes the propagation of the waves thus excited. Two modes are excited for a given driving frequency, generally referred to as a radiative acoustic wave and a radiative diffusion wave. While the analytical solution is well known, several features are highlighted here that require care during its numerical implementation. We compare the solution in a wide range of parameter space to a numerical integration with a Lagrangian radiation hydrodynamics code. Our most significant observation is that flux-limited diffusion does not preserve causality for waves on a homogeneous background.

  17. Hydrodynamic Studies of Turbulent AGN Tori

    NASA Astrophysics Data System (ADS)

    Schartmann, M.; Meisenheimer, K.; Klahr, H.; Camenzind, M.; Wolf, S.; Henning, Th.; Burkert, A.; Krause, M.

    Recently, the MID-infrared Interferometric instrument (MIDI) at the VLTI has shown that dust tori in the two nearby Seyfert galaxies NGC 1068 and the Circinus galaxy are geometrically thick and can be well described by a thin, warm central disk, surrounded by a colder and fluffy torus component. By carrying out hydrodynamical simulations with the help of the TRAMP code (Klahr et al. 1999), we follow the evolution of a young nuclear star cluster in terms of discrete mass-loss and energy injection from stellar processes. This naturally leads to a filamentary large scale torus component, where cold gas is able to flow radially inwards. The filaments join into a dense and very turbulent disk structure. In a post-processing step, we calculate spectral energy distributions and images with the 3D radiative transfer code MC3D (Wolf 2003) and compare them to observations. Turbulence in the dense disk component is investigated in a separate project.

  18. Quasi 1D Modeling of Mixed Compression Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph W.; Paxson, Daniel E.; Woolwine, Kyle J.

    2012-01-01

    The AeroServoElasticity task under the NASA Supersonics Project is developing dynamic models of the propulsion system and the vehicle in order to conduct research for integrated vehicle dynamic performance. As part of this effort, a nonlinear quasi 1-dimensional model of the 2-dimensional bifurcated mixed compression supersonic inlet is being developed. The model utilizes computational fluid dynamics for both the supersonic and subsonic diffusers. The oblique shocks are modeled utilizing compressible flow equations. This model also implements the variable geometry required to control the normal shock position. The model is flexible and can also be utilized to simulate other mixed compression supersonic inlet designs. The model was validated in both the time and frequency domains against the legacy LArge Perturbation INlet code, which has been previously verified using test data. This legacy code written in FORTRAN is quite extensive and complex in terms of the amount of software and number of subroutines. Further, the legacy code is not suitable for closed loop feedback controls design, and the simulation environment is not amenable to systems integration. Therefore, a solution is to develop an innovative, more simplified, mixed compression inlet model with the same steady state and dynamic performance as the legacy code that also can be used for controls design. The new nonlinear dynamic model is implemented in MATLAB Simulink. This environment allows easier development of linear models for controls design for shock positioning. The new model is also well suited for integration with a propulsion system model to study inlet/propulsion system performance, and integration with an aero-servo-elastic system model to study integrated vehicle ride quality, vehicle stability, and efficiency.
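    The oblique-shock modeling mentioned above rests on standard compressible-flow relations. As a hedged sketch of that kind of calculation (not the model's actual implementation), the snippet below solves the theta-beta-M relation for the weak-shock wave angle by bisection, assuming a calorically perfect gas with gamma = 1.4.

        # Hedged sketch of the compressible-flow relation the abstract refers
        # to: the theta-beta-M equation for an oblique shock, solved for the
        # weak-shock wave angle by bisection (gamma = 1.4 assumed).
        import math

        def theta_from_beta(M, beta, gamma=1.4):
            # flow deflection angle theta for a given wave angle beta
            num = M**2 * math.sin(beta)**2 - 1.0
            den = M**2 * (gamma + math.cos(2.0 * beta)) + 2.0
            return math.atan(2.0 / math.tan(beta) * num / den)

        def weak_shock_beta(M, theta, gamma=1.4, tol=1e-10):
            lo = math.asin(1.0 / M) + 1e-9   # Mach angle: weakest possible shock
            hi = math.radians(64.0)          # below the strong-shock branch
            for _ in range(200):
                mid = 0.5 * (lo + hi)
                if theta_from_beta(M, mid, gamma) < theta:
                    lo = mid
                else:
                    hi = mid
                if hi - lo < tol:
                    break
            return 0.5 * (lo + hi)

        beta = weak_shock_beta(M=2.0, theta=math.radians(10.0))
        print(math.degrees(beta))   # roughly 39 degrees for M = 2, theta = 10 deg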

  19. Spherical-shell boundaries for two-dimensional compressible convection in a star

    NASA Astrophysics Data System (ADS)

    Pratt, J.; Baraffe, I.; Goffrey, T.; Geroux, C.; Viallet, M.; Folini, D.; Constantino, T.; Popov, M.; Walder, R.

    2016-10-01

    Context. Studies of stellar convection typically use a spherical-shell geometry. The radial extent of the shell and the boundary conditions applied are based on the model of the star investigated. We study the impact of different two-dimensional spherical shells on compressible convection. Realistic profiles for density and temperature from an established one-dimensional stellar evolution code are used to produce a model of a large stellar convection zone representative of a young low-mass star, like our sun at 10^6 years of age. Aims: We analyze how the radial extent of the spherical shell changes the convective dynamics that result in the deep interior of the young sun model, far from the surface. In the near-surface layers, simple small-scale convection develops from the profiles of temperature and density. A central radiative zone below the convection zone provides a lower boundary on the convection zone. The inclusion of either of these physically distinct layers in the spherical shell can potentially affect the characteristics of deep convection. Methods: We perform hydrodynamic implicit large eddy simulations of compressible convection using the MUltidimensional Stellar Implicit Code (MUSIC). Because MUSIC has been designed to use realistic stellar models produced from one-dimensional stellar evolution calculations, MUSIC simulations are capable of seamlessly modeling a whole star. Simulations in two-dimensional spherical shells that have different radial extents are performed over tens or even hundreds of convective turnover times, permitting the collection of well-converged statistics. Results: To measure the impact of the spherical-shell geometry and our treatment of boundaries, we evaluate basic statistics of the convective turnover time, the convective velocity, and the overshooting layer. These quantities are selected for their relevance to one-dimensional stellar evolution calculations, so that our results are focused toward studies exploiting the so-called 321D link. We find that the inclusion in the spherical shell of the boundary between the radiative and convection zones decreases the amplitude of convective velocities in the convection zone. The inclusion of near-surface layers in the spherical shell can increase the amplitude of convective velocities, although the radial structure of the velocity profile established by deep convection is unchanged. The impact of including the near-surface layers depends on the speed and structure of small-scale convection in the near-surface layers. Larger convective velocities in the convection zone result in a commensurate increase in the overshooting layer width and a decrease in the convective turnover time. These results provide support for non-local aspects of convection.

  20. Compression of surface myoelectric signals using MP3 encoding.

    PubMed

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
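    For readers unfamiliar with the figures of merit quoted here, the sketch below computes the two used throughout this literature: the percent residual difference (PRD) between original and reconstructed signals, and the compression ratio (CR). The definitions follow common usage; the MP3 codec itself is not invoked, and the test signal is synthetic.

        # Sketch of two standard figures of merit for signal compression:
        # percent residual difference (PRD) and compression ratio (CR).
        import numpy as np

        def prd(x, x_rec):
            # distortion of the reconstruction, as a percentage of signal energy
            return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

        def compression_ratio(original_bits, compressed_bits):
            return original_bits / compressed_bits

        x = np.sin(np.linspace(0.0, 20.0, 2000))
        x_rec = x + 0.01 * np.random.default_rng(1).standard_normal(x.size)
        print(prd(x, x_rec), compression_ratio(16 * x.size, 4 * x.size))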

  1. Controlling dynamics of imploded core plasma for fast ignition

    NASA Astrophysics Data System (ADS)

    Nagatomo, H.; Johzaki, T.; Sunahara, A.; Shiraga, H.; Sakagami, H.; Cai, H.; Mima, K.

    2010-08-01

    In fast ignition, the formation of a highly compressed core plasma is a critical issue. In this work, the effect of hydrodynamic instability on cone-guided shell implosion is studied. Two-dimensional radiation hydrodynamic simulations are carried out in which realistic seeds of Rayleigh-Taylor instability are imposed. Preliminary results suggest that the instability reduces implosion performance, such as implosion velocity, areal density, and maximum density. In the perturbed target implosion, the break-up time of the tip of the cone is earlier than in the ideal unperturbed implosion. This is a crucial matter for fast ignition because the path for the heating laser is filled with plasma before the heating laser is fired. A sophisticated implosion design that is stable and has a low in-flight aspect ratio is necessary for cone-guided shell implosion.

  2. Nonlinear properties of small amplitude dust ion acoustic solitary waves

    NASA Astrophysics Data System (ADS)

    Ghosh, Samiran; Sarkar, S.; Khan, Manoranjan; Gupta, M. R.

    2000-09-01

    In this paper some nonlinear characteristics of small amplitude dust ion acoustic solitary waves in a three component dusty plasma consisting of electrons, ions, and dust grains have been studied. Simultaneously, the charge fluctuation dynamics of the dust grains has been considered, under the assumption that the dust charging time scale is much smaller than the dust hydrodynamic time scale. Ion-dust collisions have also been incorporated. It has been seen that a damped Korteweg-de Vries (KdV) equation governs the nonlinear dust ion acoustic wave, the damping arising from ion-dust collisions under the assumption that the ion hydrodynamical time scale is much smaller than that of the ion-dust collisions. Numerical investigations reveal that the dust ion acoustic wave admits only a positive potential, i.e., a compressive soliton.
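    For reference, the damped KdV equation described above takes the generic form below (a schematic form only; the explicit coefficients depend on the plasma parameters and are not reproduced here):

        \frac{\partial \phi}{\partial \tau} + A \, \phi \, \frac{\partial \phi}{\partial \xi} + B \, \frac{\partial^{3} \phi}{\partial \xi^{3}} + C \, \phi = 0,

    where \phi is the perturbed electrostatic potential, A and B are the nonlinear and dispersive coefficients, and the damping term C\phi arises from ion-dust collisions; setting C = 0 recovers the standard KdV equation, whose solitary-wave solution is the compressive soliton the authors find.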

  3. Anomalous-hydrodynamic analysis of charge-dependent elliptic flow in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Hongo, Masaru; Hirono, Yuji; Hirano, Tetsufumi

    2017-12-01

    Anomalous hydrodynamics is a low-energy effective theory that captures effects of quantum anomalies. We develop a numerical code of ideal anomalous hydrodynamics and apply it to dynamics of heavy-ion collisions, where anomalous transports are expected to occur. We discuss implications of the simulations for possible experimental observations of anomalous transport effects. From analyses of the charge-dependent elliptic flow parameters (v2±) as a function of the net charge asymmetry A±, we find that the linear dependence of Δ v2± ≡ v2- - v2+ on the net charge asymmetry A± can come from a mechanism unrelated to anomalous transport effects. Instead, we find that a finite intercept Δ v2± (A± = 0) can come from anomalous effects.

  4. A simple and efficient algorithm operating with linear time for MCEEG data compression.

    PubMed

    Titus, Geevarghese; Sudhakar, M S

    2017-09-01

    Popularisation of electroencephalograph (EEG) signals in diversified fields has increased the need for devices capable of operating at lower power and storage requirements. This has led to a great deal of research in data compression that can address (a) low latency in the coding of the signal, (b) reduced hardware and software dependencies, (c) quantification of system anomalies, and (d) effective reconstruction of the compressed signal. This paper proposes a computationally simple and novel coding scheme named spatial pseudo codec (SPC), to achieve lossy to near lossless compression of multichannel EEG (MCEEG). In the proposed system, MCEEG signals are initially normalized, followed by two parallel processes: one operating on the integer part and the other on the fractional part of the normalized data. The redundancies in the integer part are exploited using a spatial domain encoder, and the fractional part is coded as pseudo integers. The proposed method has been tested on a wide range of databases having variable sampling rates and resolutions. Results indicate that the algorithm has a good recovery performance with an average percentage root mean square deviation (PRD) of 2.72 for an average compression ratio (CR) of 3.16. Furthermore, the algorithm has a complexity of only O(n) with an average encoding and decoding time per sample of 0.3 ms and 0.04 ms respectively. The performance of the algorithm is comparable with recent methods like fast discrete cosine transform (fDCT) and tensor decomposition methods. The results validated the feasibility of the proposed compression scheme for practical MCEEG recording, archiving and brain computer interfacing systems.
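    A minimal sketch of the front end described above follows: normalize the multichannel record, then split each sample into an integer part (destined for the spatial-domain encoder) and a fractional part coded as pseudo integers. The SPC entropy-coding stages themselves are not reproduced, and the scale factors are illustrative assumptions rather than values from the paper.

        # Minimal sketch of the integer/fraction split described in the
        # abstract. The scale factors are illustrative assumptions; the SPC
        # entropy-coding stages are not reproduced.
        import numpy as np

        def split_integer_fraction(mceeg, scale=127.0):
            # mceeg: array of shape (channels, samples)
            norm = mceeg / np.max(np.abs(mceeg))          # normalize to [-1, 1]
            scaled = norm * scale
            integer_part = np.trunc(scaled).astype(np.int32)
            fractional_part = scaled - integer_part       # in (-1, 1)
            # code the fractional part as "pseudo integers" (assumed precision)
            pseudo_int = np.round(fractional_part * 100).astype(np.int32)
            return integer_part, pseudo_int

        x = np.random.default_rng(2).standard_normal((8, 1024))
        ints, fracs = split_integer_fraction(x)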

  5. Parametric geometric model and shape optimization of an underwater glider with blended-wing-body

    NASA Astrophysics Data System (ADS)

    Sun, Chunya; Song, Baowei; Wang, Peng

    2015-11-01

    The underwater glider, a new kind of autonomous underwater vehicle, has many merits such as long range, extended duration, and low cost. The shape of an underwater glider is an important factor in determining its hydrodynamic efficiency. In this paper, a high lift-to-drag-ratio configuration, the Blended-Wing-Body (BWB), is used to design a small civilian underwater glider. In the parametric geometric model of the BWB underwater glider, the planform is defined with a Bezier curve and a linear line, and the section is defined with the symmetrical airfoil NACA 0012. Computational investigations are carried out to study the hydrodynamic performance of the glider using the commercial Computational Fluid Dynamics (CFD) code Fluent. The Kriging-based genetic algorithm, called Efficient Global Optimization (EGO), is applied to the hydrodynamic design optimization. The result demonstrates that the BWB underwater glider has excellent hydrodynamic performance, and the lift-to-drag ratio of the initial design is increased by 7% in the EGO process.
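    The planform parameterization mentioned above can be made concrete with a short sketch. The snippet below evaluates a Bezier curve by de Casteljau's algorithm (repeated linear interpolation); the control points are placeholders, not the glider's actual design variables.

        # Sketch of a Bezier planform curve evaluated by de Casteljau's
        # algorithm. Control points are placeholders, not design values.
        import numpy as np

        def bezier(control_points, t):
            pts = np.asarray(control_points, dtype=float)
            while len(pts) > 1:
                pts = (1.0 - t) * pts[:-1] + t * pts[1:]  # repeated linear interpolation
            return pts[0]

        leading_edge = [(0.0, 0.0), (0.3, 0.6), (0.9, 0.8), (1.5, 0.85)]  # hypothetical
        curve = np.array([bezier(leading_edge, t) for t in np.linspace(0.0, 1.0, 50)])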

  6. Using hybrid implicit Monte Carlo diffusion to simulate gray radiation hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Gentile, Nick

    This work describes how to couple a hybrid Implicit Monte Carlo Diffusion (HIMCD) method with a Lagrangian hydrodynamics code to evaluate the coupled radiation hydrodynamics equations. This HIMCD method dynamically applies Implicit Monte Carlo Diffusion (IMD) [1] to regions of a problem that are opaque and diffusive while applying standard Implicit Monte Carlo (IMC) [2] to regions where the diffusion approximation is invalid. We show that this method significantly improves the computational efficiency as compared to a standard IMC/Hydrodynamics solver, when optically thick diffusive material is present, while maintaining accuracy. Two test cases are used to demonstrate the accuracy and performance of HIMCD as compared to IMC and IMD. The first is the Lowrie semi-analytic diffusive shock [3]. The second is a simple test case where the source radiation streams through optically thin material and heats a thick diffusive region of material causing it to rapidly expand. We found that HIMCD proves to be accurate, robust, and computationally efficient for these test problems.
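    The abstract implies a per-region selection rule between diffusion and transport. A hedged sketch of such logic appears below: treat a zone with IMD when it is optically thick across its own width, otherwise with standard IMC. The threshold and inputs are illustrative assumptions, not values from the paper.

        # Hedged sketch of the selection logic a hybrid scheme like HIMCD
        # implies: diffusion (IMD) for zones that are optically thick across
        # the zone, transport (IMC) otherwise. Threshold is an assumption.
        def choose_method(opacity, zone_width, tau_threshold=5.0):
            tau = opacity * zone_width   # optical depth across the zone
            return "IMD" if tau > tau_threshold else "IMC"

        zones = [(50.0, 0.2), (0.1, 0.2), (400.0, 0.05)]   # (opacity [1/cm], width [cm])
        print([choose_method(k, dx) for k, dx in zones])    # ['IMD', 'IMC', 'IMD']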

  7. A study of data coding technology developments in the 1980-1985 time frame, volume 2

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Shahsavari, M. M.

    1978-01-01

    The source parameters of digitized analog data are discussed. Different data compression schemes are outlined and analyses of their implementation are presented. Finally, bandwidth compression techniques are given for video signals.

  8. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
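    The codebook training step can be sketched with plain K-means (the paper's modified, energy-function-based variant is not reproduced). Each row of the training matrix below stands for one flattened sub-block of wavelet coefficients; all names and sizes are illustrative.

        # Plain K-means codebook training of the kind vector quantization
        # builds on. Each row of 'vectors' is one flattened sub-block.
        import numpy as np

        def train_codebook(vectors, k, iters=20, seed=0):
            rng = np.random.default_rng(seed)
            codebook = vectors[rng.choice(len(vectors), size=k, replace=False)]
            for _ in range(iters):
                # nearest codeword for every training vector
                d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
                labels = np.argmin(d, axis=1)
                for j in range(k):
                    members = vectors[labels == j]
                    if len(members) > 0:
                        codebook[j] = members.mean(axis=0)  # centroid update
            return codebook, labels

        blocks = np.random.default_rng(3).standard_normal((500, 16))  # 4x4 blocks
        codebook, labels = train_codebook(blocks, k=32)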

  9. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  10. New algorithm for lossless hyper-spectral image compression with mixing transform to eliminate redundancy

    NASA Astrophysics Data System (ADS)

    Xie, ChengJun; Xu, Lin

    2008-03-01

    This paper presents a new algorithm based on mixing transforms to eliminate redundancy: SHIRCT and a subtraction mixing transform are used to eliminate spectral redundancy, and 2D-CDF(2,2)DWT to eliminate spatial redundancy. This transform is convenient for hardware realization, since it can be fully implemented by add and shift operations. Its redundancy elimination effect is better than that of (1D+2D)CDF(2,2)DWT. Here an improved SPIHT+CABAC mixing compression coding algorithm is used to implement compression coding. The experimental results show that in lossless image compression applications the effect of this method is slightly better than the result acquired using (1D+2D)CDF(2,2)DWT + improved SPIHT+CABAC, and it is much better than the results acquired by JPEG-LS, WinZip, ARJ, DPCM, the research achievements of a research team of the Chinese Academy of Sciences, NMST and MST. Using the hyper-spectral image Canal of the American JPL laboratory as the data set for the lossless compression test, on average the compression ratio of this algorithm exceeds the above algorithms by 42%, 37%, 35%, 30%, 16%, 13%, and 11%, respectively.

  11. Main drive optimization of a high-foot pulse shape in inertial confinement fusion implosions

    NASA Astrophysics Data System (ADS)

    Wang, L. F.; Ye, W. H.; Wu, J. F.; Liu, Jie; Zhang, W. Y.; He, X. T.

    2016-12-01

    While progress towards hot-spot ignition has been made in achieving an alpha-heating dominated state in high-foot implosion experiments [Hurricane et al., Nat. Phys. 12, 800 (2016)] on the National Ignition Facility, improvements are needed to increase the fuel compression for the enhancement of the neutron yield. A strategy is proposed to improve the fuel compression through the recompression of a shock/compression wave generated by the end of the main drive portion of a high-foot pulse shape. Two methods for the peak pulse recompression, namely, the decompression-and-recompression (DR) and simple recompression schemes, are investigated and compared. Radiation hydrodynamic simulations confirm that the peak pulse recompression can clearly improve fuel compression without significantly compromising the implosion stability. In particular, when the convergent DR shock is tuned to encounter the divergent shock from the capsule center at a suitable position, not only the neutron yield but also the stability of the stagnating hot-spot can be noticeably improved, compared to the conventional high-foot implosions [Hurricane et al., Phys. Plasmas 21, 056314 (2014)].

  12. X-ray Thomson scattering measurements of temperature and density from multi-shocked CH capsules

    DOE PAGES

    Fletcher, L. B.; Glenzer, S. H.; Kritcher, A.; ...

    2013-05-24

    Proof-of-principle measurements of the electron densities, temperatures, and ionization states of spherically compressed multi-shocked CH (polystyrene) capsules have been achieved using spectrally resolved x-ray Thomson scattering. A total energy of 13.5 kJ incident on target is used to compress a 70 μm thick CH shell above solid-mass density using three coalescing shocks. Separately, a laser-produced zinc He-α x-ray source at 9 keV delayed 200 ps-800 ps after maximum compression is used to probe the plasma in the non-collective scattering regime. The data show that x-ray Thomson scattering enables a complete description of the time-dependent hydrodynamic evolution of shock-compressed CH capsules, with a maximum measured density of ρ > 6 g cm–3. Additionally, accurate measurements of x-ray scattering from bound-free transitions in the CH plasma provide strong evidence that continuum lowering is the primary ionization mechanism of carbon L-shell electrons.

  13. Detonability of turbulent white dwarf plasma: Hydrodynamical models at low densities

    NASA Astrophysics Data System (ADS)

    Fenn, Daniel

    The origins of Type Ia supernovae (SNe Ia) remain an unsolved problem of contemporary astrophysics. Decades of research indicate that these supernovae arise from thermonuclear runaway in the degenerate material of white dwarf stars; however, the mechanism of these explosions is unknown. It is also unclear what the progenitors of these objects are. These missing elements are vital components of the initial conditions of supernova explosions, and are essential to understanding these events. A requirement of any successful SN Ia model is that a sufficient portion of the white dwarf plasma must be brought under conditions conducive to explosive burning. Our aim is to identify the conditions required to trigger detonations in turbulent, carbon-rich degenerate plasma at low densities. We study this problem by modeling the hydrodynamic evolution of a turbulent region filled with a carbon/oxygen mixture at a density, temperature, and Mach number characteristic of conditions found in the 0.8+1.2 solar mass (CO0812) model discussed by Fenn et al. (2016). We probe the ignition conditions for different degrees of compressibility in turbulent driving. We assess the probability of successful detonations based on characteristics of the identified ignition kernels, using Eulerian and Lagrangian statistics of turbulent flow. We found that material with very short ignition times is abundant in the case that turbulence is driven compressively. This material forms contiguous structures that persist over many ignition time scales, and that we identify as prospective detonation kernels. Detailed analysis of the kernels revealed that their central regions are densely filled with material characterized by short ignition times and contain the minimum mass required for self-sustained detonations to form. It is conceivable that ignition kernels will be formed for lower compressibility in the turbulent driving. However, we found no detonation kernels in models driven 87.5 percent compressively. We indirectly confirmed the existence of the lower limit of the degree of compressibility of the turbulent drive for the formation of detonation kernels by analyzing simulation results of the He0609 model of Fenn et al. (2016), which produces a detonation in a helium-rich boundary layer. We found that the amount of energy in the compressible component of the kinetic energy in this model corresponds to about 96 percent compressibility in the turbulent drive. The fact that no detonation was found in the original CO0812 model for nominally the same problem conditions suggests that models with carbon-rich boundary layers may require higher resolution in order to adequately represent the mass distributions in terms of ignition times.

  14. A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images

    NASA Technical Reports Server (NTRS)

    Memon, Nasir D.; Galatsanos, Nikolas

    1995-01-01

    In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
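    The key trick, deriving the band order from an already-decoded neighbour so that no ordering information needs to be transmitted, can be sketched as follows. This is a toy version under stated assumptions (the left neighbour supplies the order, and simple differencing serves as the predictor), not the paper's actual predictor, and the residuals would still go to an entropy coder.

        # Toy sketch of neighbour-derived spectral reordering: encoder and
        # decoder both derive the band order for the current pixel from the
        # already-decoded pixel to the left, so the order costs no overhead.
        import numpy as np

        def reorder_and_predict(cube):
            # cube: (rows, cols, bands), integer samples
            rows, cols, bands = cube.shape
            residuals = np.zeros_like(cube)
            for r in range(rows):
                for c in range(cols):
                    if c == 0:
                        residuals[r, c] = cube[r, c]       # no neighbour: send raw
                        continue
                    order = np.argsort(cube[r, c - 1])     # derived, not transmitted
                    spectrum = cube[r, c][order]
                    # difference along the reordered spectrum (prepend 0 so the
                    # first residual carries the first reordered sample)
                    residuals[r, c][order] = np.diff(spectrum, prepend=0)
            return residuals

        cube = np.random.default_rng(4).integers(0, 256, size=(8, 8, 32))
        res = reorder_and_predict(cube)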

  15. Distributed single source coding with side information

    NASA Astrophysics Data System (ADS)

    Vila-Forcen, Jose E.; Koval, Oleksiy; Voloshynovskiy, Sviatoslav V.

    2004-01-01

    In the paper we advocate image compression technique in the scope of distributed source coding framework. The novelty of the proposed approach is twofold: classical image compression is considered from the positions of source coding with side information and, contrarily to the existing scenarios, where side information is given explicitly, side information is created based on deterministic approximation of local image features. We consider an image in the transform domain as a realization of a source with a bounded codebook of symbols where each symbol represents a particular edge shape. The codebook is image independent and plays the role of auxiliary source. Due to the partial availability of side information at both encoder and decoder we treat our problem as a modification of Berger-Flynn-Gray problem and investigate a possible gain over the solutions when side information is either unavailable or available only at decoder. Finally, we present a practical compression algorithm for passport photo images based on our concept that demonstrates the superior performance in very low bit rate regime.

  16. JPEG 2000 Encoding with Perceptual Distortion Control

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Liu, Zhen; Karam, Lina J.

    2008-01-01

    An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total number of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected. This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
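    The pass count stated above is easy to verify: each of the M bit planes gets three passes except the MSB plane, which gets one, giving 3M - 2 in total. A one-line check:

        # Quick check of the tier-1 pass count: 3 passes per bit plane,
        # except the MSB plane, which gets a single "clean up" pass.
        def tier1_passes(m_bit_planes):
            return 3 * m_bit_planes - 2

        print(tier1_passes(8))   # 22 passes for an 8-bit-plane coding block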

  17. Compression of color-mapped images

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, A. C.; Sayood, Khalid

    1992-01-01

    In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
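    Colormap sorting can be sketched in a few lines: order palette entries by luminance so that numerically close indices point to perceptually close colors, then remap the index image to match. The luminance weights below are the usual Rec. 601 ones; the data are placeholders.

        # Sketch of colormap sorting: order the palette by luminance and
        # remap the index image so index adjacency again tracks color
        # similarity, which predictive coders can exploit.
        import numpy as np

        def sort_colormap(palette, index_image):
            # palette: (n_colors, 3) RGB; index_image: 2-D array of palette indices
            luminance = palette @ np.array([0.299, 0.587, 0.114])
            order = np.argsort(luminance)              # new position -> old index
            inverse = np.empty_like(order)
            inverse[order] = np.arange(len(order))     # old index -> new position
            return palette[order], inverse[index_image]

        palette = np.random.default_rng(5).integers(0, 256, size=(256, 3)).astype(float)
        img = np.random.default_rng(6).integers(0, 256, size=(64, 64))
        sorted_palette, remapped = sort_colormap(palette, img)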

  18. Fast depth decision for HEVC inter prediction based on spatial and temporal correlation

    NASA Astrophysics Data System (ADS)

    Chen, Gaoxing; Liu, Zhenyu; Ikenaga, Takeshi

    2016-07-01

    High efficiency video coding (HEVC) is a video compression standard that outperforms its predecessor H.264/AVC by doubling the compression efficiency. To enhance the compression accuracy, partition sizes in HEVC range from 4x4 to 64x64. However, the manifold partition sizes dramatically increase the encoding complexity. This paper proposes a fast depth decision based on spatial and temporal correlation. Spatial correlation utilizes the coding tree unit (CTU) splitting information, and temporal correlation utilizes the CTU indicated by the motion vector predictor in inter prediction, to determine the maximum depth of each CTU. Experimental results show that the proposed method saves about 29.1% of the original processing time with 0.9% BD-bitrate increase on average.
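    The decision rule described above can be caricatured in a few lines: bound the depth search for the current CTU by the depths chosen in already-coded spatial neighbours and in the temporally co-located (motion-predicted) CTU. The +1 margin and the depth limit are illustrative assumptions, not the paper's tuned values.

        # Toy version of a correlation-based depth bound: limit the search for
        # the current CTU using neighbour and co-located CTU depths. The +1
        # margin and the limit of 3 are illustrative assumptions.
        def max_search_depth(left_depth, above_depth, colocated_depth, limit=3):
            candidate = max(left_depth, above_depth, colocated_depth) + 1
            return min(candidate, limit)

        # A CTU whose neighbours all stopped at depth 1 is only searched to
        # depth 2, skipping the costly 8x8 (depth 3) partitions.
        print(max_search_depth(1, 1, 1))   # -> 2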

  19. Effect of hydrodynamic cavitation in the tissue erosion by pulsed high-intensity focused ultrasound (pHIFU).

    PubMed

    Zhou, Yufeng; Gao, Xiaobin Wilson

    2016-09-21

    High-intensity focused ultrasound (HIFU) is emerging as an effective therapeutic modality in clinics. Besides the thermal ablation, tissue disintegration is also possible because of the interaction between the distorted HIFU bursts and either bubble cloud or boiling bubble. Hydrodynamic cavitation is another type of cavitation and has been employed widely in industry, but its role in mechanical erosion to tissue is not clearly known. In this study, the bubble dynamics immediately after the termination of HIFU exposure in the transparent gel phantom was captured by high-speed photography, from which the bubble displacement towards the transducer and the changes of bubble size was quantitatively determined. The characteristics of hydrodynamic cavitation due to the release of the acoustic radiation force and relaxation of compressed surrounding medium were found to associate with the number of pulses delivered and HIFU parameters (i.e. pulse duration and pulse repetition frequency). Because of the initial big bubble (~1 mm), large bubble expansion (up to 1.76 folds), and quick bubble motion (up to ~1 m s-1) hydrodynamic cavitation is significant after HIFU exposure and may lead to mechanical erosion. The shielding effect of residual tiny bubbles would reduce the acoustic energy delivered to the pre-existing bubble at the focus and, subsequently, the hydrodynamic cavitation effect. Tadpole shape of mechanical erosion in ex vivo porcine kidney samples was similar to the contour of bubble dynamics in the gel. Liquefied tissue was observed to emit towards the transducer through the punctured tissue after HIFU exposure in the sonography. In summary, the release of HIFU exposure-induced hydrodynamic cavitation produces significant bubble expansion and motion, which may be another important mechanism of tissue erosion. Understanding its mechanism and optimizing the outcome would broaden and enhance HIFU applications.

  20. Effect of hydrodynamic cavitation in the tissue erosion by pulsed high-intensity focused ultrasound (pHIFU)

    NASA Astrophysics Data System (ADS)

    Zhou, Yufeng; Gao, Xiaobin Wilson

    2016-09-01

    High-intensity focused ultrasound (HIFU) is emerging as an effective therapeutic modality in clinics. Besides the thermal ablation, tissue disintegration is also possible because of the interaction between the distorted HIFU bursts and either bubble cloud or boiling bubble. Hydrodynamic cavitation is another type of cavitation and has been employed widely in industry, but its role in mechanical erosion to tissue is not clearly known. In this study, the bubble dynamics immediately after the termination of HIFU exposure in the transparent gel phantom was captured by high-speed photography, from which the bubble displacement towards the transducer and the changes of bubble size was quantitatively determined. The characteristics of hydrodynamic cavitation due to the release of the acoustic radiation force and relaxation of compressed surrounding medium were found to associate with the number of pulses delivered and HIFU parameters (i.e. pulse duration and pulse repetition frequency). Because of the initial big bubble (~1 mm), large bubble expansion (up to 1.76 folds), and quick bubble motion (up to ~1 m s-1) hydrodynamic cavitation is significant after HIFU exposure and may lead to mechanical erosion. The shielding effect of residual tiny bubbles would reduce the acoustic energy delivered to the pre-existing bubble at the focus and, subsequently, the hydrodynamic cavitation effect. Tadpole shape of mechanical erosion in ex vivo porcine kidney samples was similar to the contour of bubble dynamics in the gel. Liquefied tissue was observed to emit towards the transducer through the punctured tissue after HIFU exposure in the sonography. In summary, the release of HIFU exposure-induced hydrodynamic cavitation produces significant bubble expansion and motion, which may be another important mechanism of tissue erosion. Understanding its mechanism and optimizing the outcome would broaden and enhance HIFU applications.
