Science.gov

Sample records for compressible hydrodynamics codes

  1. VH-1: Multidimensional ideal compressible hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Hawley, John; Blondin, John; Lindahl, Greg; Lufkin, Eric

    2012-04-01

    VH-1 is a multidimensional ideal compressible hydrodynamics code written in FORTRAN for use on any computing platform, from desktop workstations to supercomputers. It uses a Lagrangian remap version of the Piecewise Parabolic Method developed by Paul Woodward and Phil Colella in their 1984 paper. VH-1 comes in a variety of versions, from a simple one-dimensional serial variant to a multi-dimensional version scalable to thousands of processors.
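
    The remap step of such a scheme can be made exactly conservative by interpolating the cumulative mass profile onto the new cell edges. Below is a minimal sketch of that idea (in Python rather than VH-1's Fortran, and piecewise-constant rather than piecewise-parabolic; all names are illustrative):

    ```python
    import numpy as np

    def conservative_remap(x_lag, m_lag, x_eul):
        """Remap cell masses from a distorted Lagrangian grid back to a fixed
        Eulerian grid by interpolating the cumulative mass at the new edges.
        Total mass is conserved to machine precision by construction."""
        cum = np.concatenate(([0.0], np.cumsum(m_lag)))  # cumulative mass at Lagrangian edges
        return np.diff(np.interp(x_eul, x_lag, cum))     # new per-cell masses

    # fixed Eulerian edges, plus Lagrangian edges that drifted during the hydro step
    x_eul = np.linspace(0.0, 1.0, 11)
    x_lag = x_eul + 0.02 * np.sin(2 * np.pi * x_eul)
    m_lag = np.diff(x_lag)                               # uniform density of 1
    m_eul = conservative_remap(x_lag, m_lag, x_eul)
    assert abs(m_eul.sum() - m_lag.sum()) < 1e-12        # mass conserved exactly
    ```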

  2. Pencil: Finite-difference Code for Compressible Hydrodynamic Flows

    NASA Astrophysics Data System (ADS)

    Brandenburg, Axel; Dobler, Wolfgang

    2010-10-01

    The Pencil code is a high-order finite-difference code for compressible hydrodynamic flows with magnetic fields. It is highly modular and can easily be adapted to different types of problems. The code runs efficiently under MPI on massively parallel shared- or distributed-memory computers, such as large Beowulf clusters. The Pencil code is primarily designed to deal with weakly compressible turbulent flows. To achieve good parallelization, explicit (as opposed to compact) finite differences are used. Typical scientific targets include driven MHD turbulence in a periodic box, convection in a slab with non-periodic upper and lower boundaries, a convective star embedded in a fully nonperiodic box, accretion disc turbulence in the shearing sheet approximation, self-gravity, non-local radiation transfer, dust particle evolution with feedback on the gas, etc. A range of artificial viscosity and diffusion schemes can be invoked to deal with supersonic flows. For direct simulations, regular viscosity and diffusion are used. The code is written in well-commented Fortran90.
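
    The explicit (wide-stencil) differences that make the scheme easy to parallelize are simple to state; here is a minimal sketch of a sixth-order centered first derivative on a periodic grid, in Python rather than the code's Fortran90:

    ```python
    import numpy as np

    def ddx_6th(f, dx):
        """Sixth-order explicit centered first derivative on a periodic grid:
        each point needs only three ghost zones on either side, which is why
        explicit (non-compact) stencils parallelize well."""
        fm3, fm2, fm1 = np.roll(f, 3), np.roll(f, 2), np.roll(f, 1)
        fp1, fp2, fp3 = np.roll(f, -1), np.roll(f, -2), np.roll(f, -3)
        return (-fm3 + 9*fm2 - 45*fm1 + 45*fp1 - 9*fp2 + fp3) / (60.0 * dx)

    # convergence check on sin(x): the max error should fall roughly as dx**6
    for n in (32, 64):
        x = np.linspace(0, 2*np.pi, n, endpoint=False)
        err = np.max(np.abs(ddx_6th(np.sin(x), x[1] - x[0]) - np.cos(x)))
        print(n, err)
    ```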

  3. Reliable estimation of shock position in shock-capturing compressible hydrodynamics codes

    SciTech Connect

    Nelson, Eric M

    2008-01-01

    The displacement method for estimating shock position in a shock-capturing compressible hydrodynamics code is introduced. Common estimates use simulation data within the captured shock, but the displacement method uses data behind the shock, making the estimate consistent with and as reliable as estimates of material parameters obtained from averages or fits behind the shock. The displacement method is described in the context of a steady shock in a one-dimensional Lagrangian hydrodynamics code, and demonstrated on a piston problem and a spherical blast wave. The displacement method's estimates of shock position are much better than common estimates in such applications.

  4. Compressible Astrophysics Simulation Code

    Energy Science and Technology Software Center (ESTSC)

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  5. CASTRO: A New AMR Radiation-Hydrodynamics Code for Compressible Astrophysics

    NASA Astrophysics Data System (ADS)

    Almgren, Ann; Bell, J.; Day, M.; Howell, L.; Joggerst, C.; Myra, E.; Nordhaus, J.; Singer, M.; Zingale, M.

    2010-01-01

    CASTRO is a new, multi-dimensional, Eulerian AMR radiation-hydrodynamics code designed for astrophysical simulations. The code includes routines for various equations of state and nuclear reaction networks, and can be used with Cartesian, cylindrical or spherical coordinates. Time integration of the hydrodynamics equations is based on a higher-order, unsplit Godunov scheme. Self-gravity can be calculated on the adaptive hierarchy using a simple monopole approximation or a full Poisson solve for the potential. CASTRO includes gray and multigroup radiation diffusion. Multi-species neutrino diffusion for supernovae is nearing completion. The adaptive framework of CASTRO is based on a time-evolving hierarchy of nested rectangular grids with refinement in both space and time; the entire implementation is designed to run on thousands of processors. We describe in more detail how CASTRO is implemented and can be used for a number of different simulations. Our initial applications of CASTRO include Type Ia and Type II supernovae. This work has been supported by the SciDAC Program of the DOE Office of Mathematics, Information, and Computational Sciences under contracts No. DE-AC02-05CH11231 (LBNL), No. DE-FC02-06ER41438 (UCSC), and No. DE-AC52-07NA27344 (LLNL); and LLNL contracts B582735 and B574691 (Stony Brook). Calculations shown were carried out on Franklin at NERSC.
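
    In the monopole approximation the acceleration reduces to g(r) = -G M(<r)/r^2 with M(<r) the enclosed mass. A minimal radial sketch (illustrative Python, not CASTRO's implementation) is:

    ```python
    import numpy as np

    G = 6.674e-8  # gravitational constant, CGS

    def monopole_gravity(r_edges, rho):
        """Monopole self-gravity g = -G M(<r)/r**2 evaluated at shell outer
        edges, given the mean density rho in each spherical shell."""
        shell_vol = 4.0 / 3.0 * np.pi * np.diff(r_edges**3)
        m_enclosed = np.cumsum(rho * shell_vol)   # mass interior to each outer edge
        return -G * m_enclosed / r_edges[1:]**2

    r = np.linspace(0.0, 7e10, 101)               # radial grid out to ~1 solar radius
    rho = np.full(100, 1.4)                       # roughly the mean solar density, g/cm^3
    print(monopole_gravity(r, rho)[-1])           # ~ -2.7e4 cm/s^2 at the surface
    ```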

  6. Shadowfax: Moving mesh hydrodynamical integration code

    NASA Astrophysics Data System (ADS)

    Vandenbroucke, Bert

    2016-05-01

    Shadowfax simulates galaxy evolution. Written in object-oriented modular C++, it evolves a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. For the hydrodynamical integration, it makes use of a (co-) moving Lagrangian mesh. The code has a 2D and 3D version, contains utility programs to generate initial conditions and visualize simulation snapshots, and its input/output is compatible with a number of other simulation codes, e.g. Gadget2 (ascl:0003.001) and GIZMO (ascl:1410.003).

  7. TORUS: Radiation transport and hydrodynamics code

    NASA Astrophysics Data System (ADS)

    Harries, Tim

    2014-04-01

    TORUS is a flexible radiation transfer and radiation-hydrodynamics code. The code has a basic infrastructure that includes the AMR mesh scheme that is used by several physics modules including atomic line transfer in a moving medium, molecular line transfer, photoionization, radiation hydrodynamics and radiative equilibrium. TORUS is useful for a variety of problems, including magnetospheric accretion onto T Tauri stars, spiral nebulae around Wolf-Rayet stars, discs around Herbig AeBe stars, structured winds of O supergiants and Raman-scattered line formation in symbiotic binaries, and dust emission and molecular line formation in star forming clusters. The code is written in Fortran 2003 and is compiled using a standard GNU makefile. The code is parallelized using both MPI and OpenMP, and can use these parallel sections either separately or in a hybrid mode.

  8. An implicit Smooth Particle Hydrodynamic code

    SciTech Connect

    Charles E. Knapp

    2000-04-01

    An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian and meshless, and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix, and sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of test cases are discussed, which include a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. In the single jet of gas case, it has been demonstrated that the implicit code can solve the problem in much less time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.
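
    A minimal sketch of the underlying idea, a backward-Euler step solved with a Jacobian-free Newton-Krylov method via SciPy, applied here to a toy nonlinear diffusion system rather than to actual SPH:

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    n, dt = 50, 0.1
    u_old = np.exp(-np.linspace(-3.0, 3.0, n)**2)        # initial Gaussian profile

    def f(u):
        """Toy nonlinear diffusion du/dt = d/dx(u du/dx), unit-spaced periodic grid."""
        flux = u * (np.roll(u, -1) - np.roll(u, 1)) / 2.0
        return (np.roll(flux, -1) - np.roll(flux, 1)) / 2.0

    # Backward Euler: find u_new such that u_new - u_old - dt*f(u_new) = 0.
    # newton_krylov needs no explicit Jacobian; Jacobian-vector products are
    # approximated by finite differences inside the Krylov iteration.
    u_new = newton_krylov(lambda u: u - u_old - dt * f(u), u_old)
    print(np.abs(u_new - u_old - dt * f(u_new)).max())   # residual ~ 0
    ```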

  9. Production code control system for hydrodynamics simulations

    SciTech Connect

    Slone, D.M.

    1997-08-18

    We describe how the Production Code Control System (PCCS), written in Perl, has been used to control and monitor the execution of a large hydrodynamics simulation code in a production environment. We have been able to integrate new, disparate, and often independent applications into the PCCS framework without the need to modify any of our existing application codes. Both users and code developers see a consistent interface to the simulation code and associated applications regardless of the physical platform, whether an MPP, SMP, server, or desktop workstation. We also describe our use of Perl to develop a configuration management system for the simulation code, as well as a code usage database and report generator. We used Perl to write a backplane that allows us to plug in preprocessors, the hydrocode, postprocessors, visualization tools, persistent storage requests, and other codes. We need only teach PCCS a minimal amount about any new tool or code to essentially plug it in and make it usable to the hydrocode. PCCS has made it easier to link together disparate codes, since using Perl has removed the need to learn the idiosyncrasies of system or RPC programming. The text handling in Perl makes it easy to teach PCCS about new codes, or changes to existing codes.

  10. Radiation hydrodynamics integrated in the PLUTO code

    NASA Astrophysics Data System (ADS)

    Kolb, Stefan M.; Stute, Matthias; Kley, Wilhelm; Mignone, Andrea

    2013-11-01

    Aims: The transport of energy through radiation is very important in many astrophysical phenomena. In dynamical problems the time-dependent equations of radiation hydrodynamics have to be solved. We present a newly developed radiation-hydrodynamics module specifically designed for the versatile magnetohydrodynamic (MHD) code PLUTO. Methods: The solver is based on the flux-limited diffusion approximation in the two-temperature approach. All equations are solved in the co-moving frame in the frequency-independent (gray) approximation. The hydrodynamics is solved by the different Godunov schemes implemented in PLUTO, and for the radiation transport we use a fully implicit scheme. The resulting system of linear equations is solved either using the successive over-relaxation (SOR) method (for testing purposes) or using matrix solvers that are available in the PETSc library. We state the methodology in detail and describe several test cases to verify the correctness of our implementation. The solver works in standard coordinate systems, such as Cartesian, cylindrical, and spherical, and also for non-equidistant grids. Results: We present a new radiation-hydrodynamics solver coupled to the MHD code PLUTO: a modern, versatile, and efficient module for treating complex radiation-hydrodynamical problems in astrophysics. As test cases, either purely radiative situations, or full radiation-hydrodynamical setups (including radiative shocks and convection in accretion disks) were successfully studied. The new module scales very well on parallel computers using MPI. For problems in star or planet formation, we added the possibility of irradiation by a central source.
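
    The SOR iteration used for testing is only a few lines; a generic dense-matrix sketch (illustrative, not the PLUTO implementation) follows:

    ```python
    import numpy as np

    def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
        """Successive over-relaxation for A x = b: a Gauss-Seidel sweep with an
        over-relaxation factor omega, iterated until the residual is small."""
        x = np.zeros_like(b)
        D = np.diag(A)
        for _ in range(max_iter):
            for i in range(len(b)):
                sigma = A[i] @ x - D[i] * x[i]               # off-diagonal part of row i
                x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / D[i]
            if np.linalg.norm(A @ x - b) < tol * np.linalg.norm(b):
                break
        return x

    # small symmetric positive-definite test system (a 1D diffusion matrix)
    n = 20
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    x = sor(A, np.ones(n))
    print(np.linalg.norm(A @ x - np.ones(n)))                # ~ 0
    ```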

  11. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. II. GRAY RADIATION HYDRODYNAMICS

    SciTech Connect

    Zhang, W.; Almgren, A.; Bell, J.; Howell, L.; Burrows, A.

    2011-10-01

    We describe the development of a flux-limited gray radiation solver for the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. The gray radiation solver is based on a mixed-frame formulation of radiation hydrodynamics. In our approach, the system is split into two parts, one part that couples the radiation and fluid in a hyperbolic subsystem, and another parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem is solved explicitly with a high-order Godunov scheme, whereas the parabolic part is solved implicitly with a first-order backward Euler method.
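
    A minimal sketch of the implicitly treated parabolic piece, one backward-Euler step of one-dimensional diffusion with closed boundaries (illustrative Python, not CASTRO's multidimensional solver):

    ```python
    import numpy as np

    def backward_euler_diffusion(E, D_face, dt, dx):
        """One backward-Euler step of the parabolic subsystem E_t = (D E_x)_x:
        solve (I - dt*L) E_new = E_old. D_face holds the n+1 face-centered
        diffusion coefficients; the two outermost faces are closed (zero flux)."""
        n = len(E)
        r = dt / dx**2
        A = np.zeros((n, n))
        for i in range(n):
            Dm = D_face[i] if i > 0 else 0.0          # left-face coefficient
            Dp = D_face[i + 1] if i < n - 1 else 0.0  # right-face coefficient
            A[i, i] = 1.0 + r * (Dm + Dp)
            if i > 0:
                A[i, i - 1] = -r * Dm
            if i < n - 1:
                A[i, i + 1] = -r * Dp
        return np.linalg.solve(A, E)

    # diffuse a spike of radiation energy; the total is conserved
    E = np.zeros(40); E[20] = 1.0
    D = np.ones(41)
    for _ in range(10):
        E = backward_euler_diffusion(E, D, dt=0.5, dx=1.0)
    print(E.sum())  # stays 1.0 with closed boundaries
    ```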

  12. Building a Hydrodynamics Code with Kinetic Theory

    NASA Astrophysics Data System (ADS)

    Sagert, Irina; Bauer, Wolfgang; Colbry, Dirk; Pickett, Rodney; Strother, Terrance

    2013-08-01

    We report on the development of a test-particle based kinetic Monte Carlo code for large systems and its application to simulate matter in the continuum regime. Our code combines advantages of the Direct Simulation Monte Carlo and the Point-of-Closest-Approach methods to solve the collision integral of the Boltzmann equation. With that, we achieve a high spatial accuracy in simulations while maintaining computational feasibility when applying a large number of test-particles. The hybrid setup of our approach allows us to study systems which move in and out of the hydrodynamic regime, with low and high particle densities. To demonstrate our code's ability to reproduce hydrodynamic behavior we perform shock wave simulations and focus here on the Sedov blast wave test. The blast wave problem describes the evolution of a spherical expanding shock front and is an important verification problem for codes which are applied in astrophysical simulation, especially for approaches which aim to study core-collapse supernovae.
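
    The Sedov test has a convenient closed-form target: the shock radius grows as R(t) = xi0 (E t^2 / rho0)^(1/5), with xi0 an order-unity constant (about 1.15 for an ideal gas with gamma = 1.4). A sketch of the verification target:

    ```python
    # Sedov-Taylor similarity solution used to verify a code's hydrodynamic
    # limit: blast energy E deposited at a point in a medium of density rho0.
    def sedov_radius(t, E, rho0, xi0=1.15):
        return xi0 * (E * t**2 / rho0) ** 0.2

    for t in (0.1, 0.2, 0.4):
        print(t, sedov_radius(t, E=1.0, rho0=1.0))
    ```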

  13. EUNHA: a New Cosmological Hydrodynamic Simulation Code

    NASA Astrophysics Data System (ADS)

    Shin, Jihye; Kim, Juhan; Kim, Sungsoo S.; Park, Changbom

    2014-06-01

    We develop a parallel cosmological hydrodynamic simulation code designed for the study of the formation and evolution of cosmological structures. The gravitational force is calculated using the TreePM method and the hydrodynamics is implemented based on smoothed particle hydrodynamics. The initial displacements and velocities of simulation particles are calculated according to second-order Lagrangian perturbation theory using the power spectra of dark matter and baryonic matter. The initial background temperature is given by Recfast, and the temperature fluctuations at the initial particle positions are assigned according to the adiabatic model. We use a time-limiter scheme over the individual time steps to capture shock fronts and to ease the time-step tension between shock and preshock particles. We also include the astrophysical gas processes of radiative heating/cooling, star formation, metal enrichment, and supernova feedback. We test the code in several standard cases, such as one-dimensional Riemann problems, the Kelvin-Helmholtz instability, and the Sedov blast wave. Star formation on the galactic disk is investigated to check whether the Schmidt-Kennicutt relation is properly recovered. We also study the global star formation history at different simulation resolutions and compare it with observations.

  14. Superresonant instability of a compressible hydrodynamic vortex

    NASA Astrophysics Data System (ADS)

    Oliveira, Leandro A.; Cardoso, Vitor; Crispino, Luís C. B.

    2016-06-01

    We show that a purely circulating and compressible system, in an adiabatic regime of acoustic propagation, presents superresonant instabilities. To show the existence of these instabilities, we compute the quasinormal mode frequencies of this system numerically using two different frequency-domain methods.

  15. CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. III. MULTIGROUP RADIATION HYDRODYNAMICS

    SciTech Connect

    Zhang, W.; Almgren, A.; Bell, J.; Howell, L.; Burrows, A.; Dolence, J.

    2013-01-15

    We present a formulation for multigroup radiation hydrodynamics that is correct to order O(v/c) using the comoving-frame approach and the flux-limited diffusion approximation. We describe a numerical algorithm for solving the system, implemented in the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. In our multigroup radiation solver, the system is split into three parts: one part that couples the radiation and fluid in a hyperbolic subsystem, another part that advects the radiation in frequency space, and a parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem and the frequency space advection are solved explicitly with high-order Godunov schemes, whereas the parabolic part is solved implicitly with a first-order backward Euler method. Our multigroup radiation solver works for both neutrino and photon radiation.

  16. CASTRO: A New Compressible Astrophysical Solver. III. Multigroup Radiation Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Howell, L.; Almgren, A.; Burrows, A.; Dolence, J.; Bell, J.

    2013-01-01

    We present a formulation for multigroup radiation hydrodynamics that is correct to order O(v/c) using the comoving-frame approach and the flux-limited diffusion approximation. We describe a numerical algorithm for solving the system, implemented in the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. In our multigroup radiation solver, the system is split into three parts: one part that couples the radiation and fluid in a hyperbolic subsystem, another part that advects the radiation in frequency space, and a parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem and the frequency space advection are solved explicitly with high-order Godunov schemes, whereas the parabolic part is solved implicitly with a first-order backward Euler method. Our multigroup radiation solver works for both neutrino and photon radiation.

  17. Compressible Lagrangian hydrodynamics without Lagrangian cells

    NASA Astrophysics Data System (ADS)

    Clark, Robert A.

    The partial differential equations [2.1, 2.2, and 2.3], along with the equation of state 2.4, which describe the time evolution of compressible fluid flow, can be solved without the use of a Lagrangian mesh. The method follows embedded fluid points and uses finite difference approximations to ∇P and ∇·u to update ρ, u, and e. We have demonstrated that the method can accurately calculate highly distorted flows without difficulty. The finite difference approximations are not unique, and improvements may be found in the near future. The neighbor selection is not unique, but the one being used at present appears to do an excellent job. The method could be directly extended to three dimensions. One drawback to the method is the failure to explicitly conserve mass, momentum, and energy. In fact, at any given time, the mass is not defined. We must perform an auxiliary calculation by integrating the density field over space to obtain mass, energy and momentum. However, in all cases where we have done this, we have found the drift in these quantities to be no more than a few percent.

  18. Subband Coding Methods for Seismic Data Compression

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  19. Motion-adaptive compressive coded apertures

    NASA Astrophysics Data System (ADS)

    Harmany, Zachary T.; Oh, Albert; Marcia, Roummel; Willett, Rebecca

    2011-09-01

    This paper describes an adaptive compressive coded aperture imaging system for video based on motion-compensated video sparsity models. In particular, motion models based on optical flow and sparse deviations from optical flow (i.e. salient motion) can be used to (a) predict future video frames from previous compressive measurements, (b) perform reconstruction using efficient online convex programming techniques, and (c) adapt the coded aperture to yield higher reconstruction fidelity in the vicinity of this salient motion.

  20. Image coding compression based on DCT

    NASA Astrophysics Data System (ADS)

    Feng, Fei; Liu, Peixue; Jiang, Baohua

    2012-04-01

    With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are widely valued, but they consume more storage space on our computers and more bandwidth when transferred over the Internet. It is therefore necessary to study image compression technology. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This dissertation presents an analysis of the DCT. First, the principle of the DCT is presented; this technology is widely used to realize image compression. Second, a deeper understanding of the DCT is developed through the use of Matlab, the process of image compression based on the DCT, and an analysis of Huffman coding. Third, image compression based on the DCT is demonstrated in Matlab, and the quality of the compressed picture is analyzed. The DCT is certainly not the only algorithm that realizes image compression; more algorithms that give high quality in compressed images will surely emerge, and image compression technology will be widely used in networks and communications in the future.
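
    The core of such a demonstration fits in a few lines. Here is a sketch using SciPy in place of Matlab, with simple coefficient thresholding standing in for the quantization that would feed a Huffman coder:

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def dct_compress(block, keep=10):
        """Toy transform coding: take the 2-D DCT of a block, keep only the
        `keep` largest-magnitude coefficients (the step that would feed a
        quantizer and Huffman coder), and invert the transform."""
        c = dctn(block, norm='ortho')
        thresh = np.sort(np.abs(c).ravel())[-keep]       # keep-th largest magnitude
        c[np.abs(c) < thresh] = 0.0
        return idctn(c, norm='ortho')

    block = np.outer(np.linspace(0, 1, 8), np.linspace(1, 0, 8))  # smooth 8x8 block
    rec = dct_compress(block, keep=6)
    print(np.abs(block - rec).max())  # small error despite dropping most coefficients
    ```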

  1. KIVA reactive hydrodynamics code applied to detonations in high vacuum

    NASA Astrophysics Data System (ADS)

    Greiner, N. Roy

    1989-08-01

    The KIVA reactive hydrodynamics code was adapted for modeling detonation hydrodynamics in a high vacuum. Adiabatic cooling rapidly freezes detonation reactions as a result of free expansion into the vacuum. After further expansion, a molecular beam of the products is admitted without disturbance into a drift tube, where the products are analyzed with a mass spectrometer. How the model is used for interpretation and design of experiments for detonation chemistry is explained. Modeling of experimental hydrodynamic characterization by laser-schlieren imaging and model-aided mapping that will link chemical composition data to particular volume elements in the explosive charge are also discussed.

  2. Pulse compression using binary phase codes

    NASA Technical Reports Server (NTRS)

    Farley, D. T.

    1983-01-01

    In most MST applications pulsed radars are peak power limited and have excess average power capacity. Short pulses are required for good range resolution, but the problem of range ambiguity (signals received simultaneously from more than one altitude) sets a minimum limit on the interpulse period (IPP). Pulse compression is a technique which allows more of the transmitter average power capacity to be used without sacrificing range resolution. As the name implies, a pulse of power P and duration T is in a certain sense converted into one of power nP and duration T/n. In the frequency domain, compression involves manipulating the phases of the different frequency components of the pulse. One way to compress a pulse is via phase coding, especially binary phase coding, a technique which is particularly amenable to digital processing techniques. This method, which is used extensively in radar probing of the atmosphere and ionosphere, is discussed. Barker codes, complementary and quasi-complementary code sets, and cyclic codes are addressed.
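
    A minimal numerical illustration of binary phase coding: correlating the length-13 Barker code against itself compresses the pulse to a single-element peak of 13, with range sidelobes no larger than 1.

    ```python
    import numpy as np

    # Matched filtering (correlation with the code) performs the compression.
    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
    acf = np.correlate(barker13, barker13, mode='full')

    center = len(acf) // 2
    print(acf[center])                                   # peak of 13
    print(np.abs(np.delete(acf, center)).max())          # sidelobes <= 1
    ```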

  3. Compression of polyphase codes with Doppler shift

    NASA Astrophysics Data System (ADS)

    Wirth, W. D.

    It is shown that pulse compression with sufficient Doppler tolerance may be achieved with polyphase codes derived from linear frequency modulation (LFM) and nonlinear frequency modulation (NLFM). Low sidelobes in range and Doppler are required especially for the radar search function. These may be achieved by an LFM-derived phase code together with Hamming weighting, or by applying a PNL polyphase code derived from NLFM. For a discrete and known Doppler frequency, a sidelobe reduction is possible with an expanded and mismatched reference vector; the compression is then achieved without a loss in resolution. The expanded reference can be set up either to give zero sidelobes in an interval around the signal peak or to perform a least-squares minimization over all range elements. This version may be useful for target tracking.

  4. FARGO3D: Hydrodynamics/magnetohydrodynamics code

    NASA Astrophysics Data System (ADS)

    Benítez Llambay, Pablo; Masset, Frédéric

    2015-09-01

    A successor of FARGO (ascl:1102.017), FARGO3D is a versatile HD/MHD code that runs on clusters of CPUs or GPUs, with special emphasis on protoplanetary disks. FARGO3D offers Cartesian, cylindrical or spherical geometry; 1-, 2- or 3-dimensional calculations; and orbital advection (aka FARGO) for HD and MHD calculations. As in FARGO, a simple Runge-Kutta N-body solver may be used to describe the orbital evolution of embedded point-like objects. There is no need to know CUDA; users can develop new functions in C and have them translated to CUDA automatically to run on GPUs.

  5. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  6. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager, and increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract more bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or

  7. A new hydrodynamics code for Type Ia supernovae

    NASA Astrophysics Data System (ADS)

    Leung, S.-C.; Chu, M.-C.; Lin, L.-M.

    2015-12-01

    A two-dimensional hydrodynamics code for Type Ia supernova (SNIa) simulations is presented. The code includes a fifth-order shock-capturing WENO scheme, a detailed nuclear reaction network, a flame-capturing scheme, and sub-grid turbulence. For post-processing, we have developed a tracer particle scheme to record the thermodynamical history of the fluid elements. We also present a one-dimensional radiative transfer code for computing observational signals. The code solves the Lagrangian hydrodynamics and moment-integrated radiative transfer equations. A local ionization scheme and composition-dependent opacity are included. Various verification tests are presented, including standard benchmark tests in one and two dimensions. SNIa models using the pure turbulent deflagration model and the delayed-detonation transition model are studied. The results are consistent with those in the literature. We compute the detailed chemical evolution using the tracer particles' histories, and we construct corresponding bolometric light curves from the hydrodynamics results. We also use a GPU to speed up the computation of some highly repetitive subroutines, achieving an acceleration of 50 times for some subroutines and a factor of 6 in the global run time.

  8. RAMSES: A new N-body and hydrodynamical code

    NASA Astrophysics Data System (ADS)

    Teyssier, Romain

    2010-11-01

    A new N-body and hydrodynamical code, called RAMSES, is presented. It has been designed to study structure formation in the universe with high spatial resolution. The code is based on the Adaptive Mesh Refinement (AMR) technique, with a tree-based data structure allowing recursive grid refinements on a cell-by-cell basis. The N-body solver is very similar to the one developed for the ART code (Kravtsov et al. 1997), with minor differences in the exact implementation. The hydrodynamical solver is based on a second-order Godunov method, a modern shock-capturing scheme known to compute the thermal history of the fluid component accurately. The accuracy of the code is carefully estimated using various test cases, from pure gas dynamical tests to cosmological ones. The specific refinement strategy used in cosmological simulations is described, and potential spurious effects associated with shock wave propagation in the resulting AMR grid are discussed and found to be negligible. Results obtained in a large N-body and hydrodynamical simulation of structure formation in a low-density LCDM universe are finally reported, with 256^3 particles and 4.1 × 10^7 cells in the AMR grid, reaching a formal resolution of 8192^3. A convergence analysis of different quantities, such as the dark matter density power spectrum, the gas pressure power spectrum, and individual halo temperature profiles, shows that numerical results converge down to the actual resolution limit of the code, and are well reproduced by recent analytical predictions in the framework of the halo model.
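
    As an illustration of the Godunov building block, here is a sketch of an HLL approximate Riemann solver for the one-dimensional Euler equations (a generic example; the abstract does not say which Riemann solver RAMSES uses):

    ```python
    import numpy as np

    GAMMA = 1.4

    def euler_flux(rho, u, p):
        E = p / (GAMMA - 1) + 0.5 * rho * u**2
        return np.array([rho * u, rho * u**2 + p, u * (E + p)])

    def hll_flux(left, right):
        """HLL approximate Riemann solver: a Godunov-type scheme evaluates a
        flux like this at every cell face from the left/right (rho, u, p) states."""
        (rl, ul, pl), (rr, ur, pr) = left, right
        cl, cr = np.sqrt(GAMMA * pl / rl), np.sqrt(GAMMA * pr / rr)
        sl, sr = min(ul - cl, ur - cr), max(ul + cl, ur + cr)  # wave-speed bounds
        Fl, Fr = euler_flux(rl, ul, pl), euler_flux(rr, ur, pr)
        if sl >= 0:
            return Fl
        if sr <= 0:
            return Fr
        Ul = np.array([rl, rl * ul, pl / (GAMMA - 1) + 0.5 * rl * ul**2])
        Ur = np.array([rr, rr * ur, pr / (GAMMA - 1) + 0.5 * rr * ur**2])
        return (sr * Fl - sl * Fr + sl * sr * (Ur - Ul)) / (sr - sl)

    # flux across a Sod-type interface
    print(hll_flux((1.0, 0.0, 1.0), (0.125, 0.0, 0.1)))
    ```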

  9. Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions

    NASA Astrophysics Data System (ADS)

    Kwak, Kyujin; Yang, Seungwon

    2015-08-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which remain unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines have drawn attention not only from astronomers but also from experimental and theoretical chemists. Theoretical calculations of the formation of these astrochemical molecules have been carried out, providing reaction rates for some important molecules, and some of the theoretical predictions have been measured in laboratories. The reaction rates for the astronomically important molecules are now collected in databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code can trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects the spatial distribution of some specific molecules. We present the development procedure of this code and some test problems in order to verify and validate the developed code.

  10. Adding kinetics and hydrodynamics to the CHEETAH thermochemical code

    SciTech Connect

    Fried, L.E., Howard, W.M., Souers, P.C.

    1997-01-15

    In FY96 we released CHEETAH 1.40, which made extensive improvements on the stability and user friendliness of the code. CHEETAH now has over 175 users in government, academia, and industry. Efforts have also been focused on adding new advanced features to CHEETAH 2.0, which is scheduled for release in FY97. We have added a new chemical kinetics capability to CHEETAH. In the past, CHEETAH assumed complete thermodynamic equilibrium and independence of time. The addition of a chemical kinetic framework will allow for modeling of time-dependent phenomena, such as partial combustion and detonation in composite explosives with large reaction zones. We have implemented a Wood-Kirkwood detonation framework in CHEETAH, which allows for the treatment of nonideal detonations and explosive failure. A second major effort in the project this year has been linking CHEETAH to hydrodynamic codes to yield an improved HE product equation of state. We have linked CHEETAH to 1- and 2-D hydrodynamic codes, and have compared the code to experimental data. 15 refs., 13 figs., 1 tab.

  11. A nonlocal electron conduction model for multidimensional radiation hydrodynamics codes

    NASA Astrophysics Data System (ADS)

    Schurtz, G. P.; Nicolaï, Ph. D.; Busquet, M.

    2000-10-01

    Numerical simulation of laser-driven Inertial Confinement Fusion (ICF) related experiments requires the use of large multidimensional hydro codes. Though these codes include detailed physics for numerous phenomena, they deal poorly with electron conduction, which is the leading energy transport mechanism in these systems. Electron heat flow has been known, since the work of Luciani, Mora, and Virmont (LMV) [Phys. Rev. Lett. 51, 1664 (1983)], to be a nonlocal process, which the local Spitzer-Harm theory, even flux-limited, is unable to account for. The present work aims at extending the original formula of LMV to two or three dimensions of space. This multidimensional extension leads to an equivalent transport equation suitable for easy implementation in a two-dimensional radiation-hydrodynamic code. Simulations are presented and compared with Fokker-Planck simulations in one and two dimensions of space.
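
    In one dimension, an LMV-type flux is a kernel smoothing of the local Spitzer-Harm flux. A heavily simplified sketch, assuming a constant delocalization length and a plain exponential kernel (illustrative assumptions, not the paper's multidimensional formula):

    ```python
    import numpy as np

    def nonlocal_flux(x, q_sh, lam):
        """Smear the local Spitzer-Harm flux q_sh with an exponential kernel of
        width lam; each row of the kernel is normalized so its integral is 1."""
        dx = x[1] - x[0]
        w = np.exp(-np.abs(x[:, None] - x[None, :]) / lam) / (2.0 * lam)
        w /= w.sum(axis=1, keepdims=True) * dx           # normalize row integrals
        return (w * q_sh[None, :]).sum(axis=1) * dx

    x = np.linspace(0.0, 1.0, 200)
    q_sh = np.exp(-((x - 0.5) / 0.05) ** 2)              # sharply peaked local flux
    q_nl = nonlocal_flux(x, q_sh, lam=0.1)               # broadened, lower peak
    print(q_sh.max(), q_nl.max())
    ```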

  12. External-Compression Supersonic Inlet Design Code

    NASA Technical Reports Server (NTRS)

    Slater, John W.

    2011-01-01

    A computer code named SUPIN has been developed to perform aerodynamic design and analysis of external-compression, supersonic inlets. The baseline set of inlets includes axisymmetric pitot, two-dimensional single-duct, axisymmetric outward-turning, and two-dimensional bifurcated-duct inlets. The aerodynamic methods are based on low-fidelity analytical and numerical procedures. The geometric methods are based on planar geometry elements. SUPIN has three modes of operation: 1) generate the inlet geometry from an explicit set of geometry information, 2) size and design the inlet geometry and analyze the aerodynamic performance, and 3) compute the aerodynamic performance of a specified inlet geometry. The aerodynamic performance quantities include inlet flow rates, total pressure recovery, and drag. The geometry output from SUPIN includes inlet dimensions, cross-sectional areas, coordinates of planar profiles, and surface grids suitable for input to grid generators for analysis by computational fluid dynamics (CFD) methods. The input data file for SUPIN and the output file from SUPIN are text (ASCII) files. The surface grid files are output as formatted Plot3D or stereolithography (STL) files. SUPIN executes in batch mode and is available as a Microsoft Windows executable and Fortran95 source code with a makefile for Linux.

  13. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I. (Princeton, Institute for Advanced Study)

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
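
    The third-order TVD Runge-Kutta integrator is short enough to quote in full; a sketch of the Shu-Osher scheme driving a toy first-order upwind advection operator:

    ```python
    import numpy as np

    def tvd_rk3_step(u, dt, L):
        """Third-order TVD (strong-stability-preserving) Runge-Kutta of Shu and
        Osher: three convex combinations of forward-Euler substeps. L(u) is the
        spatial operator giving du/dt."""
        u1 = u + dt * L(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
        return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

    # advect a sine wave with first-order upwinding on a periodic grid
    n = 200
    dx, dt = 1.0 / n, 0.4 / n                          # CFL number 0.4
    L = lambda u: -(u - np.roll(u, 1)) / dx
    u = np.sin(2 * np.pi * np.linspace(0, 1, n, endpoint=False))
    for _ in range(100):
        u = tvd_rk3_step(u, dt, L)
    print(np.abs(u).max())                             # bounded, oscillation-free
    ```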

  14. Ultraspectral sounder data compression using the Tunstall coding

    NASA Astrophysics Data System (ADS)

    Wei, Shih-Chieh; Huang, Bormin; Gu, Lingjia

    2007-09-01

    In an error-prone environment the compression of ultraspectral sounder data is vulnerable to error propagation. The Tunstall coding is a variable-to-fixed length code which compresses data by mapping a variable number of source symbols to a fixed number of codewords. It avoids the resynchronization difficulty encountered in fixed-to-variable length codes such as Huffman coding and arithmetic coding. This paper explores the use of the Tunstall coding in reducing the error propagation for ultraspectral sounder data compression. The results show that our Tunstall approach has a favorable compression ratio compared with JPEG-2000, 3D SPIHT, JPEG-LS, CALIC and CCSDS IDC 5/3. It also has less error propagation compared with JPEG-2000.
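
    A minimal sketch of Tunstall dictionary construction for a memoryless source (illustrative, not the paper's implementation): repeatedly expand the most probable entry until the fixed codeword budget is filled.

    ```python
    import heapq
    from itertools import count

    def tunstall(probs, codeword_bits):
        """Build a Tunstall (variable-to-fixed) dictionary: every entry maps to a
        fixed-length codeword, so a single bit error corrupts one codeword
        without losing synchronization."""
        tiebreak = count()
        # max-heap of (negative probability, tiebreaker, source string)
        heap = [(-p, next(tiebreak), s) for s, p in probs.items()]
        heapq.heapify(heap)
        # each expansion replaces 1 entry with len(probs) entries
        while len(heap) + len(probs) - 1 <= 2 ** codeword_bits:
            negp, _, s = heapq.heappop(heap)       # most probable entry
            for sym, p in probs.items():           # extend it by one source symbol
                heapq.heappush(heap, (negp * p, next(tiebreak), s + sym))
        return sorted(s for _, _, s in heap)

    print(tunstall({'a': 0.7, 'b': 0.3}, codeword_bits=3))  # 8 dictionary entries
    ```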

  15. Compressing industrial computed tomography images by means of contour coding

    NASA Astrophysics Data System (ADS)

    Jiang, Haina; Zeng, Li

    2013-10-01

    An improved method for compressing industrial computed tomography (CT) images is presented. To achieve higher resolution and precision, the amount of industrial CT data has become larger and larger. Considering that industrial CT images are approximately piecewise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray images usually needs two steps, contour extraction followed by compression, which hurts compression efficiency. We therefore merge the Freeman encoding idea into an improved method for two-dimensional contour extraction (2-D-IMCE) to improve compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with the contour extraction, so the two steps of the traditional contour-based compression method are reduced to one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method can obtain a good compression ratio while keeping satisfactory quality in the compressed images.
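
    For reference, Freeman encoding stores a contour as a start pixel plus a stream of 3-bit direction symbols; a minimal sketch of plain chain coding (not the paper's 2-D-IMCE algorithm):

    ```python
    # 8-connected Freeman directions: code 0 = east, counting counterclockwise
    # in (row, col) image coordinates (row decreases upward).
    DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]
    DIR_CODE = {d: i for i, d in enumerate(DIRS)}

    def chain_code(contour):
        """Encode a list of 8-connected (row, col) pixels as direction symbols."""
        return [DIR_CODE[(r2 - r1, c2 - c1)]
                for (r1, c1), (r2, c2) in zip(contour, contour[1:])]

    square = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    print(chain_code(square))  # [0, 0, 6, 6, 4, 4, 2]
    ```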

  16. Coding For Compression Of Low-Entropy Data

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1994-01-01

    Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.

  17. Modeling Relativistic Jets Using the Athena Hydrodynamics Code

    NASA Astrophysics Data System (ADS)

    Pauls, David; Pollack, Maxwell; Wiita, Paul

    2014-11-01

    We used the Athena hydrodynamics code (Beckwith & Stone 2011) to model early-stage two-dimensional relativistic jets as approximations to the growth of radio-loud active galactic nuclei. We analyzed the variability of the radio emission by calculating fluxes from a vertical strip of zones behind a standing shock, as discussed in the accompanying poster. We found the advance speed of the jet bow shock for various input jet velocities and jet-to-ambient density ratios: faster jets and higher jet densities produce faster shock advances. We investigated the effects of parameters such as the Courant-Friedrichs-Lewy (CFL) number, the input jet velocity, and the density ratio on the stability of the simulated jet, finding that numerical instabilities grow rapidly when the CFL number is above 0.1. We found that greater jet input velocities and higher density ratios lengthen the time the jet remains stable. We also examined the effects of the boundary conditions, the CFL number, the input jet velocity, the grid resolution, and the density ratio on premature termination of the Athena code. We found that a grid of 1200 by 1000 zones allows the code to run with minimal errors while still maintaining adequate resolution. This work is supported by the Mentored Undergraduate Summer Experience program at TCNJ.

  18. Adaptive rezoner in a two-dimensional Lagrangian hydrodynamic code

    SciTech Connect

    Pyun, J.J.; Saltzman, J.S.; Scannapieco, A.J.; Carroll, D.

    1985-01-01

    In an effort to increase spatial resolution without adding additional meshes, an adaptive mesh was incorporated into a two-dimensional Lagrangian hydrodynamics code, along with a two-dimensional flux-corrected transport (FCT) remapper. The adaptive mesh automatically generates a mesh based on smoothness and orthogonality, and at the same time tracks physical conditions of interest by focusing mesh points in regions that exhibit those conditions; this is done by defining a weighting function associated with the physical conditions to be tracked. The FCT remapper calculates the net transportive fluxes based on a weighted average of two fluxes computed by a low-order scheme and a high-order scheme. This averaging procedure produces solutions which are conservative and nondiffusive, and maintains positivity. 10 refs., 12 figs.

  19. Bit-wise arithmetic coding for data compression

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.

    1994-01-01

    This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.

  20. Compressed image transmission based on fountain codes

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Wu, Xinhong; Jiao, L. C.

    2011-11-01

    In this paper, we propose a joint source-channel coding (JSCC) scheme for image transmission over a wireless channel. In the scheme, fountain codes are integrated into bit-plane coding for channel coding. Compared with traditional erasure codes used for error correction, such as Reed-Solomon codes, fountain codes are rateless and can generate sufficient symbols on the fly. Two schemes are described, an EEP (Equal Error Protection) scheme and a UEP (Unequal Error Protection) scheme, and the UEP scheme is shown to perform better than the EEP scheme. The proposed scheme not only adaptively adjusts the length of the fountain codes according to the channel loss rate but can also reconstruct the image even on a bad channel.

  1. A 2-dimensional MHD code & survey of the ``buckling'' phenomenon in cylindrical magnetic flux compression experiments

    NASA Astrophysics Data System (ADS)

    Xiao, Bo; Wang, Ganghua; Gu, Zhuowei; Computational Physics Team

    2015-11-01

    We made a 2-dimensional magneto-hydrodynamics Lagrangian code. The code handles two kinds of magnetic configuration, an (x-y) plane with z-direction magnetic field Bz and an (r-z) plane with θ-direction magnetic field Bθ. The solution of the MHD equations is split into a pure dynamical step (i.e., ideal MHD) and a diffusion step. In the diffusion step, the Joule heat is calculated with a numerical scheme based on a specific form of the Joule heat production equation, ∂e_J/∂t = ∇·((η/μ0) B × (∇ × B)) - ∂/∂t(B²/(2μ0)), where the term ∂/∂t(B²/(2μ0)) is the magnetic field energy variation caused solely by diffusion. This scheme ensures the equality of the total Joule heat produced and the total electromagnetic energy lost in the system. Material elastoplasticity is considered in the code. An external circuit is coupled to the magneto-hydrodynamics, and a detonation module is also added to enhance the code's ability to simulate magnetically-driven compression experiments. As a first application, the code was utilized to simulate a cylindrical magnetic flux compression experiment. The origin of the ``buckling'' phenomenon observed in the experiment is explored.

  2. New Methods for Lossless Image Compression Using Arithmetic Coding.

    ERIC Educational Resources Information Center

    Howard, Paul G.; Vitter, Jeffrey Scott

    1992-01-01

    Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include the Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial matching method…

  3. Techniques for region coding in object-based image compression

    NASA Astrophysics Data System (ADS)

    Schmalz, Mark S.

    2004-01-01

    Object-based compression (OBC) is an emerging technology that combines region segmentation and coding to produce a compact representation of a digital image or video sequence. Previous research has focused on a variety of segmentation and representation techniques for regions that comprise an image. The author has previously suggested [1] partitioning of the OBC problem into three steps: (1) region segmentation, (2) region boundary extraction and compression, and (3) region contents compression. A companion paper [2] surveys implementationally feasible techniques for boundary compression. In this paper, we analyze several strategies for region contents compression, including lossless compression, lossy VPIC, EPIC, and EBLAST compression, wavelet-based coding (e.g., JPEG-2000), as well as texture matching approaches. This paper is part of a larger study that seeks to develop highly efficient compression algorithms for still and video imagery, which would eventually support automated object recognition (AOR) and semantic lookup of images in large databases or high-volume OBC-format datastreams. Example applications include querying journalistic archives, scientific or medical imaging, surveillance image processing and target tracking, as well as compression of video for transmission over the Internet. Analysis emphasizes time and space complexity, as well as sources of reconstruction error in decompressed imagery.

  4. Wavelet based hierarchical coding scheme for radar image compression

    NASA Astrophysics Data System (ADS)

    Sheng, Wen; Jiao, Xiaoli; He, Jifeng

    2007-12-01

    This paper presents a wavelet-based hierarchical coding scheme for radar image compression. The radar signal is first quantized to a digital signal and reorganized as a raster-scanned image according to the radar's pulse repetition frequency. After reorganization, the reformed image is decomposed into blocks in different frequency bands by a 2-D wavelet transform, and each block is quantized and coded by a Huffman coding scheme. A demonstration system is developed, showing that, under the requirement of real-time processing, the compression ratio can be very high with no significant loss of target signal in the restored radar image.

  5. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
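
    A minimal sketch of the idea using the (7,4) Hamming code, whose 3-bit syndrome compresses a 7-bit block exactly when the block contains at most one 1 (an illustration of syndrome-source-coding, not the paper's construction):

    ```python
    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code; column j is the binary
    # representation of j (row 0 holds the least significant bit).
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def compress(x):
        """Treat the sparse source block x as an error pattern: its 3-bit
        syndrome s = H x (mod 2) is the compressed data."""
        return H @ x % 2

    def decompress(s):
        """Recover the lowest-weight block with syndrome s (exact for <= one 1)."""
        idx = int(''.join(map(str, s[::-1])), 2)  # syndrome = position in binary
        x = np.zeros(7, dtype=int)
        if idx:                                   # zero syndrome -> all-zero block
            x[idx - 1] = 1
        return x

    x = np.array([0, 0, 0, 0, 1, 0, 0])           # one 1 in a block of 7
    s = compress(x)                               # 3 bits instead of 7
    print(np.array_equal(decompress(s), x))       # True
    ```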

  6. Streamlined Genome Sequence Compression using Distributed Source Coding

    PubMed Central

    Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel

    2014-01-01

    We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require a heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol adaptively picks either syndrome coding or hash coding to compress subsequences of varying code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552

  7. Description of a parallel, 3D, finite element, hydrodynamics-diffusion code

    SciTech Connect

    Milovich, J L; Prasad, M K; Shestakov, A I

    1999-04-11

    We describe a parallel, 3D, unstructured grid finite element, hydrodynamic diffusion code for inertial confinement fusion (ICF) applications and the ancillary software used to run it. The code system is divided into two entities, a controller and a stand-alone physics code. The code system may reside on different computers; the controller on the user's workstation and the physics code on a supercomputer. The physics code is composed of separate hydrodynamic, equation-of-state, laser energy deposition, heat conduction, and radiation transport packages and is parallelized for distributed memory architectures. For parallelization, a SPMD model is adopted; the domain is decomposed into a disjoint collection of subdomains, one per processing element (PE). The PEs communicate using MPI. The code is used to simulate the hydrodynamic implosion of a spherical bubble.
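
    The SPMD pattern described here reduces, per time step, to a ghost-cell exchange between neighboring subdomains. A one-dimensional sketch with mpi4py (a hypothetical illustration, not the code's actual implementation):

    ```python
    # Run with, e.g.: mpirun -np 4 python ghost_exchange.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_local = 10
    u = np.full(n_local + 2, float(rank))         # interior cells plus 2 ghost cells

    # neighbors; PROC_NULL makes the boundary exchanges no-ops
    left = rank - 1 if rank > 0 else MPI.PROC_NULL
    right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

    # send my last interior cell right while receiving my left ghost, and vice versa
    comm.Sendrecv(u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
    comm.Sendrecv(u[1:2], dest=left, recvbuf=u[-1:], source=right)
    ```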

  8. Wavelet based ECG compression with adaptive thresholding and efficient coding.

    PubMed

    Alshamali, A

    2010-01-01

    This paper proposes a new wavelet-based ECG compression technique. It is based on optimized thresholds to determine significant wavelet coefficients and an efficient coding of their positions. Huffman encoding is used to enhance the compression ratio. The proposed technique is tested using several records taken from the MIT-BIH arrhythmia database. Simulation results show that the proposed technique outperforms previously published schemes. PMID:20608811

  9. Implementation of the Turn Function Method in a three-dimensional, parallelized hydrodynamics code

    NASA Astrophysics Data System (ADS)

    O'Rourke, P. J.; Fairfield, M. S.

    1992-08-01

    The implementation of the Turn Function Method in KIVA-F90, a version of the KIVA computer program written in the Fortran 90 programming language for use on some massively parallel computers, is described. The Turn Function Method solves both linear momentum and vorticity equations in numerical calculations of compressible fluid flow. Solving a vorticity equation allows vorticity to be both conserved and transported more accurately than in traditional methods for computing compressible flow. This first implementation of the Turn Function Method in a three-dimensional hydrodynamics code involved some modification of the original method and some numerical difference approximations. In particular, a penalty method is used to keep the divergence of the computed vorticity field close to zero. Difference operators are also defined in such a way that the finite difference analog of ∇·(∇ × u) = 0 is exactly satisfied. Three example problems show the increased computational cost and the accuracy to be gained by using the Turn Function Method in calculations of flows with rotational motion. Use of the Method can increase the computational times of the Euler equation solver in KIVA-F90 by 60 percent, but it is concluded that this increased cost is justified by the increased accuracy.

  10. Conditional entropy coding of DCT coefficients for video compression

    NASA Astrophysics Data System (ADS)

    Sipitca, Mihai; Gillman, David W.

    2000-04-01

    We introduce conditional Huffman encoding of DCT run-length events to improve the coding efficiency of low- and medium-bit rate video compression algorithms. We condition the Huffman code for each run-length event on a classification of the current block. We classify blocks according to coding mode and signal type, which are known to the decoder, and according to energy, which the decoder must receive as side information. Our classification schemes improve coding efficiency with little or no increased running time and some increased memory use.

  11. THEHYCO-3DT: Thermal hydrodynamic code for the 3 dimensional transient calculation of advanced LMFBR core

    SciTech Connect

    Vitruk, S.G.; Korsun, A.S.; Ushakov, P.A.

    1995-09-01

    A multilevel mathematical model of the neutron and thermal hydrodynamic processes in a passive-safety core without assembly duct walls, and the corresponding computer code SKETCH, consisting of the thermal hydrodynamic module THEHYCO-3DT and a neutron module, are described. A new, effective discretization technique for the energy, momentum, and mass conservation equations is applied in hexagonal-z geometry. The adequacy and applicability of the model are presented. The results of the calculations show that the model and the computer code could be used in the conceptual design of advanced reactors.

  12. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed by inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table lookup approach. A commercial low-frequency electromagnetic field solver, ANSYS Maxwell 3D, is used to solve the magnetic field profile for a static liner at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with results from a commercial explicit dynamics solver, ANSYS Explicit Dynamics, and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
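
    The per-step coupling sequence (circuit, then field with table-lookup correction, then liner dynamics) can be sketched as follows; every physical value, the correction table, and the ideal LC driver are invented for illustration, and joule heating is omitted:

        import numpy as np

        # Hypothetical correction-factor table, standing in for values derived
        # offline from static ANSYS Maxwell 3D solutions at several liner radii.
        radii_tab = np.array([0.02, 0.04, 0.06, 0.08, 0.10])   # m
        corr_tab  = np.array([0.62, 0.71, 0.78, 0.83, 0.86])   # dimensionless

        def step(state, dt, L0=50e-9, C=0.5e-3, m=0.1, length=0.1):
            # One time step: circuit -> field -> liner dynamics, in sequence.
            mu0 = 4e-7 * np.pi
            I, V, r, v = state
            dI, dV = V / L0, -I / C                   # ideal LC driver
            k = np.interp(r, radii_tab, corr_tab)     # 2D/3D correction of 1D field
            B = k * mu0 * I / length
            a = -(B**2 / (2.0 * mu0)) * 2.0 * np.pi * r * length / m
            return np.array([I + dI * dt, V + dV * dt, r + v * dt, v + a * dt])

        state = np.array([0.0, 20e3, 0.10, 0.0])      # I [A], V [V], r [m], v [m/s]
        for _ in range(20000):
            state = step(state, dt=1e-8)
            if state[2] <= radii_tab[0]:              # stop near peak compression
                break
        print("liner radius at end of run: %.3f m" % state[2])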

  13. Compressed data organization for high throughput parallel entropy coding

    NASA Astrophysics Data System (ADS)

    Said, Amir; Mahfoodh, Abo-Talib; Yea, Sehoon

    2015-09-01

    The difficulty of parallelizing entropy coding is increasingly limiting the data throughputs achievable in media compression. In this work we analyze the fundamental limitations, using finite-state-machine models to identify the best manner of separating tasks that can be processed independently while minimizing compression losses. This analysis confirms previous works showing that effective parallelization is feasible only if the compressed data is organized in a proper way, which is quite different from conventional formats. The proposed new formats exploit the fact that optimal compression is not affected by the arrangement of coded bits, and go further in exploiting the decreasing cost of data processing and memory. Additional advantages include the ability to use, within this framework, increasingly complex data modeling techniques, and the freedom to mix different types of coding. We confirm the parallelization effectiveness using coding simulations that run on multi-core processors, show how throughput scales with the number of cores, and analyze the additional bit-rate overhead.

  14. A Two-Dimensional Compressible Gas Flow Code

    Energy Science and Technology Software Center (ESTSC)

    1995-03-17

    F2D is a general purpose, two dimensional, fully compressible thermal-fluids code that models most of the phenomena found in situations of coupled fluid flow and heat transfer. The code solves momentum, continuity, gas-energy, and structure-energy equations using a predictor-corrector solution algorithm. The corrector step includes a Poisson pressure equation. The finite difference form of the equation is presented along with a description of input and output. Several example problems are included that demonstrate the applicability of the code in problems ranging from free fluid flow, shock tubes, and flow in heated porous media.
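
    The corrector-step idea, a Poisson solve for the pressure, can be illustrated with the simplest possible iteration; the Jacobi sweep and manufactured problem below are stand-ins, not F2D's actual finite-difference form:

        import numpy as np

        def jacobi_poisson(rhs, h, iters=3000):
            # Solve lap(p) = rhs on a square grid with p = 0 on the boundary.
            p = np.zeros_like(rhs)
            for _ in range(iters):
                p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1] +
                                        p[1:-1, 2:] + p[1:-1, :-2] -
                                        h * h * rhs[1:-1, 1:-1])
            return p

        n, h = 33, 1.0 / 32
        x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
        rhs = np.sin(np.pi * x) * np.sin(np.pi * y)
        p = jacobi_poisson(rhs, h)
        exact = -rhs / (2.0 * np.pi**2)     # analytic solution for this rhs
        print("max error:", np.max(np.abs(p - exact)))   # at the discretization level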

  15. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of coders for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  16. A seismic data compression system using subband coding

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.

  17. Neptune: An astrophysical smooth particle hydrodynamics code for massively parallel computer architectures

    NASA Astrophysics Data System (ADS)

    Sandalski, Stou

    Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named neptune after the Roman god of water. It is written in OpenMP parallelized C++ and OpenCL and includes octree based hydrodynamic and gravitational acceleration. The design relies on object-oriented methodologies in order to provide a flexible and modular framework that can be easily extended and modified by the user. Several pre-built scenarios for simulating collisions of polytropes and black-hole accretion are provided. The code is released under the MIT Open Source license and publicly available at http://code.google.com/p/neptune-sph/.
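
    At the core of any SPH code of this kind is the kernel-weighted density summation; a brute-force NumPy sketch (the real code replaces the O(N^2) pair loop with its octree and runs on the GPU):

        import numpy as np

        def w_cubic(r, h):
            # Standard 3D cubic-spline kernel, normalization 1/(pi h^3).
            q = r / h
            w = np.zeros_like(q)
            m1 = q < 1.0
            m2 = (q >= 1.0) & (q < 2.0)
            w[m1] = 1.0 - 1.5 * q[m1]**2 + 0.75 * q[m1]**3
            w[m2] = 0.25 * (2.0 - q[m2])**3
            return w / (np.pi * h**3)

        rng = np.random.default_rng(1)
        pos = rng.random((500, 3))            # particles in a unit box
        mass, h = 1.0 / 500, 0.1

        # rho_i = sum_j m_j W(|r_i - r_j|, h)
        r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        rho = (mass * w_cubic(r, h)).sum(axis=1)
        print("mean density:", rho.mean())    # ~1, lowered slightly by edge effects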

  18. Incompressible-compressible flows with a transient discontinuous interface using smoothed particle hydrodynamics (SPH)

    NASA Astrophysics Data System (ADS)

    Lind, S. J.; Stansby, P. K.; Rogers, B. D.

    2016-03-01

    A new two-phase incompressible-compressible Smoothed Particle Hydrodynamics (SPH) method has been developed where the interface is discontinuous in density. This is applied to water-air problems with a large density difference. The incompressible phase requires surface pressure from the compressible phase and the compressible phase requires surface velocity from the incompressible phase. Compressible SPH is used for the air phase (with the isothermal stiffened ideal gas equation of state for low Mach numbers) and divergence-free (projection based) incompressible SPH is used for the water phase, with the addition of Fickian shifting to produce sufficiently homogeneous particle distributions to enable stable, accurate, converged solutions without noise in the pressure field. Shifting is a purely numerical particle regularisation device. The interface remains a true material discontinuity at a high density ratio with continuous pressure and velocity at the interface. This approach with the physics of compressibility and incompressibility represented is novel within SPH and is validated against semi-analytical results for a two-phase elongating and oscillating water drop, analytical results for low amplitude inviscid standing waves, the Kelvin-Helmholtz instability, and a dam break problem with high interface distortion and impact on a vertical wall where experimental and other numerical results are available.

  19. A compressible Navier-Stokes code for turbulent flow modeling

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.

    1984-01-01

    An implicit, finite volume code for solving two dimensional, compressible turbulent flows is described. Second order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero and two equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.

  20. CoCoNuT: General relativistic hydrodynamics code with dynamical space-time evolution

    NASA Astrophysics Data System (ADS)

    Dimmelmeier, Harald; Novak, Jérôme; Cerdá-Durán, Pablo

    2012-02-01

    CoCoNuT is a general relativistic hydrodynamics code with dynamical space-time evolution. The main aim of this numerical code is the study of several astrophysical scenarios in which general relativity can play an important role, namely the collapse of rapidly rotating stellar cores and the evolution of isolated neutron stars. The code has two flavors: CoCoA, the axisymmetric (2D) magnetized version, and CoCoNuT, the 3D non-magnetized version.

  1. Gaseous laser targets and optical diagnostics for studying compressible hydrodynamic instabilities

    SciTech Connect

    Edwards, J M; Robey, H; Mackinnon, A

    2001-06-29

    The goal is to explore the combination of optical diagnostics and gaseous targets to obtain important information about compressible turbulent flows that cannot be derived from traditional laser experiments, for the purposes of verification and validation (V&V) of hydrodynamics models and understanding of scaling. First year objectives: develop and characterize a blast wave-gas jet test bed; perform single-pulse shadowgraphy of the blast wave interaction with a turbulent gas jet as a function of blast wave Mach number; explore double-pulse shadowgraphy and image correlation for extracting velocity spectra in the shock-turbulent flow interaction; and explore the use/adaptation of advanced diagnostics.

  2. Hyperspectral pixel classification from coded-aperture compressive imaging

    NASA Astrophysics Data System (ADS)

    Ramirez, Ana; Arce, Gonzalo R.; Sadler, Brian M.

    2012-06-01

    This paper describes a new approach and its associated theoretical performance guarantees for supervised hyperspectral image classification from compressive measurements obtained by a Coded Aperture Snapshot Spectral Imaging System (CASSI). In one snapshot, the two-dimensional focal plane array (FPA) in the CASSI system captures the coded and spectrally dispersed source field of a three-dimensional data cube. Multiple snapshots are used to construct a set of compressive spectral measurements. The proposed approach is based on the concept that each pixel in the hyperspectral image lies in a low-dimensional subspace obtained from the training samples, and thus it can be represented as a sparse linear combination of vectors in the given subspace. The sparse vector representing the test pixel is then recovered from the set of compressive spectral measurements and is used to determine the class label of the test pixel. The theoretical performance bounds of the classifier exploit the distance preservation condition satisfied by the multiple-shot CASSI system and depend on the number of measurements collected, the coded aperture pattern, and the similarity between spectral signatures in the dictionary. Simulation experiments illustrate the performance of the proposed classification approach.
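
    A toy version of the sparse-representation classifier (orthogonal matching pursuit, with a random matrix standing in for the CASSI projections; the dictionary, class structure, and sparsity level are invented for illustration):

        import numpy as np

        def omp(A, y, k):
            # Orthogonal matching pursuit: greedy recovery of a k-sparse x, y ~ A x.
            residual, idx = y.copy(), []
            for _ in range(k):
                idx.append(int(np.argmax(np.abs(A.T @ residual))))
                coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
                residual = y - A[:, idx] @ coef
            x = np.zeros(A.shape[1])
            x[idx] = coef
            return x

        rng = np.random.default_rng(2)
        n_bands, n_train, m = 64, 40, 24
        D = rng.standard_normal((n_bands, n_train))              # training spectra
        labels = np.repeat([0, 1], n_train // 2)                 # two classes
        pixel = D[:, 3] + 0.01 * rng.standard_normal(n_bands)    # class-0 test pixel

        Phi = rng.standard_normal((m, n_bands)) / np.sqrt(m)     # stand-in for CASSI
        x = omp(Phi @ D, Phi @ pixel, k=3)

        # Classify by which class's coefficients explain the measurement best.
        scores = [np.abs(x[labels == c]).sum() for c in (0, 1)]
        print("predicted class:", int(np.argmax(scores)))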

  3. Hydrodynamics of rotating stars and close binary interactions: Compressible ellipsoid models

    NASA Technical Reports Server (NTRS)

    Lai, Dong; Rasio, Frederic A.; Shapiro, Stuart L.

    1994-01-01

    We develop a new formalism to study the dynamics of fluid polytropes in three dimensions. The stars are modeled as compressible ellipsoids, and the hydrodynamic equations are reduced to a set of ordinary differential equations for the evolution of the principal axes and other global quantities. Both viscous dissipation and the gravitational radiation reaction are incorporated. We establish the validity of our approximations and demonstrate the simplicity and power of the method by rederiving a number of known results concerning the stability and dynamical oscillations of rapidly rotating polytropes. In particular, we present a generalization to compressible fluids of Chandrasekhar's classical results for the secular and dynamical instabilities of incompressible Maclaurin spheroids. We also present several applications of our method to astrophysical problems of great current interest, such as the tidal disruption of a star by a massive black hole, the coalescence of compact binaries driven by the emission of gravitational waves, and the development of instabilities in close binary systems.

  4. Parallelization of a three-dimensional compressible transition code

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Hussaini, M. Y.; Bokhari, Shahid H.

    1990-01-01

    The compressible, three-dimensional, time-dependent Navier-Stokes equations are solved on a 20 processor Flex/32 computer. The code is a parallel implementation of an existing code operational on the Cray-2 at NASA Ames, which performs direct simulations of the initial stages of the transition process of wall-bounded flow at supersonic Mach numbers. Spectral collocation in all three spatial directions (Fourier along the plate and Chebyshev normal to it) ensures high accuracy of the flow variables. By hiding most of the parallelism in low-level routines, the casual user is shielded from most of the nonstandard coding constructs. Speedups of 13 out of a maximum of 16 are achieved on the largest computational grids.

  5. Coded aperture design in mismatched compressive spectral imaging.

    PubMed

    Galvis, Laura; Arguello, Henry; Arce, Gonzalo R

    2015-11-20

    Compressive spectral imaging (CSI) senses a scene by using two-dimensional coded projections such that the number of measurements is far less than that used in spectral scanning-type instruments. An architecture that efficiently implements CSI is the coded aperture snapshot spectral imager (CASSI). A physical limitation of the CASSI is the system resolution, which is determined by the lowest-resolution element used in the detector and the coded aperture. Although the final resolution of the system is usually given by the detector, in the CASSI, for instance, the use of a low-resolution coded aperture implemented using a digital micromirror device (DMD), which induces the grouping of detector pixels into superpixels, is decisive for the final resolution. The mismatch occurs because of the differences in pitch size between the DMD mirrors and the focal plane array (FPA) pixels. A traditional solution to this mismatch consists of grouping several pixels into square features, which underutilizes the DMD and detector resolution and therefore reduces the spatial and spectral resolution of the reconstructed spectral images. This paper presents a model for CASSI which admits the mismatch and permits exploiting the maximum resolution of the coding element and the FPA sensor. A super-resolution algorithm and a synthetic coded aperture are developed in order to solve the mismatch. The mathematical models are verified using a real implementation of CASSI. The results of the experiments show a significant gain in spatial and spectral imaging quality over the traditional pixel-grouping technique. PMID:26836551

  6. Block-based conditional entropy coding for medical image compression

    NASA Astrophysics Data System (ADS)

    Bharath Kumar, Sriperumbudur V.; Nagaraj, Nithin; Mukhopadhyay, Sudipta; Xu, Xiaofeng

    2003-05-01

    In this paper, we propose a block-based conditional entropy coding scheme for medical image compression using the 2-D integer Haar wavelet transform. The main motivation to pursue conditional entropy coding is that the first-order conditional entropy is theoretically never greater than the first- and second-order entropies. We propose a sub-optimal scan order and an optimum block size to perform conditional entropy coding for various modalities. We also propose that a similar scheme can be used to obtain a sub-optimal scan order and an optimum block size for other wavelets. The proposed approach is motivated by a desire to perform better than JPEG2000 in terms of compression ratio. We hint towards developing a block-based conditional entropy coder, which has the potential to perform better than JPEG2000. Though we do not indicate a method to achieve the first-order conditional entropy, the use of a conditional adaptive arithmetic coder would come arbitrarily close to it. All the results in this paper are based on medical image data sets of various bit-depths and modalities.
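
    The inequality motivating the scheme, H(X|Y) <= H(X), is easy to verify numerically on synthetic symbols (a quick check, not the paper's coder):

        import numpy as np
        from collections import Counter

        def entropy(counts):
            # Empirical entropy in bits from a Counter of symbol occurrences.
            n = sum(counts.values())
            p = np.array([c / n for c in counts.values()])
            return float(-(p * np.log2(p)).sum())

        # x: current symbol; y: its context, e.g. the previous symbol in scan order.
        rng = np.random.default_rng(3)
        y = rng.integers(0, 4, 10000)
        x = (y + rng.integers(0, 2, 10000)) % 4    # x strongly correlated with y

        H_x  = entropy(Counter(x.tolist()))
        H_y  = entropy(Counter(y.tolist()))
        H_xy = entropy(Counter(zip(x.tolist(), y.tolist())))
        print(H_x, ">=", H_xy - H_y)   # H(X) ~ 2 bits, H(X|Y) ~ 1 bit here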

  7. Magneto-hydrodynamic calculation of magnetic flux compression using imploding cylindrical liners

    NASA Astrophysics Data System (ADS)

    Zhao, Jibo; Sun, Chengwei; Gu, Zhuowei

    2015-06-01

    Based on the one-dimensional elastic-plastic reactive hydrodynamic code SSS, a one-dimensional magneto-hydrodynamics code, SSS/MHD, has been developed, and calculations are carried out for cylindrical magneto-cumulative generators (the MC-1 device). The diffusion of the magnetic field into the liner and the sample tube is analyzed. The results show that the maximum magnetic induction 0.2 mm into the liner is only sixteen tesla, while that in the sample tube is several hundred tesla; the difference is caused by the balance of the electromagnetic and imploding forces at the different velocities of the liner and the sample tube. The calculated magnetic induction on the cavity axis and the velocity history of the sample tube wall agree with the experimental results. This work shows that SSS/MHD can be applied to experimental configurations involving detonation, shock, and electromagnetic loading, and to parameter improvement; experimental data can be reliably estimated, analyzed, and checked, and the physics of the related devices can be understood more deeply. This work was supported by the special funds of the National Natural Science Foundation of China under Grant 11176002.
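
    A minimal sketch of the field-diffusion part of such a calculation: an explicit finite-difference update of dB/dt = (eta/mu0) d2B/dx2 in a conducting slab. The resistivity, geometry, and applied field below are invented; the real code couples this diffusion to elastic-plastic hydrodynamics:

        import numpy as np

        eta = 1e-6                             # resistivity [Ohm m], assumed
        mu0 = 4e-7 * np.pi
        D = eta / mu0                          # magnetic diffusivity [m^2/s]
        nx, dx = 200, 1e-4                     # 2 cm slab, 0.1 mm cells
        dt = 0.4 * dx * dx / D                 # stable explicit time step
        B = np.zeros(nx)

        for _ in range(5000):
            B[1:-1] += D * dt / dx**2 * (B[2:] - 2.0 * B[1:-1] + B[:-2])
            B[0], B[-1] = 100.0, 0.0           # driven surface, field-free far side

        print("field 0.2 mm into the conductor: %.1f T" % B[2])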

  8. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
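
    The three-substep operator splitting can be mimicked on a toy scalar problem: an explicit upwind transport substep followed by a backward-Euler substep for a stiff local source, which remains stable no matter how stiff the source is (illustrative only; CRASH's substeps act on the full radiation-hydrodynamics system):

        import numpy as np

        def advect(u, c, dx, dt):
            # Substep 1: explicit first-order upwind advection (c > 0).
            return u - c * dt / dx * (u - np.roll(u, 1))

        def relax_implicit(u, u_eq, kappa, dt):
            # Substep 2: backward Euler for the stiff source du/dt = -kappa (u - u_eq).
            return (u + kappa * dt * u_eq) / (1.0 + kappa * dt)

        nx = 100
        dx, dt = 1.0 / nx, 5e-3                   # CFL number c*dt/dx = 0.5
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)
        u_eq = 0.5 * np.ones(nx)

        for _ in range(100):
            u = advect(u, 1.0, dx, dt)            # explicit transport
            u = relax_implicit(u, u_eq, 1e4, dt)  # implicit source, kappa*dt = 50

        print(np.max(np.abs(u - u_eq)))   # relaxed to equilibrium, no instability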

  9. CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics

    NASA Astrophysics Data System (ADS)

    van der Holst, B.; Toth, G.; Sokolov, I. V.; Powell, K. G.; Holloway, J. P.; Myra, E. S.; Stout, Q.; Adams, M. L.; Morel, J. E.; Drake, R. P.

    2011-01-01

    We describe the CRASH (Center for Radiative Shock Hydrodynamics) code, a block adaptive mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include a flux limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator split method is used to solve these equations in three substeps: (1) an explicit solve of the hydrodynamic equations with shock-capturing schemes, (2) a linear advection of the radiation in frequency-logarithm space, and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).

  10. A compressible high-order unstructured spectral difference code for stratified convection in rotating spherical shells

    NASA Astrophysics Data System (ADS)

    Wang, Junfeng; Liang, Chunlei; Miesch, Mark S.

    2015-06-01

    We present a novel and powerful Compressible High-ORder Unstructured Spectral-difference (CHORUS) code for simulating thermal convection and related fluid dynamics in the interiors of stars and planets. The computational geometries are treated as rotating spherical shells filled with stratified gas. The hydrodynamic equations are discretized by a robust and efficient high-order Spectral Difference Method (SDM) on unstructured meshes. The computational stencil of the spectral difference method is compact and advantageous for parallel processing. CHORUS demonstrates excellent parallel performance for all test cases reported in this paper, scaling up to 12 000 cores on the Yellowstone High-Performance Computing cluster at NCAR. The code is verified by defining two benchmark cases for global convection in Jupiter and the Sun. CHORUS results are compared with results from the ASH code and good agreement is found. The CHORUS code creates new opportunities for simulating such varied phenomena as multi-scale solar convection, core convection, and convection in rapidly-rotating, oblate stars.

  11. PEGAS: Hydrodynamical code for numerical simulation of the gas components of interacting galaxies

    NASA Astrophysics Data System (ADS)

    Kulikov, Igor

    A new hydrodynamical code for numerical simulation of gravitational gas dynamics is described in the paper. The code is based on the Fluid-in-Cell method with a Godunov-type scheme at the Eulerian stage. The numerical method was adapted for GPU-based supercomputers. The performance of the code is demonstrated by simulating the collision of the gas components of two similar disc galaxies during a central collision of the galaxies in the polar direction.

  12. EvoL: the new Padova Tree-SPH parallel code for cosmological simulations. I. Basic code: gravity and hydrodynamics

    NASA Astrophysics Data System (ADS)

    Merlin, E.; Buonomo, U.; Grassi, T.; Piovan, L.; Chiosi, C.

    2010-04-01

    Context. We present the new release of the Padova N-body code for cosmological simulations of galaxy formation and evolution, EvoL. The basic Tree + SPH code is presented and analysed, together with an overview of the software architecture. Aims: EvoL is a flexible parallel Fortran95 code, specifically designed for simulations of cosmological structure formation on cluster, galactic, and sub-galactic scales. Methods: EvoL is a fully Lagrangian self-adaptive code, based on the classical oct-tree by Barnes & Hut (1986, Nature, 324, 446) and on the smoothed particle hydrodynamics algorithm (SPH, Lucy 1977, AJ, 82, 1013). It includes special features like adaptive softening lengths with correcting extra-terms, and modern formulations of SPH and artificial viscosity. It is designed to be run in parallel on multiple CPUs to optimise the performance and save computational time. Results: We describe the code in detail, and present the results of a number of standard hydrodynamical tests.

  13. A CMOS Imager with Focal Plane Compression using Predictive Coding

    NASA Technical Reports Server (NTRS)

    Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.

    2007-01-01

    This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm × 5.96 mm, which includes an 80 × 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
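
    The Golomb-Rice half of that pixel pipeline is simple enough to show in full; the signed-to-unsigned mapping and the parameter k = 2 are conventional choices, not details taken from the chip:

        def zigzag(e):
            # Map signed residuals 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ...
            return (e << 1) if e >= 0 else (-e << 1) - 1

        def rice_encode(e, k=2):
            # Golomb-Rice code: unary-coded quotient, then k remainder bits.
            v = zigzag(e)
            q, r = v >> k, v & ((1 << k) - 1)
            return "1" * q + "0" + format(r, "0%db" % k)

        residuals = [0, -1, 2, 1, -3, 0, 4]
        print("".join(rice_encode(e) for e in residuals))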

  14. Simulating hypervelocity impact effects on structures using the smoothed particle hydrodynamics code MAGI

    NASA Technical Reports Server (NTRS)

    Libersky, Larry; Allahdadi, Firooz A.; Carney, Theodore C.

    1992-01-01

    Analysis of the interaction occurring between space debris and orbiting structures is of great interest to the planning and survivability of space assets. Computer simulation of the impact events using hydrodynamic codes can provide some understanding of the processes, but the problems involved with this fundamental approach are formidable. First, any realistic simulation is necessarily three-dimensional, e.g., the impact and breakup of a satellite. Second, the thicknesses of important components such as satellite skins or bumper shields are small with respect to the dimension of the structure as a whole, presenting severe zoning problems for codes. Third, the debris cloud produced by the primary impact will yield many secondary impacts which will contribute to the damage and possible breakup of the structure. The problem was approached by choosing a relatively new computational technique that has virtues peculiar to space impacts. The method is called Smoothed Particle Hydrodynamics.

  15. AstroBEAR: Adaptive Mesh Refinement Code for Ideal Hydrodynamics & Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2011-04-01

    AstroBEAR is a modular hydrodynamic & magnetohydrodynamic code environment designed for a variety of astrophysical applications. It uses the BEARCLAW package, a multidimensional, Eulerian computational code used to solve hyperbolic systems of equations. AstroBEAR allows adaptive-mesh-refinement (AMR) simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either Cartesian or curvilinear coordinates. Parallel applications are supported through the MPI architecture. AstroBEAR is written in Fortran 90/95 using standard libraries. AstroBEAR supports hydrodynamic (HD) and magnetohydrodynamic (MHD) applications using a variety of spatial and temporal methods. MHD simulations are kept divergence-free via the constrained transport (CT) methods of Balsara & Spicer. Three different equation-of-state environments are available: ideal gas, gas with differing isentropic γ, and the analytic Thomas-Fermi formulation of A.R. Bell [2]. Current work is being done to develop a more advanced real gas equation of state.

  16. Channel coding and data compression system considerations for efficient communication of planetary imaging data

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1974-01-01

    End-to-end system considerations involving channel coding and data compression which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft are presented.

  17. Simulations of implosions with a 3D, parallel, unstructured-grid, radiation-hydrodynamics code

    SciTech Connect

    Kaiser, T B; Milovich, J L; Prasad, M K; Rathkopf, J; Shestakov, A I

    1998-12-28

    An unstructured-grid, radiation-hydrodynamics code is used to simulate implosions. Although most of the problems are spherically symmetric, they are run on 3D, unstructured grids in order to test the code's ability to maintain spherical symmetry of the converging waves. Three problems, of increasing complexity, are presented. In the first, a cold, spherical, ideal gas bubble is imploded by an enclosing high pressure source. For the second, we add non-linear heat conduction and drive the implosion with twelve laser beams centered on the vertices of an icosahedron. In the third problem, a NIF capsule is driven with a Planckian radiation source.

  18. Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors

    SciTech Connect

    Sale, D.; Jonkman, J.; Musial, W.

    2009-08-01

    This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.

  19. A 3+1 dimensional viscous hydrodynamic code for relativistic heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Karpenko, Iu.; Huovinen, P.; Bleicher, M.

    2014-11-01

    We describe the details of 3+1 dimensional relativistic hydrodynamic code for the simulations of quark-gluon/hadron matter expansion in ultra-relativistic heavy ion collisions. The code solves the equations of relativistic viscous hydrodynamics in the Israel-Stewart framework. With the help of ideal-viscous splitting, we keep the ability to solve the equations of ideal hydrodynamics in the limit of zero viscosities using a Godunov-type algorithm. Milne coordinates are used to treat the predominant expansion in longitudinal (beam) direction effectively. The results are successfully tested against known analytical relativistic inviscid and viscous solutions, as well as against existing 2+1D relativistic viscous code. Catalogue identifier: AETZ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETZ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 13 825 No. of bytes in distributed program, including test data, etc.: 92 750 Distribution format: tar.gz Programming language: C++. Computer: any with a C++ compiler and the CERN ROOT libraries. Operating system: tested on GNU/Linux Ubuntu 12.04 x64 (gcc 4.6.3), GNU/Linux Ubuntu 13.10 (gcc 4.8.2), Red Hat Linux 6 (gcc 4.4.7). RAM: scales with the number of cells in hydrodynamic grid; 1900 Mbytes for 3D 160×160×100 grid. Classification: 1.5, 4.3, 12. External routines: CERN ROOT (http://root.cern.ch), Gnuplot (http://www.gnuplot.info/) for plotting the results. Nature of problem: relativistic hydrodynamical description of the 3-dimensional quark-gluon/hadron matter expansion in ultra-relativistic heavy ion collisions. Solution method: finite volume Godunov-type method. Running time: scales with the number of hydrodynamic cells; typical running times on Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz, single thread mode, 160

  20. Narrative-compression coding for a channel with errors. Professional paper for period ending June 1987

    SciTech Connect

    Bond, J.W.

    1988-01-01

    Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine if data-compression codes could be utilized to provide message compression in a channel with up to a 0.10 bit error rate. The data-compression capabilities of codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM-PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through comma-free code word assignments based on conditional probabilities of character occurrence.
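
    The error-propagation effect at the center of this study is easy to reproduce at toy scale: build a Huffman code, flip a single channel bit, and count how many decoded characters change before the decoder resynchronizes (a sketch; the study's comma-free codes, narrative files, and 58-character set are not reproduced here):

        import heapq
        from collections import Counter

        def huffman(freq):
            # Standard heap-based Huffman code construction.
            heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(freq.items())]
            heapq.heapify(heap)
            i = len(heap)
            while len(heap) > 1:
                lo, hi = heapq.heappop(heap), heapq.heappop(heap)
                for pair in lo[2:]:
                    pair[1] = "0" + pair[1]
                for pair in hi[2:]:
                    pair[1] = "1" + pair[1]
                heapq.heappush(heap, [lo[0] + hi[0], i] + lo[2:] + hi[2:])
                i += 1
            return dict((s, c) for s, c in heap[0][2:])

        def decode(bits, table):
            # Greedy decoding; valid because the code is prefix-free.
            inv, out, cur = {v: k for k, v in table.items()}, [], ""
            for b in bits:
                cur += b
                if cur in inv:
                    out.append(inv[cur])
                    cur = ""
            return "".join(out)

        text = "the quick brown fox jumps over the lazy dog " * 20
        table = huffman(Counter(text))
        bits = "".join(table[c] for c in text)
        flipped = bits[:100] + ("1" if bits[100] == "0" else "0") + bits[101:]
        good, bad = decode(bits, table), decode(flipped, table)
        wrong = sum(a != b for a, b in zip(good, bad)) + abs(len(good) - len(bad))
        print("characters in error after one bit error:", wrong)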

  1. Developing a weakly compressible smoothed particle hydrodynamics model for biological flows

    NASA Astrophysics Data System (ADS)

    Vasyliv, Yaroslav; Alexeev, Alexander

    2014-11-01

    Smoothed Particle Hydrodynamics (SPH) is a meshless particle method originally developed for astrophysics applications in 1977. Over the years, limitations of the original formulations have been addressed by different groups to extend the domain of SPH application. In biologically relevant internal flows, two of the several challenges still facing SPH are 1) treatment of inlet, outlet, and no slip boundary conditions and 2) treatment of second derivatives present in the viscous terms. In this work, we develop a 2D weakly compressible SPH (WCSPH) for simulating viscous internal flows which incorporates some of the recent advancements made by groups in the above two areas. The method is validated against several analytical and experimental benchmark solutions for both steady and unsteady laminar flows. In particular, the 2013 U.S. Food and Drug Administration benchmark test case for medical devices - steady forward flow through a nozzle with a sudden contraction and conical diffuser - is simulated for different Reynolds numbers in the laminar region and results are validated against the published experimental and CFD datasets. Support from the National Science Foundation Graduate Research Fellowship Program (NSF GRFP) is gratefully acknowledged.

  2. Design and Analysis of Fast Text Compression Based on Quasi-Arithmetic Coding.

    ERIC Educational Resources Information Center

    Howard, Paul G; Vitter, Jeffrey Scott

    1994-01-01

    Describes a detailed algorithm for fast text compression. Related to the PPM (prediction by partial matching) method, it simplifies the modeling phase by eliminating the escape mechanism and speeds up coding by using a combination of quasi-arithmetic coding and Rice coding. Details of the use of quasi-arithmetic code tables are given, and their…

  3. A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects

    NASA Astrophysics Data System (ADS)

    Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.

    2016-05-01

    Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gas, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. We do not support the use of the code for military purposes.

  4. Simulation of a ceramic impact experiment using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.; Schwalbe, L.A.

    1996-08-01

    We are developing statistically based, brittle-fracture models and are implementing them into hydrocodes that can be used for designing systems with components of ceramics, glass, and/or other brittle materials. Because of the advantages it has simulating fracture, we are working primarily with the smooth particle hydrodynamics code SPHINX. We describe a new brittle fracture model that we have implemented into SPHINX, and we discuss how the model differs from others. To illustrate the code's current capability, we simulate an experiment in which a tungsten rod strikes a target of heavily confined ceramic. Simulations in 3D at relatively coarse resolution yield poor results. However, 2D plane-strain approximations to the test produce crack patterns that are strikingly similar to the data, although the fracture model needs further refinement to match some of the finer details. We conclude with an outline of plans for continuing research and development.

  5. Investigating the Magnetorotational Instability with Dedalus, an Open-Source Hydrodynamics Code

    SciTech Connect

    Burns, Keaton J.

    2012-08-31

    The magnetorotational instability is a fluid instability that causes the onset of turbulence in discs with poloidal magnetic fields. It is believed to be an important mechanism in the physics of accretion discs, namely in its ability to transport angular momentum outward. A similar instability arising in systems with a helical magnetic field may be easier to produce in laboratory experiments using liquid sodium, but the applicability of this phenomenon to astrophysical discs is unclear. To explore and compare the properties of these standard and helical magnetorotational instabilities (MRI and HMRI, respectively), magnetohydrodynamic (MHD) capabilities were added to Dedalus, an open-source hydrodynamics simulator. Dedalus is a Python-based pseudospectral code that uses external libraries and parallelization with the goal of achieving speeds competitive with codes implemented in lower-level languages. This paper will outline the MHD equations as implemented in Dedalus, the steps taken to improve the performance of the code, and the status of MRI investigations using Dedalus.

  6. Test Compression for Robust Testable Path Delay Fault Testing Using Interleaving and Statistical Coding

    NASA Astrophysics Data System (ADS)

    Namba, Kazuteru; Ito, Hideo

    This paper proposes a method providing efficient test compression. The proposed method is for robust testable path delay fault testing with scan design facilitating two-pattern testing. In the proposed method, test data are interleaved before test compression using statistical coding. This paper also presents a test architecture for two-pattern testing using the proposed method. The proposed method is experimentally evaluated from several viewpoints such as compression rates, test application time, and area overhead. For robust testable path delay fault testing on 11 out of 20 ISCAS89 benchmark circuits, the proposed method provides better compression rates than existing methods such as Huffman coding, run-length coding, Golomb coding, frequency-directed run-length (FDR) coding, and variable-length input Huffman coding (VIHC).

  7. Space communication system for compressed data with a concatenated Reed-Solomon-Viterbi coding channel

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Hilbert, E. E. (Inventor)

    1976-01-01

    A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
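
    The role of the interleaver in such a concatenated scheme is to spread a burst of Viterbi-decoder errors across several Reed-Solomon codewords; a minimal block interleaver shows the idea (the dimensions are arbitrary):

        import numpy as np

        def interleave(symbols, rows, cols):
            # Write row-wise, read column-wise.
            return np.asarray(symbols).reshape(rows, cols).T.reshape(-1)

        def deinterleave(symbols, rows, cols):
            # Inverse of interleave: write column-wise, read row-wise.
            return np.asarray(symbols).reshape(cols, rows).T.reshape(-1)

        data = np.arange(24)                   # four length-6 "codewords"
        tx = interleave(data, 4, 6)
        rx = tx.copy()
        rx[8:12] = -1                          # a burst of 4 channel errors
        out = deinterleave(rx, 4, 6)
        print(np.where(out == -1)[0] // 6)     # burst spread over codewords 0..3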

  8. Prediction of material strength and fracture of glass using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.

    1994-08-01

    The design of many military devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics, that are used in armor packages; glass that is used in truck and jeep windshields and in helicopters; and rock and concrete that are used in underground bunkers. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass, and data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, the authors did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  9. Prediction of material strength and fracture of brittle materials using the SPHINX smooth particle hydrodynamics code

    SciTech Connect

    Mandell, D.A.; Wingate, C.A.; Stellingwwerf, R.F.

    1995-12-31

    The design of many devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics that are used in armor packages; glass that is used in windshields; and rock and concrete that are used in oil wells. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, they did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.

  10. A new multidimensional, energy-dependent two-moment transport code for neutrino-hydrodynamics

    NASA Astrophysics Data System (ADS)

    Just, O.; Obergaulinger, M.; Janka, H.-T.

    2015-11-01

    We present the new code ALCAR developed to model multidimensional, multienergy-group neutrino transport in the context of supernovae and neutron-star mergers. The algorithm solves the evolution equations of the zeroth- and first-order angular moments of the specific intensity, supplemented by an algebraic relation for the second-moment tensor to close the system. The scheme takes into account frame-dependent effects of the order O(v/c) as well as the most important types of neutrino interactions. The transport scheme is significantly more efficient than a multidimensional solver of the Boltzmann equation, while it is more accurate and consistent than the flux-limited diffusion method. The finite-volume discretization of the essentially hyperbolic system of moment equations employs methods well-known from hydrodynamics. For the time integration of the potentially stiff moment equations we employ a scheme in which only the local source terms are treated implicitly, while the advection terms are kept explicit, thereby allowing for an efficient computational parallelization of the algorithm. We investigate various problem set-ups in one and two dimensions to verify the implementation and to test the quality of the algebraic closure scheme. In our most detailed test, we compare a fully dynamic, one-dimensional core-collapse simulation with two published calculations performed with well-known Boltzmann-type neutrino-hydrodynamics codes and we find very satisfactory agreement.
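
    The algebraic closure idea can be shown concretely: compute a flux factor from the evolved moments and close the system with an Eddington factor. The interpolation below is the common Levermore-type M1 closure, used as an illustrative stand-in rather than as ALCAR's specific choice:

        import numpy as np

        def eddington_factor(f):
            # Levermore closure: chi(0) = 1/3 (diffusion), chi(1) = 1 (free streaming).
            return (3.0 + 4.0 * f**2) / (5.0 + 2.0 * np.sqrt(4.0 - 3.0 * f**2))

        def second_moment_1d(E, F, c=1.0):
            # P = chi(f) * E in slab geometry, with flux factor f = |F| / (c E).
            f = np.clip(np.abs(F) / (c * np.maximum(E, 1e-30)), 0.0, 1.0)
            return eddington_factor(f) * E

        E = np.array([1.0, 1.0, 1.0])
        F = np.array([0.0, 0.5, 1.0])          # opaque -> free streaming
        print(second_moment_1d(E, F))          # [0.333..., ~0.46, 1.0]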

  11. Comparison among five hydrodynamic codes with a diverging-converging nozzle experiment

    SciTech Connect

    L. E. Thode; M. C. Cline; B. G. DeVolder; M. S. Sahota; D. K. Zerkle

    1999-09-01

    A realistic open-cycle gas-core nuclear rocket simulation model must be capable of a self-consistent nozzle calculation in conjunction with coupled radiation and neutron transport in three spatial dimensions. As part of the development effort for such a model, five hydrodynamic codes were used to compare with a converging-diverging nozzle experiment. The codes used in the comparison are CHAD, FLUENT, KIVA2, RAMPANT, and VNAP2. Solution accuracy as a function of mesh size is important because, in the near term, a practical three-dimensional simulation model will require rather coarse zoning across the nozzle throat. In the study, four different grids were considered: (1) a coarse, radially uniform grid; (2) a coarse, radially nonuniform grid; (3) a fine, radially uniform grid; and (4) a fine, radially nonuniform grid. The study involves code verification, not prediction. In other words, the authors know the solution they want to match, so they can change methods and/or modify an algorithm to best match this class of problem. In this context, it was necessary to use the higher-order methods in both FLUENT and RAMPANT. In addition, KIVA2 required a modification that allows significantly more accurate solutions for a converging-diverging nozzle. From a predictive point of view, code accuracy with no tuning is an important result. The most accurate codes on a coarse grid, CHAD and VNAP2, did not require any tuning. The main comparison among the codes was the radial dependence of the Mach number across the nozzle throat. All five codes yielded very similar solutions with fine, radially uniform and radially nonuniform grids. However, the codes yielded significantly different solutions with coarse, radially uniform and radially nonuniform grids. For all the codes, radially nonuniform zoning across the throat significantly increased solution accuracy with a coarse mesh. None of the codes agrees in detail with the weak shock located downstream of the nozzle throat, but all the

  12. Evaluation of a Cray performance tool using a large hydrodynamics code

    SciTech Connect

    Lord, K.M.; Simmons, M.L.

    1992-06-01

    This paper discusses one of these automatic tools, developed recently by Cray Research, Inc. for use on its parallel supercomputers. The tool is called ATEXPERT; when used in conjunction with the Cray Fortran compiling system, CF77, it produces a parallelized version of a code based on loop-level parallelism, plus information to enable the programmer to optimize the parallelized code and improve performance. The information obtained through the use of the tool is presented in an easy-to-read graphical format, making the digestion of such a large quantity of data relatively easy and thus improving programmer productivity. In this paper we address the issues that we found when we took a large Los Alamos hydrodynamics code, PUEBLO, that was highly vectorizable but not parallelized, and used ATEXPERT to parallelize it. We show that, through the advice of ATEXPERT, bottlenecks in the code can be found, leading to improved performance. We also show the dependence of performance on problem size, and finally, we contrast the speedup predicted by ATEXPERT with that measured on a dedicated eight-processor Y-MP.
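
    Speedup predictions of this kind are, to first order, Amdahl's law applied to the measured serial/parallel split; a few lines make the relationship concrete (the parallel fractions below are examples, not PUEBLO measurements):

        def amdahl(p, n):
            # Predicted speedup when a fraction p of the runtime parallelizes
            # perfectly over n processors and the rest stays serial.
            return 1.0 / ((1.0 - p) + p / n)

        for p in (0.90, 0.95, 0.99):
            print(p, [round(amdahl(p, n), 2) for n in (2, 4, 8)])
        # Even at p = 0.95, eight processors yield only ~5.9x of the ideal 8x.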

  13. Semi-fixed-length motion vector coding for H.263-based low bit rate video compression.

    PubMed

    Côté, G; Gallant, M; Kossentini, F

    1999-01-01

    We present a semi-fixed-length motion vector coding method for H.263-based low bit rate video compression. The method exploits structural constraints within the motion field. The motion vectors are encoded using semi-fixed-length codes, yielding essentially the same levels of rate-distortion performance and subjective quality achieved by H.263's Huffman-based variable length codes in a noiseless environment. However, such codes provide substantially higher error resilience in a noisy environment. PMID:18267417

  14. MULTI2D - a computer code for two-dimensional radiation hydrodynamics

    NASA Astrophysics Data System (ADS)

    Ramis, R.; Meyer-ter-Vehn, J.; Ramírez, J.

    2009-06-01

    Simulation of radiation hydrodynamics in two spatial dimensions is developed, having in mind, in particular, target design for indirectly driven inertial fusion energy (IFE) and the interpretation of related experiments. Intense radiation pulses by laser or particle beams heat high-Z target configurations of different geometries and lead to a regime which is optically thick in some regions and optically thin in others. A diffusion description is inadequate in this situation. A new numerical code has been developed which describes hydrodynamics in two spatial dimensions (cylindrical R-Z geometry) and radiation transport along rays in three dimensions, with the 4π solid angle discretized in direction. Matter moves on a non-structured mesh composed of trilateral and quadrilateral elements. Radiation flux of a given direction enters on two (one) sides of a triangle and leaves on the opposite side(s) in proportion to the viewing angles, depending on the geometry. This scheme allows sharply edged beams to be propagated without ray tracing, though at the price of some lateral diffusion. The algorithm treats correctly both the optically thin and optically thick regimes. A symmetric semi-implicit (SSI) method is used to guarantee numerical stability. Program summary: Program title: MULTI2D Catalogue identifier: AECV_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 151 098 No. of bytes in distributed program, including test data, etc.: 889 622 Distribution format: tar.gz Programming language: C Computer: PC (32 bits architecture) Operating system: Linux/Unix RAM: 2 Mbytes Word size: 32 bits Classification: 19.7 External routines: X-window standard library (libX11.so) and corresponding heading files (X11/*.h) are

  15. Three-dimensional hydrodynamic Bondi-Hoyle accretion. 1: Code validation and stationary accretors

    NASA Technical Reports Server (NTRS)

    Ruffert, Maximilian

    1994-01-01

    We investigate the hydrodynamics of three-dimensional classical Bondi-Hoyle accretion. Totally absorbing stationary spheres of varying sizes (from 10.0 down to 0.02 Bondi radii) accrete matter from a homogeneous and slightly perturbed medium, which is taken to be an ideal gas (gamma = 5/3 or 1.2). To accommodate the long-range gravitational forces, the extent of the computational volume is typically a factor of 100 larger than the radius of the accretor. We compare the numerical mass accretion rates with the theoretical predictions of Bondi, to assess the validity of the code. The hydrodynamics is modeled by the piecewise parabolic method. No energy sources (nuclear burning) or sinks (radiation, conduction) are included. The resolution in the vicinity of the accretor is increased by multiply nesting several (6-8) grids around the stationary sphere, each finer grid being a factor of 2 smaller spatially than the next coarser grid. This allows us to include a coarse model for the surface of the accretor (vacuum sphere) on the finest grid while at the same time evolving the gas on the coarser grids. The accretion rates derived numerically are in very good agreement (to about 10% over several orders of magnitude) with the values given by Bondi for a stationary accretor within a hydrodynamic medium. However, the equations have to be changed in order to include the finite size of the accretor (in some cases very large compared to the sonic point or even to the Bondi radius).

  17. Theoretical study of use of optical orthogonal codes for compressed video transmission in optical code division multiple access (OCDMA) system

    NASA Astrophysics Data System (ADS)

    Ghosh, Shila; Chatterji, B. N.

    2007-09-01

    A theoretical investigation evaluating the performance of optical code division multiple access (OCDMA) for compressed video transmission is presented. OCDMA has many advantages over a typical synchronous protocol such as time division multiple access (TDMA). Pulsed-laser transmission of multichannel digital video can be done using various techniques, depending on whether the multichannel data are to be synchronous or asynchronous. A typical form of asynchronous digital operation is wavelength division multiplexing (WDM), in which the digital data of each video source are assigned a specific and separate wavelength. Sophisticated hardware, such as accurate wavelength control of all lasers and tunable narrow-band optical filters at the receivers, is required in this case. A major disadvantage of CDMA is the reduction in per-channel data rate (relative to the speeds available in the laser itself) incurred by the insertion of code addressing. Hence optical CDMA is meaningful for video transmission when individual channel video bit rates can be significantly reduced, which can be done by compression of the video data. In our work, the JPEG standard is implemented for video image compression, and a compression ratio of about 60% is obtained without noticeable image degradation. Compared to other existing techniques, the JPEG standard achieves a higher compression ratio with a high S/N ratio. We demonstrate the auto- and cross-correlation properties of the codes, and we show the implementation of bipolar Walsh coding in an OCDMA system and its use in the transmission of images/video.
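
    As an illustration of the Walsh codes discussed above (not code from the paper), the following numpy sketch builds bipolar Walsh codes via Sylvester's Hadamard construction and checks their zero-lag auto- and cross-correlation properties; the code length of 8 is an arbitrary choice.

      import numpy as np

      def walsh_codes(order):
          # Sylvester construction: H_{2n} = [[H, H], [H, -H]]; each row is one bipolar code.
          H = np.array([[1]])
          for _ in range(order):
              H = np.block([[H, H], [H, -H]])
          return H

      codes = walsh_codes(3)  # eight bipolar codes of length 8
      for i in range(len(codes)):
          for j in range(len(codes)):
              r = int(np.dot(codes[i], codes[j]))
              # Zero-lag autocorrelation equals the code length; cross-correlation vanishes.
              assert r == (len(codes[i]) if i == j else 0)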

  18. Context-based lossless image compression with optimal codes for discretized Laplacian distributions

    NASA Astrophysics Data System (ADS)

    Giurcaneanu, Ciprian Doru; Tabus, Ioan; Stanciu, Cosmin

    2003-05-01

    Lossless image compression has become an important research topic, especially in relation to the JPEG-LS standard. Recently, the techniques known for designing optimal codes for sources with infinite alphabets have been applied to quantized Laplacian sources, which have probability mass functions with two geometrically decaying tails. Due to the simple parametric model of the source distribution, the Huffman iterations can be carried out analytically, using the concept of a reduced source, and the final codes are obtained by a sequence of very simple arithmetic operations, avoiding the need to store coding tables. We propose the use of these (optimal) codes in conjunction with context-based prediction for noiseless compression of images. To further reduce the average code length, we design escape sequences to be employed when the estimation of the distribution parameter is unreliable. Results on standard test files show improvements in compression ratio when comparing with JPEG-LS.
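
    The paper's analytic Huffman construction for the two-sided geometric source is not reproduced here; as a simpler related illustration, the sketch below pairs a zigzag map (folding signed prediction residuals onto the non-negative integers) with a Golomb-Rice code, which is likewise optimal for geometric sources when its parameter matches the decay rate. The function names and the choice k=1 are illustrative assumptions.

      def zigzag(v):
          # Fold a two-sided (Laplacian-like) residual onto the non-negative integers.
          return 2 * v if v >= 0 else -2 * v - 1

      def golomb_rice(n, k):
          # Unary quotient followed by k fixed remainder bits (k >= 1); no code
          # tables need to be stored, mirroring the paper's motivation.
          q, r = n >> k, n & ((1 << k) - 1)
          return "1" * q + "0" + format(r, "0{}b".format(k))

      residuals = [0, -1, 3, -2, 1]
      bitstream = "".join(golomb_rice(zigzag(v), k=1) for v in residuals)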

  19. Research on compression and improvement of vertex chain code

    NASA Astrophysics Data System (ADS)

    Yu, Guofang; Zhang, Yujie

    2009-10-01

    Combined with Huffman encoding theory, the code 2, which has the highest emergence probability and continuation frequency, is represented by the single binary digit 0; combinations of codes 1 and 3 with higher emergence probability and continuation frequency are represented by the two binary digits 10, with a corresponding frequency code attached, whose length can be assigned beforehand or adapted automatically; and the codes 1 and 3 with the lowest emergence probability and continuation frequency are represented by the binary strings 110 and 111, respectively. Relative encoding efficiency and decoding efficiency are added to the current performance evaluation system for chain codes. The new chain code is compared with a current chain code using a test system programmed in VC++; the results show that the basic performance of the new chain code is significantly improved, and that the performance advantage grows with the size of the graphics.

  20. Introducing Flow-er: a Hydrodynamics Code for Relativistic and Newtonian Flows

    NASA Astrophysics Data System (ADS)

    Motl, P. M.; Tohline, J. E.; Lehner, L.

    2005-12-01

    We present a new numerical code (Flow-er) for calculating astrophysical flows in 1, 2 or 3 dimensions. We have implemented equations appropriate for the treatment of Newtonian gravity as well as the general relativistic formalism to treat flows with either a static or dynamic metric. The heart of the code is the recent non-oscillatory central difference scheme of Kurganov and Tadmor (2000; hereafter KT). With this technique, we do not require the characteristic decomposition or the solution of Riemann problems required by most other high-resolution, shock-capturing techniques. Furthermore, the KT scheme naturally incorporates the Method of Lines, allowing considerable flexibility in the choice of time integrators. We have implemented several interpolation kernels that allow us to choose the spatial accuracy of an evolution. Through the Cactus framework or as independent code, Flow-er serves as a driver for the hydrodynamical portion of a simulation utilizing adaptive mesh refinement or a unigrid. In addition to describing Flow-er, we present results from several test problems. We are pleased to acknowledge support for this work from the National Science Foundation through grants PHY-0326311 and AST-0407070.
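
    As a minimal sketch of the Kurganov-Tadmor central scheme mentioned above: with piecewise-constant reconstruction the KT numerical flux reduces to the local Lax-Friedrichs form below, shown here for Burgers' equation on a periodic grid. The grid size, CFL number and initial data are illustrative assumptions, not parameters from the paper.

      import numpy as np

      def kt_step(u, dx, dt, f=lambda u: 0.5 * u**2, fprime=lambda u: u):
          # One forward-Euler step of the central scheme for u_t + f(u)_x = 0.
          up = np.roll(u, -1)                                    # right neighbour (periodic)
          a = np.maximum(np.abs(fprime(u)), np.abs(fprime(up)))  # local wave speed
          flux = 0.5 * (f(u) + f(up)) - 0.5 * a * (up - u)       # F_{j+1/2}
          return u - dt / dx * (flux - np.roll(flux, 1))         # no Riemann solver needed

      N = 200
      dx = 1.0 / N
      u = np.sin(2 * np.pi * np.arange(N) * dx)    # smooth data steepening into a shock
      for _ in range(100):
          dt = 0.4 * dx / max(np.abs(u).max(), 1e-12)  # CFL-limited time step
          u = kt_step(u, dx, dt)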

  1. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. Proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, so the design of the coded apertures must take saturation into account. Saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. In this paper, the design of uniform adaptive grayscale coded apertures (UAGCA) is proposed to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements of up to 10 dB in the image reconstruction of the proposed method compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).

  2. Medical Image Compression Using a New Subband Coding Method

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug

    1995-01-01

    A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.

  3. Hydrodynamic code calculations of airblast for an explosive test in a shallow underground storage magazine

    NASA Astrophysics Data System (ADS)

    Kennedy, Lynn W.; Schneider, Kenneth D.

    1990-07-01

    A large-scale test of the detonation of 20,000 kilograms of high explosive inside a shallow underground tunnel/chamber complex, simulating an ammunition storage magazine, was carried out in August 1988 at the Naval Weapons Center, China Lake, California. The test was jointly sponsored by the U.S. Department of Defense Explosives Safety Board; the Safety Services Organisation of the Ministry of Defence, United Kingdom; and the Norwegian Defence Construction Service. The overall objective of the test was to determine the hazardous effects (debris, airblast, and ground motion) produced in this configuration. Actual storage magazines have considerably more overburden and are expected to contain an accidental detonation. The test configuration, on the other hand, was expected to rupture and to scatter a significant amount of rock, dirt and debris. Among the observations and measurements made in this test was a study of airblast propagation within the storage chamber, in the access tunnel, and outside on the tunnel ramp, prior to overburden venting. The results of these observations are being used to evaluate and validate current quantity-distance standards for the underground storage of munitions near inhabited structures. As part of the prediction effort for this test, to assist with transducer ranging in the access tunnel and with post-test interpretation of the results, S-CUBED was asked to perform two-dimensional inviscid hydrodynamic code calculations of the explosive detonation and subsequent blastwave propagation in the interior chamber and access tunnel. This was accomplished using the S-CUBED Hydrodynamic Advanced Research Code (SHARC). In this paper, details of the calculation configuration are presented and compared to the actual as-built internal configuration of the tunnel/chamber complex. Results from the calculations, including contour plots and airblast waveforms, are shown; the latter are compared with experimental records.

  4. Comparison study of EMG signals compression by methods transform using vector quantization, SPIHT and arithmetic coding.

    PubMed

    Ntsama, Eloundou Pascal; Colince, Welba; Ele, Pierre

    2016-01-01

    In this article, we present a comparative study of a new compression approach using the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). We seek the transform best suited to vector quantization for compressing EMG signals. To do this, we first combined vector quantization with the DCT, and then vector quantization with the DWT. The coding phase uses set partitioning in hierarchical trees (SPIHT) coding combined with arithmetic coding. The method is demonstrated and evaluated on actual EMG data. Objective performance evaluation metrics are presented: compression factor, percentage root-mean-square difference and signal-to-noise ratio. The results show that the DWT-based method is more efficient than the DCT-based method. PMID:27104132
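
    The evaluation metrics named above are standard; the following is a small sketch of how they are typically computed (the function names are ours, not the paper's):

      import numpy as np

      def prd(x, x_rec):
          # Percentage root-mean-square difference between original and reconstruction.
          return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

      def snr_db(x, x_rec):
          # Signal-to-noise ratio of the reconstruction, in decibels.
          return 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_rec) ** 2))

      def compression_factor(bits_original, bits_compressed):
          # Ratio of original to compressed size.
          return bits_original / bits_compressed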

  5. Speech coding and compression using wavelets and lateral inhibitory networks

    NASA Astrophysics Data System (ADS)

    Ricart, Richard

    1990-12-01

    The purpose of this thesis is to introduce the concept of lateral inhibition as a generalized technique for compressing time/frequency representations of electromagnetic and acoustical signals, particularly speech. This requires at least a rudimentary treatment of the theory of frames (which generalizes most commonly known time/frequency distributions), the biology of hearing, and digital signal processing. As such, this material, along with the interrelationships of the disparate subjects, is presented in a tutorial style. This may leave the mathematician longing for more rigor, the neurophysiological psychologist longing for more substantive support of the hypotheses presented, and the engineer longing for a reprieve from the theoretical barrage. Despite the problems that arise when trying to appeal to too wide an audience, this thesis should be a cogent analysis of the compression of time/frequency distributions via lateral inhibitory networks.

  6. Pulse code modulation data compression for automated test equipment

    SciTech Connect

    Navickas, T.A.; Jones, S.G.

    1991-05-01

    Development of automated test equipment for an advanced telemetry system requires continuous monitoring of PCM data while exercising telemetry inputs. This requirement leads to a large amount of data that must be stored and later analyzed. For example, a data stream of 4 Mbits/s and a test time of thirty minutes would yield 900 Mbytes of raw data. With this raw data, information needs to be stored to correlate the raw data to the test stimulus, leading to a total of 1.8 Gbytes of data to be stored and analyzed. There is no method to analyze this amount of data in a reasonable time. A data compression method is needed to reduce the amount of data collected to a reasonable amount. The solution was data reduction, accomplished by real-time limit checking, time stamping, and smart software. Limit checking was accomplished by an eight-state finite state machine and four compression algorithms. Time stamping was needed to correlate stimulus to the appropriate output for data reconstruction. The software was written in the C programming language, with a DOS extender used to allow it to run in extended mode. A 94-98% compression in the amount of data gathered was accomplished using this method.
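
    The abstract does not spell out the eight-state machine or the four algorithms, so the sketch below is only a two-state stand-in for the idea of limit checking with time stamping: a (time stamp, value) pair is recorded only when the signal enters or leaves its limits, so quiescent data is never stored.

      def reduce_stream(samples, lo, hi):
          # Record an event only at transitions across the [lo, hi] limits.
          events, in_limits = [], True
          for t, v in enumerate(samples):
              ok = lo <= v <= hi
              if ok != in_limits:
                  events.append((t, v))
                  in_limits = ok
          return events

      events = reduce_stream([0, 1, 9, 9, 1, 0], lo=0, hi=5)   # [(2, 9), (4, 1)]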

  7. Joint source-channel coding: secured and progressive transmission of compressed medical images on the Internet.

    PubMed

    Babel, Marie; Parrein, Benoît; Déforges, Olivier; Normand, Nicolas; Guédon, Jean-Pierre; Coat, Véronique

    2008-06-01

    The joint source-channel coding system proposed in this paper has two aims: lossless compression with a progressive mode and the integrity of medical data, which takes into account the priorities of the image and the properties of a network with no guaranteed quality of service. In this context, the use of scalable coding, locally adapted resolution (LAR) and a discrete and exact Radon transform, known as the Mojette transform, meets this twofold requirement. In this paper, details of this joint coding implementation are provided as well as a performance evaluation with respect to the reference CALIC coding and to unequal error protection using Reed-Solomon codes. PMID:18289830

  8. Introducing Flow-er: a Hydrodynamics Code for Relativistic and Newtonian Flows

    NASA Astrophysics Data System (ADS)

    Motl, Patrick; Olabarrieta, Ignacio; Tohline, Joel

    2006-04-01

    We present a new numerical code (Flow-er) for calculating astrophysical flows in 1, 2 or 3 dimensions. We have implemented equations appropriate for the treatment of Newtonian gravity as well as the general relativistic formalism to treat flows with either a static or dynamic metric. The heart of the code is the recent non-oscillatory central difference scheme of Kurganov and Tadmor (2000; hereafter KT). With this technique, we do not require the characteristic decomposition or the solution of Riemann problems required by most other high-resolution, shock-capturing techniques. Furthermore, the KT scheme naturally incorporates the Method of Lines, allowing considerable flexibility in the choice of time integrators. We have implemented several interpolation kernels that allow us to choose the spatial accuracy of an evolution. Flow-er has been tested against an independent implementation of the KT scheme to solve the relativistic equations in 1D, which we also describe. Flow-er can serve as the driver for the hydrodynamical portion of a simulation utilizing adaptive mesh refinement or a unigrid. In addition to describing Flow-er, we present results from several test problems.

  9. Priority-based error correction using turbo codes for compressed AIRS data

    NASA Astrophysics Data System (ADS)

    Gladkova, I.; Grossberg, M.; Grayver, E.; Olsen, D.; Nalli, N.; Wolf, W.; Zhou, L.; Goldberg, M.

    2006-08-01

    Errors due to wireless transmission can have an arbitrarily large impact on a compressed file. A single bit error appearing in the compressed file can propagate during the decompression procedure and destroy the entire granule. Such a loss is unacceptable since this data is critical for a range of applications, including weather prediction and emergency response planning. The impact of a bit error on the compressed granule is very sensitive to the error's location in the file. There is a natural hierarchy of compressed data in terms of impact on the final retrieval products. For the considered compression scheme, errors in some parts of the data yield no noticeable degradation in the final products. We formulate a priority scheme for the compressed data and present an error correction approach based on minimizing the impact on the retrieval products. Forward error correction (FEC) codes (e.g., turbo, LDPC) allow a tradeoff between error-correction strength and file inflation (bandwidth expansion). We propose segmenting the compressed data based on its priority and applying different-strength FEC codes to different segments. In this paper we demonstrate that this approach can achieve negligible product degradation while maintaining an overall 3-to-1 compression ratio on the final file. We apply this to AIRS sounder data to demonstrate viability for the sounder on the next-generation GOES-R platform.
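
    A toy sketch of the unequal-error-protection idea: here a simple repetition code stands in for the much stronger turbo/LDPC codes of the paper, and the priority split of the compressed granule is hypothetical.

      def rep3_encode(bits):
          # Repetition-3 as a stand-in for a strong FEC code.
          return "".join(b * 3 for b in bits)

      def rep3_decode(bits):
          # Majority vote over each 3-bit group.
          return "".join("1" if bits[i:i + 3].count("1") >= 2 else "0"
                         for i in range(0, len(bits), 3))

      high_priority, low_priority = "1011", "0110011"  # hypothetical priority segments
      tx = rep3_encode(high_priority) + low_priority   # only critical bits pay the bandwidth cost
      assert rep3_decode(tx[:12]) == high_priority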

  10. FORCE2: A state-of-the-art two-phase code for hydrodynamic calculations

    SciTech Connect

    Ding, Jianmin; Lyczkowski, R.W.; Burge, S.W.

    1993-02-01

    A three-dimensional computer code for two-phase flow named FORCE2 has been developed by Babcock and Wilcox (B & W) in close collaboration with Argonne National Laboratory (ANL). FORCE2 is capable of both transient as well as steady-state simulations. This Cartesian-coordinates computer program is a finite control volume, industrial grade and quality embodiment of the pilot-scale FLUFIX/MOD2 code and contains features such as three-dimensional blockages, volume and surface porosities to account for various obstructions in the flow field, and distributed resistance modeling to account for pressure drops caused by baffles, distributor plates and large tube banks. Recently computed results demonstrated the significance of and necessity for three-dimensional models of hydrodynamics and erosion. This paper describes the process whereby ANL's pilot-scale FLUFIX/MOD2 models and numerics were implemented into FORCE2. A description of the quality control to assess the accuracy of the new code and the validation using some of the measured data from the Illinois Institute of Technology (IIT) and the University of Illinois at Urbana-Champaign (UIUC) are given. It is envisioned that one day FORCE2, with additional modules such as radiation heat transfer, combustion kinetics and multi-solids, together with user-friendly pre- and post-processor software, and tailored for massively parallel multiprocessor shared-memory computational platforms, will be used by industry and researchers to assist in reducing and/or eliminating the environmental and economic barriers which limit full consideration of coal, shale and biomass as energy sources, to retain energy security, and to remediate waste and ecological problems.

  12. Multispectral data compression through transform coding and block quantization

    NASA Technical Reports Server (NTRS)

    Ready, P. J.; Wintz, P. A.

    1972-01-01

    Transform coding and block quantization techniques are applied to multispectral aircraft scanner data and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model is proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single-sample PCM encoder.
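
    A minimal numpy sketch of the Karhunen-Loeve transform used as a decorrelator (the synthetic four-band data are an illustrative assumption): projecting onto the eigenvectors of the band covariance compacts the energy into the leading components, which is what makes subsequent block quantization efficient.

      import numpy as np

      def klt(pixels):
          # Rows are pixels, columns are spectral bands.
          mean = pixels.mean(axis=0)
          centered = pixels - mean
          eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
          basis = eigvecs[:, np.argsort(eigvals)[::-1]]  # strongest component first
          return centered @ basis, basis, mean

      rng = np.random.default_rng(0)
      bands = rng.normal(size=(1000, 1)) @ np.ones((1, 4)) + 0.1 * rng.normal(size=(1000, 4))
      coeffs, basis, mean = klt(bands)
      # coeffs.var(axis=0) now concentrates almost all variance in the first component.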

  13. User manual for INVICE 0.1-beta: a computer code for inverse analysis of isentropic compression experiments.

    SciTech Connect

    Davis, Jean-Paul

    2005-03-01

    INVICE (INVerse analysis of Isentropic Compression Experiments) is a FORTRAN computer code that implements the inverse finite-difference method to analyze velocity data from isentropic compression experiments. This report gives a brief description of the methods used and the options available in the first beta version of the code, as well as instructions for using the code.

  14. Ultraspectral sounder data compression using error-detecting reversible variable-length coding

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Ahuja, Alok; Huang, Hung-Lung; Schmit, Timothy J.; Heymann, Roger W.

    2005-08-01

    Nonreversible variable-length codes (e.g. Huffman coding, Golomb-Rice coding, and arithmetic coding) have been used in source coding to achieve efficient compression. However, a single bit error during noisy transmission can cause many codewords to be misinterpreted by the decoder. In recent years, increasing attention has been given to the design of reversible variable-length codes (RVLCs) for better data transmission in error-prone environments. RVLCs allow instantaneous decoding in both directions, which affords better detection of bit errors due to synchronization losses over a noisy channel. RVLCs have been adopted in emerging video coding standards--H.263+ and MPEG-4--to enhance their error-resilience capabilities. Given the large volume of three-dimensional data that will be generated by future space-borne ultraspectral sounders (e.g. IASI, CrIS, and HES), the use of error-robust data compression techniques will be beneficial to satellite data transmission. In this paper, we investigate a reversible variable-length code for ultraspectral sounder data compression, and present numerical experiments on error propagation for the ultraspectral sounder data. The results show that the RVLC provides significantly better error containment than JPEG2000 Part 2.
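
    A toy illustration of reversibility (the codebook is ours; actual H.263+/MPEG-4 RVLC tables differ): palindromic codewords that form a prefix-free set are automatically suffix-free, so the same greedy decoder works on the bitstream read in either direction.

      CODEBOOK = {"a": "0", "b": "11", "c": "101", "d": "1001"}  # palindromic, prefix-free

      def decode(bits, codebook):
          inverse, out, cur = {v: k for k, v in codebook.items()}, [], ""
          for bit in bits:
              cur += bit
              if cur in inverse:          # greedy match is safe for prefix-free codes
                  out.append(inverse[cur])
                  cur = ""
          return "".join(out)

      bits = "".join(CODEBOOK[s] for s in "abcd")
      assert decode(bits, CODEBOOK) == "abcd"        # forward decoding
      assert decode(bits[::-1], CODEBOOK) == "dcba"  # backward decoding after a sync loss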

  15. Application of grammar-based codes for lossless compression of digital mammograms

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli; Krishnan, Srithar; Ma, Ngok-Wah

    2006-01-01

    A newly developed grammar-based lossless source coding theory and its implementation were proposed in 1999 and 2000, respectively, by Yang and Kieffer. The code first transforms the original data sequence into an irreducible context-free grammar, which is then compressed using arithmetic coding. In the study of grammar-based coding for mammography applications, we encountered two issues: processing time and the limited number of single-character grammar variables G. For the first issue, we discovered a feature that can simplify the search for matching subsequences in the irreducible grammar transform process. Using this discovery, an extended grammar code technique is proposed, and the processing time of the grammar code can be significantly reduced. For the second issue, we propose the use of double-character symbols to increase the number of grammar variables. Under the condition that all the G variables have the same probability of being used, our analysis shows that the double- and single-character approaches have the same compression rates. Using the proposed methods, we show that the grammar code can outperform three other schemes (Lempel-Ziv-Welch (LZW), arithmetic, and Huffman) in compression ratio, and has error tolerance capabilities similar to those of LZW coding under similar circumstances.

  16. Onset of hydrodynamic mix in high-velocity, highly compressed inertial confinement fusion implosions.

    PubMed

    Ma, T; Patel, P K; Izumi, N; Springer, P T; Key, M H; Atherton, L J; Benedetti, L R; Bradley, D K; Callahan, D A; Celliers, P M; Cerjan, C J; Clark, D S; Dewald, E L; Dixit, S N; Döppner, T; Edgell, D H; Epstein, R; Glenn, S; Grim, G; Haan, S W; Hammel, B A; Hicks, D; Hsing, W W; Jones, O S; Khan, S F; Kilkenny, J D; Kline, J L; Kyrala, G A; Landen, O L; Le Pape, S; MacGowan, B J; Mackinnon, A J; MacPhee, A G; Meezan, N B; Moody, J D; Pak, A; Parham, T; Park, H-S; Ralph, J E; Regan, S P; Remington, B A; Robey, H F; Ross, J S; Spears, B K; Smalyuk, V; Suter, L J; Tommasini, R; Town, R P; Weber, S V; Lindl, J D; Edwards, M J; Glenzer, S H; Moses, E I

    2013-08-23

    Deuterium-tritium inertial confinement fusion implosion experiments on the National Ignition Facility have demonstrated yields ranging from 0.8 to 7×10^14, and record fuel areal densities of 0.7 to 1.3 g/cm^2. These implosions use hohlraums irradiated with shaped laser pulses of 1.5-1.9 MJ energy. The laser peak power and duration at peak power were varied, as were the capsule ablator dopant concentrations and shell thicknesses. We quantify the level of hydrodynamic instability mix of the ablator into the hot spot from the measured elevated absolute x-ray emission of the hot spot. We observe that DT neutron yield and ion temperature decrease abruptly as the hot spot mix mass increases above several hundred ng. The comparison with radiation-hydrodynamic modeling indicates that low mode asymmetries and increased ablator surface perturbations may be responsible for the current performance. PMID:24010449

  17. Non-US data compression and coding research. FASAC Technical Assessment Report

    SciTech Connect

    Gray, R.M.; Cohn, M.; Craver, L.W.; Gersho, A.; Lookabaugh, T.; Pollara, F.; Vetterli, M.

    1993-11-01

    This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.

  18. Research on spatial coding compressive spectral imaging and its applicability for rural survey

    NASA Astrophysics Data System (ADS)

    Chen, Yuheng; Ji, Yiqun; Zhou, Jiankang; Chen, Xinhua; Shen, Weimin

    Compressive spectral imaging combines traditional spectral imaging methods with the new concept of compressive sensing, and thus has advantages such as reduced acquisition data volume, snapshot imaging over a large field of view and increased image signal-to-noise ratio; its preliminary effectiveness has been explored in early applications such as high-speed imaging and fluorescence imaging. In this paper, the application potential of the spatial coding compressive spectral imaging technique for rural survey is examined. The physical model for spatial coding compressive spectral imaging is built, its data flow is analyzed and its data reconstruction problem is formulated. Existing sparse reconstruction methods are reviewed, and a module based on the two-step iterative shrinkage/thresholding (TwIST) algorithm is built to perform the imaging data reconstruction. A simulated imaging experiment based on AVIRIS visible-band data of a selected rural scene is carried out. The spatial identification and spectral feature extraction capability for different ground species is evaluated by visual inspection of both single-band images and spectral curves. Data fidelity evaluation parameters (RMSE and PSNR) are put forward to verify quantitatively the fidelity of this compressive imaging method. The application potential of spatial coding compressive spectral imaging for rural survey, crop monitoring, vegetation inspection and further agricultural development needs is verified.
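
    TwIST itself is a two-step accelerated variant of the basic iterative shrinkage/thresholding loop sketched below; the sizes, sparsity and random sensing matrix are illustrative assumptions, not the paper's setup.

      import numpy as np

      def ista(A, y, lam=0.1, n_iter=200):
          # Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by gradient step + soft threshold.
          L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              z = x - A.T @ (A @ x - y) / L
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
          return x

      rng = np.random.default_rng(1)
      A = rng.normal(size=(60, 200)) / np.sqrt(60)   # random stand-in for the coded projections
      x_true = np.zeros(200)
      x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]        # sparse scene
      x_hat = ista(A, A @ x_true, lam=0.05)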

  19. Energy requirements for quantum data compression and 1-1 coding

    SciTech Connect

    Rallan, Luke; Vedral, Vlatko

    2003-10-01

    By looking at quantum data compression in the second quantization, we present a model for the efficient generation and use of variable length codes. In this picture, lossless data compression can be seen as the minimum energy required to faithfully represent or transmit classical information contained within a quantum state. In order to represent information, we create quanta in some predefined modes (i.e., frequencies) prepared in one of the two possible internal states (the information carrying degrees of freedom). Data compression is now seen as the selective annihilation of these quanta, the energy of which is effectively dissipated into the environment. As any increase in the energy of the environment is intricately linked to any information loss and is subject to Landauer's erasure principle, we use this principle to distinguish lossless and lossy schemes and to suggest bounds on the efficiency of our lossless compression protocol. In line with the work of Bostroem and Felbinger [Phys. Rev. A 65, 032313 (2002)], we also show that when using variable length codes the classical notions of prefix or uniquely decipherable codes are unnecessarily restrictive given the structure of quantum mechanics and that a 1-1 mapping is sufficient. In the absence of this restraint, we translate existing classical results on 1-1 coding to the quantum domain to derive a new upper bound on the compression of quantum information. Finally, we present a simple quantum circuit to implement our scheme.

  20. A Test Data Compression Scheme Based on Irrational Numbers Stored Coding

    PubMed Central

    Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan

    2014-01-01

    The testing problem has become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS) coding, is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers into irrational numbers is given. Experimental results for some ISCAS 89 benchmarks show that the compression achieved by the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL. PMID:25258744

  1. Embedded zeroblock coding algorithm based on KLT and wavelet transform for hyperspectral image compression

    NASA Astrophysics Data System (ADS)

    Hou, Ying

    2009-10-01

    In this paper, a hyperspectral image lossy coder using the three-dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm based on the Karhunen-Loève transform (KLT) and the wavelet transform (WT) is proposed. This coding scheme adopts a 1D KLT as the spectral decorrelator and a 2D WT as the spatial decorrelator. Furthermore, the computational complexity and the coding performance of the low-complexity KLT are compared and evaluated. In comparison with several state-of-the-art coding algorithms, experimental results indicate that our coder achieves better lossy compression performance.

  2. Global Time Dependent Solutions of Stochastically Driven Standard Accretion Disks: Development of Hydrodynamical Code

    NASA Astrophysics Data System (ADS)

    Wani, Naveel; Maqbool, Bari; Iqbal, Naseer; Misra, Ranjeev

    2016-07-01

    X-ray binaries and AGNs are powered by accretion discs around compact objects, where the x-rays are emitted from the inner regions and UV emission arises from the relatively cooler outer parts. There is increasing evidence that the variability of the x-rays on different timescales is caused by stochastic fluctuations in the accretion disc at different radii. Although these fluctuations arise in the outer parts of the disc, they propagate inwards to give rise to x-ray variability, and hence provide a natural connection between the x-ray and UV variability. There are analytical expressions for understanding the effect of these stochastic variabilities qualitatively, but quantitative predictions are only possible through a detailed hydrodynamical study of the global time-dependent solution of the standard accretion disc. We have developed a numerically efficient code to incorporate all these effects, which considers gas-pressure-dominated solutions and stochastic fluctuations, including the boundary effect of the last stable orbit.

  3. Numerical Simulation of Supersonic Compression Corners and Hypersonic Inlet Flows Using the RPLUS2D Code

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1994-01-01

    A two-dimensional computational code, RPLUS2D, which was developed for the reactive propulsive flows of ramjets and scramjets, was validated for two-dimensional shock-wave/turbulent-boundary-layer interactions. The problem of compression corners at supersonic speeds was solved using the RPLUS2D code. To validate the RPLUS2D code for hypersonic speeds, it was applied to a realistic hypersonic inlet geometry. Both the Baldwin-Lomax and the Chien two-equation turbulence models were used. Computational results showed that the RPLUS2D code compared very well with experimentally obtained data for supersonic compression corner flows, except in the case of large separated flows resulting from the interactions between the shock wave and the turbulent boundary layer. The computational results also compared well with experimental results in a hypersonic NASA P8 inlet case, with the Chien two-equation turbulence model performing better than the Baldwin-Lomax model.

  4. Research on Differential Coding Method for Satellite Remote Sensing Data Compression

    NASA Astrophysics Data System (ADS)

    Lin, Z. J.; Yao, N.; Deng, B.; Wang, C. Z.; Wang, J. H.

    2012-07-01

    Data compression is of great concern for improving the efficiency of satellite Earth data transmission. The information amounts inherent to remote sensing images provide a foundation for data compression in terms of information theory. In particular, the distinct degrees of uncertainty inherent to distinct land covers result in different information amounts. This paper first proposes a lossless differential encoding method to improve compression rates. A district forecast differential encoding method is then proposed to improve the compression rates further. Considering that stereo measurements in modern photogrammetry are basically accomplished by means of automatic stereo image matching, an edge-protection operator is finally utilized to filter out high-frequency noise appropriately, which helps magnify the signals and further improve the compression rates. The three steps were applied to a Landsat TM multispectral image and a set of SPOT-5 panchromatic images of four typical land cover types (i.e., urban areas, farmland, mountain areas and water bodies). Results revealed that the average code lengths obtained by the differential encoding method were closer to the information amounts inherent to remote sensing images than those of Huffman encoding, and the compression rates were improved to some extent. Furthermore, the compression rates of the four land cover images obtained by the district forecast differential encoding method were nearly doubled. As for the images with edge features preserved, the compression rates are on average four times those of the original images.
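
    A minimal sketch of the first-order differential (delta) encoding step that underlies the method: residuals cluster near zero, which shortens the average code length of a subsequent entropy coder. The district forecast refinement and edge-protection filtering of the paper are not reproduced here.

      import numpy as np

      def delta_encode(row):
          # Keep the first sample, then store successive differences.
          res = np.empty_like(row)
          res[0] = row[0]
          res[1:] = row[1:] - row[:-1]
          return res

      def delta_decode(res):
          return np.cumsum(res)

      row = np.array([120, 121, 119, 119, 125], dtype=np.int32)
      assert np.array_equal(delta_decode(delta_encode(row)), row)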

  5. THC: a new high-order finite-difference high-resolution shock-capturing code for special-relativistic hydrodynamics

    NASA Astrophysics Data System (ADS)

    Radice, D.; Rezzolla, L.

    2012-11-01

    We present THC: a new high-order flux-vector-splitting code for Newtonian and special-relativistic hydrodynamics designed for direct numerical simulations of turbulent flows. Our code implements a variety of different reconstruction algorithms, such as the popular weighted essentially non-oscillatory and monotonicity-preserving schemes, or the more specialised bandwidth-optimised WENO scheme that has been specifically designed for the study of compressible turbulence. We show the first systematic comparison of these schemes in Newtonian physics as well as for special-relativistic flows. In particular, we present the results obtained in simulations of grid-aligned and oblique shock waves and nonlinear, large-amplitude, smooth adiabatic waves. We also discuss the results obtained in classical benchmarks such as the double-Mach shock reflection test in Newtonian physics and the linear and nonlinear development of the relativistic Kelvin-Helmholtz instability in two and three dimensions. Finally, we study the turbulent flow induced by the Kelvin-Helmholtz instability and show that our code is able to obtain well-converged velocity spectra, from which we benchmark the effective resolution of the different schemes.

  6. Implementation of a simple model for linear and nonlinear mixing at unstable fluid interfaces in hydrodynamics codes

    SciTech Connect

    Ramshaw, J D

    2000-10-01

    A simple model was recently described for predicting the time evolution of the width of the mixing layer at an unstable fluid interface [J. D. Ramshaw, Phys. Rev. E 58, 5834 (1998); ibid. 61, 5339 (2000)]. The ordinary differential equations of this model have been heuristically generalized into partial differential equations suitable for implementation in multicomponent hydrodynamics codes. The central ingredient in this generalization is a non-diffusional expression for the species mass fluxes. These fluxes describe the relative motion of the species, and thereby determine the local mixing rate and spatial distribution of mixed fluid as a function of time. The generalized model has been implemented in a two-dimensional hydrodynamics code. The model equations and implementation procedure are summarized, and comparisons with experimental mixing data are presented.

  7. [A quality controllable algorithm for ECG compression based on wavelet transform and ROI coding].

    PubMed

    Zhao, An; Wu, Baoming

    2006-12-01

    This paper presents an ECG compression algorithm based on the wavelet transform and region-of-interest (ROI) coding. The algorithm realizes near-lossless coding in the ROI and quality-controllable lossy coding outside the ROI. After mean removal from the original signal, a multi-layer orthogonal discrete wavelet transform is performed. Simultaneously, feature extraction is performed on the original signal to find the position of the ROI. The coefficients related to the ROI are treated as important coefficients and kept. Otherwise, the energy loss in the transform domain is calculated according to the goal PRDBE (Percentage Root-mean-square Difference with Baseline Eliminated), and the threshold for the coefficients outside the ROI is then determined according to this energy loss. The important coefficients, which include the coefficients of the ROI and the coefficients larger than the threshold outside the ROI, are put into a linear quantizer. The map recording the positions of the important coefficients in the original wavelet coefficient vector is compressed with a run-length encoder. Huffman coding is applied to improve the compression ratio. ECG signals taken from the MIT/BIH arrhythmia database are tested, and satisfactory results in terms of clinical information preservation, quality and compression ratio are obtained. PMID:17228703

  8. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    NASA Astrophysics Data System (ADS)

    Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.

    2013-08-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at its transmission over a telecommunication channel. The proposed ECG compression algorithm is built on the wavelet transform, leading to low/high-frequency component separation; high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients, producing a lower-variance signal. The latter is coded using Huffman encoding, yielding an optimal coding length in terms of the average number of bits per sample. At the receiver end, with the assumption of an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, inverse linear predictive coding filtering and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are, respectively, around 1:8 and 7%. Besides the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal reconstruction, where the different ECG waves are recovered correctly.
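
    A compact sketch of the Huffman stage (the toy coefficient sequence is an assumption; the wavelet and linear-predictive stages are not reproduced):

      import heapq
      from collections import Counter

      def huffman_code(symbols):
          # Repeatedly merge the two least probable subtrees; prepending 0/1 at
          # each merge yields an optimal prefix code.
          heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(Counter(symbols).items())]
          heapq.heapify(heap)
          tie = len(heap)
          while len(heap) > 1:
              w1, _, c1 = heapq.heappop(heap)
              w2, _, c2 = heapq.heappop(heap)
              merged = {s: "0" + c for s, c in c1.items()}
              merged.update({s: "1" + c for s, c in c2.items()})
              heapq.heappush(heap, [w1 + w2, tie, merged])
              tie += 1
          return heap[0][2]

      coeffs = [0, 0, 0, 1, -1, 0, 2, 0, 1, 0]   # toy quantized coefficients
      table = huffman_code(coeffs)
      bitstream = "".join(table[s] for s in coeffs)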

  9. Random wavelet transforms, algebraic geometric coding, and their applications in signal compression and de-noising

    SciTech Connect

    Bieleck, T.; Song, L.M.; Yau, S.S.T.; Kwong, M.K.

    1995-07-01

    The concepts of random wavelet transforms and discrete random wavelet transforms are introduced. It is shown that these transforms can lead to simultaneous compression and de-noising of signals that have been corrupted with fractional noises. Potential applications of algebraic geometric coding theory to encode the ensuing data are also discussed.

  10. Ultraspectral sounder data compression using the non-exhaustive Tunstall coding

    NASA Astrophysics Data System (ADS)

    Wei, Shih-Chieh; Huang, Bormin

    2008-08-01

    With its bulky volume, ultraspectral sounder data might still suffer a few bit errors after channel coding. It is therefore beneficial to incorporate some mechanism for error containment in the source coding. The Tunstall code is a variable-to-fixed length code which can reduce the error propagation encountered in fixed-to-variable length codes like Huffman and arithmetic codes. The original Tunstall code uses an exhaustive parse tree in which internal nodes extend every symbol in branching. This might result in the assignment of precious codewords to less probable parse strings. Based on an infinitely extended parse tree, a modified Tunstall code is proposed which grows an optimal non-exhaustive parse tree by assigning the complete codewords only to the top-probability nodes in the infinite tree. A comparison is made among the original exhaustive Tunstall code, our modified non-exhaustive Tunstall code, the CCSDS Rice code, and JPEG-2000 in terms of compression ratio and percent error rate using the ultraspectral sounder data.
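
    For reference, a sketch of the original exhaustive Tunstall construction that the paper modifies; the alphabet and probabilities are illustrative.

      import heapq

      def tunstall(probs, codeword_bits):
          # Repeatedly expand the most probable leaf of the parse tree until all
          # 2**codeword_bits fixed-length codewords would be used up.
          max_leaves = 2 ** codeword_bits
          leaves = [(-p, s) for s, p in probs.items()]   # max-heap via negated probabilities
          heapq.heapify(leaves)
          while len(leaves) + len(probs) - 1 <= max_leaves:
              p, s = heapq.heappop(leaves)
              for sym, ps in probs.items():
                  heapq.heappush(leaves, (p * ps, s + sym))
          return sorted(s for _, s in leaves)

      book = tunstall({"a": 0.7, "b": 0.3}, codeword_bits=3)
      # Each parse string maps to one fixed 3-bit codeword, so a bit error corrupts
      # a single codeword without desynchronising the rest of the stream.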

  11. Data compression in wireless sensors network using MDCT and embedded harmonic coding.

    PubMed

    Alsalaet, Jaafar K; Ali, Abduladhem A

    2015-05-01

    One of the major applications of wireless sensor networks (WSNs) is vibration measurement for the purpose of structural health monitoring and machinery fault diagnosis. WSNs have many advantages over wired networks, such as low cost and reduced setup time. However, the useful bandwidth is limited compared to wired networks, resulting in relatively low sampling rates. One solution to this problem is data compression which, in addition to enhancing the effective sampling rate, saves valuable power in the wireless nodes. In this work, a data compression scheme based on the Modified Discrete Cosine Transform (MDCT) followed by Embedded Harmonic Components Coding (EHCC) is proposed to compress vibration signals. The EHCC is applied to exploit the harmonic redundancy present in most vibration signals, resulting in an improved compression ratio. The scheme is made suitable for the tiny hardware of wireless nodes and is shown to be fast and effective. The efficiency of the proposed scheme is investigated by conducting several experimental tests. PMID:25541332

  12. Radiological image compression using error-free irreversible two-dimensional direct-cosine-transform coding techniques.

    PubMed

    Huang, H K; Lo, S C; Ho, B K; Lou, S L

    1987-05-01

    Some error-free and irreversible two-dimensional direct-cosine-transform (2D-DCT) coding image-compression techniques applied to radiological images are discussed in this paper. Run-length coding and Huffman coding are described, and examples are given for error-free image compression. In the case of irreversible 2D-DCT coding, the block-quantization technique and the full-frame bit-allocation (FFBA) technique are described. Error-free image compression can achieve a compression ratio from 2:1 to 3:1, whereas the irreversible 2D-DCT coding compression technique can, in general, achieve a much higher acceptable compression ratio. The currently available block-quantization hardware may lead to visible block artifacts at certain compression ratios, but FFBA may be employed at the same or higher compression ratios without generating such artifacts. An even higher compression ratio can be achieved if the image is compressed by using first FFBA and then Huffman coding. The disadvantages of FFBA are that it is sensitive to sharp edges and that no hardware is available. This paper also describes the design of the FFBA technique. PMID:3598750
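
    A sketch of the block-quantization idea in the irreversible 2D-DCT path (block size, coefficient budget and the synthetic image are illustrative assumptions): keeping only the largest-magnitude coefficients per block is the irreversible step, and coarse budgets produce exactly the block artifacts the paper mentions.

      import numpy as np
      from scipy.fft import dctn, idctn

      def block_dct_compress(img, keep=10, block=8):
          # Transform each block, zero all but the `keep` largest coefficients, invert.
          out = np.zeros(img.shape)
          for i in range(0, img.shape[0], block):
              for j in range(0, img.shape[1], block):
                  c = dctn(img[i:i + block, j:j + block], norm="ortho")
                  cutoff = np.sort(np.abs(c), axis=None)[-keep]
                  c[np.abs(c) < cutoff] = 0.0
                  out[i:i + block, j:j + block] = idctn(c, norm="ortho")
          return out

      rng = np.random.default_rng(0)
      image = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)  # smooth test image
      recon = block_dct_compress(image)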

  13. Compression and Encryption of ECG Signal Using Wavelet and Chaotically Huffman Code in Telemedicine Application.

    PubMed

    Raeiatibanadkooki, Mahsa; Quchani, Saeed Rahati; KhalilZade, MohammadMahdi; Bahaadinbeigy, Kambiz

    2016-03-01

    In mobile health care monitoring, compression is an essential tool for solving storage and transmission problems. The important issue is being able to recover the original signal from the compressed signal. The main purpose of this paper is to compress the ECG signal with no loss of essential data and also to encrypt the signal to keep it confidential from everyone except physicians. In this paper, mobile processors are used and there is no need for any computers to serve this purpose. After initial preprocessing, such as removal of baseline noise and Gaussian noise, peak detection and determination of heart rate, the ECG signal is compressed. In the compression stage, after three steps of wavelet transform (db04), thresholding techniques are used. Then, Huffman coding with chaos is used for compression and encryption of the ECG signal. The compression rate of the proposed algorithm is 97.72%. The ECG signals are then sent to a telemedicine center for specialist diagnosis over the TCP/IP protocol. PMID:26779641

  14. Split field coding: low complexity error-resilient entropy coding for image compression

    NASA Astrophysics Data System (ADS)

    Meany, James J.; Martens, Christopher J.

    2008-08-01

    In this paper, we describe split field coding, an approach for low complexity, error-resilient entropy coding which splits code words into two fields: a variable length prefix and a fixed length suffix. Once a prefix has been decoded correctly, then the associated fixed length suffix is error-resilient, with bit errors causing no loss of code word synchronization and only a limited amount of distortion on the decoded value. When the fixed length suffixes are segregated to a separate block, this approach becomes suitable for use with a variety of methods which provide varying protection to different portions of the bitstream, such as unequal error protection or progressive ordering schemes. Split field coding is demonstrated in the context of a wavelet-based image codec, with examples of various error resilience properties, and comparisons to the rate-distortion and computational performance of JPEG 2000.
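
    One plausible realization of the prefix/suffix split (the actual code tables are not given in the abstract): the unary prefix selects a magnitude class, and the class's fixed-length suffix carries the low-order bits, so a flipped suffix bit perturbs one value without desynchronising the stream.

      def split_field_encode(n):
          # Variable-length prefix = magnitude class; fixed-length suffix = low bits.
          if n == 0:
              return "0", ""
          k = n.bit_length()
          prefix = "1" * k + "0"
          suffix = format(n, "b")[1:]   # k - 1 bits below the implicit leading 1
          return prefix, suffix

      # Segregating the suffixes into their own block, as the paper suggests, lets
      # an unequal-error-protection scheme guard the fragile prefixes more strongly.
      pairs = [split_field_encode(v) for v in (0, 1, 5, 12)]
      prefix_block = "".join(p for p, _ in pairs)
      suffix_block = "".join(s for _, s in pairs)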

  15. Compression performance of HEVC and its format range and screen content coding extensions

    NASA Astrophysics Data System (ADS)

    Li, Bin; Xu, Jizheng; Sullivan, Gary J.

    2015-09-01

    This paper presents a comparison-based test of the objective compression performance of the High Efficiency Video Coding (HEVC) standard, its format range extensions (RExt), and its draft screen content coding extensions (SCC). The current dominant standard, H.264/MPEG-4 AVC, is used as an anchor reference in the comparison. The conditions used for the comparison tests were designed to reflect relevant application scenarios and to enable a fair comparison to the maximum extent feasible - i.e., using comparable quantization settings, reference frame buffering, intra refresh periods, rate-distortion optimization decision processing, etc. It is noted that such PSNR-based objective comparisons generally provide more conservative estimates of HEVC benefit than are found in subjective studies. The experimental results show that, when compared with H.264/MPEG-4 AVC, HEVC version 1 provides a bit rate savings for equal PSNR of about 23% for all-intra coding, 34% for random access coding, and 38% for low-delay coding. This is consistent with prior studies and the general characterization that HEVC can provide a bit rate savings of about 50% for equal subjective quality for most applications. The HEVC format range extensions provide a similar bit rate savings of about 13-25% for all-intra coding, 28-33% for random access coding, and 32-38% for low-delay coding at different bit rate ranges. For lossy coding of screen content, the HEVC screen content coding extensions achieve a bit rate savings of about 66%, 63%, and 61% for all-intra coding, random access coding, and low-delay coding, respectively. For lossless coding, the corresponding bit rate savings are about 40%, 33%, and 32%, respectively.

  16. Correlation channel modeling for practical Slepian-Wolf distributed video compression system using irregular LDPC codes

    NASA Astrophysics Data System (ADS)

    Li, Li; Hu, Xiao; Zeng, Rui

    2007-11-01

    The development of practical distributed video coding schemes builds on the information-theoretic bounds established in the 1970s by Slepian and Wolf for distributed lossless coding, and by Wyner and Ziv for lossy coding with decoder side information. In distributed video compression applications, it is hard to accurately describe the non-stationary behavior of the virtual correlation channel between the source X and the side information Y, although it plays a very important role in overall system performance. In this paper, we implement a practical asymmetric Slepian-Wolf distributed video compression system using irregular LDPC codes. Moreover, by exploiting the dependencies on previously decoded bit planes from the video frame X and the side information Y, we present improved schemes that divide the bit planes into regions of different reliability. Our simulation results show that the improved schemes exploiting the dependencies between previously decoded bit planes achieve better overall encoding rate performance as the BER approaches zero. We also show that, compared with the BSC model, the BC channel model is more suitable for the distributed video compression scenario because of the non-stationary properties of the virtual correlation channel, and that adaptively estimating channel model parameters from previously decoded adjacent bit planes can provide more accurate initial belief messages from the channel at the LDPC decoder.

  17. Combining node-centered parallel radiation transport and higher-order multi-material cell-centered hydrodynamics methods in three-temperature radiation hydrodynamics code TRHD

    NASA Astrophysics Data System (ADS)

    Sijoy, C. D.; Chaturvedi, S.

    2016-06-01

    Higher-order cell-centered multi-material hydrodynamics (HD) and parallel node-centered radiation transport (RT) schemes are combined self-consistently in the three-temperature (3T) radiation hydrodynamics (RHD) code TRHD (Sijoy and Chaturvedi, 2015), developed for the simulation of intense thermal radiation or high-power laser driven RHD. For RT, a node-centered gray model implemented in the popular RHD code MULTI2D (Ramis et al., 2009) is used. This scheme, in principle, can handle RT in both optically thick and thin materials. The RT module has been parallelized using the message passing interface (MPI) for parallel computation. Presently, for multi-material HD, we have used a simple and robust closure model in which common strain rates for all materials in a mixed cell are assumed. The closure model has been further generalized to allow different temperatures for the electrons and ions. In addition to this, electron and radiation temperatures are assumed to be in non-equilibrium. Therefore, the thermal relaxation between the electrons and ions and the coupling between the radiation and matter energies are required to be computed self-consistently. This has been achieved by using a node-centered symmetric-semi-implicit (SSI) integration scheme. The electron thermal conduction is calculated using a cell-centered, monotonic, non-linear finite volume (NLFV) scheme suitable for unstructured meshes. In this paper, we have described the details of the 2D, 3T, non-equilibrium, multi-material RHD code developed with special attention to the coupling of the various cell-centered and node-centered formulations, along with a suite of validation test problems to demonstrate the accuracy and performance of the algorithms. We also report the parallel performance of the RT module. Finally, in order to demonstrate the full capability of the code implementation, we have presented the simulation of laser driven shock propagation in a layered thin foil. The simulation results are found to be in good
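    The symmetric-semi-implicit flavor of the relaxation update can be illustrated on a single cell with a two-temperature model (a minimal sketch under assumed constant heat capacities Ce, Ci and coupling coefficient G; the scheme in TRHD is more general). Taking the coupling term at the new time level keeps the step stable for stiff coupling and conserves the total energy exactly:

    def relax_two_temperature(Te, Ti, Ce, Ci, G, dt):
        """One semi-implicit electron-ion relaxation step for a single cell.

        Solves  Ce dTe/dt = -G (Te - Ti),  Ci dTi/dt = +G (Te - Ti)
        with the coupling term evaluated at the new time level, which keeps
        the update stable even when dt is large compared to the relaxation time.
        """
        # Closed-form new-time temperature difference (from the 2x2 linear solve).
        dT_new = (Te - Ti) / (1.0 + dt * G * (1.0 / Ce + 1.0 / Ci))
        Te_new = Te - dt * G * dT_new / Ce
        Ti_new = Ti + dt * G * dT_new / Ci
        return Te_new, Ti_new

    # The step conserves total energy: Ce*(Te_new - Te) + Ci*(Ti_new - Ti) == 0.
    print(relax_two_temperature(Te=1000.0, Ti=300.0, Ce=1.0, Ci=1.5, G=2.0, dt=10.0))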

  19. One-Dimensional Lagrangian Code for Plasma Hydrodynamic Analysis of a Fusion Pellet Driven by Ion Beams.

    Energy Science and Technology Software Center (ESTSC)

    1986-12-01

    Version 00 The MEDUSA-IB code performs implosion and thermonuclear burn calculations of an ion-beam-driven ICF target, based on one-dimensional plasma hydrodynamics and transport theory. It can calculate the following values in spherical geometry through the progress of implosion and fuel burnup of a multi-layered target: (1) hydrodynamic velocities, density, ion, electron and radiation temperatures, radiation energy density, ρR and burn rate of the target as a function of coordinates and time; (2) fusion gain as a function of time; (3) ionization degree; (4) temperature-dependent ion beam energy deposition; (5) radiation, α-particle and neutron spectra as a function of time.

  20. Compressed Reactive Turbulence and Supernovae Ia Recollapse using the FLASH code

    NASA Astrophysics Data System (ADS)

    Dursi, J.; Niemeyer, J.; Calder, A.; Fryxell, B.; Lamb, D.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F.; Tufo, H.; Zingale, M.

    1999-12-01

    The collapse of turbulent fluid, apart from being interesting for its own sake, is also of interest to the supernova problem; a failed ignition can cause a turbulent re-collapse, which might lead to a subsequent reignition under more favourable circumstances. We use the FLASH code, developed at the Center for Astrophysical Thermonuclear Flashes, to run small-scale direct numerical simulations (DNS) of the evolution of a compressible, combustible turbulent fluid under the effect of a forced radial homogeneous compression. We follow the evolution of density and temperature fluctuations over the compression history. This work is supported by the Department of Energy under Grant No. B341495 to the Center for Astrophysical Thermonuclear Flashes at the University of Chicago.

  1. Investigation of perception-oriented coding techniques for video compression based on large block structures

    NASA Astrophysics Data System (ADS)

    Kaprykowsky, Hagen; Doshkov, Dimitar; Hoffmann, Christoph; Ndjiki-Nya, Patrick; Wiegand, Thomas

    2011-09-01

    Recent investigations have shown that one of the most beneficial elements for higher compression performance in high-resolution video is the incorporation of larger block structures. In this work, we will address the question of how to incorporate perceptual aspects into new video coding schemes based on large block structures. This is rooted in the fact that high-frequency regions such as textures in particular yield high coding costs when using classical prediction modes as well as encoder control based on the mean squared error. To overcome this problem, we will investigate the incorporation of novel intra predictors based on image completion methods. Furthermore, the integration of a perceptual-based encoder control using the well-known structural similarity index will be analyzed. A major aspect of this article is the evaluation of the coding results in a quantitative (i.e. statistical analysis of changes in mode decisions) as well as qualitative (i.e. coding efficiency) manner.
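    For reference, the structural similarity index used for the perceptual encoder control has a simple closed form; the sketch below evaluates it over a single global window on 8-bit data, whereas practical encoders use sliding local windows:

    import numpy as np

    def ssim(x: np.ndarray, y: np.ndarray, L: float = 255.0) -> float:
        """Single-window structural similarity index (global means/variances)."""
        C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return ((2 * mx * my + C1) * (2 * cov + C2)) / ((mx**2 + my**2 + C1) * (vx + vy + C2))

    # A perceptual mode decision would pick the candidate minimizing, e.g.,
    # rate + lambda * (1 - SSIM) instead of rate + lambda * MSE.
    orig = np.random.default_rng(0).integers(0, 256, (16, 16)).astype(float)
    print(ssim(orig, orig), ssim(orig, np.clip(orig + 10, 0, 255)))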

  2. Assessment of error propagation in ultraspectral sounder data via JPEG2000 compression and turbo coding

    NASA Astrophysics Data System (ADS)

    Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok

    2005-08-01

    Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of

  3. A secure approach for encrypting and compressing biometric information employing orthogonal code and steganography

    NASA Astrophysics Data System (ADS)

    Islam, Muhammad F.; Islam, Mohammed N.

    2012-04-01

    The objective of this paper is to develop a novel approach for encryption and compression of biometric information utilizing orthogonal coding and steganography techniques. Multiple biometric signatures are encrypted individually using orthogonal codes and then multiplexed together to form a single image, which is then embedded in a cover image using the proposed steganography technique. The proposed technique employs the three least significant bits for this purpose, and a secret key is developed to choose one from among these bits to be replaced by the corresponding bit of the biometric image. The proposed technique offers secure transmission of multiple biometric signatures in an identification document, which will be protected from unauthorized steganalysis attempts.
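    A minimal sketch of the key-selected LSB replacement (illustrative only; the key schedule and bit layout here are assumptions, not the authors' design):

    import numpy as np

    def embed(cover, payload_bits, key_digits):
        """Hide one payload bit per pixel in one of the three LSBs; key_digits[i]
        in {0, 1, 2} picks which LSB of pixel i is replaced (assumed key schedule)."""
        stego = cover.flatten().copy()
        for i, (bit, k) in enumerate(zip(payload_bits, key_digits)):
            stego[i] = (int(stego[i]) & (0xFF ^ (1 << k))) | (bit << k)
        return stego.reshape(cover.shape)

    def extract(stego, n_bits, key_digits):
        flat = stego.flatten()
        return [(int(flat[i]) >> k) & 1 for i, k in zip(range(n_bits), key_digits)]

    cover = np.arange(64, dtype=np.uint8).reshape(8, 8)     # stand-in cover image
    bits, key = [1, 0, 1, 1], [2, 0, 1, 0]
    assert extract(embed(cover, bits, key), 4, key) == bits
    # Changing at most one of the three LSBs per pixel keeps the cover distortion
    # visually negligible, while the key is needed to know which bit to read.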

  4. Optical information authentication using compressed double-random-phase-encoded images and quick-response codes.

    PubMed

    Wang, Xiaogang; Chen, Wen; Chen, Xudong

    2015-03-01

    In this paper, we develop a new optical information authentication system based on compressed double-random-phase-encoded images and quick-response (QR) codes, where the parameters of the optical lightwave are used as keys for optical decryption and the QR code is a key for verification. An input image attached with a QR code is first optically encoded in a simplified double random phase encoding (DRPE) scheme without using an interferometric setup. From the single encoded intensity pattern recorded by a CCD camera, a compressed double-random-phase-encoded image, i.e., the sparse phase distribution used for optical decryption, is generated by using an iterative phase retrieval technique with the QR code. We compare this technique to the other two methods proposed in the literature, i.e., Fresnel domain information authentication based on the classical DRPE with holographic technique, and information authentication based on DRPE and a phase retrieval algorithm. Simulation results show that QR codes are effective in improving the security and data sparsity of the optical information encryption and authentication system. PMID:25836845

  5. Gaseous Laser Targets and Optical Diagnostics for Studying Compressible Turbulent Hydrodynamic Instabilities

    SciTech Connect

    Edwards, M J; Hansen, J; Miles, A R; Froula, D; Gregori, G; Glenzer, S; Edens, A; Dittmire, T

    2005-02-08

    The possibility of studying compressible turbulent flows using gas targets driven by high power lasers and diagnosed with optical techniques is investigated. The potential advantage over typical laser experiments that use solid targets and x-ray diagnostics is more detailed information over a larger range of spatial scales. An experimental system is described to study shock-jet interactions at high Mach number. This consists of a mini-chamber full of nitrogen at a pressure of ~1 atm. The mini-chamber is situated inside a much larger vacuum chamber. An intense laser pulse (~100 J in ~5 ns) is focused onto a thin, ~0.3 μm thick silicon nitride window at one end of the mini-chamber. The window acts both as a vacuum barrier and as the laser entrance hole. The "explosion" caused by the deposition of the laser energy just inside the window drives a strong blast wave out into the nitrogen atmosphere. The spherical shock expands and interacts with a jet of xenon introduced through the top of the mini-chamber. The Mach number of the interaction is controlled by the separation of the jet from the explosion. The resulting flow is visualized using an optical schlieren system with a pulsed laser source at a wavelength of 0.53 μm. The technical path leading up to the design of this experiment is presented, and future prospects are briefly considered. Lack of laser time in the final year of the project severely limited the experimental results obtained using the new apparatus.

  6. Application Of Hadamard, Haar, And Hadamard-Haar Transformation To Image Coding And Bandwidth Compression

    NASA Astrophysics Data System (ADS)

    Choras, Ryszard S.

    1983-03-01

    The paper presents numerical techniques of transform image coding for image bandwidth compression. Unitary transformations called the Hadamard, Haar and Hadamard-Haar transformations are defined and developed. The construction of the transformation matrices is described, and algorithms for computation of the transformations and their inverses are presented. The considered transformations are applied to image processing, and their utility and effectiveness are compared with other discrete transforms on the basis of some standard performance criteria.
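    For concreteness, a Sylvester-type Hadamard matrix and the separable 2D transform it induces can be built in a few lines (a generic sketch; the paper's Hadamard-Haar hybrids are not reproduced here):

    import numpy as np

    def hadamard(n: int) -> np.ndarray:
        """Sylvester construction: H_1 = [1]; H_2n = [[H, H], [H, -H]] (n a power of 2)."""
        H = np.array([[1]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    def hadamard_2d(img: np.ndarray) -> np.ndarray:
        """Separable 2D transform: coefficients = H X H^T / N."""
        H = hadamard(img.shape[0])
        return H @ img @ H.T / img.shape[0]

    img = np.arange(16.0).reshape(4, 4)
    coeffs = hadamard_2d(img)
    # Energy compaction: most of the signal lands in a few coefficients, which is
    # what makes coarse quantization of the rest acceptable for bandwidth compression.
    assert np.allclose(hadamard(4) @ coeffs @ hadamard(4).T / 4, img)   # invertible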

  7. Channel coding and data compression system considerations for efficient communication of planetary imaging data

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1974-01-01

    End-to-end system considerations involving channel coding and data compression are reported which could drastically improve the efficiency in communicating pictorial information from future planetary spacecraft. In addition to presenting new and potentially significant system considerations, this report attempts to fill a need for a comprehensive tutorial which makes much of this very subject accessible to readers whose disciplines lie outside of communication theory.

  8. Recent Hydrodynamics Improvements to the RELAP5-3D Code

    SciTech Connect

    Richard A. Riemke; Cliff B. Davis; Richard.R. Schultz

    2009-07-01

    The hydrodynamics section of the RELAP5-3D computer program has been recently improved. Changes were made as follows: (1) improved turbine model, (2) spray model for the pressurizer model, (3) feedwater heater model, (4) radiological transport model, (5) improved pump model, and (6) compressor model.

  9. Thermodynamic analysis of five compressed-air energy-storage cycles. [Using CAESCAP computer code

    SciTech Connect

    Fort, J. A.

    1983-03-01

    One important aspect of the Compressed-Air Energy-Storage (CAES) Program is the evaluation of alternative CAES plant designs. The thermodynamic performance of the various configurations is particularly critical to the successful demonstration of CAES as an economically feasible energy-storage option. A computer code, the Compressed-Air Energy-Storage Cycle-Analysis Program (CAESCAP), was developed in 1982 at the Pacific Northwest Laboratory. This code was designed specifically to calculate overall thermodynamic performance of proposed CAES-system configurations. The results of applying this code to the analysis of five CAES plant designs are presented in this report. The designs analyzed were: conventional CAES; adiabatic CAES; hybrid CAES; pressurized fluidized-bed CAES; and direct coupled steam-CAES. Inputs to the code were based on published reports describing each plant cycle. For each cycle analyzed, CAESCAP calculated the thermodynamic station conditions and individual-component efficiencies, as well as overall cycle-performance-parameter values. These data were then used to diagram the availability and energy flow for each of the five cycles. The resulting diagrams graphically illustrate the overall thermodynamic performance inherent in each plant configuration, and enable a more accurate and complete understanding of each design.

  10. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
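    The block-adaptive idea can be sketched as follows (a hedged illustration: Rice/Golomb codes with parameters k = 0, 1, 2 stand in for the three concatenated codes of the original Basic Compressor):

    def map_residual(d: int) -> int:
        """Fold signed prediction residuals to non-negative integers (0,1,-1,2,... -> 0,1,2,3,...)."""
        return 2 * d - 1 if d > 0 else -2 * d

    def rice_length(v: int, k: int) -> int:
        """Bits needed for v under a Rice code with parameter k: unary quotient + k raw bits."""
        return (v >> k) + 1 + k

    def encode_block(residuals, ks=(0, 1, 2)):
        """Pick the cheapest of three candidate codes for one block (21 pixels in the paper)."""
        mapped = [map_residual(d) for d in residuals]
        costs = {k: sum(rice_length(v, k) for v in mapped) for k in ks}
        best = min(costs, key=costs.get)
        return best, costs[best] + 2        # +2 bits to signal the chosen code

    block = [0, 1, -1, 0, 2, -3, 0, 0, 1, 0, -1, 4, 0, 0, 1, -2, 0, 0, 3, -1, 0]
    print(encode_block(block))              # (chosen code index, cost in bits)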

  11. Improvement Text Compression Performance Using Combination of Burrows Wheeler Transform, Move to Front, and Huffman Coding Methods

    NASA Astrophysics Data System (ADS)

    Aprilianto, Mohammada; Abdurohman, Maman

    2014-04-01

    Text is a medium that is often used to convey information in both wired and wireless networks. One limitation of wireless systems is the network bandwidth. In this study we implemented a text compression application with a lossless compression technique using a combination of the Burrows-Wheeler transform, move-to-front, and Huffman coding methods. With the addition of text compression, network resources are expected to be saved. The application reports the compression ratio. From the testing process, we conclude that text compression with the Huffman coding method alone is efficient when the number of text characters is above 400, whereas text compression with the Burrows-Wheeler transform, move-to-front, and Huffman coding methods is efficient when the number of text characters is above 531. The combination of these methods is more efficient than Huffman coding alone when the number of text characters is above 979. The more characters that are compressed, and the more patterns of the same symbol, the better the compression ratio.
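    A compact sketch of the three-stage pipeline (a naive BWT via sorted rotations, suitable only for short inputs; production implementations use suffix arrays):

    import heapq
    from collections import Counter

    def bwt(s: str) -> str:
        """Burrows-Wheeler transform via sorted rotations ('\0' as end sentinel)."""
        s += "\0"
        return "".join(rot[-1] for rot in sorted(s[i:] + s[:i] for i in range(len(s))))

    def mtf(s: str) -> list[int]:
        """Move-to-front: recently seen symbols get small indices."""
        table = sorted(set(s))
        out = []
        for ch in s:
            i = table.index(ch)
            out.append(i)
            table.insert(0, table.pop(i))
        return out

    def huffman_bits(symbols) -> int:
        """Total encoded size in bits (code lengths via heap merging)."""
        freqs = Counter(symbols)
        if len(freqs) == 1:
            return sum(freqs.values())            # degenerate single-symbol case
        heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
        heapq.heapify(heap)
        tick = len(heap)
        while len(heap) > 1:
            f1, _, d1 = heapq.heappop(heap)
            f2, _, d2 = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, tick, {s: d + 1 for s, d in {**d1, **d2}.items()}))
            tick += 1
        depths = heap[0][2]
        return sum(freqs[s] * depths[s] for s in freqs)

    text = "abracadabra abracadabra"
    print(len(text) * 8, "->", huffman_bits(mtf(bwt(text))), "bits")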

  12. Cholla: 3D GPU-based hydrodynamics code for astrophysical simulation

    NASA Astrophysics Data System (ADS)

    Schneider, Evan E.; Robertson, Brant E.

    2016-07-01

    Cholla (Computational Hydrodynamics On ParaLLel Architectures) models the Euler equations on a static mesh and evolves the fluid properties of thousands of cells simultaneously using GPUs. It can update over ten million cells per GPU-second while using an exact Riemann solver and PPM reconstruction, allowing computation of astrophysical simulations with physically interesting grid resolutions (>256^3) on a single device; calculations can be extended onto multiple devices with nearly ideal scaling beyond 64 GPUs.

  13. STEALTH: a Lagrange explicit finite difference code for solids, structural, and thermohydraulic analysis. Volume 7: implicit hydrodynamics. Computer code manual. [PWR; BWR

    SciTech Connect

    McKay, M.W.

    1982-06-01

    STEALTH is a family of computer codes that solve the equations of motion for a general continuum. These codes can be used to calculate a variety of physical processes in which the dynamic behavior of a continuum is involved. The versions of STEALTH described in this volume were designed for the calculation of problems involving low-speed fluid flow. They employ an implicit finite difference technique to solve the one- and two-dimensional equations of motion, written for an arbitrary coordinate system, for both incompressible and compressible fluids. The solution technique involves an iterative solution of the implicit, Lagrangian finite difference equations. Convection terms that result from the use of an arbitrarily-moving coordinate system are calculated separately. This volume provides the theoretical background, the finite difference equations, and the input instructions for the one- and two-dimensional codes; a discussion of several sample problems; and a listing of the input decks required to run those problems.

  14. End-to-end quality measure for transmission of compressed imagery over a noisy coded channel

    NASA Technical Reports Server (NTRS)

    Korwar, V. N.; Lee, P. J.

    1981-01-01

    For the transmission of imagery at high data rates over large distances with limited power and system gain, it is usually necessary to compress the data before transmitting it over a noisy channel that uses channel coding to reduce the effect of noise-introduced errors. Both compression and channel noise introduce distortion into the imagery. In order to design a communication link that provides adequate quality of received images, it is necessary first to define some suitable distortion measure that accounts for both these kinds of distortion, and then to perform various tradeoffs to arrive at system parameter values that will provide a sufficiently low level of received image distortion. The overall mean square error is used as the distortion measure, and a description of how to perform these tradeoffs is included.
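    A toy numeric illustration of the tradeoff (an assumed uniform quantizer and a simple residual-bit-error model, not the paper's link design): for independent error sources the compression and channel contributions to the overall mean square error add, which is what makes the rate allocation between source and channel coding tractable:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 1.0, 100_000)

    # Source distortion: uniform quantization with a chosen step size.
    step = 2 ** -3
    xq = np.round(x / step) * step

    # Channel distortion: residual bit errors perturb a fraction p of samples.
    p = 1e-3
    hit = rng.random(x.size) < p
    xr = xq.copy()
    xr[hit] += rng.normal(0.0, 1.0, hit.sum())   # error-magnitude model (assumed)

    d_src = np.mean((x - xq) ** 2)
    d_tot = np.mean((x - xr) ** 2)
    # The two contributions are roughly additive, so a link designer can trade
    # quantizer rate against channel code rate under a total-MSE budget.
    print(d_src, d_tot - d_src)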

  15. A segmentation-based lossless image coding method for high-resolution medical image compression.

    PubMed

    Shen, L; Rangayyan, R M

    1997-06-01

    Lossless compression techniques are essential in archival and communication of medical images. In this paper, a new segmentation-based lossless image coding (SLIC) method is proposed, which is based on a simple but efficient region growing procedure. The embedded region growing procedure produces an adaptive scanning pattern for the image with the help of a very-few-bits-needed discontinuity index map. Along with this scanning pattern, an error image data part with a very small dynamic range is generated. Both the error image data and the discontinuity index map data parts are then encoded by the Joint Bi-level Image Experts Group (JBIG) method. The SLIC method resulted in, on the average, lossless compression to about 1.6 b/pixel from 8 b, and to about 2.9 b/pixel from 10 b, with a database of ten high-resolution digitized chest and breast images. In comparison with direct coding by JBIG, Joint Photographic Experts Group (JPEG), hierarchical interpolation (HINT), and two-dimensional Burg prediction plus Huffman error coding methods, the SLIC method performed better by 4% to 28% on the database used. PMID:9184892

  16. Single stock dynamics on high-frequency data: from a compressed coding perspective.

    PubMed

    Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey

    2014-01-01

    High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal a single stock's dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force for stimulating both the aggregations of large trading volumes and transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics. And this data-driven dynamic mechanism is seen to vary correspondingly as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms, rather than by individual investors. PMID:24586235

  18. Mutual information-based context template modeling for bitplane coding in remote sensing image compression

    NASA Astrophysics Data System (ADS)

    Zhang, Yongfei; Cao, Haiheng; Jiang, Hongxu; Li, Bo

    2016-04-01

    As remote sensing image applications are often characterized by limited bandwidth and high quality demands, higher coding performance for remote sensing images is desirable. The embedded block coding with optimal truncation (EBCOT) is the fundamental part of the JPEG2000 image compression standard. However, EBCOT only considers correlation within a sub-band and utilizes a context template of eight spatially neighboring coefficients in prediction. The existing optimization methods in the literature based on the current context template provide little performance improvement. To address this problem, this paper presents a new mutual information (MI)-based context template selection and modeling method. By further considering the correlation across sub-bands, the potential prediction coefficients, including neighbors, far neighbors, the parent and parent neighbors, are comprehensively examined and selected in such a manner as to achieve a good trade-off between the MI-based correlation criterion and the prediction complexity. Based on the selected context template, a high-order prediction model, which jointly considers the weight and the significance state of each coefficient, is proposed. Experimental results show that the proposed algorithm consistently outperforms the benchmark JPEG2000 standard and state-of-the-art algorithms in terms of coding efficiency at a competitive computational cost, which makes it desirable in real-time compression applications, especially for remote sensing images.
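    The MI-based selection criterion can be illustrated with a histogram estimate of the mutual information between the current coefficient's significance and one candidate context coefficient (a generic sketch, not the paper's estimator):

    import numpy as np

    def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 2) -> float:
        """MI(A;B) in bits from a joint histogram of two coefficient streams."""
        joint, _, _ = np.histogram2d(a, b, bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    # Rank candidate context coefficients (neighbors, parent, ...) by how much
    # they tell us about the current coefficient's significance, then keep the
    # top few subject to a complexity budget -- the trade-off described above.
    rng = np.random.default_rng(0)
    current = rng.integers(0, 2, 10_000)
    neighbor = np.where(rng.random(10_000) < 0.8, current, 1 - current)  # correlated
    parent = rng.integers(0, 2, 10_000)                                  # independent
    print(mutual_information(current, neighbor), mutual_information(current, parent))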

  19. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    NASA Astrophysics Data System (ADS)

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with QR codes and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, form a secret key that is shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, the measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with the GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.

  20. Belief Propagation for Error Correcting Codes and Lossy Compression Using Multilayer Perceptrons

    NASA Astrophysics Data System (ADS)

    Mimura, Kazushi; Cousseau, Florent; Okada, Masato

    2011-03-01

    The belief propagation (BP) based algorithm is investigated as a potential decoder for both error correcting codes and lossy compression, which are based on non-monotonic tree-like multilayer perceptron encoders. We discuss whether the BP can give practical algorithms in these schemes. The BP implementations in these kinds of fully connected networks unfortunately show strong limitations, while the theoretical results seem somewhat promising. Instead, the BP-based algorithms reveal that the solution space might have a rich and complex structure.

  1. Finite element modeling of magnetic compression using coupled electromagnetic-structural codes

    SciTech Connect

    Hainsworth, G.; Leonard, P.J.; Rodger, D.; Leyden, C.

    1996-05-01

    A link between the electromagnetic code MEGA and the structural code DYNA3D has been developed. Although its primary use is for modelling Railgun components, it has recently been applied to a small experimental Coilgun at Bath. The performance of Coilguns is very dependent on projectile material conductivity, and so high purity aluminium was investigated. However, due to its low strength, it is crushed significantly by magnetic compression in the gun. Although impractical as a real projectile material, this provides useful benchmark experimental data on high strain rate plastic deformation caused by magnetic forces. This setup is equivalent to a large scale version of the classic jumping ring experiment, where the ring jumps with an acceleration of 40 kG.

  2. Compact all-CMOS spatiotemporal compressive sensing video camera with pixel-wise coded exposure.

    PubMed

    Zhang, Jie; Xiong, Tao; Tran, Trac; Chin, Sang; Etienne-Cummings, Ralph

    2016-04-18

    We present a low power all-CMOS implementation of temporal compressive sensing with pixel-wise coded exposure. This image sensor can increase video pixel resolution and frame rate simultaneously while reducing the data readout speed. Compared to previous architectures, this system modulates pixel exposure at the individual photo-diode electronically without external optical components. Thus, the system provides a reduction in size and power compared to previous optics-based implementations. The prototype image sensor (127 × 90 pixels) can reconstruct 100 fps videos from coded images sampled at 5 fps. With a 20× reduction in readout speed, our CMOS image sensor consumes only 14 μW to provide 100 fps videos. PMID:27137331
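    The measurement model behind pixel-wise coded exposure can be sketched in a few lines (illustrative parameters; the prototype's 127 × 90 array and its exposure schedule are not reproduced):

    import numpy as np

    rng = np.random.default_rng(0)
    T, H, W = 20, 16, 16                       # 20 high-rate frames per coded snapshot
    video = rng.random((T, H, W))

    # Each pixel opens a single short exposure window at its own coded start time.
    starts = rng.integers(0, T - 4, size=(H, W))
    mask = np.stack([(starts <= t) & (t < starts + 4) for t in range(T)]).astype(float)

    coded = (mask * video).sum(axis=0)         # one low-rate readout encodes T frames
    # Recovery solves coded = sum_t mask_t * x_t for the frames x_t under a
    # sparsity prior (the compressive-sensing reconstruction step, omitted here).
    print(coded.shape)                         # (16, 16): a single coded frame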

  3. A lossless multichannel bio-signal compression based on low-complexity joint coding scheme for portable medical devices.

    PubMed

    Kim, Dong-Sun; Kwon, Jin-San

    2014-01-01

    Research on real-time health systems has received great attention in recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to the single-channel-based bio-signal lossless data compressor. PMID:25237900
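    A schematic of the two modified decisions (the correlation threshold and the reference-selection rule below are illustrative stand-ins for the paper's criteria):

    import numpy as np

    def use_joint_coding(res_a, res_b, threshold=0.5):
        """Joint vs. independent coding decided from the cross correlation of the
        two channels' residual signals (threshold is an assumed stand-in)."""
        return abs(np.corrcoef(res_a, res_b)[0, 1]) > threshold

    def reference_channel(residuals):
        """Assumed rule: take the channel with the lowest residual energy as the
        reference for the joint entropy coding."""
        return int(np.argmin([np.mean(r * r) for r in residuals]))

    rng = np.random.default_rng(0)
    lead_a = rng.normal(size=1024)                      # two correlated biosignal channels
    lead_b = 0.9 * lead_a + 0.1 * rng.normal(size=1024)
    print(use_joint_coding(lead_a, lead_b), reference_channel([lead_a, lead_b]))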

  5. Multispectral image compression for spectral and color reproduction based on lossy to lossless coding

    NASA Astrophysics Data System (ADS)

    Shinoda, Kazuma; Murakami, Yuri; Yamaguchi, Masahiro; Ohyama, Nagaaki

    2010-01-01

    In this paper we propose a multispectral image compression scheme based on lossy-to-lossless coding, suitable for both spectral and color reproduction. The proposed method divides the multispectral image data into two components, RGB and residual. The RGB component is extracted from the multispectral image, for example by using the XYZ color matching functions, a color conversion matrix, and a gamma curve. The original multispectral image is estimated from the RGB data in the encoder, and the difference between the original and the estimated multispectral images, referred to as the residual component in this paper, is calculated in the encoder. Then the RGB and residual components are each encoded by JPEG2000, and progressive decoding is possible from the losslessly encoded code-stream. Experimental results show that, although the proposed method is slightly inferior to JPEG2000 with a multicomponent transform in the rate-distortion plot of the spectral domain at low bit rates, a decoded RGB image shows high quality at low bit rates with primary encoding of the RGB component. Its lossless compression ratio is close to that of JPEG2000 with the integer KLT.

  6. A New Multi-dimensional General Relativistic Neutrino Hydrodynamic Code for Core-collapse Supernovae. I. Method and Code Tests in Spherical Symmetry

    NASA Astrophysics Data System (ADS)

    Müller, Bernhard; Janka, Hans-Thomas; Dimmelmeier, Harald

    2010-07-01

    We present a new general relativistic code for hydrodynamical supernova simulations with neutrino transport in spherical and azimuthal symmetry (one dimension and two dimensions, respectively). The code is a combination of the COCONUT hydro module, which is a Riemann-solver-based, high-resolution shock-capturing method, and the three-flavor, fully energy-dependent VERTEX scheme for the transport of massless neutrinos. VERTEX integrates the coupled neutrino energy and momentum equations with a variable Eddington factor closure computed from a model Boltzmann equation and uses the "ray-by-ray plus" approximation in two dimensions, assuming the neutrino distribution to be axially symmetric around the radial direction at every point in space, and thus the neutrino flux to be radial. Our spacetime treatment employs the Arnowitt-Deser-Misner 3+1 formalism with the conformal flatness condition for the spatial three metric. This approach is exact for the one-dimensional case and has previously been shown to yield very accurate results for spherical and rotational stellar core collapse. We introduce new formulations of the energy equation to improve total energy conservation in relativistic and Newtonian hydro simulations with grid-based Eulerian finite-volume codes. Moreover, a modified version of the VERTEX scheme is developed that simultaneously conserves energy and lepton number in the neutrino transport with better accuracy and higher numerical stability in the high-energy tail of the spectrum. To verify our code, we conduct a series of tests in spherical symmetry, including a detailed comparison with published results of the collapse, shock formation, shock breakout, and accretion phases. Long-time simulations of proto-neutron star cooling until several seconds after core bounce both demonstrate the robustness of the new COCONUT-VERTEX code and show the approximate treatment of relativistic effects by means of an effective relativistic gravitational potential as in

  8. ZEUS-2D: A radiation magnetohydrodynamics code for astrophysical flows in two space dimensions. I - The hydrodynamic algorithms and tests.

    NASA Astrophysics Data System (ADS)

    Stone, James M.; Norman, Michael L.

    1992-06-01

    A detailed description of ZEUS-2D, a numerical code for the simulation of fluid dynamical flows including a self-consistent treatment of the effects of magnetic fields and radiation transfer is presented. Attention is given to the hydrodynamic (HD) algorithms which form the foundation for the more complex MHD and radiation HD algorithms. The effect of self-gravity on the flow dynamics is accounted for by an iterative solution of the sparse-banded matrix resulting from discretizing the Poisson equation in multidimensions. The results of an extensive series of HD test problems are presented. A detailed description of the MHD algorithms in ZEUS-2D is presented. A new method of computing the electromotive force is developed using the method of characteristics (MOC). It is demonstrated through the results of an extensive series of MHD test problems that the resulting hybrid MOC-constrained transport method provides for the accurate evolution of all modes of MHD wave families.

  9. User's manual for DYNA2D: an explicit two-dimensional hydrodynamic finite-element code with interactive rezoning

    SciTech Connect

    Hallquist, J.O.

    1982-02-01

    This revised report provides an updated user's manual for DYNA2D, an explicit two-dimensional axisymmetric and plane strain finite element code for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. A contact-impact algorithm permits gaps and sliding along material interfaces. By a specialization of this algorithm, such interfaces can be rigidly tied to admit variable zoning without the need of transition regions. Spatial discretization is achieved by the use of 4-node solid elements, and the equations of motion are integrated by the central difference method. An interactive rezoner eliminates the need to terminate the calculation when the mesh becomes too distorted. Rather, the mesh can be rezoned and the calculation continued. The command structure for the rezoner is described and illustrated by an example.

  10. Analysis of prediction algorithms for residual compression in a lossy to lossless scalable video coding system based on HEVC

    NASA Astrophysics Data System (ADS)

    Heindel, Andreas; Wige, Eugen; Kaup, André

    2014-09-01

    Lossless image and video compression is required in many professional applications. However, lossless coding results in a high data rate, which leads to a long wait for the user when the channel capacity is limited. To overcome this problem, scalable lossless coding is an elegant solution. It provides a fast accessible preview by a lossy compressed base layer, which can be refined to a lossless output when the enhancement layer is received. Therefore, this paper presents a lossy to lossless scalable coding system where the enhancement layer is coded by means of intra prediction and entropy coding. Several algorithms are evaluated for the prediction step in this paper. It turned out that Sample-based Weighted Prediction is a reasonable choice for usual consumer video sequences and the Median Edge Detection algorithm is better suited for medical content from computed tomography. For both types of sequences the efficiency may be further improved by the much more complex Edge-Directed Prediction algorithm. In the best case, in total only about 2.7% additional data rate has to be invested for scalable coding compared to single-layer JPEG-LS compression for usual consumer video sequences. For the case of the medical sequences scalable coding is even more efficient than JPEG-LS compression for certain values of QP.
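    The Median Edge Detection predictor mentioned above is small enough to state exactly (this form follows JPEG-LS/LOCO-I; its use for the enhancement layer is as described in the paper):

    def med_predict(a: int, b: int, c: int) -> int:
        """Median Edge Detection predictor (as in JPEG-LS / LOCO-I):
        a = left neighbor, b = above neighbor, c = above-left neighbor."""
        if c >= max(a, b):
            return min(a, b)        # edge detected: predict along it
        if c <= min(a, b):
            return max(a, b)
        return a + b - c            # smooth region: planar prediction

    # Enhancement-layer residual for one pixel: actual minus prediction from
    # already-reconstructed neighbors; entropy coding of these residuals gives
    # the lossless refinement on top of the lossy base layer.
    print(med_predict(100, 110, 90), med_predict(100, 110, 120), med_predict(100, 105, 102))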

  12. Design of indirectly driven, high-compression Inertial Confinement Fusion implosions with improved hydrodynamic stability using a 4-shock adiabat-shaped drive

    NASA Astrophysics Data System (ADS)

    Milovich, J. L.; Robey, H. F.; Clark, D. S.; Baker, K. L.; Casey, D. T.; Cerjan, C.; Field, J.; MacPhee, A. G.; Pak, A.; Patel, P. K.; Peterson, J. L.; Smalyuk, V. A.; Weber, C. R.

    2015-12-01

    Experimental results from indirectly driven ignition implosions during the National Ignition Campaign (NIC) [M. J. Edwards et al., Phys. Plasmas 20, 070501 (2013)] achieved a record compression of the central deuterium-tritium fuel layer with measured areal densities up to 1.2 g/cm2, but with significantly lower total neutron yields (between 1.5 × 1014 and 5.5 × 1014) than predicted, approximately 10% of the 2D simulated yield. An order of magnitude improvement in the neutron yield was subsequently obtained in the "high-foot" experiments [O. A. Hurricane et al., Nature 506, 343 (2014)]. However, this yield was obtained at the expense of fuel compression due to deliberately higher fuel adiabat. In this paper, the design of an adiabat-shaped implosion is presented, in which the laser pulse is tailored to achieve similar resistance to ablation-front instability growth, but with a low fuel adiabat to achieve high compression. Comparison with measured performance shows a factor of 3-10× improvement in the neutron yield (>40% of predicted simulated yield) over similar NIC implosions, while maintaining a reasonable fuel compression of >1 g/cm2. Extension of these designs to higher laser power and energy is discussed to further explore the trade-off between increased implosion velocity and the deleterious effects of hydrodynamic instabilities.

  13. Effects of thermal fluctuations and fluid compressibility on hydrodynamic synchronization of microrotors at finite oscillatory Reynolds number: a multiparticle collision dynamics simulation study.

    PubMed

    Theers, Mario; Winkler, Roland G

    2014-08-28

    We investigate the emergent dynamical behavior of hydrodynamically coupled microrotors by means of multiparticle collision dynamics (MPC) simulations. The two rotors are confined in a plane and move along circles driven by active forces. Comparing simulations to theoretical results based on linearized hydrodynamics, we demonstrate that time-dependent hydrodynamic interactions lead to synchronization of the rotational motion. Thermal noise implies large fluctuations of the phase-angle difference between the rotors, but synchronization prevails and the ensemble-averaged time dependence of the phase-angle difference agrees well with analytical predictions. Moreover, we demonstrate that compressibility effects lead to longer synchronization times. In addition, the relevance of the inertia terms of the Navier-Stokes equation is discussed, specifically the linear unsteady acceleration term characterized by the oscillatory Reynolds number ReT. We illustrate the continuous breakdown of synchronization with the Reynolds number ReT, in analogy to the continuous breakdown of the scallop theorem with decreasing Reynolds number. PMID:25011003

  14. COSAL: A black-box compressible stability analysis code for transition prediction in three-dimensional boundary layers

    NASA Technical Reports Server (NTRS)

    Malik, M. R.

    1982-01-01

    A fast computer code, COSAL, for transition prediction in three-dimensional boundary layers using compressible stability analysis is described. The compressible stability eigenvalue problem is solved using a finite difference method, and the code is a black box in the sense that no guess of the eigenvalue is required from the user. Several optimization procedures were incorporated into COSAL to calculate integrated growth rates (N factors) for transition correlation for swept and tapered laminar flow control wings using the well-known e^N method. A user's guide to the program is provided.
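    The e^N method amounts to integrating the local spatial amplification rate of each disturbance and correlating transition with the first crossing of an empirically calibrated threshold; a one-frequency sketch (toy amplification-rate distribution, assumed threshold N = 9):

    import numpy as np

    def n_factor(x: np.ndarray, sigma: np.ndarray) -> np.ndarray:
        """N(x) = integral of the local spatial amplification rate sigma = -alpha_i
        (trapezoidal rule), for one disturbance frequency."""
        return np.concatenate([[0.0], np.cumsum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(x))])

    # In practice the envelope of N over all frequencies is used; a commonly
    # quoted transition threshold for quiet conditions is near N ~ 9.
    x = np.linspace(0.0, 1.0, 201)
    sigma = 20.0 * np.sin(np.pi * x) ** 2          # toy growth-rate distribution
    N = n_factor(x, sigma)
    i = np.argmax(N >= 9.0)
    print("transition predicted near x =", x[i] if N[i] >= 9.0 else None)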

  15. Piecewise spectrally band-pass for compressive coded aperture spectral imaging

    NASA Astrophysics Data System (ADS)

    Qian, Lu-Lu; Lü, Qun-Bo; Huang, Min; Xiang, Li-Bin

    2015-08-01

    Coded aperture snapshot spectral imaging (CASSI) has been discussed in recent years. It has the remarkable advantages of high optical throughput, snapshot imaging, etc. The entire spatial-spectral data-cube can be reconstructed from just a single two-dimensional (2D) compressive sensing measurement. On the other hand, for less spectrally sparse scenes, the insufficiency of sparse sampling and aliasing in spatial-spectral images reduce the accuracy of the reconstructed three-dimensional (3D) spectral cube. To solve this problem, this paper extends the improved CASSI. A band-pass filter array is mounted on the coded mask, and the first image plane is thereby divided into a number of continuous spectral sub-band areas. The entire 3D spectral cube can be captured through the relative movement between the object and the instrument. The principle analysis and imaging simulation are presented. Comparing the peak signal-to-noise ratio (PSNR) and the information entropy of the reconstructed images for different numbers of spectral sub-band areas, the reconstructed 3D spectral cube shows an observable improvement in reconstruction fidelity as the number of sub-bands increases and the number of spectral channels of each sub-band simultaneously decreases. Project supported by the National Natural Science Foundation for Distinguished Young Scholars of China (Grant No. 61225024) and the National High Technology Research and Development Program of China (Grant No. 2011AA7012022).

  16. Worst configurations (instantons) for compressed sensing over reals: a channel coding approach

    SciTech Connect

    Chertkov, Michael; Chilappagari, Shashi K; Vasic, Bane

    2010-01-01

    We consider the Linear Programming (LP) solution of a Compressed Sensing (CS) problem over the reals, also known as the Basis Pursuit (BasP) algorithm. The BasP allows interpretation as a channel-coding problem, and it guarantees error-free reconstruction over the reals for a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how the BasP performs on a given measurement matrix and develop a technique to discover sparse vectors for which the BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error patterns, so-called instantons, degrading the performance of a finite-size Low-Density Parity-Check (LDPC) code in the error-floor regime. The BasP fails when its output is different from the actual error pattern. We design a CS-Instanton Search Algorithm (ISA) generating a sparse vector, called a CS-instanton, such that the BasP fails on the instanton, while its action on any modification of the CS-instanton decreasing a properly defined norm is successful. We also prove that, given a sufficiently dense random input for the error vector, the CS-ISA converges to an instanton in a small finite number of steps. Performance of the CS-ISA is tested on the example of a randomly generated 512 × 120 matrix, which yields a shortest instanton (error vector) pattern of length 11.

  17. Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes

    NASA Astrophysics Data System (ADS)

    Sanal, M.; Kuloor, R.; Sagayaraj, M. J.

    In miniaturized radars, where power, real estate, speed and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular, and an FIR filter is used for digital pulse compression (DPC) implementation to achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, either for a single-stage mismatched filter or for a two-stage approach, i.e., a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter, the greater the number of multipliers, and hence the greater the demand on logic resources in the FPGAs, which often becomes a design challenge for system-on-chip (SoC) requirements. This requirement for multipliers can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB of deterioration in PSR. Using the cluster centroids as tap weights greatly reduces the logic used in the FPGA for FIR filters by reducing the number of weight multipliers. Since k-means clustering is an iterative algorithm, the centroids differ between iterations and produce different clusters. This causes differences in the clustering of weights, and sometimes a smaller number of multipliers and a shorter filter length may even provide a better PSR.
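    The tap-clustering idea can be sketched end to end: design a longer mismatched filter (ordinary least squares below as a stand-in for the paper's linear programming), then quantize its taps to a few k-means centroids and compare the PSR:

    import numpy as np

    def psr_db(response):
        """Peak-to-maximum-sidelobe ratio of a compressed pulse, in dB."""
        mag = np.abs(response)
        peak = int(mag.argmax())
        return 20.0 * np.log10(mag[peak] / np.delete(mag, peak).max())

    def cluster_taps(w, k, iters=50):
        """Tiny k-means on tap values: each weight is replaced by its cluster
        centroid, so the filter needs only k distinct multipliers."""
        c = np.linspace(w.min(), w.max(), k)
        for _ in range(iters):
            labels = np.argmin(np.abs(w[:, None] - c[None, :]), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    c[j] = w[labels == j].mean()
        return c[labels]

    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)
    print("matched filter PSR:", psr_db(np.convolve(barker13, barker13[::-1])))

    # Longer mismatched filter by least squares: code * filter ~ delta.
    L = 31
    A = np.array([[barker13[i - j] if 0 <= i - j < 13 else 0.0 for j in range(L)]
                  for i in range(L + 12)])
    d = np.zeros(L + 12); d[(L + 12) // 2] = 13.0
    h = np.linalg.lstsq(A, d, rcond=None)[0]
    print("mismatched PSR:   ", psr_db(np.convolve(barker13, h)))
    print("8-centroid PSR:   ", psr_db(np.convolve(barker13, cluster_taps(h, 8))))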

  18. Giant impacts during planet formation: Parallel tree code simulations using smooth particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Cohen, Randi L.

    There is both theoretical and observational evidence that giant planets collided with objects ≥ M_Earth during their evolution. These impacts may play a key role in giant planet formation. This paper describes impacts of a ~Earth-mass object onto a suite of proto-giant planets, as simulated using an SPH parallel tree code. We ran 6 simulations, varying the impact angle and the evolutionary stage of the proto-Jupiter. We find that it is possible for an impactor to free some mass from the core of the proto-planet it strikes through direct collision, as well as to make physical contact with the core yet escape partially, or even completely, intact. None of the 6 cases we consider produced a solid disk or resulted in a net decrease in the core mass of the proto-planet (the mass decrease due to disruption was outweighed by the increase due to the addition of the impactor's mass to the core). However, we suggest parameters which may have these effects, and thus decrease core mass and formation time in protoplanetary models and/or create satellite systems. We find that giant impacts can remove significant envelope mass from forming giant planets, leaving only ~2 M_Earth of gas, similar to Uranus and Neptune. They can also create compositional inhomogeneities in planetary cores, which produce differences in planetary thermal emission characteristics.

  19. Giant Impacts During Planet Formation: Parallel Tree Code Simulations Using Smooth Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Cohen, R.; Bodenheimer, P.; Asphaug, E.

    2000-12-01

    There is both theoretical and observational evidence that giant planets collided with objects with mass >= Mearth during their evolution. These impacts may help shorten planetary formation timescales by changing the opacity of the planetary atmosphere to allow quicker cooling. They may also redistribute heavy metals within giant planets, affect the core/envelope mass ratio, and help determine the ratio of emitted to absorbed energy within giant planets. Thus, the researchers propose to simulate the impact of a ~ Earth-mass object onto a proto-giant-planet with SPH. Results of the SPH collision models will be input into a steady-state planetary evolution code and the effect of impacts on formation timescales, core/envelope mass ratios, density profiles, and thermal emissions of giant planets will be quantified. The collision will be modelled using a modified version of an SPH routine which simulates the collision of two polytropes. The Saumon-Chabrier and Tillotson equations of state will replace the polytropic equation of state. The parallel tree algorithm of Olson & Packer will be used for the domain decomposition and neighbor search necessary to calculate pressure and self-gravity efficiently. This work is funded by the NASA Graduate Student Researchers Program.

  20. High-performance lossless and progressive image compression based on an improved integer lifting-scheme Rice coding algorithm

    NASA Astrophysics Data System (ADS)

    Jun, Xie Cheng; Su, Yan; Wei, Zhang

    2006-08-01

    In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is studied. Our experiments show that its lossless image compression performance is much better than Huffman, Zip, lossless JPEG, and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, improving its time efficiency by 162%; the decoder is about 12.3 times faster, improving its time efficiency by about 148%. Instead of requiring the largest number of wavelet transform levels, this algorithm achieves high coding efficiency whenever the number of wavelet transform levels is larger than 3. For source models with distributions similar to the Laplacian, it improves coding efficiency and realizes progressive transmission, coding, and decoding.
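
    As background, the sketch below is the textbook Golomb-Rice coder that the paper modifies (the paper's actual modification is not reproduced here): each value is split by the parameter k into a unary quotient and a k-bit remainder.

      def rice_encode(values, k):
          # each value n is split into a unary quotient n >> k and a k-bit remainder
          bits = []
          for n in values:
              q, r = n >> k, n & ((1 << k) - 1)
              bits.append("1" * q + "0")                       # quotient, unary
              bits.append(format(r, f"0{k}b") if k else "")    # remainder, k bits
          return "".join(bits)

      def rice_decode(bitstring, k, count):
          out, i = [], 0
          for _ in range(count):
              q = 0
              while bitstring[i] == "1":                       # read unary quotient
                  q, i = q + 1, i + 1
              i += 1                                           # skip terminating 0
              r = int(bitstring[i:i + k], 2) if k else 0
              i += k
              out.append((q << k) | r)
          return out

      data = [3, 0, 7, 12, 1]
      assert rice_decode(rice_encode(data, k=2), k=2, count=len(data)) == data

    Small k suits sharply peaked (Laplacian-like) residual distributions such as those produced by the wavelet lifting step.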

  1. Solwnd: A 3D Compressible MHD Code for Solar Wind Studies. Version 1.0: Cartesian Coordinates

    NASA Technical Reports Server (NTRS)

    Deane, Anil E.

    1996-01-01

    Solwnd 1.0 is a three-dimensional compressible MHD code written in Fortran for studying the solar wind. Time-dependent boundary conditions are available. The computational algorithm is based on Flux-Corrected Transport, and the code builds on the existing code of Zalesak and Spicer. The flow considered is a base shear flow perturbed by incoming flow. Several test cases, corresponding to pressure-balanced magnetic structures with velocity shear flow and various inflows including Alfven waves, are presented. Version 1.0 of solwnd uses a rectangular Cartesian geometry; future versions will consider a spherical geometry. Some discussion of this issue is presented.

  2. Contour-Based Image Compression for Fast Real-Time Coding

    NASA Astrophysics Data System (ADS)

    Vasilyev, Sergei

    A new method is proposed, based on simultaneously contouring the image content and converting the contours to a compact chained bit-flow, thus providing efficient spatial image compression. It is computationally inexpensive and can be applied directly to compressing high-resolution bitonal imagery, approaching the ultimate speed performance. Combining the method with other compression schemes, for example Huffman-type or arithmetic encoding, provides better lossless compression than current telecommunication compression standards. The application of the method to compressing color images for remote sensing and mapping, as well as a lossy implementation, are discussed.
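
    One plausible reading of the chained bit-flow is illustrated below: contour steps between neighboring pixels are packed as 3-bit Freeman direction codes, so a closed boundary costs 3 bits per step instead of a coordinate pair. The abstract does not specify the paper's exact chained format, so this encoding is an assumption.

      DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

      def chain_code(points):
          # direction code for each step between successive boundary pixels
          return [DIRS[(x1 - x0, y1 - y0)]
                  for (x0, y0), (x1, y1) in zip(points, points[1:])]

      def pack_bits(codes):
          # 3 bits per direction: a compact chained bit-flow
          return "".join(format(c, "03b") for c in codes)

      square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2),
                (1, 2), (0, 2), (0, 1), (0, 0)]
      print(pack_bits(chain_code(square)))   # 24 bits instead of 9 coordinate pairs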

  3. Binary neutron-star mergers with Whisky and SACRA: First quantitative comparison of results from independent general-relativistic hydrodynamics codes

    NASA Astrophysics Data System (ADS)

    Baiotti, Luca; Shibata, Masaru; Yamamoto, Tetsuro

    2010-09-01

    We present the first quantitative comparison of two independent general-relativistic hydrodynamics codes, the whisky code and the sacra code. We compare the output of simulations starting from the same initial data and carried out with the configuration (numerical methods, grid setup, resolution, gauges) which for each code has been found to give consistent and sufficiently accurate results, in particular, in terms of cleanness of gravitational waveforms. We focus on the quantities that should be conserved during the evolution (rest mass, total mass energy, and total angular momentum) and on the gravitational-wave amplitude and frequency. We find that the results produced by the two codes agree at a reasonable level, with variations in the different quantities but always at better than about 10%.

  4. An optimal unequal error protection scheme with turbo product codes for wavelet compression of ultraspectral sounder data

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Sriraja, Y.; Ahuja, Alok; Goldberg, Mitchell D.

    2006-08-01

    Most source coding techniques generate a bitstream in which different regions have unequal influence on data reconstruction: an uncorrected error in a more influential region causes more error propagation in the reconstructed data. Given a limited bandwidth, unequal error protection (UEP) via channel coding with different code rates for different regions of the bitstream may yield much less error contamination than equal error protection (EEP). We propose an optimal UEP scheme that minimizes error contamination after channel and source decoding, using JPEG2000 for source coding and turbo product codes (TPC) for channel coding as an example, demonstrated on ultraspectral sounder data. Wavelet compression yields unequal significance across wavelet resolutions. In the proposed UEP scheme, the statistics of erroneous pixels after TPC and JPEG2000 decoding are used to determine the optimal channel code rate for each wavelet resolution. The proposed UEP scheme significantly reduces the number of pixel errors compared to its EEP counterpart. In practice, with a predefined set of implementation parameters (available channel codes, desired code rate, noise level, etc.), the optimal code-rate allocation for UEP needs to be determined only once and can be done offline.
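
    The optimization at the heart of the scheme can be shown in miniature: choose one channel-code rate per wavelet resolution to minimize the expected error impact under a total channel-bit budget. All numbers below (available rates, residual error probabilities, impact weights, sizes) are invented placeholders, not measured TPC/JPEG2000 statistics.

      import itertools

      rates = [1/2, 2/3, 4/5]                              # candidate TPC code rates
      p_err = {1/2: 1e-6, 2/3: 1e-4, 4/5: 1e-2}            # residual error prob. per rate
      impact = [100.0, 10.0, 1.0]                          # error impact, coarse->fine level
      src_bits = [1000, 4000, 16000]                       # source bits per resolution
      budget = 30000                                       # total channel bits

      best = None
      for assign in itertools.product(rates, repeat=3):    # one rate per resolution
          used = sum(b / r for b, r in zip(src_bits, assign))
          if used <= budget:
              cost = sum(w * p_err[r] for w, r in zip(impact, assign))
              if best is None or cost < best[0]:
                  best = (cost, assign)
      print(best)   # the most influential resolution gets the strongest code

    In the paper the residual-error statistics come from decoding experiments; exhaustive search is feasible here only because the rate set and resolution count are tiny.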

  5. Low Complex Forward Adaptive Loss Compression Algorithm and Its Application in Speech Coding

    NASA Astrophysics Data System (ADS)

    Nikolić, Jelena; Perić, Zoran; Antić, Dragan; Jovanović, Aleksandra; Denić, Dragan

    2011-01-01

    This paper proposes a low-complexity forward adaptive lossy compression algorithm that works on a frame-by-frame basis. In particular, the proposed algorithm performs frame-by-frame analysis of the input speech signal, estimating and quantizing the gain within each frame in order to enable quantization by a forward adaptive piecewise-linear optimal compandor. In comparison to the solution designed according to the G.711 standard, our algorithm provides not only a higher average signal-to-quantization-noise ratio, but also reduces the PCM bit rate by about 1 bit/sample. Moreover, the proposed algorithm fully satisfies the G.712 standard, since it exceeds the curve defined by G.712 over the whole variance range. Accordingly, we can reasonably expect that our algorithm will find practical implementation in the high-quality coding of signals represented with fewer than 8 bits/sample which, like speech signals, follow a Laplacian distribution and have time-varying variances.

  6. Dynamic fission instabilities in rapidly rotating n = 3/2 polytropes - A comparison of results from finite-difference and smoothed particle hydrodynamics codes

    SciTech Connect

    Durisen, R.H.; Gingold, R.A.; Tohline, J.E.; Boss, A.P.

    1986-06-01

    The effectiveness of three different hydrodynamics models is evaluated for the analysis of fission instabilities in rapidly rotating equilibrium flows. The instabilities arise in nonaxisymmetric Kelvin modes as the rotational energy of the flow increases, which may occur in the formation of close binary stars and planets when the fluid proto-object contracts quasi-statically. Two finite-difference, donor-cell methods and a smoothed particle hydrodynamics (SPH) code are examined, using a polytropic index of 3/2 and ratios of total rotational kinetic energy to gravitational energy of 0.33 and 0.38. The models show that dynamic bar instabilities with the 3/2 polytropic index do not yield detached binaries or multiple systems. Ejected mass and angular momentum form two trailing spiral arms that become a disk or ring around the central remnant. The SPH code yields the same results as the finite-difference codes with less computational effort, but does not adequately constrain the fluid in low-density regions. Methods for improving both types of codes are discussed. 68 references.

  7. A multigroup diffusion solver using pseudo transient continuation for a radiation-hydrodynamic code with patch-based AMR

    SciTech Connect

    Shestakov, Aleksei I.; Offner, Stella S. R.

    2008-01-10

    We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with Adaptive Mesh Refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate 'level-solve' packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation ({psi}tc). We analyze the magnitude of the {psi}tc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface and the data is derived from the coarse level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the 'partial temperature' scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate utility of {psi}tc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates
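
    The pseudo transient continuation idea itself is compact: instead of a pure Newton step J δ = -F, solve (I/Δτ + J) δ = -F and grow Δτ as the iterate settles, which keeps the linear system diagonally dominant early on. The sketch below applies it to a small monotone toy system, not the multigroup diffusion equations; the step-growth rule is an assumption.

      import numpy as np

      def psi_tc(F, J, x, dtau=1e-2, tol=1e-10, max_iter=200):
          # damped Newton: solve (I/dtau + J) delta = -F, growing dtau toward pure Newton
          for _ in range(max_iter):
              f = F(x)
              if np.linalg.norm(f) < tol:
                  break
              A = np.eye(len(x)) / dtau + J(x)   # the 1/dtau shift aids diagonal dominance
              x = x + np.linalg.solve(A, -f)
              dtau = min(2.0 * dtau, 1e8)
          return x

      # toy monotone system with a symmetric positive-definite Jacobian
      F = lambda x: np.array([2*x[0] + x[0]**3 - x[1] - 1.0,
                              -x[0] + 2*x[1] + x[1]**3 - 2.0])
      J = lambda x: np.array([[2 + 3*x[0]**2, -1.0],
                              [-1.0, 2 + 3*x[1]**2]])
      x = psi_tc(F, J, np.array([5.0, 5.0]))
      print(x, np.linalg.norm(F(x)))             # residual is ~0 at the root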

  8. A Multigroup diffusion Solver Using Pseudo Transient Continuation for a Radiation-Hydrodynamic Code with Patch-Based AMR

    SciTech Connect

    Shestakov, A I; Offner, S R

    2007-03-02

    We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with adaptive mesh refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate 'level-solve' packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation ({Psi}tc). We analyze the magnitude of the {Psi}tc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface and the data is derived from the coarse level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the 'partial temperature' scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate utility of {Psi}tc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates

  9. A multigroup diffusion solver using pseudo transient continuation for a radiation-hydrodynamic code with patch-based AMR

    NASA Astrophysics Data System (ADS)

    Shestakov, Aleksei I.; Offner, Stella S. R.

    2008-01-01

    We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with Adaptive Mesh Refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate "level-solve" packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation (Ψtc). We analyze the magnitude of the Ψtc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface and the data is derived from the coarse level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the "partial temperature" scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate utility of Ψtc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates the

  10. A Multigroup diffusion solver using pseudo transient continuation for a radiation-hydrodynamic code with patch-based AMR

    SciTech Connect

    Shestakov, A I; Offner, S R

    2006-09-21

    We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with adaptive mesh refinement (AMR). The patch-based AMR algorithm refines in both space and time creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate 'level-solve' packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo transient continuation ({Psi}tc). We analyze the magnitude of the {Psi}tc parameter to ensure positivity of the resulting linear system, diagonal dominance and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface and the data is derived from the coarse level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the 'partial temperature' scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate utility of {Psi}tc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates

  11. 3D Hydrodynamic Simulations with Yguazú-A Code to Model a Jet in a Galaxy Cluster

    NASA Astrophysics Data System (ADS)

    Haro-Corzo, S. A. R.; Velazquez, P.; Diaz, A.

    2009-05-01

    We present preliminary results for a galaxy jet expanding into the intra-cluster medium (ICM). We model the jet-gas interaction and the evolution of an extragalactic collimated jet placed at the center of the computational grid. The jet is modeled as a cylinder ejecting gas in the z-axis direction at fixed velocity, with precession around the z-axis (period 10^5 s) and orbital motion in the XY plane (period 500 yr); it is embedded in the ICM, which is modeled as a surrounding wind in the XZ plane. We carried out 3D hydrodynamical simulations using the Yguazú-A code; the simulations do not include radiative losses. To compare the numerical results with observations, we generated synthetic X-ray emission images. High-resolution X-ray observations of rich galaxy clusters show diffuse emission with filamentary structure (sometimes called a cooling flow or X-ray filament), while radio observations show jet-like emission from the central region of the cluster. Combining these observations, we explore the possibility that the jet-ambient gas interaction leads to a filamentary morphology in the X-ray domain. We find that the simulation including orbital motion offers a way to explain the observed diffuse X-ray emission: the circular orbital motion, in addition to the precession, helps disperse the shocked gas, and the X-ray appearance of the 3D simulation reproduces some important details of the Abell 1795 X-ray emission (Rodriguez-Martinez et al. 2006, A&A, 448, 15). There is a bright bow shock (spot) at the north, where the jet interacts directly with the ICM, which is observed in the X-ray image; meanwhile, on the south side there is no bow-shock X-ray emission, but the wake appears as an X-ray source. This wake is part of the diffuse shocked ambient gas region.

  12. Development of a Three-Dimensional PSE Code for Compressible Flows: Stability of Three-Dimensional Compressible Boundary Layers

    NASA Technical Reports Server (NTRS)

    Balakumar, P.; Jeyasingham, Samarasingham

    1999-01-01

    A program is developed to investigate the linear stability of three-dimensional compressible boundary-layer flows over bodies of revolution. The problem is formulated as a two-dimensional (2D) eigenvalue problem incorporating the meanflow variations in the normal and azimuthal directions. Normal-mode solutions are sought in the whole plane rather than along a line normal to the wall, as is done in classical one-dimensional (1D) stability theory. The stability characteristics of a supersonic boundary layer over a sharp cone with a 5° half-angle at 2° angle of attack are investigated. The 1D eigenvalue computations showed that the most amplified disturbances occur around x_2 = 90°, with azimuthal mode numbers for the most amplified disturbances ranging between m = -30 and -40. The frequencies of the most amplified waves are smaller in the middle region, where crossflow dominates the instability, than the most amplified frequencies near the windward and leeward planes. The 2D eigenvalue computations showed that, due to the variations in the azimuthal direction, the eigenmodes are clustered into isolated confined regions; for some eigenvalues, the eigenfunctions are clustered in two regions. Due to the nonparallel effect in the azimuthal direction, the most amplified disturbances shift to 120° compared with 90° for the parallel theory, and the nonparallel amplification rates are smaller than those obtained from the parallel theory.

  13. Finite element stress analysis of a compression mold. Final report. [Using SASL and WILSON codes]

    SciTech Connect

    Watterson, C.E.

    1980-03-01

    Thermally induced stresses occurring in a compression mold during production molding were evaluated using finite element analysis. A complementary experimental stress analysis, including strain gages and thermocouple arrays, verified the finite element model under typical loading conditions.

  14. Compression and smart coding of offset and gain maps for intraoral digital x-ray sensors

    SciTech Connect

    Frosio, I.; Borghese, N. A.

    2009-02-15

    The response of indirect x-ray digital imaging sensors is often not homogeneous over the entire surface area. In this case, calibration is needed to build offset and gain maps, which are used to correct the sensor output. New-generation sensors are equipped with an on-board memory that stores these maps; because of its limited size, the maps have to be compressed before being saved. This step is critical because of the extremely high compression rate required. The authors propose here a novel method to achieve such a high compression rate without degrading the quality of the sensor output. It is based on quad-tree decomposition, which performs adaptive sampling of the offset and gain maps, matched with an RBF-based interpolation strategy. The method was tested on a typical intraoral radiographic sensor and compared with traditional compression techniques. Qualitative and quantitative results show that the method achieves a higher compression rate and produces images of superior quality. The method can also be adopted in other fields where a high compression rate is required.
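
    The adaptive-sampling half of the method is easy to sketch: recursively split any block whose values vary more than a tolerance, so flat regions of a gain map collapse to a few leaves. The RBF interpolation stage is omitted, and the map, tolerance, and defect region below are invented for the demo.

      import numpy as np

      def quadtree(img, x, y, size, tol, leaves):
          # split any block whose values vary more than tol; store mean of flat blocks
          block = img[y:y + size, x:x + size]
          if size == 1 or block.max() - block.min() <= tol:
              leaves.append((x, y, size, float(block.mean())))
          else:
              h = size // 2
              for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
                  quadtree(img, x + dx, y + dy, h, tol, leaves)
          return leaves

      gain = np.ones((64, 64))
      gain[20:30, 40:50] += 0.2                  # a small inhomogeneous region
      leaves = quadtree(gain, 0, 0, 64, tol=0.05, leaves=[])
      print(len(leaves), "leaves instead of", 64 * 64, "pixels")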

  15. Radiation Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Mihalas, Dimitri

    Contents include: hydrodynamics, front fitting, artificial dissipation, the adaptive grid, the TITAN code, and references.

  16. A New Multi-energy Neutrino Radiation-Hydrodynamics Code in Full General Relativity and Its Application to the Gravitational Collapse of Massive Stars

    NASA Astrophysics Data System (ADS)

    Kuroda, Takami; Takiwaki, Tomoya; Kotake, Kei

    2016-02-01

    We present a new multi-dimensional radiation-hydrodynamics code for massive stellar core-collapse in full general relativity (GR). Employing an M1 analytical closure scheme, we solve the spectral neutrino transport of the radiation energy and momentum based on a truncated moment formalism. For the neutrino opacities, we take into account the baseline set used in state-of-the-art simulations, in which inelastic neutrino-electron scattering, thermal neutrino production via pair annihilation, and nucleon-nucleon bremsstrahlung are included. While the Einstein field equations and the spatial advection terms in the radiation-hydrodynamics equations are evolved explicitly, the source terms due to neutrino-matter interactions and the energy shift in the radiation moment equations are integrated implicitly by an iteration method. To verify our code, we first perform a series of standard radiation tests with analytical solutions, including checks of gravitational redshift and Doppler shift. Good agreement in these tests supports the reliability of the GR multi-energy neutrino transport scheme. We then conduct several test simulations of core collapse, bounce, and shock stall of a 15 M⊙ star in Cartesian coordinates and make a detailed comparison with published results. Our code reproduces the results of full Boltzmann neutrino transport quite well, especially before bounce. In the postbounce phase our code also performs well overall, but several differences appear that most likely stem from insufficient spatial resolution in our current 3D-GR models. To clarify the resolution dependence and extend the code comparison into the late postbounce phase, we argue that next-generation exaflops-class supercomputers will be needed.

  17. Approximate message-passing with spatially coupled structured operators, with applications to compressed sensing and sparse superposition codes

    NASA Astrophysics Data System (ADS)

    Barbier, Jean; Schülke, Christophe; Krzakala, Florent

    2015-05-01

    We study the behavior of approximate message-passing (AMP), a solver for linear sparse estimation problems such as compressed sensing, when the i.i.d. matrices for which it has been specifically designed are replaced by structured operators, such as Fourier and Hadamard ones. We show empirically that, after proper randomization, the structure of the operators does not significantly affect the performance of the solver. Furthermore, for some specially designed spatially coupled operators, this allows a computationally fast and memory-efficient reconstruction in compressed sensing up to the information-theoretic limit. We also show how this approach can be applied to sparse superposition codes, allowing the AMP decoder to perform at large rates for moderate block lengths.
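
    For reference, the baseline AMP iteration that the paper starts from looks like the sketch below: soft thresholding of the pseudo-data plus the Onsager correction term, with an i.i.d. Gaussian matrix. The structured-operator and spatially coupled variants studied in the paper are not implemented here, and the simple noise-level threshold choice is an assumption.

      import numpy as np

      def soft(x, t):
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      def amp(A, y, n_iter=50):
          m, n = A.shape
          x, z = np.zeros(n), y.copy()
          for _ in range(n_iter):
              r = x + A.T @ z                              # pseudo-data
              x = soft(r, np.sqrt(np.mean(z ** 2)))        # threshold at noise level
              z = y - A @ x + z * np.count_nonzero(x) / m  # Onsager correction term
          return x

      rng = np.random.default_rng(0)
      m, n, k = 250, 500, 20
      A = rng.standard_normal((m, n)) / np.sqrt(m)         # i.i.d. Gaussian operator
      x0 = np.zeros(n)
      x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      print(np.linalg.norm(amp(A, A @ x0) - x0))           # small for this sparsity

    Replacing A with a randomized Fourier or Hadamard operator changes only the two matrix-vector products, which is what makes the structured variants fast and memory-efficient.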

  18. KIVA-4: An unstructured ALE code for compressible gas flow with sprays

    NASA Astrophysics Data System (ADS)

    Torres, David J.; Trujillo, Mario F.

    2006-12-01

    The KIVA family of codes was developed to simulate the thermal and fluid processes taking place inside an internal combustion engine. In this latest version of the open source code, KIVA-4, the numerics have been generalized to unstructured meshes. This change required modifications to the Lagrangian phase of the computations, the pressure solution, and fundamental changes in the fluxing schemes of the rezoning phase. This newest version of the code inherits all the droplet-phase capabilities and physical sub-models of previous versions. The integration of the gas-phase equations with moving solid boundaries continues to employ the successful arbitrary Lagrangian-Eulerian (ALE) methodology. Its new unstructured capability facilitates grid construction in complicated geometries and affords a higher degree of flexibility. The numerics of the code, emphasizing the new additions, are described. Various computational examples are performed demonstrating the new capabilities of the code.

  19. Image and video compression/decompression based on human visual perception system and transform coding

    SciTech Connect

    Fu, Chi Yung; Petrich, L. I.; Lee, M.

    1997-02-01

    The quantity of information has been growing exponentially, and the form and mix of information have been shifting into the image and video areas. However, neither the storage media nor the available bandwidth can accommodate the vastly expanding requirements for image information. A vital, enabling technology here is compression/decompression. Our compression work is based on a combination of feature-based algorithms inspired by the human visual-perception system (HVS) and transform-based algorithms (such as our enhanced discrete cosine transform and wavelet transforms), vector quantization, and neural networks. All our work was done on desktop workstations using the C++ programming language and commercially available software. During FY 1996, we explored and implemented enhanced feature-based algorithms, vector quantization, and neural-network-based compression technologies. For example, we improved the feature compression of our feature-based algorithms by a factor of two to ten, a substantial improvement. We also found promising results when applying neural networks to some video sequences. In addition, we investigated objective measures to characterize compression results, because traditional means such as the peak signal-to-noise ratio (PSNR) do not take into account the details of human visual perception and so cannot fully characterize the results. We have successfully used our one-year LDRD funding as seed money to explore new research ideas and concepts, and the results of this work have led us to obtain external funding from the DoD. At this point, we are seeking matching funds from DOE to match the DoD funding so that we can bring such technologies to fruition. 9 figs., 2 tabs.

  20. Hydrodynamic effects in the atmosphere of variable stars

    NASA Technical Reports Server (NTRS)

    Davis, C. G., Jr.; Bunker, S. S.

    1975-01-01

    Numerical models of variable stars are established, using a nonlinear radiative transfer coupled hydrodynamics code. The variable Eddington method of radiative transfer is used. Comparisons are for models of W Virginis, beta Doradus, and eta Aquilae. From these models it appears that shocks are formed in the atmospheres of classical Cepheids as well as W Virginis stars. In classical Cepheids, with periods from 7 to 10 days, the bumps occurring in the light and velocity curves appear as the result of a compression wave that reflects from the star's center. At the head of the outward going compression wave, shocks form in the atmosphere. Comparisons between the hydrodynamic motions in W Virginis and classical Cepheids are made. The strong shocks in W Virginis do not penetrate into the interior as do the compression waves formed in classical Cepheids. The shocks formed in W Virginis stars cause emission lines, while in classical Cepheids the shocks are weaker.

  1. Lossless compression scheme of superhigh-definition images by partially decodable Golomb-Rice code

    NASA Astrophysics Data System (ADS)

    Kato, Shigeo; Hasegawa, Madoka; Guo, Muling

    1998-12-01

    Multimedia communication systems using super-high-definition (SHD) images are widely desired in various communities such as medical imagery, digital museums, digital libraries, and so on. There are, however, many requirements for SHD image communication systems, because of the high pixel accuracy and high resolution of SHD images. We identified the mandatory functions that should be realized in SHD image application systems and summarize them in three items: reversibility, scalability, and progressibility. This paper proposes an SHD image communication system based on these three properties. To realize reversibility and progressibility, a lossless wavelet transform coding method is introduced as the coding model. To realize scalability, a partially decodable entropy code is proposed; in particular, we focus on a partially decodable coding method for realizing the scalability function in this paper.

  2. Application of wavelet filtering and Barker-coded pulse compression hybrid method to air-coupled ultrasonic testing

    NASA Astrophysics Data System (ADS)

    Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping

    2014-10-01

    Air-coupled ultrasonic testing (ACUT) has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the giant mismatch of acoustic impedance at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal, so signal-processing techniques are highly valuable in this kind of non-destructive testing. This paper presents a hybrid method combining wavelet filtering and phase-coded pulse compression to improve the SNR and output power of the received signal. The wavelet transform is used to filter insignificant components from the noisy ultrasonic signal, and pulse compression improves the power of the correlated signal based on a cross-correlation algorithm. For reasonable parameter selection, different wavelet families (Daubechies, Symlet and Coiflet) and decomposition levels in the discrete wavelet transform are analyzed, and different Barker codes (5-13 bits) are compared to acquire a higher main-to-side lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrate that the proposed method is very efficient in improving the SNR and signal strength, and it appears to be a promising tool for evaluating the integrity of high-ultrasound-attenuation composite materials with ACUT.
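
    The pulse-compression stage on its own is a one-liner around a cross-correlation. The sketch below pulls a weak Barker-13 echo out of noise; the signal placement and noise level are invented for the demo, and the wavelet-denoising stage is omitted.

      import numpy as np

      barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)

      rng = np.random.default_rng(2)
      echo = 0.2 * rng.standard_normal(200)      # receiver noise
      echo[80:93] += 0.3 * barker13              # weak coded echo starting at sample 80

      compressed = np.correlate(echo, barker13, mode="same")
      print(int(np.abs(compressed).argmax()))    # peak near 86 = 80 + 13 // 2

    The correlation concentrates the 13-sample code energy into a single peak, which is why the compressed output rises above noise that masks the raw echo.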

  3. The role of molecular motors in the mechanics of active gels and the effects of inertia, hydrodynamic interaction and compressibility in passive microrheology

    NASA Astrophysics Data System (ADS)

    Uribe, Andres Cordoba

    The mechanical properties of soft biological materials are essential to their physiological function and cannot easily be duplicated by synthetic materials. The study of the mechanical properties of biological materials has led to the development of new rheological characterization techniques. In the technique called passive microbead rheology, the positional autocorrelation function of a micron-sized bead embedded in a viscoelastic fluid is used to infer the dynamic modulus of the fluid. Single-particle microrheology is limited to fluids where the microstructure is much smaller than the size of the probe bead. To overcome this limitation, two-bead microrheology uses the cross-correlated thermal motion of pairs of tracer particles to determine the dynamic modulus. Here we present a time-domain data analysis methodology and generalized Brownian dynamics simulations to examine the effects of inertia, hydrodynamic interaction, compressibility and non-conservative forces in passive microrheology. A type of biological material that has proven especially challenging to characterize is active gels. They are formed by semiflexible polymer filaments driven by motor proteins that convert chemical energy from the hydrolysis of adenosine triphosphate (ATP) into mechanical work and motion. Active gels perform essential functions in living tissue. Here we introduce a single-chain mean-field model to describe the mechanical properties of active gels. We model the semiflexible filaments as bead-spring chains, and the molecular motors are accounted for with a mean-field approach. The level of description of the model includes the end-to-end length and attachment state of the filaments, and the motor-generated forces, as stochastic state variables which evolve according to a proposed differential Chapman-Kolmogorov equation. The model allows accounting for physics that are not available in models postulated on coarser levels of description. Moreover it allows

  4. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate index representation to its final size; the efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method.
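
    One illustrative reading of "adjacent indices carry auxiliary data" is parity embedding: an index whose value is uncertain by one unit can be nudged to the neighboring value so that its parity equals the hidden bit. This is a toy interpretation sketched under that assumption, not the patent's claimed procedure.

      import numpy as np

      def embed(indices, bits):
          out = np.array(indices)
          for i, b in enumerate(bits):
              if out[i] % 2 != b:
                  out[i] += 1                    # nudge to the adjacent index value
          return out

      def extract(indices, n_bits):
          return [int(v % 2) for v in indices[:n_bits]]

      idx = [10, 7, 3, 22, 14]                   # quantization indices (made up)
      marked = embed(idx, [1, 1, 0, 0, 1])
      assert extract(marked, 5) == [1, 1, 0, 0, 1]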

  5. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-03-10

    A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate index representation to its final size; the efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

  6. Radiation hydrodynamics

    SciTech Connect

    Pomraning, G.C.

    1982-12-31

    This course was intended to provide the participant with an introduction to the theory of radiative transfer, and an understanding of the coupling of radiative processes to the equations describing compressible flow. At moderate temperatures (thousands of degrees), the role of the radiation is primarily one of transporting energy by radiative processes. At higher temperatures (millions of degrees), the energy and momentum densities of the radiation field may become comparable to or even dominate the corresponding fluid quantities. In this case, the radiation field significantly affects the dynamics of the fluid, and it is the description of this regime which is generally the charter of radiation hydrodynamics. The course provided a discussion of the relevant physics and a derivation of the corresponding equations, as well as an examination of several simplified models. Practical applications include astrophysics and nuclear weapons effects phenomena.

  7. Scaling and performance of a 3-D radiation hydrodynamics code on message-passing parallel computers: final report

    SciTech Connect

    Hayes, J C; Norman, M

    1999-10-28

    This report details an investigation into the efficacy of two approaches to solving the radiation diffusion equation within a radiation hydrodynamic simulation. Because leading-edge scientific computing platforms have evolved from large single-node vector processors to parallel aggregates containing tens to thousands of individual CPUs, the ability of an algorithm to maintain high compute efficiency when distributed over a large array of nodes is critically important. The viability of an algorithm thus hinges on the tripartite question of numerical accuracy, total time to solution, and parallel efficiency.

  8. Fast minimum-redundancy prefix coding for real-time space data compression

    NASA Astrophysics Data System (ADS)

    Huang, Bormin

    2007-09-01

    The minimum-redundancy prefix-free code problem is to determine an array l = {l_1, ..., l_n} of n integer codeword lengths, given an array f = {f_1, ..., f_n} of n symbol occurrence frequencies, such that the Kraft-McMillan inequality 2^(-l_1) + 2^(-l_2) + ... + 2^(-l_n) ≤ 1 holds and the total number of coded bits f_1 l_1 + f_2 l_2 + ... + f_n l_n is minimized. Previous minimum-redundancy prefix-free coding based on Huffman's greedy algorithm solves this problem in O(n) time if the input array f is sorted, but in O(n log n) time if f is unsorted. In this paper a fast algorithm is proposed to solve the problem in linear time when f is unsorted. It is suitable for real-time applications in satellite communication and consumer electronics. We also develop its VLSI architecture, which consists of four modules: the frequency table builder, the codeword length table builder, the codeword table builder, and the input-to-codeword mapper.
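
    For contrast with the proposed linear-time method (whose details are in the paper, not reproduced here), the classical heap-based Huffman construction below computes minimum-redundancy codeword lengths from unsorted frequencies in O(n log n).

      import heapq
      from itertools import count

      def huffman_lengths(freqs):
          # classic O(n log n) construction of minimum-redundancy codeword lengths
          tie = count()                          # tie-breaker so tuples stay comparable
          heap = [(f, next(tie), {i: 0}) for i, f in enumerate(freqs)]
          heapq.heapify(heap)
          while len(heap) > 1:
              f1, _, d1 = heapq.heappop(heap)
              f2, _, d2 = heapq.heappop(heap)
              merged = {s: l + 1 for s, l in {**d1, **d2}.items()}  # one level deeper
              heapq.heappush(heap, (f1 + f2, next(tie), merged))
          return [l for _, l in sorted(heap[0][2].items())]

      lengths = huffman_lengths([45, 13, 12, 16, 9, 5])
      print(lengths)                             # [1, 3, 3, 3, 4, 4]
      assert sum(2.0 ** -l for l in lengths) <= 1.0   # Kraft-McMillan holds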

  9. Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder

    NASA Technical Reports Server (NTRS)

    Glover, Daniel R. (Inventor)

    1995-01-01

    Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.

  10. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image, using a threshold-driven maximum-distortion criterion to select the specific coder. The different coders are built using variable-blocksize transform techniques; the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Upper and lower bounds for the bit-allocation distortion-rate function are derived, and an obtainable distortion-rate function is developed for a particular scalar-quantizer mixing method that can be used to code transform coefficients at any rate.
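
    The classical starting point for such bit-allocation work is the log-variance rule b_i = B/N + (1/2) log2(var_i / GM(var)), where GM is the geometric mean; the dissertation's algorithm refines this. The sketch below implements the textbook rule with coarse clipping of negative allocations, and the coefficient variances are made up for illustration.

      import numpy as np

      def allocate_bits(variances, total_bits):
          # b_i = B/N + 0.5 * log2(var_i / geometric_mean(var)), clipped at zero
          v = np.asarray(variances, float)
          b = total_bits / len(v) + 0.5 * np.log2(v / np.exp(np.mean(np.log(v))))
          b = np.maximum(b, 0.0)
          return b * total_bits / b.sum()        # renormalize to the bit budget

      print(allocate_bits([100.0, 25.0, 4.0, 1.0], total_bits=16))
      # high-variance coefficients receive more bits: [5.66, 4.66, 3.34, 2.34]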

  11. Euler Technology Assessment for Preliminary Aircraft Design: Compressibility Predictions by Employing the Cartesian Unstructured Grid SPLITFLOW Code

    NASA Technical Reports Server (NTRS)

    Finley, Dennis B.; Karman, Steve L., Jr.

    1996-01-01

    The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaption of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles-of-attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  12. Development of a Fast Breeder Reactor Fuel Bundle-Duct Interaction Analysis Code - BAMBOO: Analysis Model and Validation by the Out-of-Pile Compression Test

    SciTech Connect

    Uwaba, Tomoyuki; Tanaka, Kosuke

    2001-10-15

    To analyze wire-wrapped fast breeder reactor (FBR) fuel-pin bundle deformation under bundle-duct interaction (BDI) conditions, the Japan Nuclear Cycle Development Institute has developed the BAMBOO computer code. The code uses a three-dimensional beam-element model to calculate fuel-pin bowing and cladding oval distortion, which are the dominant deformation mechanisms in a fuel-pin bundle. In this work, the cladding oval distortion property, taking the wire pitch into account, was evaluated experimentally and introduced into the code analysis. The BAMBOO code was validated in this study by comparing its results with those from an out-of-pile bundle compression testing apparatus. We conclude that BAMBOO reasonably predicts the pin-to-duct clearances in the compression tests by treating the cladding oval distortion as the suppression mechanism for BDI.

  13. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    PubMed

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering-CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and known before each data gathering epoch starts; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes-MLMS) and a scalable greedy algorithm to approximately solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both ocean-temperature datasets and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574

  14. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding

    PubMed Central

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering—CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and known before each data gathering epoch starts; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of a feedback CDG scheme in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes—MLMS) and a scalable greedy algorithm to approximately solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both ocean-temperature datasets and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574

  15. SIMULATING THE COMMON ENVELOPE PHASE OF A RED GIANT USING SMOOTHED-PARTICLE HYDRODYNAMICS AND UNIFORM-GRID CODES

    SciTech Connect

    Passy, Jean-Claude; Mac Low, Mordecai-Mark; De Marco, Orsola; Fryer, Chris L.; Diehl, Steven; Rockefeller, Gabriel; Herwig, Falk; Oishi, Jeffrey S.; Bryan, Greg L.

    2012-01-01

    We use three-dimensional hydrodynamical simulations to study the rapid infall phase of the common envelope (CE) interaction of a red giant branch star of mass equal to 0.88 M{sub Sun} and a companion star of mass ranging from 0.9 down to 0.1 M{sub Sun }. We first compare the results obtained using two different numerical techniques with different resolutions, and find very good agreement overall. We then compare the outcomes of those simulations with observed systems thought to have gone through a CE. The simulations fail to reproduce those systems in the sense that most of the envelope of the donor remains bound at the end of the simulations and the final orbital separations between the donor's remnant and the companion, ranging from 26.8 down to 5.9 R{sub Sun }, are larger than the ones observed. We suggest that this discrepancy vouches for recombination playing an essential role in the ejection of the envelope and/or significant shrinkage of the orbit happening in the subsequent phase.

  16. Compressed X-ray phase-contrast imaging using a coded source

    NASA Astrophysics Data System (ADS)

    Sung, Yongjin; Xu, Ling; Nagarkar, Vivek; Gupta, Rajiv

    2014-12-01

    X-ray phase-contrast imaging (XPCI) holds great promise for medical X-ray imaging with high soft-tissue contrast. Obviating optical elements in the imaging chain, propagation-based XPCI (PB-XPCI) has definite advantages over other XPCI techniques in terms of cost, alignment and scalability. However, it imposes strict requirements on the spatial coherence of the source and the resolution of the detector. In this study, we demonstrate that using a coded X-ray source and sparsity-based reconstruction, we can significantly relax these requirements. Using numerical simulation, we assess the feasibility of our approach and study the effect of system parameters on the reconstructed image. The results are demonstrated with images obtained using a bench-top micro-focus XPCI system.

  17. Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics

    DOE PAGES

    Laney, Daniel; Langer, Steven; Weber, Christopher; Lindstrom, Peter; Wegener, Al

    2014-01-01

    This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3-5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.

  18. Recent Advances in the Modeling of the Transport of Two-Plasmon-Decay Electrons in the 1-D Hydrodynamic Code LILAC

    NASA Astrophysics Data System (ADS)

    Delettrez, J. A.; Myatt, J. F.; Yaakobi, B.

    2015-11-01

    The modeling of fast-electron transport in the 1-D hydrodynamic code LILAC was modified because of the addition of cross-beam energy transfer (CBET) to implosion simulations. Using the old fast-electron source model with CBET shifts the peak of the hard x-ray (HXR) production from the end of the laser pulse, as observed in experiments, to earlier in the pulse. This is caused by a drop in the laser intensity at the quarter-critical surface due to CBET interaction at lower densities. Data from simulations with the laser-plasma simulation environment (LPSE) code will be used to modify the source algorithm in LILAC. In addition, the transport model in LILAC has been modified to include deviations from the straight-line algorithm and non-specular reflection at the sheath, to account for scattering from collisions and magnetic fields in the corona. Simulation results will be compared with HXR emissions from both room-temperature plastic and cryogenic target experiments. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  19. A New Multi-dimensional General Relativistic Neutrino Hydrodynamics Code for Core-collapse Supernovae. IV. The Neutrino Signal

    NASA Astrophysics Data System (ADS)

    Müller, Bernhard; Janka, Hans-Thomas

    2014-06-01

    Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M⊙, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, ⟨E⟩, of ν̄ …

  20. A NEW MULTI-DIMENSIONAL GENERAL RELATIVISTIC NEUTRINO HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE. II. RELATIVISTIC EXPLOSION MODELS OF CORE-COLLAPSE SUPERNOVAE

    SciTech Connect

    Mueller, Bernhard; Janka, Hans-Thomas; Marek, Andreas E-mail: thj@mpa-garching.mpg.de

    2012-09-01

    We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M⊙ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities and is likely to contribute to a finally successful explosion mechanism on no minor level than hydrodynamical differences between different dimensions.

  1. A New Multi-dimensional General Relativistic Neutrino Hydrodynamics Code for Core-collapse Supernovae. II. Relativistic Explosion Models of Core-collapse Supernovae

    NASA Astrophysics Data System (ADS)

    Müller, Bernhard; Janka, Hans-Thomas; Marek, Andreas

    2012-09-01

    We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M⊙ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities and is likely to contribute to a finally successful explosion mechanism on no minor level than hydrodynamical differences between different dimensions.

  2. Byte structure variable length coding (BS-VLC): a new specific algorithm applied in the compression of trajectories generated by molecular dynamics

    PubMed

    Melo; Puga; Gentil; Brito; Alves; Ramos

    2000-05-01

    Molecular dynamics is a well-known technique widely used in the study of biomolecular systems. The trajectory files produced by molecular dynamics simulations are extensive, and classical lossless algorithms compress them poorly. In this work, a new specific algorithm, named byte structure variable length coding (BS-VLC), is introduced. Trajectory files, obtained by molecular dynamics applied to trypsin and a trypsin:pancreatic trypsin inhibitor complex, were compressed using four classical lossless algorithms (Huffman, adaptive Huffman, LZW, and LZ77) as well as the BS-VLC algorithm. The results show that BS-VLC nearly triples the compression efficiency of the best classical lossless algorithm while preserving near-lossless behavior. Compression efficiencies close to 50% can be obtained with a high degree of precision, and the maximum efficiency possible within this algorithm (75%) can be achieved with good precision. PMID:10850759
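
    BS-VLC itself is specified in the paper and not reproduced here; the sketch below only illustrates the byte-structure observation behind it: the bytes of floating-point trajectory coordinates typically compress far better once regrouped by byte position than as a raw interleaved stream. zlib stands in for the classical lossless coders, and the random-walk trajectory is hypothetical.

        import numpy as np
        import zlib

        # hypothetical trajectory: 100 frames x 500 atoms x 3 float32 coords
        rng = np.random.default_rng(0)
        traj = np.cumsum(0.01 * rng.standard_normal((100, 500, 3)),
                         axis=0).astype(np.float32)

        raw = traj.tobytes()
        plain = len(zlib.compress(raw, 9))

        # regroup the stream into 4 planes: all byte 0s, all byte 1s, ...
        b = np.frombuffer(raw, dtype=np.uint8).reshape(-1, 4)
        planes = b"".join(b[:, i].tobytes() for i in range(4))
        structured = len(zlib.compress(planes, 9))

        print(f"raw {len(raw)} B, zlib {plain} B, byte-plane+zlib {structured} B")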

  3. An investigative study of multispectral data compression for remotely-sensed images using vector quantization and difference-mapped shift-coding

    NASA Technical Reports Server (NTRS)

    Jaggi, S.

    1993-01-01

    A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels taken from the same location in each channel. The error obtained in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called difference-mapped shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.041 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced with the help of dedicated image processing boards to an 80386 PC compatible computer. Modules were developed for the task of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
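
    As a minimal sketch of the vector definition used in this study (one vector per pixel location, assembled across channels), the code below trains a small codebook with plain k-means (Lloyd's algorithm). The channel count, codebook size, and image are hypothetical, and the difference-mapped shift-coding stage is not reproduced.

        import numpy as np

        def kmeans_codebook(vectors, k=16, n_iter=20, seed=0):
            """Train a VQ codebook with Lloyd's algorithm."""
            rng = np.random.default_rng(seed)
            code = vectors[rng.choice(len(vectors), k, replace=False)]
            for _ in range(n_iter):
                d = ((vectors[:, None, :] - code[None, :, :]) ** 2).sum(-1)
                idx = d.argmin(1)                    # nearest codeword
                for j in range(k):
                    cell = vectors[idx == j]
                    if len(cell):
                        code[j] = cell.mean(0)       # move to centroid
            return code, idx

        img = np.random.default_rng(1).random((7, 32, 32))  # 7-channel image
        vecs = img.reshape(7, -1).T            # one 7-vector per pixel
        codebook, indices = kmeans_codebook(vecs)
        recon = codebook[indices].T.reshape(img.shape)
        rmse = float(np.sqrt(((img - recon) ** 2).mean()))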

  4. Burst error performance of 3DWT-RVLC with low-density parity-check codes for ultraspectral sounder data compression

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Ahuja, Alok; Wang, Charles C.; Goldberg, Mitchell D.

    2006-08-01

    A previous study showed that the 3D Wavelet Transform with Reversible Variable-Length Coding (3DWT-RVLC) has much better error resilience than JPEG2000 Part 2 against 1-bit random errors remaining after channel decoding. Errors in satellite channels may have burst characteristics. Low-density parity-check (LDPC) codes are known to have excellent error correction capability, approaching the Shannon-limit performance. In this study, we investigate the burst-error correction performance of LDPC codes via the new Digital Video Broadcasting - Second Generation (DVB-S2) standard for ultraspectral sounder data compressed by 3DWT-RVLC. We also study the error contamination after 3DWT-RVLC source decoding. Statistics show that 3DWT-RVLC produces significantly fewer erroneous pixels than JPEG2000 Part 2 for ultraspectral sounder data compression.

  5. Compression embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.

    1998-01-01

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate representation of indices to its final size. The efficiency of this lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.

  6. Compression embedding

    DOEpatents

    Sandford, M.T. II; Handel, T.G.; Bradley, J.N.

    1998-07-07

    A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique, are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use lossless compression to reduce the intermediate representation of indices to its final size. The efficiency of this lossless compression, also known as entropy coding, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
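
    The two records above describe the same patented technique. Very loosely, the exploitable property is that quantizer indices carry one unit of value uncertainty, so nudging an index to an adjacent value can encode an auxiliary bit; the sketch below shows that bare idea with index parity as the carrier. The key-pair table machinery of the actual patent is not reproduced, and all values are hypothetical.

        import numpy as np

        def embed_bits(indices, bits):
            """Nudge each index by at most one unit so its parity
            carries one auxiliary bit (a loose sketch, not the patent)."""
            out = indices.copy()
            for i, bit in enumerate(bits):
                if out[i] % 2 != bit:
                    out[i] += 1 if out[i] % 2 == 0 else -1
            return out

        def extract_bits(indices, n):
            return [int(v % 2) for v in indices[:n]]

        idx = np.array([10, 13, 7, 22, 41, 8])   # hypothetical quantizer indices
        msg = [1, 0, 1, 1, 0, 0]
        assert extract_bits(embed_bits(idx, msg), len(msg)) == msg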

  7. A new multi-dimensional general relativistic neutrino hydrodynamics code for core-collapse supernovae. IV. The neutrino signal

    SciTech Connect

    Müller, Bernhard; Janka, Hans-Thomas E-mail: bjmuellr@mpa-garching.mpg.de

    2014-06-10

    Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M⊙, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies, ⟨E⟩, of ν̄_e and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M ≳ 10 M⊙ as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of ⟨E_ν̄e⟩ with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission on the 10%-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ~10 kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such 'SASI neutrino chirps' reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50 Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.

  8. Vector quantization with self-resynchronizing coding for lossless compression and rebroadcast of the NASA Geostationary Imaging Fourier Transform Spectrometer (GIFTS) data

    NASA Astrophysics Data System (ADS)

    Huang, Bormin; Wei, Shih-Chieh; Huang, Hung-Lung; Smith, William L.; Bloom, Hal J.

    2008-08-01

    As part of NASA's New Millennium Program, the Geostationary Imaging Fourier Transform Spectrometer (GIFTS) is an advanced ultraspectral sounder with a 128x128 array of interferograms for the retrieval of such geophysical parameters as atmospheric temperature, moisture, and wind. With the massive data volume that would be generated by future advanced satellite sensors such as GIFTS, chances are that even state-of-the-art channel coding (e.g. Turbo codes, LDPC) with low BER might not correct all the errors. Due to the error-sensitive, ill-posed nature of the retrieval problem, lossless compression with error resilience is desired for ultraspectral sounder data downlink and rebroadcast. Previously, we proposed fast precomputed vector quantization (FPVQ) with arithmetic coding (AC), which can produce high compression gain for ground operation. In this paper we adopt FPVQ with reversible variable-length coding (RVLC) to provide better resilience against satellite transmission errors remaining after channel decoding. The FPVQ-RVLC method is compared with the previous FPVQ-AC method for lossless compression of the GIFTS data. The experiment shows that the FPVQ-RVLC method is a significantly better tool for rebroadcast of massive ultraspectral sounder data.

  9. Progress in smooth particle hydrodynamics

    SciTech Connect

    Wingate, C.A.; Dilts, G.A.; Mandell, D.A.; Crotzer, L.A.; Knapp, C.E.

    1998-07-01

    Smooth Particle Hydrodynamics (SPH) is a meshless, Lagrangian numerical method for hydrodynamics calculations in which the calculational elements are fuzzy particles that move according to the hydrodynamic equations of motion. Each particle carries local values of density, temperature, pressure and other hydrodynamic parameters. A major advantage of SPH is that it is meshless, so large-deformation calculations can be done easily with no connectivity complications. Interface positions are known, and there are none of the problems with advecting quantities through a mesh that typical Eulerian codes have. These underlying SPH features make fracture physics easy and natural, and in fact much of the applications work revolves around simulating fracture. Debris particles from impacts can be easily transported across large voids with SPH. While SPH has considerable promise, some problems inherent in the technique have so far limited its usefulness. The most serious is the well-known instability in tension, which leads to particle clumping and numerical fracture. Another problem is that the SPH interpolation is only correct when particles are uniformly spaced a half particle apart, leading to incorrect strain rates, accelerations and other quantities for general particle distributions. SPH calculations are also sensitive to particle locations. The standard artificial viscosity treatment in SPH leads to spurious viscosity in shear flows. This paper demonstrates solutions for these problems that the authors and others have been developing. The most promising is to replace the SPH interpolant with the moving least squares (MLS) interpolant invented by Lancaster and Salkauskas in 1981. SPH and MLS are closely related, with MLS being essentially SPH with corrected particle volumes. When formulated correctly, MLS is conservative, stable in both compression and tension, does not have the SPH boundary problems and is not sensitive to particle placement. The other approach to
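
    To make the interpolation under discussion concrete, here is a minimal 1D sketch of the standard SPH density sum with a cubic-spline kernel; it exhibits the boundary deficiency the abstract mentions and implements none of the MLS correction the paper advocates. All parameters are illustrative.

        import numpy as np

        def w_cubic(r, h):
            """Standard 1D cubic-spline SPH kernel (support 2h)."""
            q = np.abs(r) / h
            sigma = 2.0 / (3.0 * h)                  # 1D normalization
            return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                           np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

        # density estimate rho_i = sum_j m_j W(x_i - x_j, h)
        x = np.linspace(0.0, 1.0, 101)        # uniformly spaced particles
        m = np.full_like(x, 1.0 / len(x))     # equal masses, unit total mass
        h = 1.5 * (x[1] - x[0])               # smoothing length ~1.5 spacings
        rho = np.array([np.sum(m * w_cubic(xi - x, h)) for xi in x])
        # interior particles recover rho ~ 1; the ends fall short because
        # the kernel support is only partially filled there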

  10. TRHD: Three-temperature radiation-hydrodynamics code with an implicit non-equilibrium radiation transport using a cell-centered monotonic finite volume scheme on unstructured-grids

    NASA Astrophysics Data System (ADS)

    Sijoy, C. D.; Chaturvedi, S.

    2015-05-01

    A three-temperature (3T), unstructured-mesh, non-equilibrium radiation hydrodynamics (RHD) code has been developed for the simulation of intense thermal radiation or high-power laser driven radiative shock hydrodynamics in two-dimensional (2D) axis-symmetric geometries. The governing hydrodynamics equations are solved using a compatible unstructured Lagrangian method based on a control volume differencing (CVD) scheme. A second-order predictor-corrector (PC) integration scheme is used for the temporal discretization of the hydrodynamics equations. For the radiation energy transport, a frequency-averaged gray model is used in which the flux-limited diffusion (FLD) approximation recovers the free-streaming limit of the radiation propagation in optically thin regions. The proposed RHD model allows different temperatures for the electrons and ions. In addition to this, the electron and thermal radiation temperatures are assumed to be in non-equilibrium. Therefore, the thermal relaxation between the electrons and ions and the coupling between the radiation and matter energies must be computed self-consistently. For this, the coupled flux-limited electron heat conduction and non-equilibrium radiation diffusion equations are solved simultaneously by using an implicit, axis-symmetric, cell-centered, monotonic, nonlinear finite volume (NLFV) scheme. In this paper, we describe the details of the 2D, 3T, non-equilibrium RHD code developed, along with a suite of validation test problems to demonstrate the accuracy and performance of the algorithms. We have also conducted a performance analysis with different linearity-preserving interpolation schemes that are used for the evaluation of the nodal values in the NLFV scheme. Finally, in order to demonstrate the full capability of the code implementation, we present the simulation of laser-driven thin aluminum (Al) foil acceleration. The simulation results are found to be in good agreement
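
    The record does not state which flux limiter TRHD uses; as a hedged illustration of the flux-limited diffusion idea it invokes, the sketch below implements the common Levermore-Pomraning limiter, which interpolates between the diffusion limit and free streaming.

        import numpy as np

        def lp_limiter(R):
            """Levermore-Pomraning limiter lambda(R) = (coth R - 1/R)/R:
            lambda -> 1/3 as R -> 0 (diffusion) and -> 1/R as R -> inf,
            capping the radiative flux at the free-streaming value c*E."""
            R = np.asarray(R, dtype=float)
            safe = np.maximum(R, 1e-3)       # avoid cancellation at small R
            lam = (1.0 / np.tanh(safe) - 1.0 / safe) / safe
            return np.where(R < 1e-3, 1.0 / 3.0, lam)

        def fld_coefficient(c, kappa, grad_E, E):
            """Diffusion coefficient D = c*lambda(R)/kappa,
            with R = |grad E| / (kappa * E)."""
            R = np.abs(grad_E) / (kappa * E)
            return c * lp_limiter(R) / kappa

        print(lp_limiter([1e-6, 1.0, 100.0]))   # ~[0.333, 0.313, 0.0099]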

  11. Radiation Hydrodynamics Test Problems with Linear Velocity Profiles

    SciTech Connect

    Hendon, Raymond C.; Ramsey, Scott D.

    2012-08-22

    As an extension of the works of Coggeshall and Ramsey, a class of analytic solutions to the radiation hydrodynamics equations is derived for code verification purposes. These solutions are valid under assumptions including diffusive radiation transport, a polytropic gas equation of state, constant conductivity, separable flow velocity proportional to the curvilinear radial coordinate, and divergence-free heat flux. In accordance with these assumptions, the derived solution class is mathematically invariant with respect to the presence of radiative heat conduction, and thus represents a solution to the compressible flow (Euler) equations with or without conduction terms included. With this solution class, a quantitative code verification study (using spatial convergence rates) is performed for the cell-centered, finite volume, Eulerian compressible flow code xRAGE developed at Los Alamos National Laboratory. Simulation results show near second order spatial convergence in all physical variables when using the hydrodynamics solver only, consistent with that solver's underlying order of accuracy. However, contrary to the mathematical properties of the solution class, when heat conduction algorithms are enabled the calculation does not converge to the analytic solution.
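
    The convergence-rate bookkeeping behind a verification study of this kind is compact enough to state. The sketch below computes the observed order of accuracy from errors at two resolutions; the error values are hypothetical, not taken from the paper.

        import numpy as np

        def observed_order(err_coarse, err_fine, refinement=2.0):
            """Observed convergence rate from errors at two resolutions
            differing by `refinement`: p = log(e_c / e_f) / log(r)."""
            return np.log(err_coarse / err_fine) / np.log(refinement)

        # hypothetical L1 errors vs. the analytic solution at N and 2N cells
        print(observed_order(4.0e-3, 1.1e-3))   # ~1.86: near second order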

  12. Verification of the FBR fuel bundle-duct interaction analysis code BAMBOO by the out-of-pile bundle compression test with large diameter pins

    NASA Astrophysics Data System (ADS)

    Uwaba, Tomoyuki; Ito, Masahiro; Nemoto, Junichi; Ichikawa, Shoichi; Katsuyama, Kozo

    2014-09-01

    The BAMBOO computer code was verified using results from the out-of-pile bundle compression test examining large-diameter pin bundle deformation under the bundle-duct interaction (BDI) condition. The pin diameters of the examined test bundles were 8.5 mm and 10.4 mm, which are targeted as preliminary fuel pin diameters for the upgraded core of the prototype fast breeder reactor (FBR) and for the demonstration and commercial FBRs studied in the FaCT project. In the bundle compression test, bundle cross-sectional views were obtained from X-ray computer tomography (CT) images, and local parameters of bundle deformation such as pin-to-duct and pin-to-pin clearances were measured by CT image analyses. In the verification, calculation results of bundle deformation obtained by the BAMBOO code analyses were compared with the experimental results from the CT image analyses. The comparison showed that the BAMBOO code reasonably predicts deformation of large diameter pin bundles under the BDI condition by assuming that pin bowing and cladding oval distortion are the major deformation mechanisms, the same as in the case of small diameter pin bundles. In addition, the BAMBOO analysis results confirmed that cladding oval distortion effectively suppresses BDI in large diameter pin bundles as well as in small diameter pin bundles.

  13. Skew resisting hydrodynamic seal

    DOEpatents

    Conroy, William T.; Dietle, Lannie L.; Gobeli, Jeffrey D.; Kalsi, Manmohan S.

    2001-01-01

    A novel hydrodynamically lubricated compression-type rotary seal that is suitable for lubricant retention and environmental exclusion. In particular, the seal geometry ensures constraint of a hydrodynamic seal in a manner preventing skew-induced wear and provides adequate room within the seal gland to accommodate thermal expansion. The seal accommodates large as-manufactured variations in the coefficient of thermal expansion of the sealing material, provides a relatively stiff integral spring effect to minimize pressure-induced shuttling of the seal within the gland, and also maintains interfacial contact pressure within the dynamic sealing interface in an optimum range for efficient hydrodynamic lubrication and environment exclusion. The seal geometry also provides for complete support about the circumference of the seal to receive environmental pressure, as compared to the interrupted character of seal support set forth in U.S. Pat. Nos. 5,873,576 and 6,036,192, and provides a hydrodynamic seal which is suitable for use with non-Newtonian lubricants.

  14. X-ray radiographic imaging of hydrodynamic phenomena in radiation-driven materials: shock propagation, material compression and shear flow. Revision 1

    SciTech Connect

    Hammel, B.A.; Kilkenny, J.D.; Munro, D.; Remington, B.A.; Kornblum, H.N.; Perry, T.S.; Phillion, D.W.; Wallace, R.J.

    1994-02-01

    One- and two-dimensional, time resolved x-ray radiographic imaging at high photon energy (5-7 keV) is used to study shock propagation, material motion and compression, and the effects of shear flow in solid density samples which are driven by x-ray ablation with the Nova laser. By backlighting the samples with x-rays and observing the increase in sample areal density due to shock compression, the authors directly measure the trajectory of strong shocks (~40 Mbar) in flight, in solid density plastic samples. Doping a section of the samples with high-Z material (Br) provides radiographic contrast, allowing the measurement of the shock induced particle motion. Instability growth due to shear flow at an interface is investigated by imbedding a metal wire in a cylindrical plastic sample and launching a shock in the axial direction. Time resolved radiographic measurements are made with either a slit-imager coupled to an x-ray streak camera or a pinhole camera coupled to a gated microchannel plate detector, providing ~10-μm spatial and ~100-ps temporal resolution.

  15. X-ray radiographic imaging of hydrodynamic phenomena in radiation-driven materials: Shock propagation, material compression, and shear flow

    SciTech Connect

    Hammel, B.A.; Kilkenny, J.D.; Munro, D.; Remington, B.A.; Kornblum, H.N.; Perry, T.S.; Phillion, D.W.; Wallace, R.J. )

    1994-05-01

    One- and two-dimensional, time-resolved x-ray radiographic imaging at high photon energy (5-7 keV) is used to study shock propagation, material motion and compression, and the effects of shear flow in solid density samples which are driven by x-ray ablation with the Nova laser. By backlighting the samples with x rays and observing the increase in sample areal density due to shock compression, the trajectories of strong shocks (~40 Mbars) in flight are directly measured in solid density plastic samples. Doping a section of the samples with high-Z material (Br) provides radiographic contrast, allowing a measurement of the shock-induced particle motion. Instability growth due to shear flow at an interface is investigated by imbedding a metal wire in a cylindrical plastic sample and launching a shock in the axial direction. Time-resolved radiographic measurements are made with either a slit-imager coupled to an x-ray streak camera or a pinhole camera coupled to a gated microchannel plate detector, providing ~10 μm spatial and ~100 ps temporal resolution.

  16. Development of a Fast Breeder Reactor Fuel Bundle Deformation Analysis Code - BAMBOO: Development of a Pin Dispersion Model and Verification by the Out-of-Pile Compression Test

    SciTech Connect

    Uwaba, Tomoyuki; Ito, Masahiro; Ukai, Shigeharu

    2004-02-15

    To analyze the wire-wrapped fast breeder reactor fuel pin bundle deformation under bundle/duct interaction conditions, the Japan Nuclear Cycle Development Institute has developed the BAMBOO computer code. This code uses the three-dimensional beam element to calculate fuel pin bowing and cladding oval distortion as the primary deformation mechanisms in a fuel pin bundle. Pin dispersion, which is a disarrangement of pins in a bundle that would occur during irradiation, was modeled in this code to evaluate its effect on bundle deformation. By applying the contact analysis method commonly used in the finite element method, this model considers the contact conditions at various axial positions as well as at the nodal points and can analyze the irregular arrangement of fuel pins with the deviation of the wire configuration. The dispersion model was introduced in the BAMBOO code and verified by using the results of the out-of-pile compression test of the bundle, where the dispersion was caused by the deviation of the wire position. The effect of the dispersion on the bundle deformation was then evaluated based on the analysis results of the code.

  17. Compressible halftoning

    NASA Astrophysics Data System (ADS)

    Anderson, Peter G.; Liu, Changmeng

    2003-01-01

    We present a technique for converting continuous gray-scale images to halftone (black and white) images that lend themselves to lossless data compression with a compression factor of three or better. Our method uses novel halftone mask structures which consist of non-repeated threshold values. We have versions of both dispersed-dot and clustered-dot masks, which produce acceptable images for a variety of printers. Using the masks as a sort key allows us to reversibly rearrange the image pixels and partition them into groups with a highly skewed distribution, allowing Huffman compression coding techniques to be applied. This gives compression ratios in the range 3:1 to 10:1.
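
    A minimal sketch of the sort-key idea described above, under assumed details: the thresholds form a permutation (hence non-repeated), and sorting the halftone bits by their threshold groups nearly-all-ones before nearly-all-zeros, which is what makes entropy coding effective. The mask construction here is hypothetical, not the paper's dispersed-dot or clustered-dot design.

        import numpy as np

        rng = np.random.default_rng(0)
        h, w = 64, 64
        gray = rng.integers(0, 256, size=(h, w))     # stand-in gray image

        # mask of non-repeated thresholds: a permutation of 0..h*w-1
        mask = rng.permutation(h * w).reshape(h, w)
        bits = (gray / 256.0 > mask / (h * w)).astype(np.uint8)   # halftone

        # sort key: low-threshold pixels are almost always 1, high-threshold
        # pixels almost always 0, so this ordering is highly skewed
        order = np.argsort(mask, axis=None)
        rearranged = bits.ravel()[order]

        # reversible: the decoder knows the mask, hence the permutation
        inverse = np.empty_like(order)
        inverse[order] = np.arange(order.size)
        assert np.array_equal(rearranged[inverse], bits.ravel())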

  18. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    NASA Technical Reports Server (NTRS)

    Lewis, Michael

    1994-01-01

    Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols, and are derived from their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange Multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
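
    As a sketch of the predictor-fitting step described above: the minimum-mean-square-error coefficients of a causal linear predictor can be obtained by least squares (the paper derives its 8-point coefficients via Lagrange multipliers, which is not reproduced here). A three-neighbor predictor on a hypothetical smooth elevation grid keeps the example short.

        import numpy as np

        # hypothetical smooth digital elevation model
        y, x = np.mgrid[0:64, 0:64]
        dem = (50 * np.sin(x / 9.0) + 30 * np.cos(y / 7.0)).round()

        # causal neighbors (west, north, northwest) of every interior cell
        W, N, NW = dem[1:, :-1], dem[:-1, 1:], dem[:-1, :-1]
        A = np.column_stack([W.ravel(), N.ravel(), NW.ravel()])
        target = dem[1:, 1:].ravel()

        # least-squares predictor coefficients; the corrections (residuals)
        # have a narrower distribution than the raw elevations
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        residual = target - A @ coef
        print("residual std:", residual.std(), "raw std:", target.std())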

  19. Ship Hydrodynamics

    ERIC Educational Resources Information Center

    Lafrance, Pierre

    1978-01-01

    Explores in a non-mathematical treatment some of the hydrodynamical phenomena and forces that affect the operation of ships, especially at high speeds. Discusses the major components of ship resistance such as the different types of drags and ways to reduce them and how to apply those principles for the hovercraft. (GA)

  20. Radiation Hydrodynamics

    SciTech Connect

    Castor, J I

    2003-10-16

    The discipline of radiation hydrodynamics is the branch of hydrodynamics in which the moving fluid absorbs and emits electromagnetic radiation, and in so doing modifies its dynamical behavior. That is, the net gain or loss of energy by parcels of the fluid material through absorption or emission of radiation are sufficient to change the pressure of the material, and therefore change its motion; alternatively, the net momentum exchange between radiation and matter may alter the motion of the matter directly. Ignoring the radiation contributions to energy and momentum will give a wrong prediction of the hydrodynamic motion when the correct description is radiation hydrodynamics. Of course, there are circumstances when a large quantity of radiation is present, yet can be ignored without causing the model to be in error. This happens when radiation from an exterior source streams through the problem, but the latter is so transparent that the energy and momentum coupling is negligible. Everything we say about radiation hydrodynamics applies equally well to neutrinos and photons (apart from the Einstein relations, specific to bosons), but in almost every area of astrophysics neutrino hydrodynamics is ignored, simply because the systems are exceedingly transparent to neutrinos, even though the energy flux in neutrinos may be substantial. Another place where we can do "radiation hydrodynamics" without using any sophisticated theory is deep within stars or other bodies, where the material is so opaque to the radiation that the mean free path of photons is entirely negligible compared with the size of the system, the distance over which any fluid quantity varies, and so on. In this case we can suppose that the radiation is in equilibrium with the matter locally, and its energy, pressure and momentum can be lumped in with those of the rest of the fluid. That is, it is no more necessary to distinguish photons from atoms, nuclei and electrons, than it is to distinguish

  1. Isogeometric analysis of Lagrangian hydrodynamics

    NASA Astrophysics Data System (ADS)

    Bazilevs, Y.; Akkerman, I.; Benson, D. J.; Scovazzi, G.; Shashkov, M. J.

    2013-06-01

    Isogeometric analysis of Lagrangian shock hydrodynamics is proposed. The Euler equations of compressible hydrodynamics in the weak form are discretized using Non-Uniform Rational B-Splines (NURBS) in space. The discretization has all the advantages of a higher-order method, with the additional benefits of exact symmetry preservation and better per-degree-of-freedom accuracy. An explicit, second-order accurate time integration procedure, which conserves total energy, is developed and employed to advance the equations in time. The performance of the method is examined on a set of standard 2D and 3D benchmark examples, where good quality of the computational results is attained.

  2. Bacterial Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lauga, Eric

    2016-01-01

    Bacteria predate plants and animals by billions of years. Today, they are the world's smallest cells, yet they represent the bulk of the world's biomass and the main reservoir of nutrients for higher organisms. Most bacteria can move on their own, and the majority of motile bacteria are able to swim in viscous fluids using slender helical appendages called flagella. Low-Reynolds number hydrodynamics is at the heart of the ability of flagella to generate propulsion at the micrometer scale. In fact, fluid dynamic forces impact many aspects of bacteriology, ranging from the ability of cells to reorient and search their surroundings to their interactions within mechanically and chemically complex environments. Using hydrodynamics as an organizing framework, I review the biomechanics of bacterial motility and look ahead to future challenges.

  3. GENASIS: General Astrophysical Simulation System. I. Refinable Mesh and Nonrelativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Cardall, Christian Y.; Budiardja, Reuben D.; Endeve, Eirik; Mezzacappa, Anthony

    2014-02-01

    GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper—the first in a series—demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.
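
    Since the abstract weighs HLLC against HLL, it may help to state the HLL flux itself; the function below is the standard single-interface formula, independent of GenASiS. HLLC augments it with the middle (contact) wave that plain HLL smears out.

        import numpy as np

        def hll_flux(UL, UR, FL, FR, sL, sR):
            """Standard HLL flux for wave-speed estimates sL <= sR."""
            if sL >= 0.0:
                return FL            # all waves move right: left flux
            if sR <= 0.0:
                return FR            # all waves move left: right flux
            return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

        # toy usage on scalar advection, F(u) = a*u with a = 1 > 0
        a = 1.0
        uL, uR = np.array([1.0]), np.array([0.0])
        print(hll_flux(uL, uR, a * uL, a * uR, sL=0.0, sR=a))  # upwind: a*uL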

  4. Lossy Text Compression Techniques

    NASA Astrophysics Data System (ADS)

    Palaniappan, Venka; Latifi, Shahram

    Most text documents contain a large amount of redundancy. Data compression can be used to minimize this redundancy and increase transmission efficiency or save storage space. Several text compression algorithms have been introduced for lossless text compression in critical application areas. For non-critical applications, lossy text compression can be used to improve compression efficiency. In this paper, we propose three different source models for character-based lossy text compression: Dropped Vowels (DOV), Letter Mapping (LMP), and Replacement of Characters (ROC). The working principles and transformation methods associated with these models are presented. Compression ratios obtained are included and compared, and comparisons of performance with the Huffman Coding and Arithmetic Coding algorithms are also made. Finally, some ideas for further improving the performance already obtained are proposed.
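
    The Dropped Vowels (DOV) model lends itself to a one-function illustration. The sketch below keeps a word's first letter and deletes later vowels; the paper's exact rules may differ, so treat this as an assumed variant.

        import re

        def dov_encode(text):
            """Drop vowels after each word's first character."""
            def strip(word):
                return word[0] + re.sub(r"[aeiouAEIOU]", "", word[1:])
            return " ".join(strip(w) if w else w for w in text.split(" "))

        print(dov_encode("Most text documents contain a large amount of redundancy"))
        # -> "Mst txt dcmnts cntn a lrg amnt of rdndncy"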

  5. Hydrodynamic supercontinuum.

    PubMed

    Chabchoub, A; Hoffmann, N; Onorato, M; Genty, G; Dudley, J M; Akhmediev, N

    2013-08-01

    We report the experimental observation of multi-bound-soliton solutions of the nonlinear Schrödinger equation (NLS) in the context of hydrodynamic surface gravity waves. Higher-order N-soliton solutions with N=2, 3 are studied in detail and shown to be associated with self-focusing in the wave group dynamics and the generation of a steep localized carrier wave underneath the group envelope. We also show that for larger input soliton numbers, the wave group experiences irreversible spectral broadening, which we refer to as a hydrodynamic supercontinuum by analogy with optics. This process is shown to be associated with the fission of the initial multisoliton into individual fundamental solitons due to higher-order nonlinear perturbations to the NLS. Numerical simulations using an extended NLS model, described by the modified nonlinear Schrödinger equation, show excellent agreement with experiment and highlight the universal role that higher-order nonlinear perturbations to the NLS play in supercontinuum generation. PMID:23952405

  6. Two-temperature hydrodynamics of laser-generated ultrashort shock waves in elasto-plastic solids

    NASA Astrophysics Data System (ADS)

    Ilnitsky, Denis K.; Khokhlov, Viktor A.; Inogamov, Nail A.; Zhakhovsky, Vasily V.; Petrov, Yurii V.; Khishchenko, Konstantin V.; Migdal, Kirill P.; Anisimov, Sergey I.

    2014-05-01

    Shock-wave generation by ultrashort laser pulses opens new doors for the study of hidden processes in materials that occur on atomic spatiotemporal scales. The poorly explored mechanism of shock generation starts from a short-lived two-temperature (2T) state of the solid in a thin surface layer where the laser energy is deposited. Such a 2T state represents highly non-equilibrium warm dense matter having cold ions and hot electrons with temperatures 1-2 orders of magnitude higher than the melting point. Here, for the first time, we present results obtained by our new hybrid hydrodynamics code, which combines a detailed description of 2T states with a model of elasticity and a wide-range equation of state of the solid. The new hydro-code has higher accuracy in the 2T stage than the molecular dynamics (MD) method, because it includes electron-related phenomena: thermal conduction, electron-ion collisions and energy transfer, and electron pressure. On the other hand, the new code significantly improves on our previous 2T hydrodynamics model, because it is now capable of reproducing elastic compression waves, which may carry an imprint of supersonic melting, as in MD simulations. With the help of the new code, we have solved the difficult problem of thermal and dynamic coupling of a molten layer with a uniaxially compressed elastic solid. This approach allows us to describe the recent femtosecond laser experiments.
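
    For readers unfamiliar with two-temperature models, the core of a 2T description is a pair of coupled energy equations relaxing the electron and ion temperatures toward each other; the sketch below integrates that bare skeleton with forward Euler. The coupling constant and heat capacities are hypothetical, and the real code couples this to conduction, electron pressure, elasticity and an equation of state.

        def relax_2t(Te, Ti, Ce, Ci, G, dt, n_steps):
            """Bare two-temperature relaxation:
               Ce dTe/dt = -G (Te - Ti),  Ci dTi/dt = +G (Te - Ti)."""
            for _ in range(n_steps):
                dq = G * (Te - Ti) * dt    # energy passed from electrons to ions
                Te -= dq / Ce
                Ti += dq / Ci
            return Te, Ti

        # hot electrons equilibrating with cold ions (arbitrary units)
        print(relax_2t(Te=30000.0, Ti=300.0, Ce=1.0, Ci=3.0,
                       G=0.5, dt=0.01, n_steps=2000))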

  7. Hydrodynamic effects on coalescence.

    SciTech Connect

    Dimiduk, Thomas G.; Bourdon, Christopher Jay; Grillet, Anne Mary; Baer, Thomas A.; de Boer, Maarten Pieter; Loewenberg, Michael; Gorby, Allen D.; Brooks, Carlton, F.

    2006-10-01

    The goal of this project was to design, build and test novel diagnostics to probe the effect of hydrodynamic forces on coalescence dynamics. Our investigation focused on how a drop coalesces onto a flat surface, which is analogous to two drops coalescing but more amenable to precise experimental measurements. We designed and built a flow cell to create an axisymmetric compression flow which brings a drop onto a flat surface. A computer-controlled system manipulates the flow to steer the drop and maintain a symmetric flow. Particle image velocimetry was performed to confirm that the control system was delivering a well-conditioned flow. To examine the dynamics of the coalescence, we implemented an interferometry capability to measure the drainage of the thin film between the drop and the surface during the coalescence process. A semi-automated analysis routine was developed which converts the dynamic interferogram series into drop shape evolution data.

  8. Athena3D: Flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hawley, John; Simon, Jake; Stone, James; Gardiner, Thomas; Teuben, Peter

    2015-05-01

    Written in FORTRAN, Athena3D, based on Athena (ascl:1010.014), is an implementation of a flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics. Features of the Athena3D code include compressible hydrodynamics and ideal MHD in one, two or three spatial dimensions in Cartesian coordinates; adiabatic and isothermal equations of state; 1st, 2nd or 3rd order reconstruction using the characteristic variables; and numerical fluxes computed using the Roe scheme. In addition, it offers the ability to add source terms to the equations and is parallelized based on MPI.

  9. Fluid Film Bearing Code Development

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The next generation of rocket engine turbopumps is being developed by industry through Government-directed contracts. These turbopumps will use fluid film bearings because they eliminate the life and shaft-speed limitations of rolling-element bearings, increase turbopump design flexibility, and reduce the need for turbopump overhauls and maintenance. The design of the fluid film bearings for these turbopumps, however, requires sophisticated analysis tools to model the complex physical behavior characteristic of fluid film bearings operating at high speeds with low-viscosity fluids. State-of-the-art analysis and design tools are being developed at Texas A&M University under a grant guided by the NASA Lewis Research Center. The latest version of the code, HYDROFLEXT, is a thermohydrodynamic bulk flow analysis with fluid compressibility, full inertia, and fully developed turbulence models. It can predict the static and dynamic force response of rigid and flexible pad hydrodynamic bearings and of rigid and tilting pad hydrostatic bearings. The Texas A&M code is a comprehensive analysis tool, incorporating key fluid phenomena pertinent to bearings that operate at high speeds with low-viscosity fluids typical of those used in rocket engine turbopumps. Specifically, the energy equation was implemented in the code to enable fluid properties to vary with temperature and pressure. This is particularly important for cryogenic fluids because their properties are sensitive to temperature as well as pressure. As shown in the figure, predicted bearing mass flow rates vary significantly depending on the fluid model used. Because cryogens are semicompressible fluids and the bearing dynamic characteristics are highly sensitive to fluid compressibility, fluid compressibility effects are also modeled. The code contains fluid properties for liquid hydrogen, liquid oxygen, and liquid nitrogen as well as for water and air. Other fluids can be handled by the code provided that the

  10. A NEW MULTI-DIMENSIONAL GENERAL RELATIVISTIC NEUTRINO HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE. III. GRAVITATIONAL WAVE SIGNALS FROM SUPERNOVA EXPLOSION MODELS

    SciTech Connect

    Mueller, Bernhard; Janka, Hans-Thomas; Marek, Andreas E-mail: thj@mpa-garching.mpg.de

    2013-03-20

    We present a detailed theoretical analysis of the gravitational wave (GW) signal of the post-bounce evolution of core-collapse supernovae (SNe), employing for the first time relativistic, two-dimensional explosion models with multi-group, three-flavor neutrino transport based on the ray-by-ray-plus approximation. The waveforms reflect the accelerated mass motions associated with the characteristic evolutionary stages that were also identified in previous works: a quasi-periodic modulation by prompt post-shock convection is followed by a phase of relative quiescence before growing amplitudes signal violent hydrodynamical activity due to convection and the standing accretion shock instability during the accretion period of the stalled shock. Finally, a high-frequency, low-amplitude variation from proto-neutron star (PNS) convection below the neutrinosphere appears superimposed on the low-frequency trend associated with the aspherical expansion of the SN shock after the onset of the explosion. Relativistic effects in combination with detailed neutrino transport are shown to be essential for quantitative predictions of the GW frequency evolution and energy spectrum, because they determine the structure of the PNS surface layer and its characteristic g-mode frequency. Burst-like high-frequency activity phases, correlated with sudden luminosity increase and spectral hardening of electron (anti-)neutrino emission for some 10 ms, are discovered as new features after the onset of the explosion. They correspond to intermittent episodes of anisotropic accretion by the PNS in the case of fallback SNe. We find stronger signals for more massive progenitors with large accretion rates. The typical frequencies are higher for massive PNSs, though the time-integrated spectrum also strongly depends on the model dynamics.