VH-1: Multidimensional ideal compressible hydrodynamics code
NASA Astrophysics Data System (ADS)
Hawley, John; Blondin, John; Lindahl, Greg; Lufkin, Eric
2012-04-01
VH-1 is a multidimensional ideal compressible hydrodynamics code written in FORTRAN for use on any computing platform, from desktop workstations to supercomputers. It uses a Lagrangian remap version of the Piecewise Parabolic Method developed by Paul Woodward and Phil Colella in their 1984 paper. VH-1 comes in a variety of versions, from a simple one-dimensional serial variant to a multi-dimensional version scalable to thousands of processors.
Pencil: Finite-difference Code for Compressible Hydrodynamic Flows
NASA Astrophysics Data System (ADS)
Brandenburg, Axel; Dobler, Wolfgang
2010-10-01
The Pencil code is a high-order finite-difference code for compressible hydrodynamic flows with magnetic fields. It is highly modular and can easily be adapted to different types of problems. The code runs efficiently under MPI on massively parallel shared- or distributed-memory computers, such as large Beowulf clusters. The Pencil code is primarily designed to deal with weakly compressible turbulent flows. To achieve good parallelization, explicit (as opposed to compact) finite differences are used. Typical scientific targets include driven MHD turbulence in a periodic box, convection in a slab with non-periodic upper and lower boundaries, a convective star embedded in a fully nonperiodic box, accretion disc turbulence in the shearing sheet approximation, self-gravity, non-local radiation transfer, dust particle evolution with feedback on the gas, etc. A range of artificial viscosity and diffusion schemes can be invoked to deal with supersonic flows. For direct simulations, regular viscosity and diffusion are used. The code is written in well-commented Fortran90.
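The explicit high-order finite differences mentioned above can be sketched with a standard sixth-order centered stencil on a periodic grid (an illustrative NumPy sketch, not Pencil code source; the specific order and test function are assumptions):

```python
import numpy as np

def d_dx_6th(f, dx):
    """Sixth-order explicit centered first derivative on a periodic grid.
    Standard stencil coefficients (45, -9, 1) / 60."""
    return (45 * (np.roll(f, -1) - np.roll(f, 1))
            - 9 * (np.roll(f, -2) - np.roll(f, 2))
            + (np.roll(f, -3) - np.roll(f, 3))) / (60 * dx)

# Verify on sin(x), whose derivative is cos(x)
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
err = np.max(np.abs(d_dx_6th(np.sin(x), dx) - np.cos(x)))
```

Unlike compact schemes, such an explicit stencil needs only a fixed halo of neighboring points, which is what makes domain-decomposed parallelization straightforward.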
Reliable estimation of shock position in shock-capturing compressible hydrodynamics codes
Nelson, Eric M
2008-01-01
The displacement method for estimating shock position in a shock-capturing compressible hydrodynamics code is introduced. Common estimates use simulation data within the captured shock, but the displacement method uses data behind the shock, making the estimate consistent with and as reliable as estimates of material parameters obtained from averages or fits behind the shock. The displacement method is described in the context of a steady shock in a one-dimensional Lagrangian hydrodynamics code, and demonstrated on a piston problem and a spherical blast wave. The displacement method's estimates of shock position are much better than common estimates in such applications.
Compressible Astrophysics Simulation Code
Howell, L.; Singer, M.
2007-07-18
This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.
Coded aperture compressive temporal imaging.
Llull, Patrick; Liao, Xuejun; Yuan, Xin; Yang, Jianbo; Kittle, David; Carin, Lawrence; Sapiro, Guillermo; Brady, David J
2013-05-01
We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.
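The code-division multiplexing described above can be sketched as a forward model: each video frame is modulated by a translated binary aperture code, and the detector integrates all coded frames into a single snapshot (an illustrative NumPy sketch with hypothetical sizes, not the authors' hardware model or reconstruction algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 32, 32                      # frames per snapshot, frame size (assumed)
video = rng.random((T, H, W))            # hypothetical scene, T frames
code = rng.integers(0, 2, (H, W))        # binary coded aperture

# Mechanical translation: each frame sees the same mask shifted by one pixel;
# the detector integrates the coded frames into one 2-D snapshot.
snapshot = sum(np.roll(code, t, axis=0) * video[t] for t in range(T))
```

Recovering the `T` frames from the single snapshot is then a compressed-sensing inverse problem, since each detector pixel mixes several frames under known, distinct code patterns.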
Improvements to SOIL: An Eulerian hydrodynamics code
Davis, C.G.
1988-04-01
Possible improvements to SOIL, an Eulerian hydrodynamics code that can do coupled radiation diffusion and strength of materials, are presented in this report. Our research is based on the inspection of other Eulerian codes and theoretical reports on hydrodynamics. Several conclusions from the present study suggest that some improvements are in order, such as second-order advection, adaptive meshes, and speedup of the code by vectorization and/or multitasking. 29 refs., 2 figs.
Shadowfax: Moving mesh hydrodynamical integration code
NASA Astrophysics Data System (ADS)
Vandenbroucke, Bert
2016-05-01
Shadowfax simulates galaxy evolution. Written in object-oriented modular C++, it evolves a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. For the hydrodynamical integration, it makes use of a (co-) moving Lagrangian mesh. The code has a 2D and 3D version, contains utility programs to generate initial conditions and visualize simulation snapshots, and its input/output is compatible with a number of other simulation codes, e.g. Gadget2 (ascl:0003.001) and GIZMO (ascl:1410.003).
An implicit Smooth Particle Hydrodynamic code
Charles E. Knapp
2000-04-01
An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless, and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix, and sparse techniques to save on memory storage and reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then the results of a number of test cases are discussed, including a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. For the single jet of gas, the implicit code has been demonstrated to solve the problem in much less time than the explicit code. That problem was very unphysical, but it demonstrates the potential of the implicit code, a first step toward a useful implicit SPH code.
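The Newton iteration with a numerically differenced Jacobian described above can be sketched on a toy implicit (backward-Euler) step; this is an illustration of the technique, not SPHINX code, and the right-hand side `f` is an invented stiff test function:

```python
import numpy as np

def newton(res, u0, tol=1e-10, eps=1e-7):
    """Newton-Raphson with a finite-difference Jacobian (dense here;
    the abstract's code uses sparse storage and Krylov linear solves)."""
    u = u0.copy()
    for _ in range(50):
        r = res(u)
        if np.max(np.abs(r)) < tol:
            break
        n = u.size
        J = np.empty((n, n))
        for j in range(n):               # numerical Jacobian, column by column
            du = np.zeros(n)
            du[j] = eps
            J[:, j] = (res(u + du) - r) / eps
        u -= np.linalg.solve(J, r)
    return u

# Backward-Euler residual for du/dt = f(u): u_new - u_old - dt*f(u_new) = 0
f = lambda u: -50.0 * (u - np.cos(u))    # hypothetical stiff right-hand side
u_old, dt = np.ones(4), 0.1
res = lambda u: u - u_old - dt * f(u)
u_new = newton(res, u_old)
```

The payoff of solving the implicit system is the freedom to take time steps far larger than an explicit stability limit would allow, which is exactly the speedup the abstract reports for the gas-jet problem.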
CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. II. GRAY RADIATION HYDRODYNAMICS
Zhang, W.; Almgren, A.; Bell, J.; Howell, L.; Burrows, A.
2011-10-01
We describe the development of a flux-limited gray radiation solver for the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. The gray radiation solver is based on a mixed-frame formulation of radiation hydrodynamics. In our approach, the system is split into two parts, one part that couples the radiation and fluid in a hyperbolic subsystem, and another parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem is solved explicitly with a high-order Godunov scheme, whereas the parabolic part is solved implicitly with a first-order backward Euler method.
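A minimal sketch of the implicit parabolic half of such a split scheme: one backward-Euler step of 1-D diffusion, which stays stable even when the time step far exceeds the explicit limit (illustration only; CASTRO's radiation solver is multidimensional and flux-limited, and the grid size and coefficients here are invented):

```python
import numpy as np

def backward_euler_diffusion(u, d, dt, dx):
    """One implicit backward-Euler step of u_t = d * u_xx with
    zero-flux boundaries, via a dense tridiagonal solve."""
    n = u.size
    r = d * dt / dx**2
    A = np.eye(n) * (1 + 2 * r)
    A += np.diag(np.full(n - 1, -r), 1) + np.diag(np.full(n - 1, -r), -1)
    A[0, 0] = A[-1, -1] = 1 + r          # zero-flux (Neumann) boundary rows
    return np.linalg.solve(A, u)

u0 = np.zeros(50)
u0[25] = 1.0                              # initial spike of "radiation energy"
u1 = backward_euler_diffusion(u0, d=1.0, dt=10.0, dx=1.0)  # dt >> explicit limit
```

With zero-flux boundaries the discrete operator conserves the total, so the spike spreads but its integral is unchanged; an explicit update with this `dt` would blow up.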
Code Differentiation for Hydrodynamic Model Optimization
Henninger, R.J.; Maudlin, P.J.
1999-06-27
Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients, and compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint mode run time appreciably, which is a distinct advantage for this method. Obtaining "accurate" sensitivities for the jet problem parameters remains problematic.
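Forward-mode AD, one of the two modes compared above, can be sketched with dual numbers: each value carries its derivative through the computation, so the sensitivity comes out exactly rather than by finite differencing (a toy illustration, not the authors' AD tooling; `model` is an invented stand-in for a code-computed result):

```python
class Dual:
    """Minimal dual number: val + der * epsilon, epsilon**2 = 0."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule carried alongside the value
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(p):                    # hypothetical stand-in for a simulated result
    return 3.0 * p * p + 2.0 * p + 1.0

sens = model(Dual(2.0, 1.0)).der   # d(model)/dp at p = 2: 6*2 + 2 = 14
```

Forward mode costs one such pass per parameter, which is why the adjoint (reverse) mode, whose cost is nearly independent of the parameter count, wins for the 12-parameter fits described above.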
A new three-dimensional general-relativistic hydrodynamics code
NASA Astrophysics Data System (ADS)
Baiotti, L.; Hawke, I.; Montero, P. J.; Rezzolla, L.
We present a new three-dimensional general relativistic hydrodynamics code, the Whisky code. This code incorporates the expertise developed over the past years in the numerical solution of Einstein equations and of the hydrodynamics equations in a curved spacetime, and is the result of a collaboration of several European Institutes. We here discuss the ability of the code to carry out long-term accurate evolutions of the linear and nonlinear dynamics of isolated relativistic stars.
Superresonant instability of a compressible hydrodynamic vortex
NASA Astrophysics Data System (ADS)
Oliveira, Leandro A.; Cardoso, Vitor; Crispino, Luís C. B.
2016-06-01
We show that a purely circulating and compressible system, in an adiabatic regime of acoustic propagation, presents superresonant instabilities. To show the existence of these instabilities, we compute the quasinormal mode frequencies of this system numerically using two different frequency domain methods.
CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. III. MULTIGROUP RADIATION HYDRODYNAMICS
Zhang, W.; Almgren, A.; Bell, J.; Howell, L.; Burrows, A.; Dolence, J.
2013-01-15
We present a formulation for multigroup radiation hydrodynamics that is correct to order O(v/c) using the comoving-frame approach and the flux-limited diffusion approximation. We describe a numerical algorithm for solving the system, implemented in the compressible astrophysics code, CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. In our multigroup radiation solver, the system is split into three parts: one part that couples the radiation and fluid in a hyperbolic subsystem, another part that advects the radiation in frequency space, and a parabolic part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem and the frequency space advection are solved explicitly with high-order Godunov schemes, whereas the parabolic part is solved implicitly with a first-order backward Euler method. Our multigroup radiation solver works for both neutrino and photon radiation.
Subband Coding Methods for Seismic Data Compression
NASA Technical Reports Server (NTRS)
Kiely, A.; Pollara, F.
1995-01-01
This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
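The progressive-transmission idea above rests on subband decomposition: a coarse band gives the first low-resolution view, and detail bands refine it on request. A one-level Haar split illustrates this (a sketch of the general principle, not the paper's filter bank):

```python
import numpy as np

def haar_split(x):
    """One-level Haar analysis: coarse (low-pass) and detail (high-pass) bands."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_merge(lo, hi):
    """Haar synthesis: perfect reconstruction from the two subbands."""
    x = np.empty(lo.size * 2)
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

sig = np.sin(np.linspace(0, 3, 64))        # stand-in for a seismic waveform
lo, hi = haar_split(sig)                   # transmit lo first, hi on demand
err = np.max(np.abs(haar_merge(lo, hi) - sig))
```

Transmitting `lo` alone costs half the samples and already conveys the waveform's shape; sending `hi` later restores the signal exactly, which is the refinement step the abstract describes.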
CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. I. HYDRODYNAMICS AND SELF-GRAVITY
Almgren, A. S.; Beckner, V. E.; Bell, J. B.; Day, M. S.; Lijewski, M. J.; Nonaka, A.; Howell, L. H.; Singer, M.; Joggerst, C. C.; Zingale, M.
2010-06-01
We present a new code, CASTRO, that solves the multicomponent compressible hydrodynamic equations for astrophysical flows including self-gravity, nuclear reactions, and radiation. CASTRO uses an Eulerian grid and incorporates adaptive mesh refinement (AMR). Our approach to AMR uses a nested hierarchy of logically rectangular grids with simultaneous refinement in both space and time. The radiation component of CASTRO will be described in detail in the next paper, Part II, of this series.
Compressible Lagrangian hydrodynamics without Lagrangian cells
NASA Astrophysics Data System (ADS)
Clark, Robert A.
The partial differential equations [2.1, 2.2, and 2.3], along with the equation of state 2.4, which describe the time evolution of compressible fluid flow, can be solved without the use of a Lagrangian mesh. The method follows embedded fluid points and uses finite difference approximations to ∇P and ∇ · u to update ρ, u, and e. We have demonstrated that the method can accurately calculate highly distorted flows without difficulty. The finite difference approximations are not unique; improvements may be found in the near future. The neighbor selection is not unique, but the one being used at present appears to do an excellent job. The method could be directly extended to three dimensions. One drawback to the method is the failure to explicitly conserve mass, momentum and energy. In fact, at any given time, the mass is not defined. We must perform an auxiliary calculation by integrating the density field over space to obtain mass, energy and momentum. However, in all cases where we have done this, we have found the drift in these quantities to be no more than a few percent.
Image coding compression based on DCT
NASA Astrophysics Data System (ADS)
Feng, Fei; Liu, Peixue; Jiang, Baohua
2012-04-01
With the development of computer science and communications, digital image processing is advancing rapidly. High-quality images are popular, but they consume more storage space and more bandwidth when transferred over the Internet. It is therefore necessary to study image compression technology. At present, many image compression algorithms are applied in networks, and image compression standards have been established. This dissertation presents an analysis of the DCT. First, the principle of the DCT is presented, motivated by the wide use of this technology for image compression. Second, a deeper understanding of the DCT is developed using Matlab, covering the process of DCT-based image compression and an analysis of Huffman coding. Third, DCT-based image compression is demonstrated in Matlab, with an analysis of the quality of the compressed picture. The DCT is certainly not the only algorithm for image compression, and more algorithms yielding high-quality compressed images can be expected. Image compression technology is likely to be widely used in networks and communications in the future.
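The core DCT compression step discussed above, transform an 8x8 block and discard high-frequency coefficients, can be sketched in NumPy in place of Matlab (an illustration of the principle; the block contents and the number of retained coefficients are arbitrary):

```python
import numpy as np

# Build the orthonormal 8x8 DCT-II matrix explicitly: C @ C.T == I
N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0] /= np.sqrt(2.0)                       # DC row scaling for orthonormality

block = np.arange(64, dtype=float).reshape(N, N)   # stand-in image block
coeff = C @ block @ C.T                    # 2-D DCT (separable: rows then cols)
coeff[4:, :] = 0.0                         # discard high vertical frequencies
coeff[:, 4:] = 0.0                         # discard high horizontal frequencies
approx = C.T @ coeff @ C                   # inverse 2-D DCT of the kept 4x4
```

Keeping 16 of 64 coefficients is a 4:1 reduction before entropy coding; the retained low frequencies carry most of the block's energy, which is why the reconstruction `approx` stays close to `block` for smooth content.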
Pulse compression using binary phase codes
NASA Technical Reports Server (NTRS)
Farley, D. T.
1983-01-01
In most MST applications pulsed radars are peak power limited and have excess average power capacity. Short pulses are required for good range resolution, but the problem of range ambiguity (signals received simultaneously from more than one altitude) sets a minimum limit on the interpulse period (IPP). Pulse compression is a technique which allows more of the transmitter average power capacity to be used without sacrificing range resolution. As the name implies, a pulse of power P and duration T is in a certain sense converted into one of power nP and duration T/n. In the frequency domain, compression involves manipulating the phases of the different frequency components of the pulse. One way to compress a pulse is via phase coding, especially binary phase coding, a technique which is particularly amenable to digital processing techniques. This method, which is used extensively in radar probing of the atmosphere and ionosphere, is discussed. Barker codes, complementary and quasi-complementary code sets, and cyclic codes are addressed.
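The Barker codes named above can be demonstrated with a matched filter: correlating a noiseless echo of the length-13 Barker code against the code itself compresses the pulse to a peak of 13 with sidelobes of magnitude at most 1 (an illustrative sketch; the zero padding stands in for empty range gates):

```python
import numpy as np

# Length-13 Barker code: + + + + + - - + + - + - +
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Noiseless received echo: the coded pulse embedded in empty range gates
echo = np.concatenate([np.zeros(5), barker13, np.zeros(5)])

# Matched filter = correlation with the transmitted code
compressed = np.correlate(echo, barker13, mode="same")

peak = compressed.max()                      # 13: the n-fold compression gain
sidelobe = np.sort(np.abs(compressed))[-2]   # 1: the Barker sidelobe bound
```

The 13:1 peak-to-sidelobe structure is exactly the "power nP, duration T/n" trade described in the abstract: a 13-baud coded pulse delivers the range resolution of a single baud.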
Compression of polyphase codes with Doppler shift
NASA Astrophysics Data System (ADS)
Wirth, W. D.
It is shown that pulse compression with sufficient Doppler tolerance may be achieved with polyphase codes derived from linear frequency modulation (LFM) and nonlinear frequency modulation (NLFM). Low sidelobes in range and Doppler are required especially for the radar search function. These may be achieved by an LFM-derived phase code together with Hamming weighting, or by applying a PNL polyphase code derived from NLFM. For a discrete and known Doppler frequency, an expanded and mismatched reference vector makes a sidelobe reduction possible; the compression is then achieved without a loss in resolution. The expanded reference can be set up either to give zero sidelobes in an interval around the signal peak or to perform a least-squares minimization over all range elements. This version may be useful for target tracking.
High compression image and image sequence coding
NASA Technical Reports Server (NTRS)
Kunt, Murat
1989-01-01
The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.
AMRA: An Adaptive Mesh Refinement hydrodynamic code for astrophysics
NASA Astrophysics Data System (ADS)
Plewa, T.; Müller, E.
2001-08-01
Implementation details and test cases of a newly developed hydrodynamic code, amra, are presented. The numerical scheme exploits the adaptive mesh refinement technique coupled to modern high-resolution schemes which are suitable for relativistic and non-relativistic flows. Various physical processes are incorporated using the operator splitting approach, and include self-gravity, nuclear burning, physical viscosity, implicit and explicit schemes for conductive transport, simplified photoionization, and radiative losses from an optically thin plasma. Several aspects related to the accuracy and stability of the scheme are discussed in the context of hydrodynamic and astrophysical flows.
Coding Strategies and Implementations of Compressive Sensing
NASA Astrophysics Data System (ADS)
Tsai, Tsung-Han
This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager; increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity beyond conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or
A new hydrodynamics code for Type Ia supernovae
NASA Astrophysics Data System (ADS)
Leung, S.-C.; Chu, M.-C.; Lin, L.-M.
2015-12-01
A two-dimensional hydrodynamics code for Type Ia supernova (SNIa) simulations is presented. The code includes a fifth-order shock-capturing WENO scheme, a detailed nuclear reaction network, a flame-capturing scheme, and sub-grid turbulence. For post-processing, we have developed a tracer particle scheme to record the thermodynamical history of the fluid elements. We also present a one-dimensional radiative transfer code for computing observational signals. The code solves the Lagrangian hydrodynamics and moment-integrated radiative transfer equations. A local ionization scheme and composition-dependent opacity are included. Various verification tests are presented, including standard benchmark tests in one and two dimensions. SNIa models using the pure turbulent deflagration model and the delayed-detonation transition model are studied. The results are consistent with those in the literature. We compute the detailed chemical evolution using the tracer particles' histories, and we construct corresponding bolometric light curves from the hydrodynamics results. We also use a GPU to speed up the computation of some highly repetitive subroutines, achieving an acceleration of 50 times for some subroutines and a factor of 6 in the global run time.
Adding kinetics and hydrodynamics to the CHEETAH thermochemical code
Fried, L.E., Howard, W.M., Souers, P.C.
1997-01-15
In FY96 we released CHEETAH 1.40, which made extensive improvements on the stability and user friendliness of the code. CHEETAH now has over 175 users in government, academia, and industry. Efforts have also been focused on adding new advanced features to CHEETAH 2.0, which is scheduled for release in FY97. We have added a new chemical kinetics capability to CHEETAH. In the past, CHEETAH assumed complete thermodynamic equilibrium and independence of time. The addition of a chemical kinetic framework will allow for modeling of time-dependent phenomena, such as partial combustion and detonation in composite explosives with large reaction zones. We have implemented a Wood-Kirkwood detonation framework in CHEETAH, which allows for the treatment of nonideal detonations and explosive failure. A second major effort in the project this year has been linking CHEETAH to hydrodynamic codes to yield an improved HE product equation of state. We have linked CHEETAH to 1- and 2-D hydrodynamic codes, and have compared the code to experimental data. 15 refs., 13 figs., 1 tab.
Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions
NASA Astrophysics Data System (ADS)
Kwak, Kyujin; Yang, Seungwon
2015-08-01
The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which remain unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines have drawn attention not only from astronomers but also from experimental and theoretical chemists. Theoretical calculations of the formation of these astrochemical molecules have been carried out, providing reaction rates for some important molecules, and some theoretical predictions have been measured in laboratories. The reaction rates for astronomically important molecules are now collected in databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code can trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects the spatial distribution of some specific molecules. We present the development procedure of this code and some test problems used to verify and validate it.
CHOLLA: A NEW MASSIVELY PARALLEL HYDRODYNAMICS CODE FOR ASTROPHYSICAL SIMULATION
Schneider, Evan E.; Robertson, Brant E.
2015-04-15
We present Computational Hydrodynamics On ParaLLel Architectures (Cholla), a new three-dimensional hydrodynamics code that harnesses the power of graphics processing units (GPUs) to accelerate astrophysical simulations. Cholla models the Euler equations on a static mesh using state-of-the-art techniques, including the unsplit Corner Transport Upwind algorithm, a variety of exact and approximate Riemann solvers, and multiple spatial reconstruction techniques including the piecewise parabolic method (PPM). Using GPUs, Cholla evolves the fluid properties of thousands of cells simultaneously and can update over 10 million cells per GPU-second while using an exact Riemann solver and PPM reconstruction. Owing to the massively parallel architecture of GPUs and the design of the Cholla code, astrophysical simulations with physically interesting grid resolutions (≳256³) can easily be computed on a single device. We use the Message Passing Interface library to extend calculations onto multiple devices and demonstrate nearly ideal scaling beyond 64 GPUs. A suite of test problems highlights the physical accuracy of our modeling and provides a useful comparison to other codes. We then use Cholla to simulate the interaction of a shock wave with a gas cloud in the interstellar medium, showing that the evolution of the cloud is highly dependent on its density structure. We reconcile the computed mixing time of a turbulent cloud with a realistic density distribution destroyed by a strong shock with the existing analytic theory for spherical cloud destruction by describing the system in terms of its median gas density.
External-Compression Supersonic Inlet Design Code
NASA Technical Reports Server (NTRS)
Slater, John W.
2011-01-01
A computer code named SUPIN has been developed to perform aerodynamic design and analysis of external-compression, supersonic inlets. The baseline set of inlets includes axisymmetric pitot, two-dimensional single-duct, axisymmetric outward-turning, and two-dimensional bifurcated-duct inlets. The aerodynamic methods are based on low-fidelity analytical and numerical procedures. The geometric methods are based on planar geometry elements. SUPIN has three modes of operation: 1) generate the inlet geometry from an explicit set of geometry information, 2) size and design the inlet geometry and analyze the aerodynamic performance, and 3) compute the aerodynamic performance of a specified inlet geometry. The aerodynamic performance quantities include inlet flow rates, total pressure recovery, and drag. The geometry output from SUPIN includes inlet dimensions, cross-sectional areas, coordinates of planar profiles, and surface grids suitable for input to grid generators for analysis by computational fluid dynamics (CFD) methods. The input data file for SUPIN and the output file from SUPIN are text (ASCII) files. The surface grid files are output as formatted Plot3D or stereolithography (STL) files. SUPIN executes in batch mode and is available as a Microsoft Windows executable and Fortran95 source code with a makefile for Linux.
RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code
Zhang, Wei-Qun; MacFadyen, Andrew I.
2005-06-06
The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
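The third-order TVD Runge-Kutta scheme named above (the Shu-Osher form) is compact enough to sketch directly; here it is applied to a scalar ODE for illustration rather than to the SRHD equations:

```python
import numpy as np

def tvd_rk3(u, dt, L):
    """One step of the Shu-Osher third-order TVD Runge-Kutta scheme:
    each stage is a convex combination of forward-Euler steps."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

# Toy right-hand side du/dt = -u, exact solution exp(-t)
L = lambda u: -u
u, dt = 1.0, 0.01
for _ in range(100):                 # integrate to t = 1
    u = tvd_rk3(u, dt, L)
err = abs(u - np.exp(-1.0))
```

Writing each stage as a convex combination of forward-Euler updates is what gives the scheme its TVD (strong-stability-preserving) property when paired with a TVD spatial operator such as WENO.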
A cosmological hydrodynamic code based on the piecewise parabolic method
NASA Astrophysics Data System (ADS)
Gheller, Claudio; Pantano, Ornella; Moscardini, Lauro
1998-04-01
We present a hydrodynamical code for cosmological simulations which uses the piecewise parabolic method (PPM) to follow the dynamics of the gas component and an N-body particle-mesh algorithm for the evolution of the collisionless component. The gravitational interaction between the two components is regulated by the Poisson equation, which is solved by a standard fast Fourier transform (FFT) procedure. In order to simulate cosmological flows, we have introduced several modifications to the original PPM scheme, which we describe in detail. Various tests of the code are presented, including adiabatic expansion, single and multiple pancake formation, and three-dimensional cosmological simulations with initial conditions based on the cold dark matter scenario.
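The FFT-based Poisson solve mentioned above can be sketched in one dimension: in Fourier space the periodic Poisson equation ∇²φ = ρ reduces to dividing each mode by -k² (an illustrative sketch; the code's actual solver is three-dimensional and the source term here is invented):

```python
import numpy as np

n, Lbox = 128, 1.0
x = np.linspace(0, Lbox, n, endpoint=False)
rho = np.sin(2 * np.pi * x)                  # zero-mean source term

# Angular wavenumbers for the periodic grid
k = 2 * np.pi * np.fft.fftfreq(n, d=Lbox / n)

rho_hat = np.fft.fft(rho)
phi_hat = np.zeros_like(rho_hat)
phi_hat[1:] = -rho_hat[1:] / k[1:]**2        # phi_hat = -rho_hat / k^2
# k = 0 mode left at zero: the potential is defined up to a constant

phi = np.real(np.fft.ifft(phi_hat))
exact = -np.sin(2 * np.pi * x) / (2 * np.pi)**2
err = np.max(np.abs(phi - exact))
```

For a single Fourier mode the spectral solve is exact to roundoff, which is why FFT methods are the standard choice for the periodic boxes used in cosmological simulations.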
Coding For Compression Of Low-Entropy Data
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu
1994-01-01
Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
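The claim that the average code length can drop below one bit per symbol — Huffman's floor — is easy to demonstrate for a zero-dominated binary source: run-length code the zero runs with a Rice code (a Golomb code with power-of-two parameter). A hedged sketch that only counts bit costs, with an assumed parameter m = 16; it is not the circuit method of the abstract:

```python
import random

def rice_bits(run, m=16):
    """Bit cost of a Rice code for one run length: unary quotient + 4 remainder bits."""
    return run // m + 1 + 4

def rl_rice_cost(bits, m=16):
    """Total coded bits for a 0/1 sequence, run-length coding the zero runs."""
    total, run = 0, 0
    for b in bits:
        if b == 0:
            run += 1
        else:
            total += rice_bits(run, m)   # a 1 terminates the current zero run
            run = 0
    total += rice_bits(run, m)           # flush the final (possibly empty) run
    return total

rng = random.Random(0)
n = 100_000
data = [1 if rng.random() < 0.05 else 0 for _ in range(n)]   # ~5% ones
bps = rl_rice_cost(data) / n            # average bits per source symbol
```

For this source the binary entropy is about 0.29 bit/symbol, and the run-length scheme lands well under the 1 bit/symbol that any symbol-by-symbol Huffman code must spend.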
CASTRO: Multi-dimensional Eulerian AMR Radiation-hydrodynamics Code
NASA Astrophysics Data System (ADS)
Center for Computational Sciences and Engineering (Berkeley); Howell, Louis; Singer, Mike
2011-05-01
CASTRO is a multi-dimensional Eulerian AMR radiation-hydrodynamics code that includes stellar equations of state, nuclear reaction networks, and self-gravity. Initial target applications for CASTRO include Type Ia and Type II supernovae. CASTRO supports calculations in 1-d, 2-d and 3-d Cartesian coordinates, as well as 1-d spherical and 2-d cylindrical (r-z) coordinate systems. Time integration of the hydrodynamics equations is based on an unsplit version of the piecewise parabolic method (PPM) with new limiters that avoid reducing the accuracy of the scheme at smooth extrema. CASTRO can follow an arbitrary number of isotopes or elements. The atomic weights and amounts of these elements are used to calculate the mean molecular weight of the gas required by the equation of state. CASTRO supports several different approaches to solving for self-gravity. The most general is a full Poisson solve for the gravitational potential. CASTRO also supports a monopole approximation for gravity, and a constant gravity option is also available. The CASTRO software is written in C++ and Fortran, and is based on the BoxLib software framework developed by CCSE.
Bit-wise arithmetic coding for data compression
NASA Technical Reports Server (NTRS)
Kiely, A. B.
1994-01-01
This article examines the problem of compressing a uniformly quantized independent and identically distributed (IID) source. We present a new compression technique, bit-wise arithmetic coding, that assigns fixed-length codewords to the quantizer output and uses arithmetic coding to compress the codewords, treating the codeword bits as independent. We examine the performance of this method and evaluate the overhead required when used block-adaptively. Simulation results are presented for Gaussian and Laplacian sources. This new technique could be used as the entropy coder in a transform or subband coding system.
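The core of any arithmetic coder is interval narrowing: each input bit shrinks the current interval in proportion to its probability, and treating the codeword bits as independent (as the abstract does) means using one fixed per-bit probability throughout. A minimal exact sketch using rational arithmetic — illustrative only, not the article's coder:

```python
from fractions import Fraction

def ac_encode(bits, p1):
    """Narrow [low, low + width) once per bit; any point of the final interval decodes."""
    low, width = Fraction(0), Fraction(1)
    for b in bits:
        if b == 0:
            width *= 1 - p1
        else:
            low += width * (1 - p1)
            width *= p1
    return low + width / 2          # midpoint of the final interval

def ac_decode(x, p1, n):
    """Replay the same interval arithmetic, reading off which side x falls on."""
    low, width = Fraction(0), Fraction(1)
    out = []
    for _ in range(n):
        split = low + width * (1 - p1)
        if x < split:
            out.append(0)
            width *= 1 - p1
        else:
            out.append(1)
            low = split
            width *= p1
    return out

bits = [0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
code = ac_encode(bits, Fraction(1, 4))
decoded = ac_decode(code, Fraction(1, 4), len(bits))
```

The final interval width equals the product of the modeled bit probabilities, so the ideal code length is -log2(width); a production coder emits that many bits incrementally with integer arithmetic instead of exact rationals.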
Radiative Transport for a Smoothed Particle Hydrodynamic Code
NASA Astrophysics Data System (ADS)
Lang, Bernd; Kessel-Deynet, Olaf; Burkert, Andreas
One crude approximation to describe the effect of radiative transport in SPH simulations is to introduce a density-dependent polytropic index in the equation of state (Matthew R. Bate 1998), which is larger than one if the medium becomes optically thick. Doing this fixes the system to one particular density-temperature dependence. But in principle the system should have the possibility to realize a variety of different density-temperature dependencies if radiative transport is involved and arbitrary heating and cooling functions can be used. We combine the advantages of the SPH code with an algorithm describing flux-limited diffusive radiative transport to develop an RHD code. Flux-limited diffusion involves the Rosseland means of the absorption and scattering coefficients. To calculate these coefficients we use the model of Preibisch et al. 1993. This restricts our simulations to low temperatures (T <= 1000 K) and high densities (ρ >= 10³ cm⁻³) but, on the other hand, keeps the code as simple and as fast as possible. For a given energy-density distribution, the radiation field evolves towards the equilibrium solution on a time-scale much smaller than the typical dynamical time-step for the hydrodynamic equations, so the RT equations have to be solved implicitly. To do this we use the favorable convergence properties of the Successive Over-Relaxation (SOR) method. The focus of the simulations will then be on the prestellar phase, where molecular cloud cores become optically thick. The central temperature is still low (T = 10…500 K) and thus the ionization and dissociation degree is low and nearly constant.
New Methods for Lossless Image Compression Using Arithmetic Coding.
ERIC Educational Resources Information Center
Howard, Paul G.; Vitter, Jeffrey Scott
1992-01-01
Identifies four components of a good predictive lossless image compression method: (1) pixel sequence, (2) image modeling and prediction, (3) error modeling, and (4) error coding. Highlights include Laplace distribution and a comparison of the multilevel progressive method for image coding with the prediction by partial precision matching method.…
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1976-01-01
A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
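The syndrome idea becomes concrete with a Hamming(7,4) parity-check matrix: a sparse 7-bit "error pattern" compresses to its 3-bit syndrome, and any pattern of weight at most one is recovered exactly. A small sketch of this principle (not Ancheta's universal scheme):

```python
# Columns of the parity-check matrix H are the binary representations of 1..7
H = [[(col >> row) & 1 for col in range(1, 8)] for row in range(3)]

def syndrome(e):
    """Compress a 7-bit pattern e to its 3-bit syndrome s = H e (mod 2)."""
    return [sum(h * b for h, b in zip(row, e)) % 2 for row in H]

def reconstruct(s):
    """Recover the unique pattern of weight <= 1 whose syndrome is s."""
    pos = s[0] + 2 * s[1] + 4 * s[2]   # the syndrome spells out the 1's position
    e = [0] * 7
    if pos:
        e[pos - 1] = 1
    return e
```

Seven source digits become three compressed digits, and the compression is lossless exactly when the source is sparse enough to look like a correctable error pattern — the regime the abstract's entropy argument makes precise.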
Streamlined Genome Sequence Compression using Distributed Source Coding
Wang, Shuang; Jiang, Xiaoqian; Chen, Feng; Cui, Lijuan; Cheng, Samuel
2014-01-01
We aim at developing a streamlined genome sequence compression algorithm to support alternative miniaturized sequencing devices, which have limited communication, storage, and computation power. Existing techniques that require heavy client (encoder side) cannot be applied. To tackle this challenge, we carefully examined distributed source coding theory and developed a customized reference-based genome compression protocol to meet the low-complexity need at the client side. Based on the variation between source and reference, our protocol will pick adaptively either syndrome coding or hash coding to compress subsequences of changing code length. Our experimental results showed promising performance of the proposed method when compared with the state-of-the-art algorithm (GRS). PMID:25520552
Modeling Fluid Instabilities in Inertial Confinement Fusion Hydrodynamics Codes
NASA Astrophysics Data System (ADS)
Zalesak, Steven
2004-11-01
When attempting to numerically model a physical phenomenon of any kind, we typically formulate the numerical requirements in terms of the range of spatial and temporal scales of interest. We then construct numerical software that adequately resolves those scales in each of the spatial and temporal dimensions. This software may use adaptive mesh refinement or other techniques to adequately resolve those scales of interest, and may use front-capturing algorithms or other techniques to avoid having to resolve scales that are not of interest to us. Knowing what constitutes the scales of interest is sometimes a difficult question. Harder still is knowing what constitutes adequate resolution. For many physical phenomena, adequate resolution may be obtained, for example, by simply demanding that the spatial and temporal derivatives of all scales of interest have errors less than some specified tolerance. But for other phenomena, in particular those in which physical instabilities are active, one must be much more precise in the specification of adequate resolution. In such situations one must ask detailed questions about the nature of the numerical errors, not just their size. The problem we have in mind is that of accurately modeling the evolution of small amplitude perturbations to a time-dependent flow, where the unperturbed flow itself exhibits large amplitude temporal and spatial variations. Any errors that we make in numerically modeling the unperturbed flow, if they have a projection onto the space of the perturbations of interest, can easily compromise the accuracy of those perturbations, even if the errors are small in terms of the unperturbed solution. Here we will discuss the progress that we have made over the past year in attempting to improve the ability of our radiation hydrodynamics code FASTRAD3D to accurately model the evolution of small-amplitude perturbations to an imploding ICF pellet, which is subject to both Richtmyer-Meshkov and Rayleigh-Taylor instabilities.
Sparsity and Compressed Coding in Sensory Systems
Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David
2014-01-01
Considering that many natural stimuli are sparse, can a sensory system evolve to take advantage of this sparsity? We explore this question and show that significant downstream reductions in the numbers of neurons transmitting stimuli observed in early sensory pathways might be a consequence of this sparsity. First, we model an early sensory pathway using an idealized neuronal network comprised of receptors and downstream sensory neurons. Then, by revealing a linear structure intrinsic to neuronal network dynamics, our work points to a potential mechanism for transmitting sparse stimuli, related to compressed-sensing (CS) type data acquisition. Through simulation, we examine the characteristics of networks that are optimal in sparsity encoding, and the impact of localized receptive fields beyond conventional CS theory. The results of this work suggest a new network framework of signal sparsity, freeing the notion from any dependence on specific component-space representations. We expect our CS network mechanism to provide guidance for studying sparse stimulus transmission along realistic sensory pathways as well as engineering network designs that utilize sparsity encoding. PMID:25144745
Vitruk, S.G.; Korsun, A.S.; Ushakov, P.A.
1995-09-01
The multilevel mathematical model of neutron thermal-hydrodynamic processes in a passive safety core without assembly duct walls, and the corresponding computer code SKETCH, consisting of the thermal-hydrodynamic module THEHYCO-3DT and a neutron module, are described. A new effective discretization technique for the energy, momentum and mass conservation equations is applied in hexagonal-z geometry. The adequacy and applicability of the model are presented. The results of the calculations show that the model and the computer code could be used in the conceptual design of advanced reactors.
GPUPEGAS: A NEW GPU-ACCELERATED HYDRODYNAMIC CODE FOR NUMERICAL SIMULATIONS OF INTERACTING GALAXIES
Kulikov, Igor
2014-09-01
In this paper, a new scalable hydrodynamic code, GPUPEGAS (GPU-accelerated Performance Gas Astrophysical Simulation), for the simulation of interacting galaxies is proposed. The details of a parallel numerical method co-design are described. A speed-up of 55 times was obtained within a single GPU accelerator. The use of 60 GPU accelerators resulted in 96% parallel efficiency. A collisionless hydrodynamic approach has been used for modeling of stars and dark matter. The scalability of the GPUPEGAS code is shown.
Development of 1D Liner Compression Code for IDL
NASA Astrophysics Data System (ADS)
Shimazu, Akihisa; Slough, John; Pancotti, Anthony
2015-11-01
A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table-lookup approach. A commercial low-frequency electromagnetic field solver, ANSYS Maxwell 3D, is used to solve the magnetic field profile for a static liner at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with the results from a commercial explicit dynamics solver, ANSYS Explicit Dynamics, and a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.
Compressed data organization for high throughput parallel entropy coding
NASA Astrophysics Data System (ADS)
Said, Amir; Mahfoodh, Abo-Talib; Yea, Sehoon
2015-09-01
The difficulty of parallelizing entropy coding is increasingly limiting the data throughputs achievable in media compression. In this work we analyze the fundamental limitations, using finite-state-machine models to identify the best manner of separating tasks that can be processed independently, while minimizing compression losses. This analysis confirms previous works showing that effective parallelization is feasible only if the compressed data is organized in a proper way, which is quite different from conventional formats. The proposed new formats exploit the fact that optimal compression is not affected by the arrangement of coded bits, but go further in exploiting the decreasing cost of data processing and memory. Additional advantages include the ability to use, within this framework, increasingly more complex data modeling techniques, and the freedom to mix different types of coding. We confirm the parallelization effectiveness using coding simulations that run on multi-core processors, show how throughput scales with the number of cores, and analyze the additional bit-rate overhead.
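The organizational idea — compressed data split into records that can be decoded independently — can be mocked up with off-the-shelf pieces: compress each block separately, prefix each payload with its length, and decompress the payloads concurrently. A toy illustration using zlib and threads, not the authors' finite-state-machine formats:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def pack(blocks):
    """Compress each block independently and store (length, payload) records."""
    out = bytearray()
    for p in (zlib.compress(b) for b in blocks):
        out += len(p).to_bytes(4, "big") + p
    return bytes(out)

def unpack_parallel(data, workers=4):
    """Split the records, then decompress them concurrently; order is preserved."""
    payloads, i = [], 0
    while i < len(data):
        n = int.from_bytes(data[i:i + 4], "big")
        payloads.append(data[i + 4:i + 4 + n])
        i += 4 + n
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(zlib.decompress, payloads))

blocks = [bytes([i % 7]) * 1000 for i in range(8)]
restored = unpack_parallel(pack(blocks))
```

The per-record length prefixes are exactly the kind of format overhead the paper's analysis trades against parallel throughput; its contribution is organizing the coded bits so that such boundaries cost as little compression as possible.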
A Two-Dimensional Compressible Gas Flow Code
1995-03-17
F2D is a general-purpose, two-dimensional, fully compressible thermal-fluids code that models most of the phenomena found in situations of coupled fluid flow and heat transfer. The code solves the momentum, continuity, gas-energy, and structure-energy equations using a predictor-corrector solution algorithm. The corrector step includes a Poisson pressure equation. The finite difference form of the equations is presented along with a description of input and output. Several example problems are included that demonstrate the applicability of the code in problems ranging from free fluid flow to shock tubes and flow in heated porous media.
NASA Technical Reports Server (NTRS)
Rost, Martin C.; Sayood, Khalid
1991-01-01
A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes a given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
Image compression using a novel edge-based coding algorithm
NASA Astrophysics Data System (ADS)
Keissarian, Farhad; Daemi, Mohammad F.
2001-08-01
In this paper, we present a novel edge-based coding algorithm for image compression. The proposed coding scheme is the predictive version of the original algorithm, which we presented earlier in the literature. In the original version, an image is block coded according to the level of visual activity of individual blocks, following a novel edge-oriented classification stage. Each block is then represented by a set of parameters associated with the pattern appearing inside the block. The use of these parameters at the receiver reduces the cost of reconstruction significantly. In the present study, we extend and improve the performance of the existing technique by exploiting the expected spatial redundancy across neighboring blocks. Satisfactory coded images at bit rates competitive with other block-based coding techniques have been obtained.
A seismic data compression system using subband coding
NASA Technical Reports Server (NTRS)
Kiely, A. B.; Pollara, F.
1995-01-01
This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
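The decorrelation stage of such a subband coder can be sketched with the simplest filter bank, a one-level Haar split into averages and differences; quantizing the (mostly small) difference band is where the controlled distortion of the second stage would enter. A minimal sketch, illustrative rather than the article's filter bank:

```python
def haar_analysis(x):
    """One-level Haar split: low band = pair averages, high band = pair differences."""
    lo = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    hi = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return lo, hi

def haar_synthesis(lo, hi):
    """Invert the split exactly: each pair is (average + diff, average - diff)."""
    out = []
    for l, h in zip(lo, hi):
        out += [l + h, l - h]
    return out

x = [4, 2, 7, 1, 3, 3, 8, 0]        # even-length sample "trace"
lo, hi = haar_analysis(x)
rec = haar_synthesis(lo, hi)
```

Without quantization the reconstruction is exact; in a coder, the high band would be quantized coarsely and then entropy coded, mirroring the article's three-stage pipeline.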
NASA Astrophysics Data System (ADS)
Sandalski, Stou
Smooth particle hydrodynamics is an efficient method for modeling the dynamics of fluids. It is commonly used to simulate astrophysical processes such as binary mergers. We present a newly developed GPU accelerated smooth particle hydrodynamics code for astrophysical simulations. The code is named
Quantum error correcting codes from the compression formalism
NASA Astrophysics Data System (ADS)
Choi, Man-Duen; Kribs, David W.; Życzkowski, Karol
2006-08-01
We solve the fundamental quantum error correction problem for bi-unitary channels on two-qubit Hilbert space. By solving an algebraic compression problem, we construct qubit codes for such channels on arbitrary dimension Hilbert space, and identify correctable codes for Pauli-error models not obtained by the stabilizer formalism. This is accomplished through an application of a new tool for error correction in quantum computing called the "higher-rank numerical range". We describe its basic properties and discuss possible further applications.
CoCoNuT: General relativistic hydrodynamics code with dynamical space-time evolution
NASA Astrophysics Data System (ADS)
Dimmelmeier, Harald; Novak, Jérôme; Cerdá-Durán, Pablo
2012-02-01
CoCoNuT is a general relativistic hydrodynamics code with dynamical space-time evolution. The main aim of this numerical code is the study of several astrophysical scenarios in which general relativity can play an important role, namely the collapse of rapidly rotating stellar cores and the evolution of isolated neutron stars. The code has two flavors: CoCoA, the axisymmetric (2D) magnetized version, and CoCoNuT, the 3D non-magnetized version.
A compressible Navier-Stokes code for turbulent flow modeling
NASA Technical Reports Server (NTRS)
Coakley, T. J.
1984-01-01
An implicit, finite volume code for solving two dimensional, compressible turbulent flows is described. Second order upwind differencing of the inviscid terms of the equations is used to enhance stability and accuracy. A diagonal form of the implicit algorithm is used to improve efficiency. Several zero and two equation turbulence models are incorporated to study their impact on overall flow modeling accuracy. Applications to external and internal flows are discussed.
Introduction and guide to LLNL's relativistic 3-D nuclear hydrodynamics code
Zingman, J.A.; McAbee, T.L.; Alonso, C.T.; Wilson, J.R.
1987-11-01
We have constructed a relativistic hydrodynamic model to investigate Bevalac and higher energy, heavy-ion collisions. The basis of the model is a finite-difference solution to covariant hydrodynamics, which will be described in the rest of this paper. This paper also contains: a brief review of the equations and numerical methods we have employed in the solution to the hydrodynamic equations, a detailed description of several of the most important subroutines, and a numerical test on the code. 30 refs., 8 figs., 1 tab.
NASA Astrophysics Data System (ADS)
Lind, S. J.; Stansby, P. K.; Rogers, B. D.
2016-03-01
A new two-phase incompressible-compressible Smoothed Particle Hydrodynamics (SPH) method has been developed where the interface is discontinuous in density. This is applied to water-air problems with a large density difference. The incompressible phase requires surface pressure from the compressible phase and the compressible phase requires surface velocity from the incompressible phase. Compressible SPH is used for the air phase (with the isothermal stiffened ideal gas equation of state for low Mach numbers) and divergence-free (projection based) incompressible SPH is used for the water phase, with the addition of Fickian shifting to produce sufficiently homogeneous particle distributions to enable stable, accurate, converged solutions without noise in the pressure field. Shifting is a purely numerical particle regularisation device. The interface remains a true material discontinuity at a high density ratio with continuous pressure and velocity at the interface. This approach with the physics of compressibility and incompressibility represented is novel within SPH and is validated against semi-analytical results for a two-phase elongating and oscillating water drop, analytical results for low amplitude inviscid standing waves, the Kelvin-Helmholtz instability, and a dam break problem with high interface distortion and impact on a vertical wall where experimental and other numerical results are available.
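The SPH building block underlying both phases is the kernel-weighted sum: for example, density is estimated as ρ_i = Σ_j m_j W(x_i - x_j, h). A one-dimensional sketch with the standard cubic-spline kernel — illustrative only, with an assumed smoothing length h = 1.2 dx; the paper's two-phase scheme is far richer:

```python
def w_cubic(r, h):
    """Standard 1D cubic-spline SPH kernel (normalisation 2/(3h), support 2h)."""
    q = abs(r) / h
    if q < 1:
        v = 1 - 1.5 * q * q + 0.75 * q ** 3
    elif q < 2:
        v = 0.25 * (2 - q) ** 3
    else:
        v = 0.0
    return 2.0 / (3.0 * h) * v

dx, rho0 = 0.01, 1000.0                    # particle spacing, rest density
h = 1.2 * dx                               # assumed smoothing length
xs = [i * dx for i in range(200)]          # a uniform line of particles
m = rho0 * dx                              # particle mass matching rho0
rho = sum(m * w_cubic(xs[100] - xj, h) for xj in xs)   # density at an interior particle
```

On a uniform interior distribution the summation recovers the rest density to well under a percent; the Fickian shifting described above exists precisely to keep disordered particle distributions close to this well-behaved state.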
NASA Astrophysics Data System (ADS)
den, M.; Yamashita, K.; Ogawa, T.
Three-dimensional (3D) hydrodynamic (HD) and magnetohydrodynamic (MHD) simulation codes using an adaptive mesh refinement (AMR) scheme are developed. This method places fine grids over areas of interest, such as shock waves, in order to obtain high resolution, and places uniform grids with lower resolution elsewhere. The AMR scheme can thus provide a combination of high solution accuracy and computational robustness. We demonstrate numerical results for a simplified model of shock propagation, which strongly indicate that the AMR techniques have the ability to resolve disturbances in interplanetary space. We also present simulation results for the MHD code.
Gaseous laser targets and optical diagnostics for studying compressible hydrodynamic instabilities
Edwards, J M; Robey, H; Mackinnon, A
2001-06-29
Explore the combination of optical diagnostics and gaseous targets to obtain important information about compressible turbulent flows that cannot be derived from traditional laser experiments, for the purposes of verification and validation (V&V) of hydrodynamics models and understanding scaling. First year objectives: develop and characterize a blast wave-gas jet test bed; perform single-pulse shadowgraphy of blast wave interaction with a turbulent gas jet as a function of blast wave Mach number; explore double-pulse shadowgraphy and image correlation for extracting velocity spectra in the shock-turbulent flow interaction; and explore the use/adaptation of advanced diagnostics.
Hydrodynamics of rotating stars and close binary interactions: Compressible ellipsoid models
NASA Technical Reports Server (NTRS)
Lai, Dong; Rasio, Frederic A.; Shapiro, Stuart L.
1994-01-01
We develop a new formalism to study the dynamics of fluid polytropes in three dimensions. The stars are modeled as compressible ellipsoids, and the hydrodynamic equations are reduced to a set of ordinary differential equations for the evolution of the principal axes and other global quantities. Both viscous dissipation and the gravitational radiation reaction are incorporated. We establish the validity of our approximations and demonstrate the simplicity and power of the method by rederiving a number of known results concerning the stability and dynamical oscillations of rapidly rotating polytropes. In particular, we present a generalization to compressible fluids of Chandrasekhar's classical results for the secular and dynamical instabilities of incompressible Maclaurin spheroids. We also present several applications of our method to astrophysical problems of great current interest, such as the tidal disruption of a star by a massive black hole, the coalescence of compact binaries driven by the emission of gravitational waves, and the development of instabilities in close binary systems.
Block-based conditional entropy coding for medical image compression
NASA Astrophysics Data System (ADS)
Bharath Kumar, Sriperumbudur V.; Nagaraj, Nithin; Mukhopadhyay, Sudipta; Xu, Xiaofeng
2003-05-01
In this paper, we propose a block-based conditional entropy coding scheme for medical image compression using the 2-D integer Haar wavelet transform. The main motivation to pursue conditional entropy coding is that the first-order conditional entropy is always theoretically less than the first- and second-order entropies. We propose a sub-optimal scan order and an optimum block size to perform conditional entropy coding for various modalities. We also propose that a similar scheme can be used to obtain a sub-optimal scan order and an optimum block size for other wavelets. The proposed approach is motivated by a desire to perform better than JPEG2000 in terms of compression ratio. We hint towards developing a block-based conditional entropy coder, which has the potential to perform better than JPEG2000. Though we do not present a coder that achieves the first-order conditional entropy, the use of a conditional adaptive arithmetic coder would come arbitrarily close to the theoretical conditional entropy. All the results in this paper are based on medical image data sets of various bit-depths and various modalities.
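The motivating inequality H(X|Y) <= H(X) is easy to check empirically: estimate both entropies from (current, neighbour) sample pairs. A small sketch on toy data, not the paper's medical images:

```python
from collections import Counter
from math import log2

def entropy(xs):
    """Empirical first-order entropy of a sample list, in bits."""
    n = len(xs)
    return -sum(k / n * log2(k / n) for k in Counter(xs).values())

def conditional_entropy(pairs):
    """H(X|Y) from (x, y) samples: weighted average of H(X | Y=y)."""
    n = len(pairs)
    groups = {}
    for x, y in pairs:
        groups.setdefault(y, []).append(x)
    return sum(len(g) / n * entropy(g) for g in groups.values())

# X = current sample, Y = its left neighbour, on a run-heavy toy sequence
seq = [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0]
pairs = list(zip(seq[1:], seq[:-1]))
hx = entropy(seq[1:])
hxy = conditional_entropy(pairs)
```

Because neighbouring samples are correlated, conditioning on the neighbour strictly lowers the entropy here — the gap is exactly the rate saving a conditional coder with a good scan order can hope to capture.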
Code aperture optimization for spectrally agile compressive imaging.
Arguello, Henry; Arce, Gonzalo R
2011-11-01
Coded aperture snapshot spectral imaging (CASSI) provides a mechanism for capturing a 3D spectral cube with a single shot 2D measurement. In many applications selective spectral imaging is sought since relevant information often lies within a subset of spectral bands. Capturing and reconstructing all the spectral bands in the observed image cube, to then throw away a large portion of this data, is inefficient. To this end, this paper extends the concept of CASSI to a system admitting multiple shot measurements, which leads not only to higher quality of reconstruction but also to spectrally selective imaging when the sequence of code aperture patterns is optimized. The aperture code optimization problem is shown to be analogous to the optimization of a constrained multichannel filter bank. The optimal code apertures allow the decomposition of the CASSI measurement into several subsets, each having information from only a few selected spectral bands. The rich theory of compressive sensing is used to effectively reconstruct the spectral bands of interest from the measurements. A number of simulations are developed to illustrate the spectral imaging characteristics attained by optimal aperture codes.
Hydrodynamic compression of young and adult rat osteoblast-like cells on titanium fiber mesh.
Walboomers, X F; Elder, S E; Bumgardner, J D; Jansen, J A
2006-01-01
Living bone cells are responsive to mechanical loading. Consequently, numerous in vitro models have been developed to examine the application of loading to cells. However, not all systems are suitable for the fibrous and porous three-dimensional materials, which are preferable for tissue repair purposes, or for the production of tissue engineering scaffolds. For three-dimensional applications, mechanical loading of cells with either fluid flow systems or hydrodynamic pressure systems has to be considered. Here, we aimed to evaluate the response of osteoblast-like cells to hydrodynamic compression while growing in a three-dimensional titanium fiber mesh scaffolding material. For this purpose, a custom hydrodynamic compression chamber was built. Bone marrow cells were obtained from the femora of young (12-day-old) or old (1-year-old) rats, and precultured in the presence of dexamethasone and beta-glycerophosphate to achieve an osteoblast-like phenotype. Subsequently, cells were seeded onto the titanium mesh scaffolds and subjected to hydrodynamic pressure alternating between 0.3 and 5.0 MPa at 1 Hz, at 15-min intervals for a total of 60 min per day for up to 3 days. After pressurization, cell viability was checked. Afterward, DNA levels, alkaline phosphatase (ALP) activity, and extracellular calcium content were measured. Finally, all specimens were observed with scanning electron microscopy. Cell viability studies showed that the applied pressure was not harmful to the cells. Furthermore, we found that cells were able to detect the compression forces, because we saw evident effects on the numbers of cells derived from old animals. However, there were no other changes in the cells under pressure. Finally, it was also noticeable that cells from old animals did not express ALP activity, but did show calcified extracellular matrix formation similar to that of cells from young animals. In conclusion, the difference in DNA levels as reaction toward pressure
CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION
Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.
2011-06-01
We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
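Substep (3) above treats diffusion implicitly because it is stiff: a backward-Euler step leads to a tridiagonal system that remains stable and conservative even for time steps far beyond the explicit limit. A one-dimensional toy sketch with zero-flux boundaries — not the CRASH solver, which is multi-dimensional, multi-group, and flux-limited:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal, d = rhs."""
    b, d = list(b), list(d)          # work on copies
    n = len(d)
    for i in range(1, n):            # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

def implicit_diffusion_step(u, alpha):
    """Backward-Euler step of u_t = D u_xx with alpha = D*dt/dx**2, zero-flux ends."""
    n = len(u)
    a = [-alpha] * n                 # a[0] is unused
    b = [1 + 2 * alpha] * n
    c = [-alpha] * n                 # c[-1] is unused
    b[0] = b[-1] = 1 + alpha         # reflecting boundaries conserve the total
    return thomas(a, b, c, u)

u0 = [0.0] * 21
u0[10] = 1.0                         # a spike of "radiation energy"
u1 = implicit_diffusion_step(u0, 50.0)   # deliberately stiff step
```

With these boundaries every column of the implicit matrix sums to one, so the discrete total is conserved to rounding error, and the maximum principle holds even for this very large alpha — the properties that make the implicit substep safe inside an operator-split scheme.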
NASA Astrophysics Data System (ADS)
Wang, Junfeng; Liang, Chunlei; Miesch, Mark S.
2015-06-01
We present a novel and powerful Compressible High-ORder Unstructured Spectral-difference (CHORUS) code for simulating thermal convection and related fluid dynamics in the interiors of stars and planets. The computational geometries are treated as rotating spherical shells filled with stratified gas. The hydrodynamic equations are discretized by a robust and efficient high-order Spectral Difference Method (SDM) on unstructured meshes. The computational stencil of the spectral difference method is compact and advantageous for parallel processing. CHORUS demonstrates excellent parallel performance for all test cases reported in this paper, scaling up to 12 000 cores on the Yellowstone High-Performance Computing cluster at NCAR. The code is verified by defining two benchmark cases for global convection in Jupiter and the Sun. CHORUS results are compared with results from the ASH code and good agreement is found. The CHORUS code creates new opportunities for simulating such varied phenomena as multi-scale solar convection, core convection, and convection in rapidly-rotating, oblate stars.
Hydrodynamic Instability, Integrated Code, Laboratory Astrophysics, and Astrophysics
NASA Astrophysics Data System (ADS)
Takabe, Hideaki
2016-10-01
This article is based on the Edward Teller Medal lecture presented at the IFSA03 conference held on September 12, 2003, in Monterey, CA. The author reviews his main contributions to fusion science and its extension to astrophysics, in theory and computation, through five topics. The first is the anomalous resistivity to hot electrons penetrating the over-dense region, caused by ion-wave turbulence driven by the return current that compensates the current flow of the hot electrons. It is concluded that a potential almost equal to the average kinetic energy of the hot electrons is established, preventing their penetration. The second is the ablative stabilization of the Rayleigh-Taylor instability at the ablation front and its dispersion relation, the so-called Takabe formula, which gave a principal guideline for stable target design. The third is the development of the integrated code ILESTA (1D & 2D) for the analysis and design of laser-produced plasmas, including implosion dynamics; it has also been applied to the design of high-gain targets. The fourth is laboratory astrophysics with intense lasers, covered in two parts: a review of its historical background, and a discussion of how laser plasmas relate to wide-ranging astrophysics and of the purposes for promoting such research. In this connection, the author comments on the anomalous transport of relativistic electrons in the fast-ignition laser fusion scheme. Finally, the author briefly summarizes recent activity applying this experience to the development of an integrated code for studying extreme phenomena in astrophysics.
Magneto-hydrodynamic calculation of magnetic flux compression using imploding cylindrical liners
NASA Astrophysics Data System (ADS)
Zhao, Jibo; Sun, Chengwei; Gu, Zhuowei
2015-06-01
Based on the one-dimensional elastic-plastic reactive hydrodynamics code SSS, the one-dimensional magneto-hydrodynamics code SSS/MHD has been developed, and calculations were carried out for cylindrical magneto-cumulative generators (the MC-1 device). The diffusion of the magnetic field into the liner and sample tube is analyzed. The results show that the maximum magnetic induction intensity 0.2 mm into the liner, on the cavity side, is only sixteen tesla, while that in the sample tube reaches several hundred tesla; the difference arises from the balance between the electromagnetic and imploding forces at the different velocities of the liner and sample tube. The calculated histories of the magnetic induction intensity on the cavity axis and of the velocity at the sample tube wall agree with the experimental results. This work shows that SSS/MHD can be applied to experimental configurations involving detonation, shock, and electromagnetic loading, and to parameter improvement; experimental data can be estimated, analyzed, and validly checked, and the physics of the related devices can be understood more deeply using SSS/MHD. This work was supported by the special funds of the National Natural Science Foundation of China under Grant 11176002.
NASA Astrophysics Data System (ADS)
Puchwein, Ewald; Baldi, Marco; Springel, Volker
2013-11-01
We present a new massively parallel code for N-body and cosmological hydrodynamical simulations of modified gravity models. The code employs a multigrid-accelerated Newton-Gauss-Seidel relaxation solver on an adaptive mesh to efficiently solve for perturbations in the scalar degree of freedom of the modified gravity model. As this new algorithm is implemented as a module for the P-GADGET3 code, it can at the same time follow the baryonic physics included in P-GADGET3, such as hydrodynamics, radiative cooling and star formation. We demonstrate that the code works reliably by applying it to simple test problems that can be solved analytically, as well as by comparing cosmological simulations to results from the literature. Using the new code, we perform the first non-radiative and radiative cosmological hydrodynamical simulations of an f(R)-gravity model. We also discuss the impact of active galactic nucleus feedback on the matter power spectrum, as well as degeneracies between the influence of baryonic processes and modifications of gravity.
PEGAS: Hydrodynamical code for numerical simulation of the gas components of interacting galaxies
NASA Astrophysics Data System (ADS)
Kulikov, Igor
A new hydrodynamical code for numerical simulation of the gravitational gas dynamics is described in the paper. The code is based on the Fluid-in-Cell method with a Godunov-type scheme at the Eulerian stage. The numerical method was adapted for GPU-based supercomputers. The performance of the code is shown by the simulation of the collision of the gas components of two similar disc galaxies in the course of the central collision of the galaxies in the polar direction.
NASA Astrophysics Data System (ADS)
Merlin, E.; Buonomo, U.; Grassi, T.; Piovan, L.; Chiosi, C.
2010-04-01
Context. We present the new release of the Padova N-body code for cosmological simulations of galaxy formation and evolution, EvoL. The basic Tree + SPH code is presented and analysed, together with an overview of the software architecture. Aims: EvoL is a flexible parallel Fortran95 code, specifically designed for simulations of cosmological structure formation on cluster, galactic and sub-galactic scales. Methods: EvoL is a fully Lagrangian self-adaptive code, based on the classical oct-tree by Barnes & Hut (1986, Nature, 324, 446) and on the smoothed particle hydrodynamics algorithm (SPH, Lucy 1977, AJ, 82, 1013). It includes special features like adaptive softening lengths with correcting extra-terms, and modern formulations of SPH and artificial viscosity. It is designed to be run in parallel on multiple CPUs to optimise the performance and save computational time. Results: We describe the code in detail, and present the results of a number of standard hydrodynamical tests.
NASA Astrophysics Data System (ADS)
Movahed, Pooya
High-speed flows are prone to hydrodynamic interfacial instabilities that evolve to turbulence, thereby intensely mixing different fluids and dissipating energy. The lack of knowledge of these phenomena has impeded progress in a variety of disciplines. In science, a full understanding of mixing between heavy and light elements after the collapse of a supernova, and between adjacent layers of different density in geophysical (atmospheric and oceanic) flows, remains lacking. In engineering, the inability to achieve ignition in inertial fusion and efficient combustion constitute further examples of this lack of basic understanding of turbulent mixing. In this work, my goal is to develop accurate and efficient numerical schemes and employ them to study compressible turbulence and mixing generated by interactions between shocked (Richtmyer-Meshkov) and accelerated (Rayleigh-Taylor) interfaces, which play important roles in high-energy-density physics environments. To accomplish my goal, a hybrid high-order central/discontinuity-capturing finite difference scheme is first presented. The underlying principle is that, to accurately and efficiently represent both broadband motions and discontinuities, non-dissipative methods are used where the solution is smooth, while the more expensive and dissipative capturing schemes are applied near discontinuous regions. Thus, an accurate numerical sensor is developed to discriminate between smooth regions, shocks and material discontinuities, which all require a different treatment. The interface capturing approach is extended to central differences, such that smooth distributions of varying specific heat ratios can be simulated without generating spurious pressure oscillations. I verified and validated this approach against a stringent suite of problems including shocks, interfaces, turbulence and two-dimensional single-mode Richtmyer-Meshkov instability simulations. The three-dimensional code is shown to scale well up to 4000 cores.
A CMOS Imager with Focal Plane Compression using Predictive Coding
NASA Technical Reports Server (NTRS)
Leon-Salas, Walter D.; Balkir, Sina; Sayood, Khalid; Schemm, Nathan; Hoffman, Michael W.
2007-01-01
This paper presents a CMOS image sensor with focal-plane compression. The design has a column-level architecture and is based on predictive coding techniques for image decorrelation. The prediction operations are performed in the analog domain to avoid quantization noise and to decrease the area complexity of the circuit. The prediction residuals are quantized and encoded by a joint quantizer/coder circuit. To save area resources, the joint quantizer/coder circuit exploits common circuitry between a single-slope analog-to-digital converter (ADC) and a Golomb-Rice entropy coder. This combination of ADC and encoder allows the integration of the entropy coder at the column level. A prototype chip was fabricated in a 0.35 μm CMOS process. The output of the chip is a compressed bit stream. The test chip occupies a silicon area of 2.60 mm x 5.96 mm, which includes an 80 x 44 APS array. Tests of the fabricated chip demonstrate the validity of the design.
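The Golomb-Rice stage described above can be sketched in software. The zigzag residual mapping, the previous-pixel predictor implied by the example values, and the parameter k are illustrative assumptions, not the chip's actual design:

```python
def zigzag(r):
    """Map a signed prediction residual to a nonnegative integer."""
    return (r << 1) if r >= 0 else (-r << 1) - 1

def rice_encode(n, k):
    """Golomb-Rice codeword for n >= 0 with parameter k >= 1:
    unary-coded quotient (q ones, terminating zero), then k remainder bits."""
    q, rem = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(rem, "0{}b".format(k))

# Encode a few residuals from a hypothetical previous-pixel predictor.
residuals = [2, -1, 4]
bitstream = "".join(rice_encode(zigzag(r), 2) for r in residuals)
```

Small residuals, which dominate after decorrelation, get short codewords; this is what makes the coder cheap enough to share circuitry with a single-slope ADC.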
NASA Technical Reports Server (NTRS)
Libersky, Larry; Allahdadi, Firooz A.; Carney, Theodore C.
1992-01-01
Analysis of the interactions occurring between space debris and orbiting structures is of great interest for the planning and survivability of space assets. Computer simulation of the impact events using hydrodynamic codes can provide some understanding of the processes, but the problems involved with this fundamental approach are formidable. First, any realistic simulation is necessarily three-dimensional, e.g., the impact and breakup of a satellite. Second, the thicknesses of important components such as satellite skins or bumper shields are small with respect to the dimension of the structure as a whole, presenting severe zoning problems for codes. Third, the debris cloud produced by the primary impact will yield many secondary impacts which will contribute to the damage and possible breakup of the structure. The problem was approached by choosing a relatively new computational technique that has virtues peculiar to space impacts. The method is called Smoothed Particle Hydrodynamics.
Simulations of implosions with a 3D, parallel, unstructured-grid, radiation-hydrodynamics code
Kaiser, T B; Milovich, J L; Prasad, M K; Rathkopf, J; Shestakov, A I
1998-12-28
An unstructured-grid, radiation-hydrodynamics code is used to simulate implosions. Although most of the problems are spherically symmetric, they are run on 3D, unstructured grids in order to test the code's ability to maintain spherical symmetry of the converging waves. Three problems, of increasing complexity, are presented. In the first, a cold, spherical, ideal gas bubble is imploded by an enclosing high pressure source. For the second, we add non-linear heat conduction and drive the implosion with twelve laser beams centered on the vertices of an icosahedron. In the third problem, a NIF capsule is driven with a Planckian radiation source.
Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors
Sale, D.; Jonkman, J.; Musial, W.
2009-08-01
This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.
A 3+1 dimensional viscous hydrodynamic code for relativistic heavy ion collisions
NASA Astrophysics Data System (ADS)
Karpenko, Iu.; Huovinen, P.; Bleicher, M.
2014-11-01
We describe the details of 3+1 dimensional relativistic hydrodynamic code for the simulations of quark-gluon/hadron matter expansion in ultra-relativistic heavy ion collisions. The code solves the equations of relativistic viscous hydrodynamics in the Israel-Stewart framework. With the help of ideal-viscous splitting, we keep the ability to solve the equations of ideal hydrodynamics in the limit of zero viscosities using a Godunov-type algorithm. Milne coordinates are used to treat the predominant expansion in longitudinal (beam) direction effectively. The results are successfully tested against known analytical relativistic inviscid and viscous solutions, as well as against existing 2+1D relativistic viscous code. Catalogue identifier: AETZ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETZ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 13 825 No. of bytes in distributed program, including test data, etc.: 92 750 Distribution format: tar.gz Programming language: C++. Computer: any with a C++ compiler and the CERN ROOT libraries. Operating system: tested on GNU/Linux Ubuntu 12.04 x64 (gcc 4.6.3), GNU/Linux Ubuntu 13.10 (gcc 4.8.2), Red Hat Linux 6 (gcc 4.4.7). RAM: scales with the number of cells in hydrodynamic grid; 1900 Mbytes for 3D 160×160×100 grid. Classification: 1.5, 4.3, 12. External routines: CERN ROOT (http://root.cern.ch), Gnuplot (http://www.gnuplot.info/) for plotting the results. Nature of problem: relativistic hydrodynamical description of the 3-dimensional quark-gluon/hadron matter expansion in ultra-relativistic heavy ion collisions. Solution method: finite volume Godunov-type method. Running time: scales with the number of hydrodynamic cells; typical running times on Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz, single thread mode, 160
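The Godunov-type finite-volume idea the summary mentions can be illustrated on the simplest possible case, linear advection, where upwinding is the exact Riemann solution at each cell interface. This is a sketch of the scheme's structure only, not the code's Israel-Stewart viscous solver:

```python
def godunov_advect(u, a, dx, dt, steps):
    """First-order Godunov finite-volume update for u_t + a u_x = 0 (a > 0)
    on a periodic grid: the interface flux is the exact Riemann solution,
    which for linear advection is just the upwind cell value."""
    assert 0 < a * dt / dx <= 1.0, "CFL condition"
    n = len(u)
    for _ in range(steps):
        # flux[i] is the flux through the LEFT face of cell i (upwind cell)
        flux = [a * u[i - 1] for i in range(n)]
        u = [u[i] - dt / dx * (flux[(i + 1) % n] - flux[i]) for i in range(n)]
    return u
```

Because each cell is updated by the difference of its face fluxes, the scheme is conservative to machine precision, which is the property that lets such methods capture shocks with the correct jump conditions.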
Bond, J.W.
1988-01-01
Data-compression codes offer the possibility of improving the throughput of existing communication systems in the near term. This study was undertaken to determine whether data-compression codes could be utilized to provide message compression in a channel with up to a 0.10-bit error rate. The data-compression capabilities of the codes were investigated by estimating the average number of bits per character required to transmit narrative files. The performance of the codes in a channel with errors (a noisy channel) was investigated in terms of the average numbers of characters decoded in error and of characters printed in error per bit error. Results were obtained by encoding four narrative files, which were resident on an IBM PC and use a 58-character set. The study focused on Huffman codes and suffix/prefix comma-free codes. Other data-compression codes, in particular block codes and some simple variants of block codes, are briefly discussed to place the study results in context. Comma-free codes were found to have the most promising data compression because error propagation due to bit errors is limited to a few characters for these codes. A technique was found to identify a suffix/prefix comma-free code giving nearly the same data compression as a Huffman code with much less error propagation than the Huffman codes. Greater data compression can be achieved through comma-free code word assignments based on conditional probabilities of character occurrence.
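The error-propagation behavior the study measures can be seen with a toy variable-length prefix code: a single flipped bit keeps the bit stream "decodable" but desynchronizes subsequent codewords. The code table below is illustrative, not one of the study's codes:

```python
CODE = {"a": "0", "b": "10", "c": "110", "d": "111"}  # toy prefix code
DECODE = {v: k for k, v in CODE.items()}

def decode(bits):
    """Decode a prefix-coded bit string symbol by symbol."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in DECODE:
            out.append(DECODE[buf])
            buf = ""
    return "".join(out)

msg = "abcdabcd"
bits = "".join(CODE[ch] for ch in msg)
# Flip the second bit: the stream still decodes, but to the wrong message,
# because codeword boundaries shift after the error.
corrupted = bits[:1] + ("0" if bits[1] == "1" else "1") + bits[2:]
```

A comma-free code limits exactly this effect, since decoding resynchronizes within a few characters of the error.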
Design and Analysis of Fast Text Compression Based on Quasi-Arithmetic Coding.
ERIC Educational Resources Information Center
Howard, Paul G; Vitter, Jeffrey Scott
1994-01-01
Describes a detailed algorithm for fast text compression. Related to the PPM (prediction by partial matching) method, it simplifies the modeling phase by eliminating the escape mechanism and speeds up coding by using a combination of quasi-arithmetic coding and Rice coding. Details of the use of quasi-arithmetic code tables are given, and their…
A smooth particle hydrodynamics code to model collisions between solid, self-gravitating objects
NASA Astrophysics Data System (ADS)
Schäfer, C.; Riecker, S.; Maindl, T. I.; Speith, R.; Scherrer, S.; Kley, W.
2016-05-01
Context. Modern graphics processing units (GPUs) lead to a major increase in the performance of the computation of astrophysical simulations. Owing to the different nature of GPU architecture compared to traditional central processing units (CPUs) such as the x86 architecture, existing numerical codes cannot be easily migrated to run on GPU. Here, we present a new implementation of the numerical method smooth particle hydrodynamics (SPH) using CUDA and the first astrophysical application of the new code: the collision between Ceres-sized objects. Aims: The new code allows for a tremendous increase in speed of astrophysical simulations with SPH and self-gravity at low costs for new hardware. Methods: We have implemented the SPH equations to model gases, liquids, and elastic and plastic solid bodies, and added a fragmentation model for brittle materials. Self-gravity may be optionally included in the simulations and is treated by the use of a Barnes-Hut tree. Results: We find an impressive performance gain using NVIDIA consumer devices compared to our existing OpenMP code. The new code is freely available to the community upon request. If you are interested in our CUDA SPH code miluphCUDA, please write an email to Christoph Schäfer. miluphCUDA is the CUDA port of miluph. miluph is pronounced [maßl2v]. We do not support the use of the code for military purposes.
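At its core, SPH estimates continuum fields by kernel-weighted sums over particles. A minimal 1D density summation with the standard cubic-spline kernel (a generic sketch, not the miluphCUDA implementation) looks like:

```python
def w_cubic(r, h):
    """Standard 1D cubic-spline SPH kernel: support 2h, normalization 2/(3h)."""
    q = abs(r) / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(x, m, h):
    """The basic SPH density estimate rho_i = sum_j m_j W(x_i - x_j, h)."""
    return [sum(mj * w_cubic(xi - xj, h) for xj, mj in zip(x, m)) for xi in x]
```

The double loop over particle pairs is exactly the part that maps well onto GPU threads, which is why SPH codes benefit so strongly from CUDA ports.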
Estabrook, K; Farley, D; Glendinning, S G; Remington, B A; Stone, J; Turner, N
1999-09-22
Recent shock tube experiments using the Nova laser facility have demonstrated that strong shocks and highly supersonic flows similar to those encountered in astrophysical jets can be studied in detail through carefully controlled experiment. We propose the use of high power lasers such as Nova, Omega, and NIF to perform experiments on radiation hydrodynamic problems such as jets involving the multidimensional dynamics of strong shocks. High power lasers are the only experimental facilities that can reach the very high Mach number regime. The experiments will serve both as diagnostics of astrophysically interesting gas dynamic problems, and could also form the basis of test problems for numerical algorithms for astrophysical radiation hydrodynamic codes, The potential for experimentally achieving a strongly radiative jet seems very good.
Investigating the Magnetorotational Instability with Dedalus, an Open-Source Hydrodynamics Code
Burns, Keaton J. (UC Berkeley; SLAC)
2012-08-31
The magnetorotational instability is a fluid instability that causes the onset of turbulence in discs with poloidal magnetic fields. It is believed to be an important mechanism in the physics of accretion discs, namely in its ability to transport angular momentum outward. A similar instability arising in systems with a helical magnetic field may be easier to produce in laboratory experiments using liquid sodium, but the applicability of this phenomenon to astrophysical discs is unclear. To explore and compare the properties of these standard and helical magnetorotational instabilities (MRI and HMRI, respectively), magnetohydrodynamic (MHD) capabilities were added to Dedalus, an open-source hydrodynamics simulator. Dedalus is a Python-based pseudospectral code that uses external libraries and parallelization with the goal of achieving speeds competitive with codes implemented in lower-level languages. This paper will outline the MHD equations as implemented in Dedalus, the steps taken to improve the performance of the code, and the status of MRI investigations using Dedalus.
Simulation of a ceramic impact experiment using the SPHINX smooth particle hydrodynamics code
Mandell, D.A.; Wingate, C.A.; Schwalbe, L.A.
1996-08-01
We are developing statistically based, brittle-fracture models and are implementing them into hydrocodes that can be used for designing systems with components of ceramics, glass, and/or other brittle materials. Because of the advantages it has in simulating fracture, we are working primarily with the smooth particle hydrodynamics code SPHINX. We describe a new brittle fracture model that we have implemented into SPHINX, and we discuss how the model differs from others. To illustrate the code's current capability, we simulate an experiment in which a tungsten rod strikes a target of heavily confined ceramic. Simulations in 3D at relatively coarse resolution yield poor results. However, 2D plane-strain approximations to the test produce crack patterns that are strikingly similar to the data, although the fracture model needs further refinement to match some of the finer details. We conclude with an outline of plans for continuing research and development.
Developing a weakly compressible smoothed particle hydrodynamics model for biological flows
NASA Astrophysics Data System (ADS)
Vasyliv, Yaroslav; Alexeev, Alexander
2014-11-01
Smoothed Particle Hydrodynamics (SPH) is a meshless particle method originally developed for astrophysics applications in 1977. Over the years, limitations of the original formulations have been addressed by different groups to extend the domain of SPH application. In biologically relevant internal flows, two of the several challenges still facing SPH are 1) treatment of inlet, outlet, and no slip boundary conditions and 2) treatment of second derivatives present in the viscous terms. In this work, we develop a 2D weakly compressible SPH (WCSPH) for simulating viscous internal flows which incorporates some of the recent advancements made by groups in the above two areas. The method is validated against several analytical and experimental benchmark solutions for both steady and unsteady laminar flows. In particular, the 2013 U.S. Food and Drug Administration benchmark test case for medical devices - steady forward flow through a nozzle with a sudden contraction and conical diffuser - is simulated for different Reynolds numbers in the laminar region and results are validated against the published experimental and CFD datasets. Support from the National Science Foundation Graduate Research Fellowship Program (NSF GRFP) is gratefully acknowledged.
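In a weakly compressible SPH scheme, incompressibility is approximated by a stiff barotropic equation of state rather than a pressure solve. The commonly used Tait form can be sketched as follows (parameter values are illustrative, not taken from this work):

```python
def tait_pressure(rho, rho0=1000.0, c0=10.0, gamma=7.0):
    """Tait equation of state, p = B [(rho/rho0)^gamma - 1], B = rho0 c0^2 / gamma.
    Choosing the artificial sound speed c0 roughly 10x the peak flow speed
    keeps density fluctuations near 1%: the 'weakly compressible' regime."""
    b = rho0 * c0 * c0 / gamma
    return b * ((rho / rho0) ** gamma - 1.0)
```

The steep exponent makes pressure react strongly to tiny density changes, which is what enforces near-incompressibility in internal-flow WCSPH simulations.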
NASA Technical Reports Server (NTRS)
Rice, R. F.; Hilbert, E. E. (Inventor)
1976-01-01
A space communication system incorporating a concatenated Reed Solomon Viterbi coding channel is discussed for transmitting compressed and uncompressed data from a spacecraft to a data processing center on Earth. Imaging (and other) data are first compressed into source blocks which are then coded by a Reed Solomon coder and interleaver, followed by a convolutional encoder. The received data is first decoded by a Viterbi decoder, followed by a Reed Solomon decoder and deinterleaver. The output of the latter is then decompressed, based on the compression criteria used in compressing the data in the spacecraft. The decompressed data is processed to reconstruct an approximation of the original data-producing condition or images.
Mandell, D.A.; Wingate, C.A.
1994-08-01
The design of many military devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics, that are used in armor packages; glass that is used in truck and jeep windshields and in helicopters; and rock and concrete that are used in underground bunkers. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass, and data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, the authors did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.
Mandell, D.A.; Wingate, C.A.; Stellingwerf, R.F.
1995-12-31
The design of many devices involves numerical predictions of the material strength and fracture of brittle materials. The materials of interest include ceramics that are used in armor packages; glass that is used in windshields; and rock and concrete that are used in oil wells. As part of a program to develop advanced hydrocode design tools, the authors have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. The authors have evaluated this model and the code by predicting data from tungsten rods impacting glass. Since fractured glass properties, which are needed in the model, are not available, they did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data.
Comparison among five hydrodynamic codes with a diverging-converging nozzle experiment
L. E. Thode; M. C. Cline; B. G. DeVolder; M. S. Sahota; D. K. Zerkle
1999-09-01
A realistic open-cycle gas-core nuclear rocket simulation model must be capable of a self-consistent nozzle calculation in conjunction with coupled radiation and neutron transport in three spatial dimensions. As part of the development effort for such a model, five hydrodynamic codes were compared with a converging-diverging nozzle experiment. The codes used in the comparison are CHAD, FLUENT, KIVA2, RAMPANT, and VNAP2. Solution accuracy as a function of mesh size is important because, in the near term, a practical three-dimensional simulation model will require rather coarse zoning across the nozzle throat. In the study, four different grids were considered: (1) a coarse, radially uniform grid; (2) a coarse, radially nonuniform grid; (3) a fine, radially uniform grid; and (4) a fine, radially nonuniform grid. The study involves code verification, not prediction. In other words, the authors know the solution they want to match, so they can change methods and/or modify an algorithm to best match this class of problem. In this context, it was necessary to use the higher-order methods in both FLUENT and RAMPANT. In addition, KIVA2 required a modification that allows significantly more accurate solutions for a converging-diverging nozzle. From a predictive point of view, code accuracy with no tuning is an important result. The most accurate codes on a coarse grid, CHAD and VNAP2, did not require any tuning. The main comparison among the codes was the radial dependence of the Mach number across the nozzle throat. All five codes yielded a very similar solution with fine, radially uniform and radially nonuniform grids. However, the codes yielded significantly different solutions with coarse, radially uniform and radially nonuniform grids. For all the codes, radially nonuniform zoning across the throat significantly increased solution accuracy with a coarse mesh. None of the codes agrees in detail with the weak shock located downstream of the nozzle throat, but all the
Evaluation of a Cray performance tool using a large hydrodynamics code
Lord, K.M.; Simmons, M.L.
1992-06-01
This paper will discuss one of the automatic parallelization tools developed recently by Cray Research, Inc. for use on its parallel supercomputers. The tool is called ATEXPERT; when used in conjunction with the Cray Fortran compiling system, CF77, it produces a parallelized version of a code based on loop-level parallelism, plus information to enable the programmer to optimize the parallelized code and improve performance. The information obtained through the use of the tool is presented in an easy-to-read graphical format, making the digestion of such a large quantity of data relatively easy and thus improving programmer productivity. In this paper we address the issues we found when we took a large Los Alamos hydrodynamics code, PUEBLO, that was highly vectorizable but not parallelized, and used ATEXPERT to parallelize it. We show that through the advice of ATEXPERT, bottlenecks in the code can be found, leading to improved performance. We also show the dependence of performance on problem size, and finally, we contrast the speedup predicted by ATEXPERT with that measured on a dedicated eight-processor Y-MP.
Comparisons of theoretical limits for source coding with practical compression algorithms
NASA Technical Reports Server (NTRS)
Pollara, F.; Dolinar, S.
1992-01-01
The performance achieved by some specific data compression algorithms is compared with absolute limits prescribed by rate distortion theory for Gaussian sources under the mean square error distortion criterion. These results show the gains available from source coding and can be used as a reference for the evaluation of future compression schemes. Some current schemes perform well, but there is still room for improvement.
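The absolute limit used as the reference here is the classical rate-distortion function of a memoryless Gaussian source under mean-square error, which is simple to evaluate:

```python
import math

def gaussian_rate_distortion(variance, distortion):
    """Shannon's R(D) for a memoryless Gaussian source under MSE:
    R(D) = 0.5 * log2(variance / D) bits per sample for 0 < D < variance,
    and 0 once the allowed distortion reaches the source variance."""
    if distortion >= variance:
        return 0.0
    return 0.5 * math.log2(variance / distortion)
```

For example, allowing a distortion of one quarter of the source variance costs one bit per sample; every further bit halves the achievable distortion by a factor of four (6.02 dB per bit).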
MULTI2D - a computer code for two-dimensional radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Ramis, R.; Meyer-ter-Vehn, J.; Ramírez, J.
2009-06-01
Simulation of radiation hydrodynamics in two spatial dimensions is developed, having in mind, in particular, target design for indirectly driven inertial fusion energy (IFE) and the interpretation of related experiments. Intense radiation pulses by laser or particle beams heat high-Z target configurations of different geometries and lead to a regime which is optically thick in some regions and optically thin in others. A diffusion description is inadequate in this situation. A new numerical code has been developed which describes hydrodynamics in two spatial dimensions (cylindrical R-Z geometry) and radiation transport along rays in three dimensions with the 4π solid angle discretized in direction. Matter moves on a non-structured mesh composed of trilateral and quadrilateral elements. Radiation flux of a given direction enters on two (one) sides of a triangle and leaves on the opposite side(s) in proportion to the viewing angles depending on the geometry. This scheme allows propagation of sharply edged beams without ray tracing, though at the price of some lateral diffusion. The algorithm treats correctly both the optically thin and optically thick regimes. A symmetric semi-implicit (SSI) method is used to guarantee numerical stability. Program summary: Program title: MULTI2D Catalogue identifier: AECV_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECV_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 151 098 No. of bytes in distributed program, including test data, etc.: 889 622 Distribution format: tar.gz Programming language: C Computer: PC (32 bits architecture) Operating system: Linux/Unix RAM: 2 Mbytes Word size: 32 bits Classification: 19.7 External routines: X-window standard library (libX11.so) and corresponding heading files (X11/*.h) are
Raeiatibanadkooki, Mahsa; Quchani, Saeed Rahati; KhalilZade, MohammadMahdi; Bahaadinbeigy, Kambiz
2016-03-01
In mobile health care monitoring, compression is an essential tool for solving storage and transmission problems. The important issue is being able to recover the original signal from the compressed signal. The main purpose of this paper is to compress the ECG signal with no loss of essential data and also to encrypt the signal to keep it confidential from everyone except physicians. In this paper, mobile processors are used, and no computers are needed to serve this purpose. After initial preprocessing, such as removal of baseline noise and Gaussian noise, peak detection, and determination of heart rate, the ECG signal is compressed. In the compression stage, thresholding techniques are applied after three levels of wavelet transform (db04). Then, Huffman coding with chaos is used for compression and encryption of the ECG signal. The compression rate of the proposed algorithm is 97.72 %. The ECG signals are then sent to a telemedicine center via the TCP/IP protocol to obtain a specialist diagnosis.
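The thresholded wavelet coefficients are heavily skewed toward zero, which is what makes the Huffman stage effective. A minimal sketch of Huffman coding of such coefficients (the coefficient values here are made up for illustration, and the chaos-based encryption step is not reproduced):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table {symbol: bitstring} from a symbol list."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single symbol gets a 1-bit code
        return {next(iter(freq)): "0"}
    # heap entries: (frequency, tiebreak, tree); trees are (symbol,) or (left, right)
    heap = [(f, i, (s,)) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tiebreak, (t1, t2)))
        tiebreak += 1
    table = {}
    def walk(tree, prefix):
        if len(tree) == 1:
            table[tree[0]] = prefix
        else:
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
    walk(heap[0][2], "")
    return table

# After thresholding, most quantized wavelet coefficients are zero,
# so Huffman coding assigns the zero symbol a very short codeword.
coeffs = [0] * 80 + [1] * 10 + [-1] * 6 + [3] * 3 + [-7] * 1
table = huffman_code(coeffs)
bits = sum(len(table[c]) for c in coeffs)   # compressed size in bits
raw_bits = 8 * len(coeffs)                  # naive 8-bit storage
```

Because the zero symbol dominates, it receives a one-bit codeword and the total bit count drops well below fixed-length storage.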
Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging
NASA Astrophysics Data System (ADS)
Diaz, Nelson; Rueda, Hoover; Arguello, Henry
2016-05-01
Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms to yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to good quality in the reconstruction. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must take saturation into account. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. In this paper, the design of uniform adaptive grayscale coded apertures (UAGCA) is proposed to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show that the proposed method improves image reconstruction by up to 10 dB compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).
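The between-snapshot update can be sketched as a toy illustration of the idea only: lower the aperture transmittance wherever the previous snapshot saturated. The scene, initial transmittance, and attenuation factor below are hypothetical, not the UAGCA optimization of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
SAT = 1.0                                      # detector saturation (full-well) level
scene = rng.uniform(0.0, 2.0, size=(32, 32))   # hypothetical scene; some pixels too bright

aperture = np.full(scene.shape, 0.9)           # initial grayscale transmittance
for shot in range(4):
    measurement = np.clip(aperture * scene, 0.0, SAT)   # sensor clips at SAT
    saturated = (aperture * scene) >= SAT
    aperture[saturated] *= 0.5                 # attenuate where the last snapshot saturated

final_saturated = int(((aperture * scene) >= SAT).sum())
```

After one round of attenuation the bright pixels fall back inside the sensor's dynamic range, so later snapshots carry unclipped information about them.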
Experiences and results multitasking a hydrodynamics code on global and local memory machines
Mandell, D.
1987-01-01
A one-dimensional, time-dependent Lagrangian hydrodynamics code using a Godunov solution method has been multitasked for the Cray X-MP/48, the Intel iPSC hypercube, the Alliant FX series and the IBM RP3 computers. Actual multitasking results have been obtained for the Cray, Intel and Alliant computers, and simulated results were obtained for the Cray and RP3 machines. The differences in the methods required to multitask on each of the machines are discussed. Results are presented for a sample problem involving a shock wave moving down a channel. Comparisons are made between theoretical speedups, predicted by Amdahl's law, and the actual speedups obtained. The problems of debugging on the different machines are also described.
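The theoretical speedups referred to follow from Amdahl's law, which bounds parallel speedup by the serial fraction of the code. A quick sketch (the 5% serial fraction is an illustrative assumption, not a figure from the paper):

```python
def amdahl_speedup(serial_fraction, n_processors):
    """Theoretical speedup for a program with the given serial fraction (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# e.g. a hydrodynamics loop that is 5% serial, on a 4-processor machine like the X-MP/48
s4 = amdahl_speedup(0.05, 4)     # about 3.48x, not 4x
s_inf = 1.0 / 0.05               # asymptotic limit as processor count grows: 20x
```

Even a small serial fraction caps the achievable speedup, which is why measured multitasking results fall short of the processor count.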
Medical Image Compression Using a New Subband Coding Method
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen; Tucker, Doug
1995-01-01
A recently introduced iterative complexity- and entropy-constrained subband quantization design algorithm is generalized and applied to medical image compression. In particular, the corresponding subband coder is used to encode Computed Tomography (CT) axial slice head images, where statistical dependencies between neighboring image subbands are exploited. Inter-slice conditioning is also employed for further improvements in compression performance. The subband coder features many advantages such as relatively low complexity and operation over a very wide range of bit rates. Experimental results demonstrate that the performance of the new subband coder is relatively good, both objectively and subjectively.
Doppler and Band-width Characteristics of Periodic Binary Code Compressed to Several Sub-pulses
NASA Astrophysics Data System (ADS)
Yamashita, Shinichi; Shinriki, Masanori; Susaki, Hironori
The new periodic binary codes compressed to several sub-pulses are presented. Their Doppler and bandwidth characteristics are studied using MATLAB/Simulink, and the results are compared with those of the M-sequence. It is demonstrated that the new periodic binary codes have better Doppler and bandwidth characteristics than M-sequences.
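The paper's codes are not reproduced here, but the underlying idea of binary pulse compression can be illustrated with the classic Barker-13 code: matched filtering (correlation with the code) compresses the long pulse into a sharp main lobe with unit-magnitude sidelobes:

```python
import numpy as np

# Barker code of length 13: the binary phase code with the lowest peak sidelobe
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

# Pulse compression = matched filtering = correlation of the return with the code
acf = np.correlate(barker13, barker13, mode="full")

peak = acf.max()                                        # compressed main lobe: 13
sidelobe = np.abs(np.delete(acf, len(acf) // 2)).max()  # largest sidelobe: 1
```

The 13:1 peak-to-sidelobe ratio is what "compression to several sub-pulses" trades against: fewer, wider sub-pulses relax bandwidth requirements at some cost in sidelobe or Doppler behavior.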
AN OPEN-SOURCE NEUTRINO RADIATION HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE
O’Connor, Evan
2015-08-15
We present an open-source update to the spherically symmetric, general-relativistic hydrodynamics, core-collapse supernova (CCSN) code GR1D. The source code is available at http://www.GR1Dcode.org. We extend its capabilities to include a general-relativistic treatment of neutrino transport based on the moment formalisms of Shibata et al. and Cardall et al. We pay special attention to implementing and testing numerical methods and approximations that lessen the computational demand of the transport scheme by removing the need to invert large matrices. This is especially important for the implementation and development of moment-like transport methods in two and three dimensions. A critical component of neutrino transport calculations is the neutrino–matter interaction coefficients that describe the production, absorption, scattering, and annihilation of neutrinos. In this article we also describe our open-source neutrino interaction library NuLib (available at http://www.nulib.org). We believe that an open-source approach to describing these interactions is one of the major steps needed to progress toward robust models of CCSNe and robust predictions of the neutrino signal. We show, via comparisons to full Boltzmann neutrino-transport simulations of CCSNe, that our neutrino transport code performs remarkably well. Furthermore, we show that the methods and approximations we employ to increase efficiency do not decrease the fidelity of our results. We also test the ability of our general-relativistic transport code to model failed CCSNe by evolving a 40-solar-mass progenitor to the onset of collapse to a black hole.
Ntsama, Eloundou Pascal; Colince, Welba; Ele, Pierre
2016-01-01
In this article, we present a comparative study of a new compression approach using the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). We seek the transform best suited to vector quantization for compressing EMG signals. To do this, we first combined vector quantization with the DCT, and then vector quantization with the DWT. The coding phase uses set partitioning in hierarchical trees (SPIHT) coding combined with arithmetic coding. The method is demonstrated and evaluated on actual EMG data. Objective performance evaluation metrics are presented: compression factor, percentage root mean square difference, and signal-to-noise ratio. The results show that the method based on the DWT is more efficient than the method based on the DCT.
FORCE2: A state-of-the-art two-phase code for hydrodynamic calculations
NASA Astrophysics Data System (ADS)
Ding, Jianmin; Lyczkowski, R. W.; Burge, S. W.
1993-02-01
A three-dimensional computer code for two-phase flow named FORCE2 has been developed by Babcock and Wilcox (B&W) in close collaboration with Argonne National Laboratory (ANL). FORCE2 is capable of both transient and steady-state simulations. This Cartesian-coordinates computer program is a finite control volume, industrial grade and quality embodiment of the pilot-scale FLUFIX/MOD2 code and contains features such as three-dimensional blockages, volume and surface porosities to account for various obstructions in the flow field, and distributed resistance modeling to account for pressure drops caused by baffles, distributor plates and large tube banks. Recently computed results demonstrated the significance of and necessity for three-dimensional models of hydrodynamics and erosion. This paper describes the process whereby ANL's pilot-scale FLUFIX/MOD2 models and numerics were implemented into FORCE2. A description of the quality control to assess the accuracy of the new code and the validation using some of the measured data from the Illinois Institute of Technology (IIT) and the University of Illinois at Urbana-Champaign (UIUC) are given. It is envisioned that one day FORCE2, with additional modules such as radiation heat transfer, combustion kinetics and multi-solids, together with user-friendly pre- and post-processor software, and tailored for massively parallel multiprocessor shared-memory computational platforms, will be used by industry and researchers to assist in reducing and/or eliminating the environmental and economic barriers which limit full consideration of coal, shale and biomass as energy sources, to retain energy security, and to remediate waste and ecological problems.
Multispectral data compression through transform coding and block quantization
NASA Technical Reports Server (NTRS)
Ready, P. J.; Wintz, P. A.
1972-01-01
Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
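The Karhunen-Loeve encoder mentioned above decorrelates the spectral bands by projecting onto the eigenvectors of their covariance matrix, which compacts most of the energy into a few coefficients. A small sketch with synthetic correlated bands (the data are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "multispectral" data: 4 strongly correlated bands, 1000 pixels each
base = rng.normal(size=1000)
bands = np.stack([base + 0.1 * rng.normal(size=1000) for _ in range(4)])  # (4, 1000)

# Karhunen-Loeve transform: eigenvectors of the inter-band covariance matrix
cov = np.cov(bands)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
klt = eigvecs[:, ::-1].T                 # rows = principal directions, descending
coeffs = klt @ (bands - bands.mean(axis=1, keepdims=True))

# Energy compaction: the first KL component carries nearly all the variance,
# so the remaining components can be quantized coarsely or dropped
energy = (coeffs ** 2).sum(axis=1)
fraction_first = energy[0] / energy.sum()
```

Block quantization then allocates bits to each transform coefficient in proportion to its variance, which is why the KLT is the optimal decorrelating transform in this setting.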
Davis, Jean-Paul
2005-03-01
INVICE (INVerse analysis of Isentropic Compression Experiments) is a FORTRAN computer code that implements the inverse finite-difference method to analyze velocity data from isentropic compression experiments. This report gives a brief description of the methods used and the options available in the first beta version of the code, as well as instructions for using the code.
Compressive spectral polarization imaging with coded micropolarizer array
NASA Astrophysics Data System (ADS)
Fu, Chen; Arguello, Henry; Sadler, Brian M.; Arce, Gonzalo R.
2015-05-01
We present a compressive spectral polarization imager based on a prism, which is rotated to different angles as the measurement shots are taken, and a colored detector with a micropolarizer array. The prism shears the scene along one spatial axis according to its wavelength components. The scene is then projected to different locations on the detector as measurement shots are taken. The micropolarizer array is composed of 0°, 45°, 90°, and 135° linear micropolarizers whose pixels are matched to those of the colored detector, so that the first three Stokes parameters of the scene are compressively sensed. The four-dimensional (4D) data cube is thus projected onto the two-dimensional (2D) FPA. Designed patterns for the micropolarizer array and the colored detector are applied to improve the reconstruction. The 4D spectral-polarization data cube is reconstructed from the 2D measurements via nonlinear optimization with sparsity constraints. Computer simulations are performed, and the performance of the designed patterns is compared with random patterns.
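With 0°, 45°, 90°, and 135° linear micropolarizers, the first three Stokes parameters follow from simple sums and differences of the four measured intensities. A minimal sketch (ideal optics assumed):

```python
def stokes_from_micropolarizers(I0, I45, I90, I135):
    """First three Stokes parameters from the four linear micropolarizer intensities."""
    S0 = I0 + I90     # total intensity (equals I45 + I135 for ideal optics)
    S1 = I0 - I90     # horizontal vs. vertical linear polarization
    S2 = I45 - I135   # +45 deg vs. -45 deg linear polarization
    return S0, S1, S2

# Fully horizontally polarized light of unit intensity:
S0, S1, S2 = stokes_from_micropolarizers(I0=1.0, I45=0.5, I90=0.0, I135=0.5)
```

In the compressive imager each detector pixel sees only one of the four orientations, so these combinations are recovered jointly with the spectral cube during reconstruction rather than computed per pixel.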
Non-US data compression and coding research. FASAC Technical Assessment Report
Gray, R.M.; Cohn, M.; Craver, L.W.; Gersho, A.; Lookabaugh, T.; Pollara, F.; Vetterli, M.
1993-11-01
This assessment of recent data compression and coding research outside the United States examines fundamental and applied work in the basic areas of signal decomposition, quantization, lossless compression, and error control, as well as application development efforts in image/video compression and speech/audio compression. Seven computer scientists and engineers who are active in development of these technologies in US academia, government, and industry carried out the assessment. Strong industrial and academic research groups in Western Europe, Israel, and the Pacific Rim are active in the worldwide search for compression algorithms that provide good tradeoffs among fidelity, bit rate, and computational complexity, though the theoretical roots and virtually all of the classical compression algorithms were developed in the United States. Certain areas, such as segmentation coding, model-based coding, and trellis-coded modulation, have developed earlier or in more depth outside the United States, though the United States has maintained its early lead in most areas of theory and algorithm development. Researchers abroad are active in other currently popular areas, such as quantizer design techniques based on neural networks and signal decompositions based on fractals and wavelets, but, in most cases, either similar research is or has been going on in the United States, or the work has not led to useful improvements in compression performance. Because there is a high degree of international cooperation and interaction in this field, good ideas spread rapidly across borders (both ways) through international conferences, journals, and technical exchanges. Though there have been no fundamental data compression breakthroughs in the past five years--outside or inside the United States--there have been an enormous number of significant improvements in both places in the tradeoffs among fidelity, bit rate, and computational complexity.
F2D users manual: A two-dimensional compressible gas flow code
NASA Astrophysics Data System (ADS)
Suo-Anttila, A.
1993-08-01
The F2D computer code is a general purpose, two-dimensional, fully compressible thermal-fluids code that models most of the phenomena found in situations of coupled fluid flow and heat transfer. The code solves momentum, continuity, gas-energy, and structure-energy equations using a predictor-corrector solution algorithm. The corrector step includes a Poisson pressure equation. The finite difference form of the equation is presented along with a description of input and output. Several example problems are included that demonstrate the applicability of the code in problems ranging from free fluid flow, shock tubes, and flow in heated porous media.
SRComp: short read sequence compression using burstsort and Elias omega coding.
Selva, Jeremy John; Chen, Xin
2013-01-01
Next-generation sequencing (NGS) technologies permit the rapid production of vast amounts of data at low cost. Economical data storage and transmission hence becomes an increasingly important challenge for NGS experiments. In this paper, we introduce a new non-reference based read sequence compression tool called SRComp. It works by first employing a fast string-sorting algorithm called burstsort to sort read sequences in lexicographical order and then Elias omega-based integer coding to encode the sorted read sequences. SRComp has been benchmarked on four large NGS datasets, where experimental results show that it can run 5-35 times faster than current state-of-the-art read sequence compression tools such as BEETL and SCALCE, while retaining comparable compression efficiency for large collections of short read sequences. SRComp is a read sequence compression tool that is particularly valuable in certain applications where compression time is of major concern.
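Elias omega coding, the integer code SRComp applies to the sorted reads, is compact and simple to implement: each value's binary representation is prefixed, recursively, by the encoding of its own length. A sketch of a general-purpose encoder and decoder (not SRComp's actual implementation):

```python
def elias_omega_encode(n):
    """Elias omega code for a positive integer, returned as a bit string."""
    assert n >= 1
    code = "0"                    # terminating zero
    while n > 1:
        b = bin(n)[2:]            # binary representation of n, no leading zeros
        code = b + code           # prepend it
        n = len(b) - 1            # then encode its length - 1, recursively
    return code

def elias_omega_decode(bits, i=0):
    """Decode one omega codeword starting at offset i; return (value, next offset)."""
    n = 1
    while bits[i] == "1":         # a leading 1 means another length group follows
        length = n + 1
        n = int(bits[i:i + length], 2)
        i += length
    return n, i + 1               # skip the terminating '0'

# Codewords are self-delimiting, so they can be concatenated into one stream
stream = "".join(elias_omega_encode(n) for n in [1, 2, 7, 100])
```

Because the code is universal (no probability model needed) and favors small integers, it suits the small gaps produced by lexicographically sorted reads.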
Energy requirements for quantum data compression and 1-1 coding
Rallan, Luke; Vedral, Vlatko
2003-10-01
By looking at quantum data compression in the second quantization, we present a model for the efficient generation and use of variable length codes. In this picture, lossless data compression can be seen as the minimum energy required to faithfully represent or transmit classical information contained within a quantum state. In order to represent information, we create quanta in some predefined modes (i.e., frequencies) prepared in one of the two possible internal states (the information carrying degrees of freedom). Data compression is now seen as the selective annihilation of these quanta, the energy of which is effectively dissipated into the environment. As any increase in the energy of the environment is intricately linked to any information loss and is subject to Landauer's erasure principle, we use this principle to distinguish lossless and lossy schemes and to suggest bounds on the efficiency of our lossless compression protocol. In line with the work of Bostroem and Felbinger [Phys. Rev. A 65, 032313 (2002)], we also show that when using variable length codes the classical notions of prefix or uniquely decipherable codes are unnecessarily restrictive given the structure of quantum mechanics and that a 1-1 mapping is sufficient. In the absence of this restraint, we translate existing classical results on 1-1 coding to the quantum domain to derive a new upper bound on the compression of quantum information. Finally, we present a simple quantum circuit to implement our scheme.
A Test Data Compression Scheme Based on Irrational Numbers Stored Coding
Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan
2014-01-01
The volume of test data has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting floating-point numbers to irrational numbers is given. Experimental results for some ISCAS 89 benchmarks show that the compression achieved by the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL. PMID:25258744
NASA Astrophysics Data System (ADS)
Wani, Naveel; Maqbool, Bari; Iqbal, Naseer; Misra, Ranjeev
2016-07-01
X-ray binaries and AGNs are powered by accretion discs around compact objects, where the X-rays are emitted from the inner regions and UV emission arises from the relatively cooler outer parts. There is increasing evidence that the variability of the X-rays on different timescales is caused by stochastic fluctuations in the accretion disc at different radii. Although these fluctuations arise in the outer parts of the disc, they propagate inwards to give rise to X-ray variability, and hence provide a natural connection between the X-ray and UV variability. There are analytical expressions to qualitatively understand the effect of these stochastic variabilities, but quantitative predictions are only possible through a detailed hydrodynamical study of the global time-dependent solution of the standard accretion disc. We have developed a numerically efficient code (to incorporate all these effects), which considers gas pressure dominated solutions and stochastic fluctuations, with the inclusion of the boundary effect of the last stable orbit.
Numerical Modeling of Imploding Plasma liners Using the 1D Radiation-Hydrodynamics Code HELIOS
NASA Astrophysics Data System (ADS)
Davis, J. S.; Hanna, D. S.; Awe, T. J.; Hsu, S. C.; Stanic, M.; Cassibry, J. T.; Macfarlane, J. J.
2010-11-01
The Plasma Liner Experiment (PLX) is attempting to form imploding plasma liners to reach 0.1 Mbar upon stagnation, via 30--60 spherically convergent plasma jets. PLX is partly motivated by the desire to develop a standoff driver for magneto-inertial fusion. The liner density, atomic makeup, and implosion velocity will help determine the maximum pressure that can be achieved. This work focuses on exploring the effects of atomic physics and radiation on the 1D liner implosion and stagnation dynamics. For this reason, we are using Prism Computational Science's 1D Lagrangian rad-hydro code HELIOS, which has both equation of state (EOS) table-lookup and detailed configuration accounting (DCA) atomic physics modeling. By comparing a series of PLX-relevant cases proceeding from ideal gas, to EOS tables, to DCA treatments, we aim to identify how and when atomic physics effects are important for determining the peak achievable stagnation pressures. In addition, we present verification test results as well as brief comparisons to results obtained with RAVEN (1D radiation-MHD) and SPHC (smoothed particle hydrodynamics).
NASA Technical Reports Server (NTRS)
Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.
1994-01-01
A two-dimensional computational code, RPLUS2D, which was developed for the reactive propulsive flows of ramjets and scramjets, was validated for two-dimensional shock-wave/turbulent-boundary-layer interactions. The problem of compression corners at supersonic speeds was solved using the RPLUS2D code. To validate the RPLUS2D code for hypersonic speeds, it was applied to a realistic hypersonic inlet geometry. Both the Baldwin-Lomax and the Chien two-equation turbulence models were used. Computational results showed that the RPLUS2D code compared very well with experimentally obtained data for supersonic compression corner flows, except in the case of large separated flows resulting from the interactions between the shock wave and the turbulent boundary layer. The computational results also compared well with experimental results in a hypersonic NASA P8 inlet case, with the Chien two-equation turbulence model performing better than the Baldwin-Lomax model.
Ramshaw, J D
2000-10-01
A simple model was recently described for predicting the time evolution of the width of the mixing layer at an unstable fluid interface [J. D. Ramshaw, Phys. Rev. E 58, 5834 (1998); ibid. 61, 5339 (2000)]. The ordinary differential equations of this model have been heuristically generalized into partial differential equations suitable for implementation in multicomponent hydrodynamics codes. The central ingredient in this generalization is a non-diffusional expression for the species mass fluxes. These fluxes describe the relative motion of the species, and thereby determine the local mixing rate and spatial distribution of mixed fluid as a function of time. The generalized model has been implemented in a two-dimensional hydrodynamics code. The model equations and implementation procedure are summarized, and comparisons with experimental mixing data are presented.
A lossless compression method for medical image sequences using JPEG-LS and interframe coding.
Miaou, Shaou-Gang; Ke, Fu-Sheng; Chen, Shu-Ching
2009-09-01
Hospitals and medical centers produce an enormous amount of digital medical images every day, especially in the form of image sequences, which requires considerable storage space. One solution could be the application of lossless compression. Among available methods, JPEG-LS has excellent coding performance. However, it only compresses a single picture with intracoding and does not utilize the interframe correlation among pictures. Therefore, this paper proposes a method that combines the JPEG-LS and an interframe coding with motion vectors to enhance the compression performance of using JPEG-LS alone. Since the interframe correlation between two adjacent images in a medical image sequence is usually not as high as that in a general video image sequence, the interframe coding is activated only when the interframe correlation is high enough. With six capsule endoscope image sequences under test, the proposed method achieves average compression gains of 13.3% and 26.3% over the methods of using JPEG-LS and JPEG2000 alone, respectively. Similarly, for an MRI image sequence, coding gains of 77.5% and 86.5% are correspondingly obtained.
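The conditional activation described above can be sketched as follows. The correlation threshold and frames are hypothetical, and the actual JPEG-LS entropy coding of the intra frame or residual is omitted:

```python
import numpy as np

def residual_or_intra(prev, curr, corr_threshold=0.9):
    """Encode curr as an interframe residual only when it correlates strongly with prev."""
    corr = np.corrcoef(prev.ravel(), curr.ravel())[0, 1]
    if corr >= corr_threshold:
        # high interframe correlation: code the (small, compressible) difference
        return "inter", curr.astype(np.int16) - prev.astype(np.int16)
    # low correlation: intra-coding the frame alone is cheaper
    return "intra", curr.astype(np.int16)

rng = np.random.default_rng(2)
frame1 = rng.integers(0, 256, size=(16, 16))
frame2 = frame1 + rng.integers(-2, 3, size=(16, 16))   # nearly identical adjacent slice
frame3 = rng.integers(0, 256, size=(16, 16))           # unrelated slice

mode12, res12 = residual_or_intra(frame1, frame2)
mode13, res13 = residual_or_intra(frame1, frame3)
```

When the residual path is taken, the difference image has a much narrower histogram than the frame itself, which is exactly where a JPEG-LS-style predictor and Golomb coder gain.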
NASA Astrophysics Data System (ADS)
Chouakri, S. A.; Djaafri, O.; Taleb-Ahmed, A.
2013-08-01
We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at its transmission via a telecommunication channel. The proposed ECG compression algorithm is built on the wavelet transform, which separates low- and high-frequency components; high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; and a linear predictive coding filter applied to the wavelet coefficients, producing a lower-variance signal. The latter is then coded using Huffman encoding, yielding an optimal code length in terms of the average number of bits per sample. At the receiver end, assuming an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, inverse linear predictive coding filtering, and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are, respectively, around 1:8 and 7%. Besides the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal restitution, where the different ECG waves are recovered correctly.
Hierarchical prediction and context adaptive coding for lossless color image compression.
Kim, Seyun; Cho, Nam Ik
2014-01-01
This paper presents a new lossless color image compression algorithm based on hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, the image is first decorrelated by a reversible color transform, and then the Y component is encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for pixel prediction, whereas conventional raster-scan prediction methods use only upper and left pixels. An appropriate context model for the prediction error is also defined, and arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
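A reversible color transform of the kind described maps RGB to one luma and two chroma components using only integer arithmetic, so the inverse is exact and no precision is lost. This sketch uses the JPEG 2000 reversible color transform (RCT) as a representative example; the paper's own transform may differ:

```python
def rct_forward(r, g, b):
    """JPEG 2000 reversible color transform (integer, exactly invertible)."""
    y = (r + 2 * g + b) >> 2   # luma: floor((R + 2G + B) / 4)
    cu = b - g                 # chroma difference B - G
    cv = r - g                 # chroma difference R - G
    return y, cu, cv

def rct_inverse(y, cu, cv):
    g = y - ((cu + cv) >> 2)   # the floored term is recovered exactly
    b = cu + g
    r = cv + g
    return r, g, b

# Losslessness check over a few sample pixels
pixels = [(0, 0, 0), (255, 255, 255), (12, 200, 37), (255, 0, 128)]
roundtrip = [rct_inverse(*rct_forward(r, g, b)) for (r, g, b) in pixels]
```

The trick is that the floor of `(cu + cv) / 4` reproduces the rounding performed in the forward luma computation, so integer round-trip is exact (Python's `>>` floors negative values, matching the transform's definition).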
Ultraspectral sounder data compression using the non-exhaustive Tunstall coding
NASA Astrophysics Data System (ADS)
Wei, Shih-Chieh; Huang, Bormin
2008-08-01
With its bulky volume, the ultraspectral sounder data might still suffer a few bits of error after channel coding. Therefore it is beneficial to incorporate some mechanism in source coding for error containment. The Tunstall code is a variable-to-fixed length code which can reduce the error propagation encountered in fixed-to-variable length codes like Huffman and arithmetic codes. The original Tunstall code uses an exhaustive parse tree in which every internal node branches on every symbol. This can result in the assignment of precious codewords to less probable parse strings. Based on an infinitely extended parse tree, a modified Tunstall code is proposed which grows an optimal non-exhaustive parse tree by assigning the complete codewords only to the top-probability nodes in the infinite tree. Comparisons are made among the original exhaustive Tunstall code, our modified non-exhaustive Tunstall code, the CCSDS Rice code, and JPEG-2000 in terms of compression ratio and percent error rate using the ultraspectral sounder data.
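For reference, the original exhaustive Tunstall construction repeatedly expands the most probable leaf of the parse tree until the fixed-length codeword budget is exhausted; each resulting leaf (a source string) gets one fixed-length codeword. A small sketch for a binary memoryless source with made-up probabilities (the paper's non-exhaustive variant is not reproduced):

```python
import heapq

def tunstall_dictionary(probs, codeword_bits):
    """Exhaustive Tunstall parse dictionary for a memoryless source.

    probs: {symbol: probability}; codeword_bits: fixed output codeword length L.
    Returns the sorted list of parse strings (one per fixed-length codeword).
    """
    m = len(probs)
    max_leaves = 2 ** codeword_bits
    # max-heap of leaves, keyed by negative probability
    heap = [(-p, s) for s, p in probs.items()]
    heapq.heapify(heap)
    n_leaves = m
    while n_leaves + (m - 1) <= max_leaves:     # room to expand one more leaf
        neg_p, word = heapq.heappop(heap)       # most probable leaf
        for s, p in probs.items():              # expand it into m children
            heapq.heappush(heap, (neg_p * p, word + s))
        n_leaves += m - 1
    return sorted(word for _, word in heap)

words = tunstall_dictionary({"a": 0.7, "b": 0.3}, codeword_bits=2)
```

Note how the budget forces a codeword onto the improbable string "b" while probable runs of "a" are parsed in long chunks; the paper's modification addresses exactly this waste of codewords on low-probability parse strings.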
Split field coding: low complexity error-resilient entropy coding for image compression
NASA Astrophysics Data System (ADS)
Meany, James J.; Martens, Christopher J.
2008-08-01
In this paper, we describe split field coding, an approach for low complexity, error-resilient entropy coding which splits code words into two fields: a variable length prefix and a fixed length suffix. Once a prefix has been decoded correctly, then the associated fixed length suffix is error-resilient, with bit errors causing no loss of code word synchronization and only a limited amount of distortion on the decoded value. When the fixed length suffixes are segregated to a separate block, this approach becomes suitable for use with a variety of methods which provide varying protection to different portions of the bitstream, such as unequal error protection or progressive ordering schemes. Split field coding is demonstrated in the context of a wavelet-based image codec, with examples of various error resilience properties, and comparisons to the rate-distortion and computational performance of JPEG 2000.
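A minimal sketch of the prefix/suffix split, assuming a unary-coded bit-length category as the variable-length prefix. The paper's actual code tables may differ; this just shows why suffix bit errors cannot desynchronize the decoder:

```python
def split_field_encode(value):
    """Split a non-negative integer into a variable-length unary prefix
    (the bit-length category) and a fixed-length binary suffix."""
    category = value.bit_length()            # 0 encodes the value 0
    prefix = "1" * category + "0"
    suffix = format(value, "0{}b".format(category)) if category else ""
    return prefix, suffix

def split_field_decode(bits):
    """Decode one codeword from the front of a bit string; a flipped
    suffix bit changes the value but not the codeword length."""
    category = bits.index("0")               # unary prefix length
    start = category + 1
    suffix = bits[start:start + category]
    return int(suffix, 2) if category else 0
```

Once the prefix is decoded correctly, the suffix length is fixed, so a corrupted suffix bit only perturbs the decoded magnitude: exactly the error-resilience property the abstract describes.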
Data compression in wireless sensors network using MDCT and embedded harmonic coding.
Alsalaet, Jaafar K; Ali, Abduladhem A
2015-05-01
One of the major applications of wireless sensor networks (WSNs) is vibration measurement for the purpose of structural health monitoring and machinery fault diagnosis. WSNs have many advantages over wired networks, such as low cost and reduced setup time. However, the useful bandwidth is limited compared to wired networks, resulting in relatively low sampling rates. One solution to this problem is data compression, which, in addition to enhancing the effective sampling rate, saves valuable power in the wireless nodes. In this work, a data compression scheme based on the Modified Discrete Cosine Transform (MDCT) followed by Embedded Harmonic Components Coding (EHCC) is proposed to compress vibration signals. The EHCC exploits the harmonic redundancy present in most vibration signals, resulting in an improved compression ratio. The scheme is made suitable for the tiny hardware of wireless nodes and proves to be fast and effective. The efficiency of the proposed scheme is investigated by conducting several experimental tests.
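For reference, the MDCT underlying the scheme can be sketched in direct form. A real codec would add a window and an FFT-based fast path; this direct O(N²) form only shows the transform's shape (2N samples in, N coefficients out):

```python
import math

def mdct(frame):
    """Direct-form MDCT: a 2N-sample frame yields N coefficients.
    X[k] = sum_n x[n] * cos(pi/N * (n + 1/2 + N/2) * (k + 1/2))."""
    two_n = len(frame)
    n = two_n // 2
    return [sum(frame[i] * math.cos(math.pi / n * (i + 0.5 + n / 2) * (k + 0.5))
                for i in range(two_n))
            for k in range(n)]
```

The 2:1 input-to-output ratio is what makes the MDCT attractive here: with 50% overlapped frames it is critically sampled, so no extra coefficients are transmitted.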
F2D. A Two-Dimensional Compressible Gas Flow Code
Suo-Anttila, A.
1993-08-01
F2D is a general-purpose, two-dimensional, fully compressible thermal-fluids code that models most of the phenomena found in situations of coupled fluid flow and heat transfer. The code solves momentum, continuity, gas-energy, and structure-energy equations using a predictor-corrector solution algorithm; the corrector step includes a Poisson pressure equation. The finite-difference form of the equations is presented along with a description of input and output. Several example problems are included that demonstrate the applicability of the code to problems ranging from free fluid flow to shock tubes and flow in heated porous media.
Raeiatibanadkooki, Mahsa; Quchani, Saeed Rahati; KhalilZade, MohammadMahdi; Bahaadinbeigy, Kambiz
2016-03-01
In mobile health care monitoring, compression is an essential tool for solving storage and transmission problems, and the important issue is being able to recover the original signal from the compressed signal. The main purpose of this paper is to compress the ECG signal with no loss of essential data and also to encrypt the signal to keep it confidential from everyone except physicians. Mobile processors are used, with no need for any computers to serve this purpose. After initial preprocessing such as removal of baseline noise and Gaussian noise, peak detection, and determination of heart rate, the ECG signal is compressed. In the compression stage, after three levels of wavelet transform (db04), thresholding techniques are used; then Huffman coding with chaos is used for compression and encryption of the ECG signal. The compression rate of the proposed algorithm is 97.72%. The ECG signals are then sent to a telemedicine center over TCP/IP to acquire a specialist diagnosis. PMID:26779641
NASA Astrophysics Data System (ADS)
Wongwathanarat, A.; Grimm-Strele, H.; Müller, E.
2016-10-01
We present a new fourth-order, finite-volume hydrodynamics code named Apsara. The code employs a high-order, finite-volume method for mapped coordinates with extensions for nonlinear hyperbolic conservation laws, and can handle arbitrary structured curvilinear meshes in three spatial dimensions. It has successfully passed several hydrodynamic test problems, including the advection of a Gaussian density profile, a nonlinear vortex, and the propagation of linear acoustic waves. For these test problems, Apsara produces fourth-order accurate results in the case of smooth grid mappings; the order of accuracy drops to first order for a nonsmooth circular grid mapping. When applying the high-order method to simulations of low-Mach-number flows, for example the Gresho vortex and the Taylor-Green vortex, we find that Apsara delivers results superior to those of codes based on the dimensionally split piecewise parabolic method (PPM) widely used in astrophysics. Hence, Apsara is a suitable tool for simulating highly subsonic flows in astrophysics. In a first astrophysical application, we perform implicit large eddy simulations (ILES) of anisotropic turbulence in the context of core-collapse supernovae (CCSN) and obtain results similar to those previously reported.
NASA Astrophysics Data System (ADS)
Sijoy, C. D.; Chaturvedi, S.
2016-06-01
Higher-order cell-centered multi-material hydrodynamics (HD) and parallel node-centered radiation transport (RT) schemes are combined self-consistently in the three-temperature (3T) radiation hydrodynamics (RHD) code TRHD (Sijoy and Chaturvedi, 2015), developed for the simulation of intense thermal radiation or high-power laser driven RHD. For RT, a node-centered gray model implemented in the popular RHD code MULTI2D (Ramis et al., 2009) is used. This scheme can, in principle, handle RT in both optically thick and thin materials. The RT module has been parallelized using the message passing interface (MPI). For multi-material HD, we presently use a simple and robust closure model in which strain rates common to all materials in a mixed cell are assumed. The closure model has been further generalized to allow different temperatures for the electrons and ions. In addition, electron and radiation temperatures are assumed to be out of equilibrium, so the thermal relaxation between electrons and ions and the coupling between radiation and matter energies must be computed self-consistently. This is achieved using a node-centered symmetric semi-implicit (SSI) integration scheme. The electron thermal conduction is calculated using a cell-centered, monotonic, non-linear finite volume (NLFV) scheme suitable for unstructured meshes. In this paper, we describe the details of the 2D, 3T, non-equilibrium, multi-material RHD code, with special attention to the coupling of the various cell-centered and node-centered formulations, along with a suite of validation test problems that demonstrate the accuracy and performance of the algorithms. We also report the parallel performance of the RT module. Finally, to demonstrate the full capability of the code, we present the simulation of laser-driven shock propagation in a layered thin foil. The simulation results are found to be in good
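The semi-implicit idea behind such temperature-relaxation steps can be illustrated with a deliberately simplified two-temperature model, assuming equal heat capacities. This is not the paper's actual SSI scheme, only a sketch of why implicit differencing of the temperature gap is unconditionally stable:

```python
def relax_temperatures(te, ti, nu, dt):
    """Semi-implicit update of electron-ion thermal relaxation:
    dTe/dt = -nu*(Te - Ti),  dTi/dt = +nu*(Te - Ti).
    The gap decays implicitly (stable for any dt), while the mean,
    which stands in for the total energy here, is conserved exactly."""
    gap = (te - ti) / (1.0 + 2.0 * nu * dt)   # implicit decay of Te - Ti
    mean = 0.5 * (te + ti)                    # conserved quantity
    return mean + 0.5 * gap, mean - 0.5 * gap
```

Even for a relaxation rate far stiffer than the time step, the update drives both temperatures smoothly toward their common mean instead of overshooting, which is the property an explicit update would lose.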
1986-12-01
Version 00. The MEDUSA-IB code performs implosion and thermonuclear burn calculations of an ion-beam-driven ICF target, based on one-dimensional plasma hydrodynamics and transport theory. It can calculate the following quantities in spherical geometry through the course of implosion and fuel burnup of a multi-layered target: (1) hydrodynamic velocities, density, ion, electron and radiation temperatures, radiation energy density, ρR and burn rate of the target as functions of coordinates and time; (2) fusion gain as a function of time; (3) ionization degree; (4) temperature-dependent ion beam energy deposition; (5) radiation, α-particle and neutron spectra as functions of time.
NASA Astrophysics Data System (ADS)
Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok
2005-08-01
Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of
ECG compression using uniform scalar dead-zone quantization and conditional entropy coding.
Chen, Jianhua; Wang, Fuyan; Zhang, Yufeng; Shi, Xinling
2008-05-01
A new wavelet-based method for the compression of electrocardiogram (ECG) data is presented. A discrete wavelet transform (DWT) is applied to the digitized ECG signal. The DWT coefficients are first quantized with a uniform scalar dead-zone quantizer, and then the quantized coefficients are decomposed into four symbol streams, representing a binary significance stream, the signs, the positions of the most significant bits, and the residual bits. An adaptive arithmetic coder with several different context models is employed for the entropy coding of these symbol streams. Simulation results on several records from the MIT-BIH arrhythmia database show that the proposed coding algorithm outperforms some recently developed ECG compression algorithms.
Compact all-CMOS spatiotemporal compressive sensing video camera with pixel-wise coded exposure.
Zhang, Jie; Xiong, Tao; Tran, Trac; Chin, Sang; Etienne-Cummings, Ralph
2016-04-18
We present a low-power all-CMOS implementation of temporal compressive sensing with pixel-wise coded exposure. This image sensor can increase video pixel resolution and frame rate simultaneously while reducing data readout speed. Compared to previous architectures, this system modulates pixel exposure at the individual photodiode electronically, without external optical components, providing a reduction in size and power compared to previous optics-based implementations. The prototype image sensor (127 × 90 pixels) can reconstruct 100 fps videos from coded images sampled at 5 fps. With a 20× reduction in readout speed, our CMOS image sensor consumes only 14 μW to provide 100 fps videos.
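The pixel-wise coded-exposure measurement model can be sketched as follows. This is a plain-Python illustration of the forward model only; recovering the video from the coded image requires a separate sparse-recovery solver not shown here:

```python
def coded_exposure_measure(video, masks):
    """Each pixel integrates only the frames in which its per-pixel
    exposure bit is set, collapsing T frames into one coded image."""
    frames = len(video)
    rows, cols = len(video[0]), len(video[0][0])
    coded = [[0.0] * cols for _ in range(rows)]
    for f in range(frames):
        for r in range(rows):
            for c in range(cols):
                if masks[f][r][c]:
                    coded[r][c] += video[f][r][c]
    return coded
```

Because each pixel has its own on/off pattern in time, a single readout at the slow frame rate still carries information about the fast temporal variation, which is what the reconstruction exploits.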
Lossy compression of MERIS superspectral images with exogenous quasi optimal coding transforms
NASA Astrophysics Data System (ADS)
Akam Bita, Isidore Paul; Barret, Michel; Dalla Vedova, Florio; Gutzwiller, Jean-Louis
2009-08-01
Our research focuses on reducing the complexity of hyperspectral image codecs based on transform and/or subband coding, so that they can run on board a satellite. It is well known that the Karhunen-Loève Transform (KLT) can be sub-optimal in transform coding for non-Gaussian data; however, it is generally regarded as the best computable linear coding transform in practice. The concept and computation of optimal coding transforms (OCT), under weakly restrictive hypotheses at high bit rates, were carried out and adapted to a compression scheme compatible with both the JPEG2000 Part 2 standard and the CCSDS recommendations for on-board satellite image compression, leading to the concept and computation of Optimal Spectral Transforms (OST). These linear transforms are optimal for reducing the spectral redundancies of multi- or hyperspectral images when the spatial redundancies are reduced with a fixed 2-D Discrete Wavelet Transform (DWT). The drawback of the OST is its heavy computational cost. In this paper we present the coding performance of a quasi-optimal spectral transform, called exogenous OrthOST, obtained by learning an orthogonal OST on a sample of superspectral images from the spectrometer MERIS. The performance is presented in terms of bit rate versus distortion for four different distortion measures and compared to that of the KLT. We observe good performance of the exogenous OrthOST, as was the case on Hyperion hyperspectral images in previous works.
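The KLT baseline the paper compares against can be illustrated in the two-band case, where the decorrelating rotation has a closed form. This is an illustrative sketch of spectral decorrelation, not the OST computation:

```python
import math

def klt_2band(band1, band2):
    """KLT for two spectral bands: rotate by the angle that
    diagonalizes the 2x2 covariance matrix, decorrelating the bands."""
    n = len(band1)
    m1, m2 = sum(band1) / n, sum(band2) / n
    c11 = sum((a - m1) ** 2 for a in band1) / n
    c22 = sum((b - m2) ** 2 for b in band2) / n
    c12 = sum((a - m1) * (b - m2) for a, b in zip(band1, band2)) / n
    theta = 0.5 * math.atan2(2.0 * c12, c11 - c22)  # closed-form angle
    ct, st = math.cos(theta), math.sin(theta)
    out1 = [ct * a + st * b for a, b in zip(band1, band2)]
    out2 = [-st * a + ct * b for a, b in zip(band1, band2)]
    return out1, out2
```

For many bands the same idea requires a full eigendecomposition of the spectral covariance matrix, which is exactly the per-image cost that an exogenous (pre-learned) transform avoids.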
NASA Technical Reports Server (NTRS)
Rice, R. F.
1974-01-01
End-to-end system considerations involving channel coding and data compression are reported which could drastically improve the efficiency of communicating pictorial information from future planetary spacecraft. In addition to presenting new and potentially significant system considerations, this report attempts to fill the need for a comprehensive tutorial that makes much of this subject accessible to readers whose disciplines lie outside communication theory.
Recent Hydrodynamics Improvements to the RELAP5-3D Code
Richard A. Riemke; Cliff B. Davis; Richard R. Schultz
2009-07-01
The hydrodynamics section of the RELAP5-3D computer program has been recently improved. Changes were made as follows: (1) improved turbine model, (2) spray model for the pressurizer model, (3) feedwater heater model, (4) radiological transport model, (5) improved pump model, and (6) compressor model.
Edwards, M J; Hansen, J; Miles, A R; Froula, D; Gregori, G; Glenzer, S; Edens, A; Dittmire, T
2005-02-08
The possibility of studying compressible turbulent flows using gas targets driven by high-power lasers and diagnosed with optical techniques is investigated. The potential advantage over typical laser experiments that use solid targets and x-ray diagnostics is more detailed information over a larger range of spatial scales. An experimental system is described to study shock-jet interactions at high Mach number. It consists of a mini-chamber filled with nitrogen at a pressure of ≈1 atm, situated inside a much larger vacuum chamber. An intense laser pulse (≈100 J in ≈5 ns) is focused onto a thin, ≈0.3 μm thick silicon nitride window at one end of the mini-chamber. The window acts both as a vacuum barrier and as the laser entrance hole. The "explosion" caused by the deposition of the laser energy just inside the window drives a strong blast wave out into the nitrogen atmosphere. The spherical shock expands and interacts with a jet of xenon introduced through the top of the mini-chamber. The Mach number of the interaction is controlled by the separation of the jet from the explosion. The resulting flow is visualized with a pulsed-laser schlieren system at a wavelength of 0.53 μm. The technical path leading up to the design of this experiment is presented, and future prospects are briefly considered. Lack of laser time in the final year of the project severely limited the experimental results obtained with the new apparatus.
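The blast-wave scaling behind such laser-driven explosions is the Sedov-Taylor similarity solution. The following sketch uses illustrative parameter values, not numbers from the paper:

```python
def sedov_radius(energy_j, density_kg_m3, time_s, xi=1.15):
    """Sedov-Taylor blast-wave radius R(t) = xi * (E * t**2 / rho)**(1/5).
    xi ~ 1.15 for an ideal gas with gamma = 5/3 (dimensionless constant)."""
    return xi * (energy_j * time_s ** 2 / density_kg_m3) ** 0.2

# e.g. ~100 J deposited in nitrogen at roughly 1 atm (rho ~ 1.15 kg/m^3)
# gives a centimeter-scale shock radius after a microsecond
```

The weak (1/5-power) dependence on energy and density is why modest changes in jet standoff distance, rather than laser energy, are the practical knob for tuning the interaction Mach number.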
Analysis of Doppler Effect on the Pulse Compression of Different Codes Emitted by an Ultrasonic LPS
Paredes, José A.; Aguilera, Teodoro; Álvarez, Fernando J.; Lozano, Jesús; Morera, Jorge
2011-01-01
This work analyses the effect of the receiver movement on the detection by pulse compression of different families of codes characterizing the emissions of an Ultrasonic Local Positioning System. Three families of codes have been compared: Kasami, Complementary Sets of Sequences and Loosely Synchronous, considering in all cases three different lengths close to 64, 256 and 1,024 bits. This comparison is first carried out by using a system model in order to obtain a set of results that are then experimentally validated with the help of an electric slider that provides radial speeds up to 2 m/s. The performance of the codes under analysis has been characterized by means of the auto-correlation and cross-correlation bounds. The results derived from this study should be of interest to anyone performing matched filtering of ultrasonic signals with a moving emitter/receiver. PMID:22346670
NASA Astrophysics Data System (ADS)
Martinoty, P.; Gallani, J. L.; Collin, D.
1998-07-01
We present dynamic compression experiments which show that the layer-compression modulus B does not exhibit dispersion in the 100 Hz-1 kHz domain and varies with temperature according to a simple power law over four decades. These results indicate that the saturation effect predicted by Nelson and Toner does not exist for this compound, and that the thermal variation of B determined at higher frequencies by a second-sound technique is not hydrodynamic, contrary to what has been claimed; we show that it is affected by two relaxation mechanisms.
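Extracting a power-law exponent from such measurements reduces to a least-squares fit on log-log axes. The following sketch uses illustrative symbols (B = B0 * x^phi with x the reduced temperature), not the paper's actual data or notation:

```python
import math

def fit_power_law(xs, bs):
    """Least-squares fit of B = B0 * x**phi on log-log axes.
    Returns (B0, phi); assumes strictly positive xs and bs."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(b) for b in bs]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    phi = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
           / sum((x - mx) ** 2 for x in lx))   # slope = exponent
    b0 = math.exp(my - phi * mx)               # intercept = prefactor
    return b0, phi
```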
Adaptive variable-length coding for efficient compression of spacecraft television data.
NASA Technical Reports Server (NTRS)
Rice, R. F.; Plaunt, J. R.
1971-01-01
An adaptive variable-length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample-to-sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the need to store any code words. Performance improvements of 0.5 bit/pixel can be achieved simply by utilizing previous-line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
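The per-block adaptive code selection can be illustrated with Golomb-Rice codes as the candidate set. This is a stand-in for the three concatenated codes of the original system, chosen because the per-block cost of each candidate has a closed form:

```python
def rice_block_bits(block, k):
    """Bits to code a block of non-negative residuals with Golomb-Rice
    parameter k: each sample costs a unary prefix of (x >> k) + 1 bits
    plus k fixed suffix bits."""
    return sum((x >> k) + 1 + k for x in block)

def select_code(block, options=(0, 1, 2)):
    """The adaptive step: pick, per block, the cheapest candidate code."""
    return min(options, key=lambda k: rice_block_bits(block, k))
```

Because the winning parameter is signaled with a couple of bits per block, the coder tracks rapid changes in source statistics without storing any code tables, mirroring the behavior described above.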
Cholla: 3D GPU-based hydrodynamics code for astrophysical simulation
NASA Astrophysics Data System (ADS)
Schneider, Evan E.; Robertson, Brant E.
2016-07-01
Cholla (Computational Hydrodynamics On ParaLLel Architectures) models the Euler equations on a static mesh and evolves the fluid properties of thousands of cells simultaneously using GPUs. It can update over ten million cells per GPU-second while using an exact Riemann solver and PPM reconstruction, allowing computation of astrophysical simulations with physically interesting grid resolutions (>256^3) on a single device; calculations can be extended onto multiple devices with nearly ideal scaling beyond 64 GPUs.
McKay, M.W.
1982-06-01
STEALTH is a family of computer codes that solve the equations of motion for a general continuum. These codes can be used to calculate a variety of physical processes in which the dynamic behavior of a continuum is involved. The versions of STEALTH described in this volume were designed for the calculation of problems involving low-speed fluid flow. They employ an implicit finite difference technique to solve the one- and two-dimensional equations of motion, written for an arbitrary coordinate system, for both incompressible and compressible fluids. The solution technique involves an iterative solution of the implicit, Lagrangian finite difference equations. Convection terms that result from the use of an arbitrarily-moving coordinate system are calculated separately. This volume provides the theoretical background, the finite difference equations, and the input instructions for the one- and two-dimensional codes; a discussion of several sample problems; and a listing of the input decks required to run those problems.
Single stock dynamics on high-frequency data: from a compressed coding perspective.
Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey
2014-01-01
High-frequency return, trading volume, and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and are then coupled together to reveal a single-stock dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force stimulating the aggregation of both large trading volumes and transaction numbers. The state of system-wise synchrony recurs very frequently in the stock dynamics, and this data-driven dynamic mechanism varies correspondingly as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest more fully, but also contradict some classical theories in finance. Overall, this version of stock dynamics is potentially more coherent and realistic, especially as the current financial market is increasingly powered by high-frequency trading via computer algorithms rather than by individual investors.
NASA Astrophysics Data System (ADS)
Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.
2015-03-01
Data hiding is a technique that embeds information into digital cover data. Work on this technique has concentrated on the uncompressed spatial domain, and it is considered more challenging in the compressed domain, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy is used to search for the optimal bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicate that the proposed scheme obtains both higher hiding capacity and higher hiding efficiency than four existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieves a bit rate as low as that of the original BTC algorithm.
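The underlying BTC step can be sketched with the common mean-preserving (AMBTC-style) variant; the exact BTC details used in the paper may differ:

```python
def btc_block(block):
    """AMBTC-style block truncation coding: threshold at the block mean,
    keep a low level a, a high level b, and a 1-bit-per-pixel mask."""
    n = len(block)
    mean = sum(block) / n
    lo = [x for x in block if x < mean]
    hi = [x for x in block if x >= mean]
    a = sum(lo) / len(lo) if lo else mean   # mean of below-threshold pixels
    b = sum(hi) / len(hi) if hi else mean   # mean of above-threshold pixels
    mask = [1 if x >= mean else 0 for x in block]
    return a, b, mask

def btc_decode(a, b, mask):
    """Reconstruct the block from its two levels and bit mask."""
    return [b if m else a for m in mask]
```

The scheme above hides data by perturbing the transmitted levels (the "mean values") with an optimized LSB mapping, so the stego payload rides inside the standard BTC triplet without changing the bit rate.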
Kim, Dong-Sun; Kwon, Jin-San
2014-01-01
Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 Audio Lossless Coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared-multiplier scheme for portable devices. A joint-coding decision method and a reference-channel selection scheme are modified for the low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross-correlation of residual signals and the compression ratio. The reference-channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72%, compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to a single-channel biosignal lossless data compressor. PMID:25237900
NASA Astrophysics Data System (ADS)
Müller, Bernhard; Janka, Hans-Thomas; Dimmelmeier, Harald
2010-07-01
We present a new general relativistic code for hydrodynamical supernova simulations with neutrino transport in spherical and azimuthal symmetry (one dimension and two dimensions, respectively). The code is a combination of the COCONUT hydro module, which is a Riemann-solver-based, high-resolution shock-capturing method, and the three-flavor, fully energy-dependent VERTEX scheme for the transport of massless neutrinos. VERTEX integrates the coupled neutrino energy and momentum equations with a variable Eddington factor closure computed from a model Boltzmann equation and uses the "ray-by-ray plus" approximation in two dimensions, assuming the neutrino distribution to be axially symmetric around the radial direction at every point in space, and thus the neutrino flux to be radial. Our spacetime treatment employs the Arnowitt-Deser-Misner 3+1 formalism with the conformal flatness condition for the spatial three metric. This approach is exact for the one-dimensional case and has previously been shown to yield very accurate results for spherical and rotational stellar core collapse. We introduce new formulations of the energy equation to improve total energy conservation in relativistic and Newtonian hydro simulations with grid-based Eulerian finite-volume codes. Moreover, a modified version of the VERTEX scheme is developed that simultaneously conserves energy and lepton number in the neutrino transport with better accuracy and higher numerical stability in the high-energy tail of the spectrum. To verify our code, we conduct a series of tests in spherical symmetry, including a detailed comparison with published results of the collapse, shock formation, shock breakout, and accretion phases. Long-time simulations of proto-neutron star cooling until several seconds after core bounce both demonstrate the robustness of the new COCONUT-VERTEX code and show the approximate treatment of relativistic effects by means of an effective relativistic gravitational potential as in
Hallquist, J.O.
1982-02-01
This revised report provides an updated user's manual for DYNA2D, an explicit two-dimensional axisymmetric and plane strain finite element code for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. A contact-impact algorithm permits gaps and sliding along material interfaces. By a specialization of this algorithm, such interfaces can be rigidly tied to admit variable zoning without the need of transition regions. Spatial discretization is achieved by the use of 4-node solid elements, and the equations-of motion are integrated by the central difference method. An interactive rezoner eliminates the need to terminate the calculation when the mesh becomes too distorted. Rather, the mesh can be rezoned and the calculation continued. The command structure for the rezoner is described and illustrated by an example.
NASA Astrophysics Data System (ADS)
Sigalotti, L. Di G.; Klapp, J.
1997-03-01
A new second-order Eulerian code is compared with a version of the TREESPH code formulated by Hernquist and Katz (1989) for the standard isothermal collapse test. The results indicate that both codes produce a very similar evolution, ending with the formation of a protostellar binary system. Contrary to previous first-order calculations, the binary forms by direct fragmentation, i.e., without the occurrence of an intermediate bar configuration. A similar trend was also found in second-order Eulerian calculations (Myhill and Boss 1993), suggesting that it is a result of the decreased numerical diffusion associated with the new second-order schemes. The results also have implications for the differences between finite difference methods and the particle method SPH raised by Monaghan and Lattanzio (1986) for this problem. In particular, the Eulerian calculation does not result in a runaway collapse of the fragments, and as found in the TREESPH evolution, they also show a clear tendency to move closer together. In agreement with previous SPH calculations (Monaghan and Lattanzio 1986), the results of the long-term evolution with TREESPH show that the gravitational interaction between the two fragments may become important and eventually induce the binary to coalesce. However, SPH calculations by Bate, Bonnell and Price (1995) indicate that the two fragments, after having reached a minimum separation distance, do not merge but continue to orbit each other.
NASA Astrophysics Data System (ADS)
Freytag, Bernd; Steffen, Matthias; Wedemeyer-Böhm, Sven; Ludwig, Hans-Günter; Leenaarts, Jorrit; Schaffenberger, Werner; Allard, France; Chiavassa, Andrea; Höfner, Susanne; Kamp, Inga; Steiner, Oskar
2010-11-01
CO5BOLD - nickname COBOLD - is the short form of "COnservative COde for the COmputation of COmpressible COnvection in a BOx of L Dimensions with L=2,3". It is used to model solar and stellar surface convection. For solar-type stars only a small fraction of the stellar surface layers is included in the computational domain; in the case of red supergiants the computational box contains the entire star, and recently the model range has been extended to sub-stellar objects (brown dwarfs). CO5BOLD solves the coupled non-linear equations of compressible hydrodynamics in an external gravity field together with non-local frequency-dependent radiation transport. Operator splitting is applied to solve the equations of hydrodynamics (including gravity), the radiative energy transfer (with a long-characteristics or a short-characteristics ray scheme), and possibly additional 3D (turbulent) diffusion in individual substeps. The 3D hydrodynamics step is usually further simplified with directional splitting. The 1D substeps are performed with a Roe solver, accounting for an external gravity field and an arbitrary tabulated equation of state. The radiation transport is computed with one of three modules: the MSrad module uses long characteristics; the lateral boundaries have to be periodic, while top and bottom can be closed or open ("solar module"). The LHDrad module uses long characteristics and is restricted to an equidistant grid and open boundaries at all surfaces (old "supergiant module"). The SHORTrad module uses short characteristics and is restricted to an equidistant grid and open boundaries at all surfaces (new "supergiant module"). The code was supplemented with an optional MHD version [Schaffenberger et al. (2005)] that can treat magnetic fields. Modules for the formation and advection of dust are also available. The current version contains the treatment of chemical reaction networks, mostly used for the formation of molecules [Wedemeyer
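Directional splitting of the kind CO5BOLD applies builds a multidimensional update from sequential 1D sweeps. A hedged, minimal illustration of the idea on 2D scalar advection (plain first-order upwind sweeps rather than CO5BOLD's Roe solver; the function names are illustrative):

```python
import numpy as np

def upwind_sweep(u, c, axis):
    """One 1D donor-cell update along `axis` on a periodic grid, 0 <= c <= 1."""
    return u - c * (u - np.roll(u, 1, axis=axis))

def split_step(u, cx, cy):
    """Directionally split 2D advection step: an x sweep followed by a y sweep."""
    return upwind_sweep(upwind_sweep(u, cx, axis=0), cy, axis=1)
```

Each sweep is a convex combination of neighboring cells, so the split step conserves the total and introduces no new extrema, which is the property that makes the 1D building blocks composable.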
NASA Astrophysics Data System (ADS)
Takabe, Hideaki
2004-12-01
The author reviews fusion science and its extension to astrophysics in the field of theory and computation by picking up five topics. The first is the ablative stabilization of the Rayleigh-Taylor instability at an ablation front and its dispersion relation, the so-called Takabe formula. This formula gives a principal guideline for stable target design and is also applied to studying the turbulent combustion wave in Type Ia supernova explosions. The second is the development of the integrated code ILESTA. The physics of the integrated code ILESTA (one- and two-dimensional), and analyses and design of laser-produced plasmas including implosion dynamics and stability, are reviewed. There are two applications to implosion analysis: one is an evaluation of mixing layers in one-dimensional implosions by coupling one- and two-dimensional ILESTA, and the other is an extension to include a k-ε type turbulent mixing model, for which the details of the formulation are given. The third topic is laboratory astrophysics with intense lasers. This consists of two parts: a review of its historical background, and a discussion of how laser plasmas relate to wide-ranging astrophysics and of the purposes of promoting such research. The fourth topic is the anomalous transport of relativistic electrons in fast-ignition laser fusion and its relation to the self-organization of magnetic field generation in gamma-ray bursts at cosmological distances. Finally, recent activity related to the application of the author's experience to the development of an integrated code for studying extreme phenomena in astrophysics is briefly explained.
NASA Technical Reports Server (NTRS)
Malik, M. R.
1982-01-01
A fast computer code, COSAL, for transition prediction in three-dimensional boundary layers using compressible stability analysis is described. The compressible stability eigenvalue problem is solved using a finite difference method, and the code is a black box in the sense that no initial guess of the eigenvalue is required from the user. Several optimization procedures were incorporated into COSAL to calculate integrated growth rates (N factors) for transition correlation on swept and tapered laminar flow control wings using the well-known e^N method. A user's guide to the program is provided.
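The e^N method correlates transition onset with the integrated amplification factor N(x) = ∫ σ dx, where σ = -α_i is the spatial growth rate from stability analysis; transition is typically assumed when N reaches roughly 9-10. A schematic sketch of the integration (not COSAL's implementation; the handling of damped regions here is a simplification of the envelope method):

```python
import numpy as np

def n_factor(x, sigma):
    """Integrated amplification N(x) = integral of sigma dx (trapezoid rule).
    sigma = -alpha_i is the local spatial growth rate; damped regions
    (sigma < 0) are simply clipped to zero in this sketch."""
    sigma = np.clip(sigma, 0.0, None)      # only amplified regions contribute
    seg = 0.5 * (sigma[1:] + sigma[:-1]) * np.diff(x)
    return np.concatenate(([0.0], np.cumsum(seg)))
```

A design tool would evaluate this along many frequency/wavenumber combinations and take the envelope; the transition criterion is then `N.max() >= N_crit` for a calibrated N_crit.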
RAyMOND: an N-body and hydrodynamics code for MOND
NASA Astrophysics Data System (ADS)
Candlish, G. N.; Smith, R.; Fellhauer, M.
2015-01-01
The Λ cold dark matter (ΛCDM) concordance cosmological model is supported by a wealth of observational evidence, particularly on large scales. At galactic scales, however, the model is poorly constrained and recent observations suggest a more complex behaviour in the dark sector than may be accommodated by a single CDM component. Furthermore, a modification of the gravitational force in the very weak field regime may account for at least some of the phenomenology of dark matter. A well-known example of such an approach is MOdified Newtonian Dynamics (MOND). While this idea has proven remarkably successful in the context of stellar dynamics in individual galaxies, the effects of such a modification of gravity on galaxy interactions and environmental processes deserve further study. To explore this arena, we modify the parallel adaptive mesh refinement code RAMSES to use two formulations of MOND. We implement both the fully non-linear aquadratic Lagrangian formulation and the simpler quasi-linear formulation. The relevant modifications necessary for the Poisson solver in RAMSES are discussed in detail. Using idealized tests, in both serial and parallel runs, we demonstrate the effectiveness of the code.
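For orientation, the MOND phenomenology that such a code implements can be stated algebraically: in the quasi-linear formulation, the effective acceleration follows from the Newtonian one through an interpolation function ν. A sketch using the "simple" interpolation function (a common choice in the literature, not necessarily the one used in RAyMOND), in units where G = M = a0 = 1:

```python
import numpy as np

def mond_accel(g_newton, a0=1.0):
    """QUMOND-style mapping g = nu(g_N / a0) * g_N with the 'simple'
    interpolation function nu(y) = (1 + sqrt(1 + 4/y)) / 2."""
    y = g_newton / a0
    return 0.5 * (1.0 + np.sqrt(1.0 + 4.0 / y)) * g_newton

# circular velocity around a point mass (G = M = 1): v^2 = g(r) * r
r = 100.0                      # deep-MOND regime: g_N = 1/r^2 = 1e-4 << a0
g = mond_accel(1.0 / r**2)
v = np.sqrt(g * r)
```

In the deep-MOND limit ν(y) ≈ 1/√y, so g ≈ √(g_N a0) and hence v⁴ ≈ G M a0: a flat rotation curve with the baryonic Tully-Fisher scaling, the phenomenology mentioned above.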
Using Hydrodynamic Codes in Modeling of Multi-Interface Diverging Experiments for NIF
NASA Astrophysics Data System (ADS)
Grosskopf, Michael; Drake, R. P.; Kuranz, C. C.; Plewa, T.; Hearn, N.; Meakin, C.; Arnett, D.; Miles, A. R.; Robey, H. F.; Hansen, J. F.; Remington, B. A.; Hsing, W.; Edwards, M. J.
2008-04-01
Using the Omega Laser, researchers studying supernova dynamics have observed the growth of Rayleigh-Taylor instabilities in a high-energy-density system. The NIF laser is expected to deliver the energy needed to extend these experiments to a diverging system. We report scaling simulations modeling the interface dynamics of a multilayered, diverging Rayleigh-Taylor experiment for NIF using CALE, a hybrid adaptive Lagrangian-Eulerian code developed at LLNL. Specifically, we examined, both qualitatively and quantitatively, the Rayleigh-Taylor growth and multi-interface interactions in mass-scaled systems using different materials. The simulations will assist in the target design process and help choose diagnostics that maximize the information gained from a particular shot. Simulations are critical for experimental planning, especially for experiments on large-scale facilities.
NASA Astrophysics Data System (ADS)
Milovich, J. L.; Robey, H. F.; Clark, D. S.; Baker, K. L.; Casey, D. T.; Cerjan, C.; Field, J.; MacPhee, A. G.; Pak, A.; Patel, P. K.; Peterson, J. L.; Smalyuk, V. A.; Weber, C. R.
2015-12-01
Experimental results from indirectly driven ignition implosions during the National Ignition Campaign (NIC) [M. J. Edwards et al., Phys. Plasmas 20, 070501 (2013)] achieved a record compression of the central deuterium-tritium fuel layer with measured areal densities up to 1.2 g/cm2, but with significantly lower total neutron yields (between 1.5 × 1014 and 5.5 × 1014) than predicted, approximately 10% of the 2D simulated yield. An order of magnitude improvement in the neutron yield was subsequently obtained in the "high-foot" experiments [O. A. Hurricane et al., Nature 506, 343 (2014)]. However, this yield was obtained at the expense of fuel compression due to deliberately higher fuel adiabat. In this paper, the design of an adiabat-shaped implosion is presented, in which the laser pulse is tailored to achieve similar resistance to ablation-front instability growth, but with a low fuel adiabat to achieve high compression. Comparison with measured performance shows a factor of 3-10× improvement in the neutron yield (>40% of predicted simulated yield) over similar NIC implosions, while maintaining a reasonable fuel compression of >1 g/cm2. Extension of these designs to higher laser power and energy is discussed to further explore the trade-off between increased implosion velocity and the deleterious effects of hydrodynamic instabilities.
Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes
NASA Astrophysics Data System (ADS)
Sanal, M.; Kuloor, R.; Sagayaraj, M. J.
In miniaturized radars, where power, real estate, speed, and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular and an FIR filter is used for digital pulse compression (DPC) implementation to achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, for either a single-stage mismatched filter or a two-stage approach, i.e., a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter, the greater the number of multipliers, hence the greater the logic resource usage in FPGAs, which often becomes a design challenge for system-on-chip (SoC) requirements. This multiplier requirement can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB deterioration in PSR. Using the cluster centroids as tap weights greatly reduces the FPGA logic used for FIR filters by reducing the number of distinct weight multipliers. Since k-means clustering is an iterative algorithm, the centroids differ between runs, producing different clusterings of the weights; sometimes a smaller number of multipliers and a shorter filter can even provide a better PSR.
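The two ingredients of the scheme, PSR measurement of a compressed pulse and k-means quantization of the tap weights, can be sketched as follows (a toy illustration on a Barker-13 matched filter; `psr_db` and `kmeans_1d` are illustrative names, not from the paper):

```python
import numpy as np

def psr_db(code, taps):
    """Peak-to-sidelobe ratio (dB) of the pulse-compressed output."""
    out = np.convolve(code, taps[::-1])        # cross-correlation via convolution
    i = int(np.argmax(np.abs(out)))
    peak = abs(out[i])
    side = np.max(np.abs(np.delete(out, i)))   # largest sidelobe off the peak
    return 20.0 * np.log10(peak / side)

def kmeans_1d(w, k, iters=50):
    """Quantize tap weights to k centroids (1D k-means with a fixed init)."""
    c = np.linspace(w.min(), w.max(), k)
    lbl = np.zeros(len(w), dtype=int)
    for _ in range(iters):
        lbl = np.argmin(np.abs(w[:, None] - c[None, :]), axis=1)
        for j in range(k):
            if np.any(lbl == j):
                c[j] = w[lbl == j].mean()
    return c[lbl]    # each tap replaced by its cluster centroid
```

For the Barker-13 matched filter the PSR is 20 log10(13) ≈ 22.3 dB, and clustering its ±1 taps with k = 2 reproduces them exactly, so only two distinct multiplier coefficients remain with no PSR loss; real mismatched/SSF filters have real-valued taps, which is where the few-dB trade-off appears.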
Worst configurations (instantons) for compressed sensing over reals: a channel coding approach
Chertkov, Michael; Chilappagari, Shashi K; Vasic, Bane
2010-01-01
We consider the Linear Programming (LP) solution of a Compressed Sensing (CS) problem over the reals, also known as the Basis Pursuit (BasP) algorithm. BasP admits interpretation as a channel-coding problem, and it guarantees error-free reconstruction over the reals for a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how BasP performs on a given measurement matrix and develop a technique to discover sparse vectors for which BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error patterns, so-called instantons, degrading the performance of a finite-size Low-Density Parity-Check (LDPC) code in the error-floor regime. BasP fails when its output differs from the actual error pattern. We design a CS-Instanton Search Algorithm (ISA) generating a sparse vector, called a CS-instanton, such that BasP fails on the instanton, while its action on any modification of the CS-instanton decreasing a properly defined norm is successful. We also prove that, given a sufficiently dense random input for the error vector, the CS-ISA converges to an instanton in a small finite number of steps. The performance of the CS-ISA is tested on the example of a randomly generated 512 × 120 matrix, which yields a shortest instanton (error vector) pattern of length 11.
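Basis Pursuit itself (not the instanton search) reduces to a standard LP via the split x = u - v with u, v ≥ 0, so that ||x||_1 = Σu + Σv. A minimal sketch using SciPy's LP solver (an illustrative implementation, not the authors' code):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||x||_1 subject to A x = y, via the LP split x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # equality constraint: A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v
```

BasP returns the minimum-ℓ1 vector consistent with the measurements; for suitable random A and sufficiently sparse inputs this coincides with the true sparse vector, and the CS-ISA described above probes exactly where that equivalence breaks.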
NASA Astrophysics Data System (ADS)
Cohen, Randi L.
There is both theoretical and observational evidence that giant planets collided with objects ≥ Mearth during their evolution. These impacts may play a key role in giant planet formation. This paper describes impacts of a ˜ Earth-mass object onto a suite of proto-giant-planets, as simulated using an SPH parallel tree code. We run 6 simulations, varying the impact angle and evolutionary stage of the proto-Jupiter. We find that it is possible for an impactor to free some mass from the core of the proto-planet it impacts through direct collision, as well as to make physical contact with the core yet escape partially, or even completely, intact. None of the 6 cases we consider produced a solid disk or resulted in a net decrease in the core mass of the proto-planet (since the mass decrease due to disruption was outweighed by the increase due to the addition of the impactor's mass to the core). However, we suggest parameters which may have these effects, and thus decrease core mass and formation time in protoplanetary models and/or create satellite systems. We find that giant impacts can remove significant envelope mass from forming giant planets, leaving only 2 MEarth of gas, similar to Uranus and Neptune. They can also create compositional inhomogeneities in planetary cores, which produce differences in planetary thermal emission characteristics.
NASA Astrophysics Data System (ADS)
Cohen, R.; Bodenheimer, P.; Asphaug, E.
2000-12-01
There is both theoretical and observational evidence that giant planets collided with objects with mass >= Mearth during their evolution. These impacts may help shorten planetary formation timescales by changing the opacity of the planetary atmosphere to allow quicker cooling. They may also redistribute heavy metals within giant planets, affect the core/envelope mass ratio, and help determine the ratio of emitted to absorbed energy within giant planets. Thus, the researchers propose to simulate the impact of a ~ Earth-mass object onto a proto-giant-planet with SPH. Results of the SPH collision models will be input into a steady-state planetary evolution code and the effect of impacts on formation timescales, core/envelope mass ratios, density profiles, and thermal emissions of giant planets will be quantified. The collision will be modelled using a modified version of an SPH routine which simulates the collision of two polytropes. The Saumon-Chabrier and Tillotson equations of state will replace the polytropic equation of state. The parallel tree algorithm of Olson & Packer will be used for the domain decomposition and neighbor search necessary to calculate pressure and self-gravity efficiently. This work is funded by the NASA Graduate Student Researchers Program.
NASA Astrophysics Data System (ADS)
Jun, Xie Cheng; Su, Yan; Wei, Zhang
2006-08-01
In this paper, a modified algorithm is introduced to improve the Rice coding algorithm, and image compression with the CDF (2,2) wavelet lifting scheme is investigated. Our experiments show that its lossless image compression performance is much better than Huffman, Zip, lossless JPEG, and RAR, and slightly better than (or equal to) the well-known SPIHT: the lossless compression rate is improved by about 60.4%, 45%, 26.2%, 16.7%, and 0.4% on average, respectively. The encoder is about 11.8 times faster than SPIHT's, improving time efficiency by 162%; the decoder is about 12.3 times faster, improving time efficiency by about 148%. Instead of using the largest number of wavelet transform levels, this algorithm achieves high coding efficiency when the number of levels is larger than 3. For source models with distributions similar to the Laplacian, it can improve coding efficiency and realize progressive transmission in encoding and decoding.
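The CDF (2,2) lifting scheme referenced here is the integer LeGall 5/3 transform: a predict step forms detail coefficients from the odd samples, an update step smooths the even samples, and both are exactly invertible in integer arithmetic. A hedged sketch (edge handling by index clamping is a simplification; standard implementations use symmetric extension):

```python
def cdf22_forward(x):
    """Integer CDF (2,2) (LeGall 5/3) lifting: x -> (approximation s, detail d).
    Assumes even length; edges are handled by clamping indices."""
    n = len(x)
    d = []
    for i in range(n // 2):
        right = x[2*i + 2] if 2*i + 2 < n else x[2*i]   # clamp at the edge
        d.append(x[2*i + 1] - (x[2*i] + right) // 2)    # predict step
    s = []
    for i in range(n // 2):
        left = d[i - 1] if i > 0 else d[0]              # clamp at the edge
        s.append(x[2*i] + (left + d[i] + 2) // 4)       # update step
    return s, d

def cdf22_inverse(s, d):
    """Exact inverse: undo the update, then undo the predict."""
    n2 = len(s)
    even = []
    for i in range(n2):
        left = d[i - 1] if i > 0 else d[0]
        even.append(s[i] - (left + d[i] + 2) // 4)      # undo update
    x = []
    for i in range(n2):
        right = even[i + 1] if i + 1 < n2 else even[i]
        x.append(even[i])
        x.append(d[i] + (even[i] + right) // 2)         # undo predict
    return x
```

Lossless compression then follows by entropy-coding (e.g. with the Rice coder discussed above) the detail coefficients, which are small and Laplacian-like for natural images.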
Solwnd: A 3D Compressible MHD Code for Solar Wind Studies. Version 1.0: Cartesian Coordinates
NASA Technical Reports Server (NTRS)
Deane, Anil E.
1996-01-01
Solwnd 1.0 is a three-dimensional compressible MHD code written in Fortran for studying the solar wind. Time-dependent boundary conditions are available. The computational algorithm is based on Flux-Corrected Transport, and the code builds on the existing code of Zalesak and Spicer. The flow considered is a shear flow perturbed by incoming flow. Several test cases corresponding to pressure-balanced magnetic structures with velocity shear flow and various inflows, including Alfven waves, are presented. Version 1.0 of solwnd considers a rectangular Cartesian geometry; future versions will consider a spherical geometry. Some discussion of this issue is presented.
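Flux-Corrected Transport, the algorithm underlying solwnd, combines a monotone low-order flux with a limited antidiffusive correction. A minimal 1D scalar-advection sketch in the original Boris-Book form (solwnd itself follows Zalesak's multidimensional generalization):

```python
import numpy as np

def fct_step(u, c):
    """One FCT step for du/dt + a du/dx = 0 on a periodic grid, 0 < c = a*dt/dx < 1.
    Donor-cell (upwind) transport plus limited Lax-Wendroff antidiffusion."""
    up1 = np.roll(u, -1)                       # u[i+1]
    f_low = c * u                              # donor-cell flux through face i+1/2
    f_high = 0.5*c*(u + up1) - 0.5*c*c*(up1 - u)   # Lax-Wendroff flux
    a = f_high - f_low                         # antidiffusive flux at i+1/2
    ut = u - (f_low - np.roll(f_low, 1))       # transported-diffused solution
    dlt = np.roll(ut, -1) - ut                 # Δ_{i+1/2} = ut[i+1] - ut[i]
    s = np.sign(a)
    # Boris-Book limiter: never push ut past its neighbors
    ac = s * np.maximum(0.0, np.minimum.reduce(
        [np.abs(a), s * np.roll(dlt, 1), s * np.roll(dlt, -1)]))
    return ut - (ac - np.roll(ac, 1))
```

The limiter guarantees that no new extrema are created (the clipping that plagues pure low-order schemes is removed while overshoots of pure Lax-Wendroff are suppressed), which is why FCT suits the sharp structures of solar-wind MHD.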
NASA Astrophysics Data System (ADS)
Baiotti, Luca; Shibata, Masaru; Yamamoto, Tetsuro
2010-09-01
We present the first quantitative comparison of two independent general-relativistic hydrodynamics codes, the whisky code and the sacra code. We compare the output of simulations starting from the same initial data and carried out with the configuration (numerical methods, grid setup, resolution, gauges) which for each code has been found to give consistent and sufficiently accurate results, in particular, in terms of cleanness of gravitational waveforms. We focus on the quantities that should be conserved during the evolution (rest mass, total mass energy, and total angular momentum) and on the gravitational-wave amplitude and frequency. We find that the results produced by the two codes agree at a reasonable level, with variations in the different quantities but always at better than about 10%.
Durisen, R.H.; Gingold, R.A.; Tohline, J.E.; Boss, A.P.
1986-06-01
The effectiveness of three different hydrodynamics models is evaluated for the analysis of the effects of fission instabilities in rapidly rotating, equilibrium flows. The instabilities arise in nonaxisymmetric Kelvin modes as rotational energy in the flow increases, which may occur in the formation of close binary stars and planets when the fluid proto-object contracts quasi-statically. Two finite-difference, donor-cell methods and a smoothed particle hydrodynamics (SPH) code are examined, using a polytropic index of 3/2 and ratios of total rotational kinetic energy to gravitational energy of 0.33 and 0.38. The models show that dynamic bar instabilities with the 3/2 polytropic index do not yield detached binaries and multiple systems. Ejected mass and angular momentum form two trailing spiral arms that become a disk or ring around the central remnant. The SPH code yields the same results as the finite-difference codes with less computational effort, but without acceptable fluid constraints in low-density regions. Methods for improving both types of codes are discussed. 68 references.
Shestakov, Aleksei I. Offner, Stella S.R.
2008-01-10
We present a scheme to solve the nonlinear multigroup radiation diffusion (MGD) equations. The method is incorporated into a massively parallel, multidimensional, Eulerian radiation-hydrodynamic code with Adaptive Mesh Refinement (AMR). The patch-based AMR algorithm refines in both space and time, creating a hierarchy of levels, coarsest to finest. The physics modules are time-advanced using operator splitting. On each level, separate "level-solve" packages advance the modules. Our multigroup level-solve adapts an implicit procedure which leads to a two-step iterative scheme that alternates between elliptic solves for each group with intra-cell group coupling. For robustness, we introduce pseudo-transient continuation (Ψtc). We analyze the magnitude of the Ψtc parameter to ensure positivity of the resulting linear system, diagonal dominance, and convergence of the two-step scheme. For AMR, a level defines a subdomain for refinement. For diffusive processes such as MGD, the refined level uses Dirichlet boundary data at the coarse-fine interface, and the data is derived from the coarse-level solution. After advancing on the fine level, an additional procedure, the sync-solve (SS), is required in order to enforce conservation. The MGD SS reduces to an elliptic solve on a combined grid for a system of G equations, where G is the number of groups. We adapt the "partial temperature" scheme for the SS; hence, we reuse the infrastructure developed for scalar equations. Results are presented. We consider a multigroup test problem with a known analytic solution. We demonstrate the utility of Ψtc by running with increasingly larger timesteps. Lastly, we simulate the sudden release of energy Y inside an Al sphere (r = 15 cm) suspended in air at STP. For Y = 11 kT, we find that gray radiation diffusion and MGD produce similar results. However, if Y = 1 MT, the two packages yield different results. Our large Y simulation contradicts a long-standing theory and demonstrates
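Pseudo-transient continuation augments the Jacobian with a 1/Δτ diagonal term, which keeps the linear system positive and diagonally dominant while the iterate is far from the solution, and recovers Newton's method as Δτ → ∞. A scalar-system sketch of the idea with switched evolution relaxation (SER) for the timestep (illustrative only, not the paper's MGD solver):

```python
import numpy as np

def ptc_solve(F, J, u0, dt0=0.1, tol=1e-10, max_iter=200):
    """Pseudo-transient continuation for F(u) = 0:
    step (I/dt + J(u)) du = -F(u), growing dt as the residual drops (SER).
    The 1/dt diagonal term stabilizes the early, far-from-solution iterations."""
    u, dt = np.array(u0, dtype=float), dt0
    r_old = np.linalg.norm(F(u))
    for _ in range(max_iter):
        A = np.eye(len(u)) / dt + J(u)           # shifted Jacobian
        u = u + np.linalg.solve(A, -F(u))
        r = np.linalg.norm(F(u))
        if r < tol:
            break
        dt *= max(r_old / max(r, 1e-300), 1.0)   # switched evolution relaxation
        r_old = r
    return u
```

As the residual shrinks, dt grows without bound and the iteration transitions smoothly into plain Newton, which is the behavior the paper's Ψtc parameter analysis is designed to guarantee for the coupled group equations.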
Optimized atom position and coefficient coding for matching pursuit-based image compression.
Shoa, Alireza; Shirani, Shahram
2009-12-01
In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.
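The (position, coefficient) pairs whose joint statistics the encoder exploits come from the matching pursuit decomposition itself, which can be sketched in a few lines (a generic greedy MP, not the authors' encoder):

```python
import numpy as np

def matching_pursuit(D, y, n_atoms):
    """Greedy matching pursuit: repeatedly pick the dictionary atom
    (unit-norm column of D) most correlated with the residual, and
    record its (position, coefficient) pair."""
    r = y.astype(float).copy()
    picks = []
    for _ in range(n_atoms):
        corr = D.T @ r
        i = int(np.argmax(np.abs(corr)))   # atom position
        c = corr[i]                        # atom coefficient
        r = r - c * D[:, i]                # subtract the atom's contribution
        picks.append((i, c))
    return picks, r
```

The encoder then entropy-codes the stream of positions i and quantized coefficients c; the paper's contribution is choosing the trade-off between coding the two jointly, exploiting their correlation, rather than independently.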
Watkins, J.C.
1990-09-01
This report describes an update of a computer program which operates on hydrodynamic output from the RELAP5/MOD3 program and computes piping hydrodynamic force/time histories for input into various structural analysis codes. This version of the program is compatible with RELAP5/MOD3 and the MicroVAX computing environment, whereas an earlier version of the program was compatible with RELAP5/MOD1. The report describes the force-calculation theory, showing the development of a general force equation and the solution of this equation within the RELAP5 output structure. To illustrate the calculational method and provide results for discussion, a sample problem is presented. A detailed user manual for the computer program is included as an appendix. 10 refs., 17 figs.
NASA Technical Reports Server (NTRS)
Balakumar, P.; Jeyasingham, Samarasingham
1999-01-01
A program is developed to investigate the linear stability of three-dimensional compressible boundary-layer flows over bodies of revolution. The problem is formulated as a two-dimensional (2D) eigenvalue problem incorporating the meanflow variations in the normal and azimuthal directions. Normal-mode solutions are sought in the whole plane rather than along a line normal to the wall as is done in classical one-dimensional (1D) stability theory. The stability characteristics of a supersonic boundary layer over a sharp cone with a 5° half-angle at 2° angle of attack are investigated. The 1D eigenvalue computations showed that the most amplified disturbances occur around x_2 = 90° and that the azimuthal mode number for the most amplified disturbances ranges between m = -30 and -40. The frequencies of the most amplified waves are smaller in the middle region, where crossflow dominates the instability, than near the windward and leeward planes. The 2D eigenvalue computations showed that, due to the variations in the azimuthal direction, the eigenmodes are clustered into isolated confined regions. For some eigenvalues, the eigenfunctions are clustered in two regions. Due to the nonparallel effect in the azimuthal direction, the most amplified disturbances are shifted to 120° compared to 90° for the parallel theory. It is also observed that the nonparallel amplification rates are smaller than those obtained from the parallel theory.
NASA Astrophysics Data System (ADS)
Kuroda, Takami; Takiwaki, Tomoya; Kotake, Kei
2016-02-01
We present a new multi-dimensional radiation-hydrodynamics code for massive stellar core-collapse in full general relativity (GR). Employing an M1 analytical closure scheme, we solve spectral neutrino transport of the radiation energy and momentum based on a truncated moment formalism. Regarding neutrino opacities, we take into account the baseline set used in state-of-the-art simulations, in which inelastic neutrino-electron scattering, thermal neutrino production via pair annihilation, and nucleon-nucleon bremsstrahlung are included. While the Einstein field equations and the spatial advection terms in the radiation-hydrodynamics equations are evolved explicitly, the source terms due to neutrino-matter interactions and energy shift in the radiation moment equations are integrated implicitly by an iteration method. To verify our code, we first perform a series of standard radiation tests with analytical solutions, including checks of gravitational redshift and Doppler shift. Good agreement in these tests supports the reliability of the GR multi-energy neutrino transport scheme. We then conduct several test simulations of core-collapse, bounce, and shock stall of a 15 M⊙ star in Cartesian coordinates and make a detailed comparison with published results. Our code reproduces the results of full Boltzmann neutrino transport quite well, especially before bounce. In the postbounce phase our code also performs well; however, there are several differences that most likely stem from insufficient spatial resolution in our current 3D-GR models. To clarify the resolution dependence and to extend the code comparison into the late postbounce phase, next-generation exaflops-class supercomputers will be needed.
NASA Astrophysics Data System (ADS)
Zhou, Zhenggan; Ma, Baoquan; Jiang, Jingtao; Yu, Guang; Liu, Kui; Zhang, Dongmei; Liu, Weiping
2014-10-01
Air-coupled ultrasonic testing (ACUT) has been viewed as a viable solution for defect detection in advanced composites used in the aerospace and aviation industries. However, the large acoustic-impedance mismatch at the air-solid interface makes the transmission efficiency of ultrasound low and leads to a poor signal-to-noise ratio (SNR) in the received signal, so signal-processing techniques are of great value in non-destructive testing. This paper presents a hybrid method combining wavelet filtering and phase-coded pulse compression to improve the SNR and output power of the received signal. The wavelet transform is used to filter insignificant components from the noisy ultrasonic signal, and pulse compression is used to increase the power of the correlated signal based on a cross-correlation algorithm. For reasonable parameter selection, different wavelet families (Daubechies, Symlet and Coiflet) and decomposition levels in the discrete wavelet transform are analysed, and different Barker codes (5-13 bits) are compared to obtain a higher main-to-side-lobe ratio. The performance of the hybrid method was verified on a honeycomb composite sample. Experimental results demonstrated that the proposed method is very efficient in improving the SNR and signal strength, and it appears to be a promising tool for evaluating the integrity of composite materials with high ultrasound attenuation using ACUT.
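The pulse-compression half of the hybrid method can be sketched with a Barker-13 code (an assumed minimal setup; a real ACUT chain would excite the transducer with the coded burst): correlating the received trace with the transmitted code compresses the energy into a peak whose main-to-side-lobe ratio for Barker-13 is 13:1.

```python
import numpy as np

# The 13-bit Barker code: its aperiodic autocorrelation has peak 13 and
# sidelobes of magnitude at most 1.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], float)

def pulse_compress(received, code):
    """Cross-correlate the received trace with the transmitted code."""
    return np.correlate(received, code, mode="full")

# Ideal noise-free echo: the code embedded in a quiet trace.
trace = np.concatenate([np.zeros(20), barker13, np.zeros(20)])
out = pulse_compress(trace, barker13)
peak = np.max(np.abs(out))
sidelobe = np.max(np.abs(out[np.abs(out) < peak]))
```

With noise added, the correlation gain is what lifts the compressed peak above the noise floor; the paper's comparison of 5- to 13-bit codes is a trade between this gain and pulse duration.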
Hydrodynamic effects in the atmosphere of variable stars
NASA Technical Reports Server (NTRS)
Davis, C. G., Jr.; Bunker, S. S.
1975-01-01
Numerical models of variable stars are established using a hydrodynamics code coupled to nonlinear radiative transfer, with the variable Eddington method used for the radiative transfer. Comparisons are made for models of W Virginis, beta Doradus, and eta Aquilae. From these models it appears that shocks are formed in the atmospheres of classical Cepheids as well as W Virginis stars. In classical Cepheids with periods from 7 to 10 days, the bumps occurring in the light and velocity curves appear as the result of a compression wave that reflects from the star's center. At the head of the outward-going compression wave, shocks form in the atmosphere. Comparisons between the hydrodynamic motions in W Virginis and classical Cepheids are made. The strong shocks in W Virginis do not penetrate into the interior as do the compression waves formed in classical Cepheids. The shocks formed in W Virginis stars cause emission lines, while in classical Cepheids the shocks are weaker.
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-03-10
A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use lossless compression, also known as entropy coding, to reduce the intermediate representation as indices to the final size. The efficiency of the entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.
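A much-simplified illustration of the idea (a parity variant invented here for clarity, not the exact procedure claimed in the patent): since each quantization index is uncertain by one unit, a payload bit can be hidden by nudging the index to the nearest value of matching parity, and recovered by reading parities back.

```python
def embed(indices, bits):
    """Hide one bit per index by nudging it (by at most one unit)
    to a value whose parity equals the bit."""
    out = [i if i % 2 == b else i + 1 for i, b in zip(indices, bits)]
    return out + indices[len(bits):]

def extract(indices, n_bits):
    """Read the payload back from index parities."""
    return [i % 2 for i in indices[:n_bits]]

idx = [10, 7, 3, 12, 9, 4]       # hypothetical quantization indices
payload = [1, 0, 1, 0]
stego = embed(idx, payload)
```

Because the perturbation never exceeds the one-unit uncertainty the patent identifies, the host data remain a valid output of the lossy coder.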
NASA Astrophysics Data System (ADS)
Uribe, Andres Cordoba
The mechanical properties of soft biological materials are essential to their physiological function and cannot easily be duplicated by synthetic materials. The study of the mechanical properties of biological materials has led to the development of new rheological characterization techniques. In the technique called passive microbead rheology, the positional autocorrelation function of a micron-sized bead embedded in a viscoelastic fluid is used to infer the dynamic modulus of the fluid. Single-particle microrheology is limited to fluids where the microstructure is much smaller than the size of the probe bead. To overcome this limitation, two-bead microrheology uses the cross-correlated thermal motion of pairs of tracer particles to determine the dynamic modulus. Here we present a time-domain data analysis methodology and generalized Brownian dynamics simulations to examine the effects of inertia, hydrodynamic interaction, compressibility and non-conservative forces in passive microrheology. A type of biological material that has proven especially challenging to characterize is active gels. They are formed by semiflexible polymer filaments driven by motor proteins that convert chemical energy from the hydrolysis of adenosine triphosphate (ATP) to mechanical work and motion. Active gels perform essential functions in living tissue. Here we introduce a single-chain mean-field model to describe the mechanical properties of active gels. We model the semiflexible filaments as bead-spring chains, and the molecular motors are accounted for by using a mean-field approach. The level of description of the model includes the end-to-end length and attachment state of the filaments, and the motor-generated forces, as stochastic state variables which evolve according to a proposed differential Chapman-Kolmogorov equation. The model allows accounting for physics that are not available in models that have been postulated on coarser levels of description. Moreover it allows
Picture data compression coder using subband/transform coding with a Lempel-Ziv-based coder
NASA Technical Reports Server (NTRS)
Glover, Daniel R. (Inventor)
1995-01-01
Digital data coders/decoders are used extensively in video transmission. A digitally encoded video signal is separated into subbands. Separating the video into subbands allows transmission at low data rates. Once the data is separated into these subbands it can be coded and then decoded by statistical coders such as the Lempel-Ziv based coder.
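A toy version of this pipeline (an assumed structure, with zlib standing in for the Lempel-Ziv-based coder): one Haar analysis step splits a signal into a smooth low band and a near-zero high band, each of which can then be entropy-coded separately.

```python
import zlib
import numpy as np

def haar_split(x):
    """One Haar analysis step: average (low band) and difference (high band)."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / 2.0, (even - odd) / 2.0

x = np.sin(np.linspace(0.0, 4.0 * np.pi, 1024))   # illustrative smooth signal
low, high = haar_split(x)

# Each band goes through a dictionary coder (zlib as a stand-in for the
# patent's Lempel-Ziv-based coder).
low_bits = zlib.compress(np.round(low * 127).astype(np.int8).tobytes())
high_bits = zlib.compress(np.round(high * 127).astype(np.int8).tobytes())

# Perfect reconstruction from the two (unquantized) bands:
recon = np.empty_like(x)
recon[0::2] = low + high
recon[1::2] = low - high
```

The high band of a smooth signal is mostly near zero, which is exactly the kind of repetitive stream a dictionary coder compresses well.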
Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rost, Martin Christopher
1988-01-01
Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar-quantizer mixing method that can be used to code transform coefficients at any rate.
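The classical high-rate bit-allocation rule that such algorithms refine can be sketched as follows (a standard textbook result, shown for context rather than as the dissertation's algorithm): coefficient i with variance v_i receives b_i = R + ½ log₂(v_i / gm), where gm is the geometric mean of the variances and R the average rate.

```python
import math

def allocate_bits(variances, avg_rate):
    """High-rate optimal allocation: b_i = R + 0.5*log2(v_i / geometric mean)."""
    gm = math.exp(sum(math.log(v) for v in variances) / len(variances))
    return [avg_rate + 0.5 * math.log2(v / gm) for v in variances]

# Illustrative transform-coefficient variances and a 2 bits/coefficient budget.
bits = allocate_bits([16.0, 4.0, 1.0, 0.25], avg_rate=2.0)
```

High-variance coefficients receive more bits, low-variance ones fewer, and the allocations average exactly to the rate budget; practical refinements round these to non-negative integers.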
Uwaba, Tomoyuki; Tanaka, Kosuke
2001-10-15
To analyze wire-wrapped fast breeder reactor (FBR) fuel pin bundle deformation under bundle-duct interaction (BDI) conditions, the Japan Nuclear Cycle Development Institute has developed the BAMBOO computer code. A three-dimensional beam element model is used in this code to calculate fuel pin bowing and cladding oval distortion, which are the dominant deformation mechanisms in a fuel pin bundle. In this work, the dependence of the cladding oval distortion on the wire pitch was evaluated experimentally and introduced into the code analysis. The BAMBOO code was validated in this study by comparing its results with those from an out-of-pile bundle compression testing apparatus. It is concluded that BAMBOO reasonably predicts the pin-to-duct clearances in the compression tests by treating the cladding oval distortion as the suppression mechanism for BDI.
NASA Technical Reports Server (NTRS)
Finley, Dennis B.; Karman, Steve L., Jr.
1996-01-01
The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaption of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles-of-attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.
Yin, Jun; Yang, Yuwang; Wang, Lei
2016-01-01
Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering—CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and the value of the sparsity is known before starting each data gathering epoch, thus they ignore the variation of the data observed by the WSNs which are deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme where the sink node adaptively queries those interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed a NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes—MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both datasets from ocean temperature and practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574
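The compressed-sensing side of CDG can be sketched as follows (an assumed minimal setup using orthogonal matching pursuit, not the paper's exact solver): the sink receives m random-projection measurements y = Φx of a k-sparse field x; the paper's adaptive feedback amounts to growing m until the recovery stabilizes instead of fixing k in advance.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily grow the support, then
    least-squares fit the measurements on the selected columns."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                        # network size, measurements, sparsity
Phi = rng.normal(size=(m, n)) / np.sqrt(m) # each row = one gathered measurement
x = np.zeros(n)
x[[3, 30, 50]] = [5.0, -4.0, 3.0]          # k-sparse sensed field
x_hat = omp(Phi, Phi @ x, k)
```

In the WSN setting each row of Φ corresponds to one in-network aggregation pass, so reducing the number of rows (measurements) directly reduces transmissions.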
Passy, Jean-Claude; Mac Low, Mordecai-Mark; De Marco, Orsola; Fryer, Chris L.; Diehl, Steven; Rockefeller, Gabriel; Herwig, Falk; Oishi, Jeffrey S.; Bryan, Greg L.
2012-01-01
We use three-dimensional hydrodynamical simulations to study the rapid infall phase of the common envelope (CE) interaction of a red giant branch star of mass equal to 0.88 M{sub Sun} and a companion star of mass ranging from 0.9 down to 0.1 M{sub Sun }. We first compare the results obtained using two different numerical techniques with different resolutions, and find very good agreement overall. We then compare the outcomes of those simulations with observed systems thought to have gone through a CE. The simulations fail to reproduce those systems in the sense that most of the envelope of the donor remains bound at the end of the simulations and the final orbital separations between the donor's remnant and the companion, ranging from 26.8 down to 5.9 R{sub Sun }, are larger than the ones observed. We suggest that this discrepancy vouches for recombination playing an essential role in the ejection of the envelope and/or significant shrinkage of the orbit happening in the subsequent phase.
NASA Astrophysics Data System (ADS)
Delettrez, J. A.; Myatt, J. F.; Yaakobi, B.
2015-11-01
The modeling of fast-electron transport in the 1-D hydrodynamic code LILAC was modified following the addition of cross-beam energy transfer (CBET) to implosion simulations. With the old fast-electron source model, including CBET shifts the peak of the hard x-ray (HXR) production from the end of the laser pulse, as observed in experiments, to earlier in the pulse. This is caused by a drop in the laser intensity at the quarter-critical surface from CBET interaction at lower densities. Data from simulations with the laser plasma simulation environment (LPSE) code will be used to modify the source algorithm in LILAC. In addition, the transport model in LILAC has been modified to include deviations from the straight-line algorithm and non-specular reflection at the sheath, to take into account scattering from collisions and magnetic fields in the corona. Simulation results will be compared with HXR emissions from both room-temperature plastic and cryogenic target experiments. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
Assessing the Effects of Data Compression in Simulations Using Physically Motivated Metrics
Laney, Daniel; Langer, Steven; Weber, Christopher; Lindstrom, Peter; Wegener, Al
2014-01-01
This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 3-5X can be applied without causing significant changes to important physical quantities. Rather than applying signal processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time-step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
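The evaluation strategy can be illustrated with a minimal sketch (uniform quantization standing in for a real compressor; the field and metric are invented for illustration): alongside a signal-processing error norm, one checks a physically meaningful quantity, here conservation of the field's total, before and after the lossy round trip.

```python
import numpy as np

def lossy_roundtrip(field, step):
    """Uniform scalar quantization as a stand-in for a real lossy compressor."""
    return np.round(field / step) * step

rng = np.random.default_rng(2)
field = rng.random(10_000) + 1.0          # hypothetical positive field
decoded = lossy_roundtrip(field, step=0.01)

signal_err = np.max(np.abs(field - decoded))                 # signal metric
energy_err = abs(field.sum() - decoded.sum()) / field.sum()  # physics metric
```

Pointwise errors are bounded by half the quantization step, but the physics metric is far smaller because rounding errors largely cancel in the sum, which is why signal metrics alone can mischaracterize the impact on conserved quantities.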
Mueller, Bernhard; Janka, Hans-Thomas; Marek, Andreas
2012-09-01
We present the first two-dimensional general relativistic (GR) simulations of stellar core collapse and explosion with the COCONUT hydrodynamics code in combination with the VERTEX solver for energy-dependent, three-flavor neutrino transport, using the extended conformal flatness condition for approximating the space-time metric and a ray-by-ray-plus ansatz to tackle the multi-dimensionality of the transport. For both of the investigated 11.2 and 15 M⊙ progenitors we obtain successful, though seemingly marginal, neutrino-driven supernova explosions. This outcome and the time evolution of the models basically agree with results previously obtained with the PROMETHEUS hydro solver including an approximative treatment of relativistic effects by a modified Newtonian potential. However, GR models exhibit subtle differences in the neutrinospheric conditions compared with Newtonian and pseudo-Newtonian simulations. These differences lead to significantly higher luminosities and mean energies of the radiated electron neutrinos and antineutrinos and therefore to larger energy-deposition rates and heating efficiencies in the gain layer with favorable consequences for strong nonradial mass motions and ultimately for an explosion. Moreover, energy transfer to the stellar medium around the neutrinospheres through nucleon recoil in scattering reactions of heavy-lepton neutrinos also enhances the mentioned effects. Together with previous pseudo-Newtonian models, the presented relativistic calculations suggest that the treatment of gravity and energy-exchanging neutrino interactions can make differences of even 50%-100% in some quantities and is likely to contribute to a finally successful explosion mechanism on no minor level than hydrodynamical differences between different dimensions.
NASA Astrophysics Data System (ADS)
Gélvez, Tatiana C.; Rueda, Hoover F.; Arguello, Henry
2016-05-01
A hyperspectral image (HSI) can be described as a set of images with spatial information across different spectral bands. Compressive spectral imaging (CSI) techniques permit capturing a 3-dimensional hyperspectral scene using 2-dimensional coded and multiplexed projections. Recovering the original scene from very few projections can be valuable in applications such as remote sensing, video surveillance and biomedical imaging. Typically, HSI exhibit high correlations in both the spatial and spectral dimensions, and exploiting these correlations allows the original scene to be accurately recovered from compressed measurements. Traditional approaches exploit the sparsity of the scene when represented in a proper basis, solving an optimization problem that minimizes a joint l2 - l1 norm to obtain the original scene. However, HSI have an important feature that has not been widely exploited: they are commonly low rank, so only a small number of spectral signatures are present in the image. Therefore, this paper proposes an approach to recover a simultaneously sparse and low-rank hyperspectral image by exploiting both features at the same time. The proposed approach solves an optimization problem that seeks to minimize the l2-norm, penalized by the l1-norm to force the solution to be sparse, and penalized by the nuclear norm to force the solution to be low rank. Theoretical analysis along with a set of simulations over different data sets shows that simultaneously exploiting low-rank and sparse structures enhances the performance of the recovery algorithm and the quality of the recovered image, with an average improvement of around 3 dB in terms of peak signal-to-noise ratio (PSNR).
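The two penalties in the proposed objective correspond to two standard proximal operators, sketched below (a building-block illustration, not the paper's full solver): soft thresholding for the l1 (sparsity) term and singular-value thresholding for the nuclear-norm (low-rank) term. A complete recovery algorithm would alternate these with a data-fidelity (l2) step.

```python
import numpy as np

def soft_threshold(X, t):
    """Proximal operator of the l1 norm: shrinks entries toward zero,
    promoting sparsity."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svt(X, t):
    """Singular-value thresholding, the proximal operator of the nuclear
    norm: shrinks singular values, promoting low rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

rng = np.random.default_rng(4)
M = rng.normal(size=(8, 6))        # stand-in for a matricized HSI block
low_rank = svt(M, t=1.0)
sparse = soft_threshold(M, t=1.0)
```

Applying both operators within one iteration is what lets the solver enforce sparsity and low rank simultaneously, the combination the paper credits for its ~3 dB PSNR gain.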
NASA Technical Reports Server (NTRS)
Jaggi, S.
1993-01-01
A study is conducted to investigate the effects and advantages of data compression techniques on multispectral imagery data acquired by NASA's airborne scanners at the Stennis Space Center. The first technique used was vector quantization. The vector is defined in the multispectral imagery context as an array of pixels from the same location in each channel. The error incurred in substituting the reconstructed images for the original set is compared for different compression ratios. Also, the eigenvalues of the covariance matrix obtained from the reconstructed data set are compared with the eigenvalues of the original set. The effects of varying the size of the vector codebook on the quality of the compression and on subsequent classification are also presented. The output data from the vector quantization algorithm were further compressed by a lossless technique called Difference-mapped Shift-extended Huffman coding. The overall compression for 7 channels of data acquired by the Calibrated Airborne Multispectral Scanner (CAMS) was 195:1 (0.41 bpp) with an RMS error of 15.8 pixels, and 18:1 (0.447 bpp) with an RMS error of 3.6 pixels. The algorithms were implemented in software and interfaced, with the help of dedicated image-processing boards, to an 80386 PC-compatible computer. Modules were developed for the tasks of image compression and image analysis. Also, supporting software to perform image processing for visual display and interpretation of the compressed/classified images was developed.
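The vector-quantization step can be sketched as follows (synthetic two-cluster data and a deterministic initialization, invented for illustration): each pixel's multichannel spectrum is one vector, a codebook is trained by plain k-means, and every pixel is replaced by its nearest codeword.

```python
import numpy as np

def train_codebook(vectors, init_idx, iters=10):
    """Plain k-means: assign each vector to its nearest codeword, then
    move each codeword to the mean of its members."""
    book = vectors[init_idx].astype(float).copy()
    for _ in range(iters):
        dists = np.linalg.norm(vectors[:, None] - book[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(book)):
            members = vectors[labels == j]
            if len(members):
                book[j] = members.mean(axis=0)
    return book

rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(0.0, 0.1, (200, 7)),   # two synthetic
                         rng.normal(5.0, 0.1, (200, 7))])  # spectral classes
book = train_codebook(pixels, init_idx=[0, 399])
labels = np.linalg.norm(pixels[:, None] - book[None], axis=2).argmin(axis=1)
recon = book[labels]                       # each pixel -> nearest codeword
rms = np.sqrt(np.mean((pixels - recon) ** 2))
```

Only the small codebook plus one label per pixel need be stored, which is where the compression comes from; the study's trade-off between codebook size and RMS error falls directly out of this picture.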
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
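A useful baseline for DNA compression (a plain fixed-code packer, not DNABIT Compress itself): the four bases fit in 2 bits each, so simple packing already reaches 2 bits/base; DNABIT's repeat codes are what push below that figure toward 1.58 bits/base.

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(seq):
    """Pack a DNA string at 2 bits/base (4 bases per byte)."""
    out, acc, n = bytearray(), 0, 0
    for b in seq:
        acc, n = (acc << 2) | CODE[b], n + 1
        if n == 4:
            out.append(acc)
            acc, n = 0, 0
    if n:
        out.append(acc << (2 * (4 - n)))   # pad the final partial byte
    return bytes(out), len(seq)

def unpack(data, length):
    bases = [BASE[(byte >> s) & 0b11] for byte in data for s in (6, 4, 2, 0)]
    return "".join(bases[:length])

packed, n = pack("ACGTACGTTGCA")           # 12 bases -> 3 bytes
```

Any repeat-aware method must beat this trivial 4:1 packing to be worthwhile, which is the benchmark implicit in the bits/base figures quoted above.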
Blohm, W
1997-01-01
Lightness algorithms, which have been proposed as a model for human vision, aim at recovering surface reflectance in close approximation. They attempt to separate reflectance data from illumination data by thresholding a spatial derivative of the image intensity. This, however, works reliably only in a world of plane Mondrians. An extension of the classical lightness approach of Land and McCann (1971) to curved surfaces is presented. Assuming smooth surfaces with Lambertian reflection properties and leaving aside occlusions and cast shadows, the separation of those components of the intensity gradient due to reflectance from those due to irradiance is posed as a constrained minimization problem. To do so, two classification operators were introduced which identify potential reflectance and irradiance data using a scale-space filtering approach. Two exemplary applications of the proposed extended lightness algorithm in the field of visual telecommunications are presented: (i) the simulation of more uniformly illuminated videophone portrait scenes to give dynamic-range-compressed images with a most realistic appearance, and (ii) the synthesis of videophone portrait images from model-based coded data with a correct illumination effect. In both applications, the extended lightness algorithm is employed to estimate the reflectance functions at facial surfaces. Results obtained by applying the extended lightness algorithm are compared with results obtained by conventional methods known from the literature. PMID:18283002
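The classical thresholded-gradient step that the paper extends can be sketched in one dimension (illustrative scene values; the actual algorithm handles curved surfaces and 2-D integration paths): in the log domain, small derivatives are attributed to smooth illumination and zeroed, and re-integration recovers reflectance up to a constant.

```python
import numpy as np

n = 100
reflectance = np.where(np.arange(n) < 50, 0.2, 0.8)   # one reflectance edge
illumination = np.linspace(0.5, 1.5, n)               # smooth illumination ramp
log_img = np.log(reflectance * illumination)          # image = R * E, additive in log

d = np.diff(log_img)
d[np.abs(d) < 0.05] = 0.0        # threshold: discard the smooth illumination part
log_refl = np.concatenate([[0.0], np.cumsum(d)])      # re-integrate

# Recovered log-reflectance should match the truth up to a constant.
truth = np.log(reflectance) - np.log(reflectance[0])
err = np.max(np.abs((log_refl - log_refl[0]) - truth))
```

On a curved surface the shading gradient is no longer uniformly small, which is why the simple threshold fails there and the paper's scale-space classification operators are needed.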
Müller, Bernhard; Janka, Hans-Thomas E-mail: bjmuellr@mpa-garching.mpg.de
2014-06-10
Considering six general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 M_☉, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the VERTEX-COCONUT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies 〈E〉 of ν̄_e and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M ≳ 10 M_☉, as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of 〈E_{ν̄_e}〉 with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission on the 10%-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ∼10 kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such 'SASI neutrino chirps' reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50 Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.
NASA Astrophysics Data System (ADS)
Badenes, Carlos
2006-02-01
Thanks to Chandra and XMM-Newton, spatially resolved spectroscopy of SNRs in the X-ray band has become a reality. Several impressive data sets for ejecta-dominated SNRs can now be found in the archives, the Cas A VLP just being one (albeit probably the most spectacular) example. However, it is often hard to establish quantitative, unambiguous connections between the X-ray observations of SNRs and the dramatic events involved in a core collapse or thermonuclear SN explosion. The reason for this is that the very high quality of the data sets generated by Chandra and XMM for the likes of Cas A, SNR 292.0+1.8, Tycho, and SN 1006 has surpassed our ability to analyze them. The core of the problem is in the transient nature of the plasmas in SNRs, which results in an intimate relationship between the structure of the ejecta and AM, the SNR dynamics arising from their interaction, and the ensuing X-ray emission. Thus, the ONLY way to understand the X-ray observations of ejecta-dominated SNRs at all levels, from the spatially integrated spectra to the subarcsecond scales that can be resolved by Chandra, is to couple hydrodynamic simulations to nonequilibrium ionization (NEI) calculations and X-ray spectral codes. I will review the basic ingredients that enter this kind of calculation, and the prospects for using them to understand the X-ray emission from the shocked ejecta in young SNRs. This understanding (when it is possible) can turn SNRs into veritable time machines, revealing the secrets of the titanic explosions that generated them hundreds of years ago.
Sandford, II, Maxwell T.; Handel, Theodore G.; Bradley, Jonathan N.
1998-01-01
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
Sandford, M.T. II; Handel, T.G.; Bradley, J.N.
1998-07-07
A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table with integer index values existing in a representation of host data created by a lossy compression technique are disclosed. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression to reduce to the final size the intermediate representation as indices. The efficiency of the loss-less compression, known also as entropy coding compression, is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%. 21 figs.
Chan, A D; Lovely, D F; Hudgins, B
1998-03-01
Muscle activity produces an electrical signal termed the myo-electric signal (MES). The MES is a useful clinical tool, used in diagnostics and rehabilitation. This signal is typically stored in 2 bytes as 12-bit data, sampled at 3 kHz, resulting in a 6 kbyte s⁻¹ storage requirement. Processing MES data requires large bit manipulations and heavy memory storage requirements. Adaptive differential pulse code modulation (ADPCM) is a popular and successful compression technique for speech. Its application to MES would reduce 12-bit data to a 4-bit representation, providing a 3:1 compression. As, in most practical applications, memory is organised in bytes, the realisable compression is 4:1, as pairs of data can be stored in a single byte. The performance of the ADPCM compression technique, using a real-time system at 1 kHz, 2 kHz and 4 kHz sampling rates, is evaluated. The data used include MES from both isometric and dynamic contractions. The percent residual difference (PRD) between an unprocessed and processed MES is used as a performance measure. Errors in computed parameters, such as median frequency and variance, which are used in clinical diagnostics, and waveform features employed in prosthetic control are also used to evaluate the system. The results of the study demonstrate that the ADPCM compression technique is an excellent solution for relieving the data storage requirements of MES both in isometric and dynamic situations. PMID:9684462
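The 12-bit-to-4-bit reduction described in the abstract can be sketched with a toy adaptive quantizer. This is a minimal illustration of the ADPCM idea only; the function names, step-size rule, and constants are assumptions for the sketch, not the codec evaluated in the paper.

```python
def adpcm_encode(samples, step=16):
    """Quantize prediction residuals to 4-bit signed codes (-8..7)."""
    codes, pred = [], 0
    for s in samples:
        q = max(-8, min(7, round((s - pred) / step)))  # 4-bit code
        pred += q * step                  # same update the decoder applies
        codes.append(q)
        # Illustrative adaptation: grow the step for large codes, shrink it otherwise.
        step = max(1, round(step * (1.5 if abs(q) > 4 else 0.75)))
    return codes

def adpcm_decode(codes, step=16):
    """Mirror the encoder's prediction and step adaptation exactly."""
    out, pred = [], 0
    for q in codes:
        pred += q * step
        out.append(pred)
        step = max(1, round(step * (1.5 if abs(q) > 4 else 0.75)))
    return out
```

Because decoder and encoder share the same prediction update, the reconstruction error per sample stays on the order of the current step size for slowly varying signals.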
Progress in smooth particle hydrodynamics
Wingate, C.A.; Dilts, G.A.; Mandell, D.A.; Crotzer, L.A.; Knapp, C.E.
1998-07-01
Smooth Particle Hydrodynamics (SPH) is a meshless, Lagrangian numerical method for hydrodynamics calculations where calculational elements are fuzzy particles which move according to the hydrodynamic equations of motion. Each particle carries local values of density, temperature, pressure and other hydrodynamic parameters. A major advantage of SPH is that it is meshless, thus large deformation calculations can be easily done with no connectivity complications. Interface positions are known and there are no problems with advecting quantities through a mesh that typical Eulerian codes have. These underlying SPH features make fracture physics easy and natural and in fact, much of the applications work revolves around simulating fracture. Debris particles from impacts can be easily transported across large voids with SPH. While SPH has considerable promise, there are some problems inherent in the technique that have so far limited its usefulness. The most serious problem is the well known instability in tension leading to particle clumping and numerical fracture. Another problem is that the SPH interpolation is only correct when particles are uniformly spaced a half particle apart, leading to incorrect strain rates, accelerations and other quantities for general particle distributions. SPH calculations are also sensitive to particle locations. The standard artificial viscosity treatment in SPH leads to spurious viscosity in shear flows. This paper will demonstrate solutions for these problems that the authors and others have been developing. The most promising is to replace the SPH interpolant with the moving least squares (MLS) interpolant invented by Lancaster and Salkauskas in 1981. SPH and MLS are closely related, with MLS being essentially SPH with corrected particle volumes. When formulated correctly, MLS is conservative, stable in both compression and tension, does not have the SPH boundary problems and is not sensitive to particle placement. The other approach to
Radiation Hydrodynamics Test Problems with Linear Velocity Profiles
Hendon, Raymond C.; Ramsey, Scott D.
2012-08-22
As an extension of the works of Coggeshall and Ramsey, a class of analytic solutions to the radiation hydrodynamics equations is derived for code verification purposes. These solutions are valid under assumptions including diffusive radiation transport, a polytropic gas equation of state, constant conductivity, separable flow velocity proportional to the curvilinear radial coordinate, and divergence-free heat flux. In accordance with these assumptions, the derived solution class is mathematically invariant with respect to the presence of radiative heat conduction, and thus represents a solution to the compressible flow (Euler) equations with or without conduction terms included. With this solution class, a quantitative code verification study (using spatial convergence rates) is performed for the cell-centered, finite volume, Eulerian compressible flow code xRAGE developed at Los Alamos National Laboratory. Simulation results show near second order spatial convergence in all physical variables when using the hydrodynamics solver only, consistent with that solver's underlying order of accuracy. However, contrary to the mathematical properties of the solution class, when heat conduction algorithms are enabled the calculation does not converge to the analytic solution.
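The convergence study described above rests on comparing errors at successive grid resolutions. A minimal sketch of the standard observed-order formula, p = log(E_coarse/E_fine)/log(r), assuming a uniform refinement factor r between the two grids (function name and parameters are illustrative, not from xRAGE):

```python
import math

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed spatial convergence order from errors on two grids
    whose spacings differ by the given refinement factor."""
    return math.log(err_coarse / err_fine) / math.log(refinement)
```

A solver whose error drops by a factor of four when the grid is refined by two is thus observed to be second order, which is the behavior the xRAGE hydrodynamics-only runs exhibit.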
Uwaba, Tomoyuki; Ito, Masahiro; Ukai, Shigeharu
2004-02-15
To analyze the wire-wrapped fast breeder reactor fuel pin bundle deformation under bundle/duct interaction conditions, the Japan Nuclear Cycle Development Institute has developed the BAMBOO computer code. This code uses the three-dimensional beam element to calculate fuel pin bowing and cladding oval distortion as the primary deformation mechanisms in a fuel pin bundle. Pin dispersion, the disarrangement of pins in a bundle that can occur during irradiation, was modeled in this code to evaluate its effect on bundle deformation. By applying the contact analysis method commonly used in the finite element method, this model considers the contact conditions at various axial positions as well as the nodal points and can analyze the irregular arrangement of fuel pins with the deviation of the wire configuration. The dispersion model was introduced in the BAMBOO code and verified by using the results of the out-of-pile compression test of the bundle, where the dispersion was caused by the deviation of the wire position. The effect of the dispersion on the bundle deformation was then evaluated based on the analysis results of the code.
Skew resisting hydrodynamic seal
Conroy, William T.; Dietle, Lannie L.; Gobeli, Jeffrey D.; Kalsi, Manmohan S.
2001-01-01
A novel hydrodynamically lubricated compression type rotary seal that is suitable for lubricant retention and environmental exclusion. Particularly, the seal geometry ensures constraint of a hydrodynamic seal in a manner preventing skew-induced wear and provides adequate room within the seal gland to accommodate thermal expansion. The seal accommodates large as-manufactured variations in the coefficient of thermal expansion of the sealing material, provides a relatively stiff integral spring effect to minimize pressure-induced shuttling of the seal within the gland, and also maintains interfacial contact pressure within the dynamic sealing interface in an optimum range for efficient hydrodynamic lubrication and environment exclusion. The seal geometry also provides for complete support about the circumference of the seal to receive environmental pressure, as compared to the interrupted character of seal support set forth in U.S. Pat. Nos. 5,873,576 and 6,036,192, and provides a hydrodynamic seal which is suitable for use with non-Newtonian lubricants.
Self-consistent solution of cosmological radiation-hydrodynamics and chemical ionization
Reynolds, Daniel R.; Hayes, John C.; Paschos, Pascal; Norman, Michael L.
2009-10-01
We consider a PDE system comprising compressible hydrodynamics, flux-limited diffusion radiation transport and chemical ionization kinetics in a cosmologically-expanding universe. Under an operator-split framework, the cosmological hydrodynamics equations are solved through the piecewise parabolic method, as implemented in the Enzo community hydrodynamics code. The remainder of the model, including radiation transport, chemical ionization kinetics, and gas energy feedback, form a stiff coupled PDE system, which we solve using a fully-implicit inexact Newton approach, and which forms the crux of this paper. The inner linear Newton systems are solved using a Schur complement formulation, and employ a multigrid-preconditioned conjugate gradient solver for the inner Schur systems. We describe this approach and provide results on a suite of test problems, demonstrating its accuracy, robustness, and scalability to very large problems.
ERIC Educational Resources Information Center
Lafrance, Pierre
1978-01-01
Explores in a non-mathematical treatment some of the hydrodynamical phenomena and forces that affect the operation of ships, especially at high speeds. Discusses the major components of ship resistance such as the different types of drags and ways to reduce them and how to apply those principles for the hovercraft. (GA)
Castor, J I
2003-10-16
The discipline of radiation hydrodynamics is the branch of hydrodynamics in which the moving fluid absorbs and emits electromagnetic radiation, and in so doing modifies its dynamical behavior. That is, the net gain or loss of energy by parcels of the fluid material through absorption or emission of radiation are sufficient to change the pressure of the material, and therefore change its motion; alternatively, the net momentum exchange between radiation and matter may alter the motion of the matter directly. Ignoring the radiation contributions to energy and momentum will give a wrong prediction of the hydrodynamic motion when the correct description is radiation hydrodynamics. Of course, there are circumstances when a large quantity of radiation is present, yet can be ignored without causing the model to be in error. This happens when radiation from an exterior source streams through the problem, but the latter is so transparent that the energy and momentum coupling is negligible. Everything we say about radiation hydrodynamics applies equally well to neutrinos and photons (apart from the Einstein relations, specific to bosons), but in almost every area of astrophysics neutrino hydrodynamics is ignored, simply because the systems are exceedingly transparent to neutrinos, even though the energy flux in neutrinos may be substantial. Another place where we can do ''radiation hydrodynamics'' without using any sophisticated theory is deep within stars or other bodies, where the material is so opaque to the radiation that the mean free path of photons is entirely negligible compared with the size of the system, the distance over which any fluid quantity varies, and so on. In this case we can suppose that the radiation is in equilibrium with the matter locally, and its energy, pressure and momentum can be lumped in with those of the rest of the fluid. That is, it is no more necessary to distinguish photons from atoms, nuclei and electrons, than it is to distinguish
NASA Astrophysics Data System (ADS)
Cardall, Christian Y.; Budiardja, Reuben D.; Endeve, Eirik; Mezzacappa, Anthony
2014-02-01
GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper—the first in a series—demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.
NASA Astrophysics Data System (ADS)
Lauga, Eric
2016-01-01
Bacteria predate plants and animals by billions of years. Today, they are the world's smallest cells, yet they represent the bulk of the world's biomass and the main reservoir of nutrients for higher organisms. Most bacteria can move on their own, and the majority of motile bacteria are able to swim in viscous fluids using slender helical appendages called flagella. Low-Reynolds number hydrodynamics is at the heart of the ability of flagella to generate propulsion at the micrometer scale. In fact, fluid dynamic forces impact many aspects of bacteriology, ranging from the ability of cells to reorient and search their surroundings to their interactions within mechanically and chemically complex environments. Using hydrodynamics as an organizing framework, I review the biomechanics of bacterial motility and look ahead to future challenges.
Hydrodynamic models of a Cepheid atmosphere
NASA Technical Reports Server (NTRS)
Karp, A. H.
1975-01-01
Instead of computing a large number of coarsely zoned hydrodynamic models covering the entire atmospheric instability strip, the author computed a single model as well as computer limitations allowed. The implicit hydrodynamic code of Kutter and Sparks was modified to include radiative transfer effects in optically thin zones.
Moran, B
2005-06-02
We present test problems that can be used to check the hydrodynamic implementation in computer codes designed to model the implosion of a National Ignition Facility (NIF) capsule. The problems are simplified, yet one of them is three-dimensional. It consists of a nearly-spherical incompressible imploding shell subjected to an exponentially decaying pressure on its outer surface. We present a semi-analytic solution for the time-evolution of that shell with arbitrary small three-dimensional perturbations on its inner and outer surfaces. The perturbations on the shell surfaces are intended to model the imperfections that are created during capsule manufacturing.
FLY: a Tree Code for Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Becciani, U.; Antonuccio-Delogu, V.; Costa, A.; Ferro, D.
FLY is a public domain parallel treecode, which makes heavy use of the one-sided communication paradigm to handle the management of the tree structure. It implements the equations for cosmological evolution and can be run for different cosmological models. This paper shows an example of the integration of a tree N-body code with an adaptive mesh, following the PARAMESH scheme. This new implementation will allow the FLY output, and more generally any binary output, to be used with any hydrodynamics code that adopts the PARAMESH data structure, to study compressible flow problems.
Athena3D: Flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Hawley, John; Simon, Jake; Stone, James; Gardiner, Thomas; Teuben, Peter
2015-05-01
Written in FORTRAN, Athena3D, based on Athena (ascl:1010.014), is an implementation of a flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics. Features of the Athena3D code include compressible hydrodynamics and ideal MHD in one, two or three spatial dimensions in Cartesian coordinates; adiabatic and isothermal equations of state; 1st, 2nd or 3rd order reconstruction using the characteristic variables; and numerical fluxes computed using the Roe scheme. In addition, it offers the ability to add source terms to the equations and is parallelized based on MPI.
Hydrodynamic effects on coalescence.
Dimiduk, Thomas G.; Bourdon, Christopher Jay; Grillet, Anne Mary; Baer, Thomas A.; de Boer, Maarten Pieter; Loewenberg, Michael; Gorby, Allen D.; Brooks, Carlton F.
2006-10-01
The goal of this project was to design, build and test novel diagnostics to probe the effect of hydrodynamic forces on coalescence dynamics. Our investigation focused on how a drop coalesces onto a flat surface which is analogous to two drops coalescing, but more amenable to precise experimental measurements. We designed and built a flow cell to create an axisymmetric compression flow which brings a drop onto a flat surface. A computer-controlled system manipulates the flow to steer the drop and maintain a symmetric flow. Particle image velocimetry was performed to confirm that the control system was delivering a well conditioned flow. To examine the dynamics of the coalescence, we implemented an interferometry capability to measure the drainage of the thin film between the drop and the surface during the coalescence process. A semi-automated analysis routine was developed which converts the dynamic interferogram series into drop shape evolution data.
NASA Astrophysics Data System (ADS)
Colgate, S. A.
1981-11-01
The physics as well as astrophysics of the supernova (SN) phenomenon are illustrated with the appropriate numbers. The explosion of a star, a supernova, occurs at the end of its evolution when the nuclear fuel in its core is almost, or completely, consumed. The star may explode due to a small residual thermonuclear detonation, type I SN, or it may collapse, type I and type II SN, leaving a neutron star remnant. The type I progenitor is thought to be an old accreting white dwarf, 1.4 M_☉, with a close companion star. A type II SN is thought to be a massive young star, 6 to 10 M_☉. The mechanism of explosion is still a challenge to model, being the most extreme conditions of matter and hydrodynamics that occur presently in the universe.
Mueller, Bernhard; Janka, Hans-Thomas; Marek, Andreas E-mail: thj@mpa-garching.mpg.de
2013-03-20
We present a detailed theoretical analysis of the gravitational wave (GW) signal of the post-bounce evolution of core-collapse supernovae (SNe), employing for the first time relativistic, two-dimensional explosion models with multi-group, three-flavor neutrino transport based on the ray-by-ray-plus approximation. The waveforms reflect the accelerated mass motions associated with the characteristic evolutionary stages that were also identified in previous works: a quasi-periodic modulation by prompt post-shock convection is followed by a phase of relative quiescence before growing amplitudes signal violent hydrodynamical activity due to convection and the standing accretion shock instability during the accretion period of the stalled shock. Finally, a high-frequency, low-amplitude variation from proto-neutron star (PNS) convection below the neutrinosphere appears superimposed on the low-frequency trend associated with the aspherical expansion of the SN shock after the onset of the explosion. Relativistic effects in combination with detailed neutrino transport are shown to be essential for quantitative predictions of the GW frequency evolution and energy spectrum, because they determine the structure of the PNS surface layer and its characteristic g-mode frequency. Burst-like high-frequency activity phases, correlated with sudden luminosity increase and spectral hardening of electron (anti-)neutrino emission for some 10 ms, are discovered as new features after the onset of the explosion. They correspond to intermittent episodes of anisotropic accretion by the PNS in the case of fallback SNe. We find stronger signals for more massive progenitors with large accretion rates. The typical frequencies are higher for massive PNSs, though the time-integrated spectrum also strongly depends on the model dynamics.
Flash Kα radiography of laser-driven solid sphere compression for fast ignition
NASA Astrophysics Data System (ADS)
Sawada, H.; Lee, S.; Shiroto, T.; Nagatomo, H.; Arikawa, Y.; Nishimura, H.; Ueda, T.; Shigemori, K.; Sunahara, A.; Ohnishi, N.; Beg, F. N.; Theobald, W.; Pérez, F.; Patel, P. K.; Fujioka, S.
2016-06-01
Time-resolved compression of a laser-driven solid deuterated plastic sphere with a cone was measured with flash Kα x-ray radiography. A spherically converging shockwave launched by nanosecond GEKKO XII beams was used for compression while a flash of 4.51 keV Ti Kα x-ray backlighter was produced by a high-intensity, picosecond laser LFEX (Laser for Fast ignition EXperiment) near peak compression for radiography. Areal densities of the compressed core were inferred from two-dimensional backlit x-ray images recorded with a narrow-band spherical crystal imager. The maximum areal density in the experiment was estimated to be 87 ± 26 mg/cm2. The temporal evolution of the experimental and simulated areal densities with a 2-D radiation-hydrodynamics code is in good agreement.
A hydrodynamic approach to cosmology - Methodology
NASA Technical Reports Server (NTRS)
Cen, Renyue
1992-01-01
The present study describes an accurate and efficient hydrodynamic code for evolving self-gravitating cosmological systems. The hydrodynamic code is a flux-based mesh code originally designed for engineering hydrodynamical applications. A variety of checks were performed which indicate that the resolution of the code is a few cells, providing accuracy for integral energy quantities in the present simulations of 1-3 percent over the whole runs. Six species (H I, H II, He I, He II, He III) are tracked separately, and relevant ionization and recombination processes, as well as line and continuum heating and cooling, are computed. The background radiation field is simultaneously determined in the range 1 eV to 100 keV, allowing for absorption, emission, and cosmological effects. It is shown how the inevitable numerical inaccuracies can be estimated and to some extent overcome.
pyro: Python-based tutorial for computational methods for hydrodynamics
NASA Astrophysics Data System (ADS)
Zingale, Michael
2015-07-01
pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.
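As an illustration of the kind of solver pyro teaches, here is a minimal first-order upwind scheme for 1-d linear advection with periodic boundaries. This sketch is not taken from pyro itself; the function name and parameters are hypothetical.

```python
def advect_upwind(u, velocity, dx, dt, steps):
    """Advance u_t + a*u_x = 0 (a > 0) with first-order upwind
    differencing and periodic boundary conditions."""
    c = velocity * dt / dx                   # CFL number; stable for c <= 1
    for _ in range(steps):
        # u[i-1] with i == 0 wraps to u[-1], giving periodic boundaries.
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]
    return u
```

At c = 1 the scheme shifts the profile exactly one cell per step; for c < 1 it advects with numerical diffusion while conserving the total of u.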
Data compression for satellite images
NASA Technical Reports Server (NTRS)
Chen, P. H.; Wintz, P. A.
1976-01-01
An efficient data compression system is presented for satellite pictures and two grey level pictures derived from satellite pictures. The compression techniques take advantage of the correlation between adjacent picture elements. Several source coding methods are investigated. Double delta coding is presented and shown to be the most efficient. Both the predictive differential quantizing technique and double delta coding can be significantly improved by applying a background skipping technique. An extension code is constructed. This code requires very little storage space and operates efficiently. Simulation results are presented for various coding schemes and source codes.
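Assuming "double delta coding" refers to second-order differencing of neighboring pixels (the abstract does not spell out the scheme), a minimal round-trip sketch shows why it helps: smooth imagery maps to values near zero, which a subsequent entropy coder stores compactly.

```python
import itertools

def double_delta_encode(pixels):
    """First differences of the first differences; the leading elements
    carry the values needed to invert the transform."""
    d1 = [pixels[0]] + [b - a for a, b in zip(pixels, pixels[1:])]
    return [d1[0]] + [b - a for a, b in zip(d1, d1[1:])]

def double_delta_decode(codes):
    """Invert by two cumulative sums."""
    d1 = list(itertools.accumulate(codes))
    return list(itertools.accumulate(d1))
```

For a gently varying scanline such as [5, 7, 10], the encoded stream is [5, -3, 1]: small magnitudes cluster near zero even when the raw pixel values do not.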
Algorithm refinement for fluctuating hydrodynamics
Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.
2007-07-03
This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.
Sequential neural text compression.
Schmidhuber, J; Heil, S
1996-01-01
The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which build the basis of the UNIX functions "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.
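The pairing of a sequential predictor with statistical coding can be sketched by estimating the ideal arithmetic-code length under an adaptive model. Here a count-based order-1 character model stands in for the paper's neural predictor; the function name, the byte-alphabet Laplace smoothing, and the single-character context are all assumptions for illustration.

```python
import math
from collections import defaultdict

def predictive_code_length(text):
    """Bits an ideal entropy coder would emit, driven by an adaptive
    order-1 (previous-character) model learned on the fly."""
    counts = defaultdict(lambda: defaultdict(int))
    bits, prev = 0.0, ""
    for ch in text:
        ctx = counts[prev]
        total = sum(ctx.values()) + 256   # Laplace smoothing over a byte alphabet
        p = (ctx[ch] + 1) / total
        bits += -math.log2(p)             # ideal arithmetic-code length for ch
        ctx[ch] += 1                      # update the model after coding, as the decoder can
        prev = ch
    return bits
```

The better the predictor, the higher p for the actual next character and the fewer bits emitted, which is exactly the leverage a neural predictor provides over simple counts.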
Compression ratio effect on methane HCCI combustion
Aceves, S. M.; Pitz, W.; Smith, J. R.; Westbrook, C.
1998-09-29
We have used the HCT (Hydrodynamics, Chemistry and Transport) chemical kinetics code to simulate HCCI (homogeneous charge compression ignition) combustion of methane-air mixtures. HCT is applied to explore the ignition timing, burn duration, NO_{x} production, gross indicated efficiency and gross IMEP of a supercharged engine (3 atm intake pressure) with 14:1, 16:1 and 18:1 compression ratios at 1200 rpm. HCT has been modified to incorporate the effect of heat transfer and to calculate the temperature that results from mixing the recycled exhaust with the fresh mixture. This study uses a single control volume reaction zone that varies as a function of crank angle. The ignition process is controlled by adjusting the intake equivalence ratio and the residual gas trapping (RGT). RGT is internal exhaust gas recirculation which recycles both thermal energy and combustion product species. Adjustment of equivalence ratio and RGT is accomplished by varying the timing of the exhaust valve closure in either 2-stroke or 4-stroke engines. Inlet manifold temperature is held constant at 300 K. Results show that, for each compression ratio, there is a range of operational conditions that show promise of achieving the control necessary to vary power output while keeping indicated efficiency above 50% and NO_{x} levels below 100 ppm. HCT results are also compared with a set of recent experimental data for natural gas.
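The sensitivity of HCCI ignition to compression ratio can be illustrated with the textbook ideal-gas estimate of the adiabatic core temperature at top dead center, T = T_intake * r^(gamma-1). This is a back-of-the-envelope sketch under an assumed effective gamma, not the HCT chemical kinetics model.

```python
def compression_temperature(t_intake_k, ratio, gamma=1.35):
    """Ideal-gas adiabatic core-gas temperature at top dead center.

    gamma = 1.35 is an assumed effective ratio of specific heats for a
    lean methane-air charge; real values vary with composition and T.
    """
    return t_intake_k * ratio ** (gamma - 1.0)
```

With a 300 K intake, raising the compression ratio from 14:1 to 18:1 adds roughly 70 K to the end-of-compression temperature, which is why compression ratio is such an effective lever on ignition timing.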
Supernova hydrodynamics experiments using the Nova laser
Remington, B.A.; Glendinning, S.G.; Estabrook, K.; Wallace, R.J.; Rubenchik, A.; Kane, J.; Arnett, D.; Drake, R.P.; McCray, R.
1997-04-01
We are developing experiments using the Nova laser to investigate two areas of physics relevant to core-collapse supernovae (SN): (1) compressible nonlinear hydrodynamic mixing and (2) radiative shock hydrodynamics. In the former, we are examining the differences between the 2D and 3D evolution of the Rayleigh-Taylor instability, an issue critical to the observables emerging from SN in the first year after exploding. In the latter, we are investigating the evolution of a colliding plasma system relevant to the ejecta-stellar wind interactions of the early stages of SN remnant formation. The experiments and astrophysical implications are discussed.
Hydrodynamics from Landau initial conditions
Sen, Abhisek; Gerhard, Jochen; Torrieri, Giorgio; Read jr, Kenneth F.; Wong, Cheuk-Yin
2015-01-01
We investigate ideal hydrodynamic evolution, with Landau initial conditions, both in a semi-analytical 1+1D approach and in a numerical code incorporating event-by-event variation with many events and transverse density inhomogeneities. The object of the calculation is to test how fast a Landau initial condition transitions to the commonly used boost-invariant expansion. We show that the transition to boost-invariant flow occurs too late for realistic setups, with corrections of O(20-30%) expected at freezeout for most scenarios. Moreover, the deviation from boost-invariance is correlated with both transverse flow and elliptic flow, with the more highly transversely flowing regions also showing the greatest violation of boost invariance. Therefore, if longitudinal flow is not fully developed at the early stages of heavy ion collisions, 2+1 dimensional hydrodynamics is inadequate to extract transport coefficients of the quark-gluon plasma. Based on [1, 2
Testing hydrodynamics schemes in galaxy disc simulations
NASA Astrophysics Data System (ADS)
Few, C. G.; Dobbs, C.; Pettitt, A.; Konstandin, L.
2016-08-01
We examine how three fundamentally different numerical hydrodynamics codes follow the evolution of an isothermal galactic disc with an external spiral potential. We compare an adaptive mesh refinement code (RAMSES), a smoothed particle hydrodynamics code (SPHNG), and a volume-discretized mesh-less code (GIZMO). Using standard refinement criteria, we find that RAMSES produces a disc that is less vertically concentrated and does not reach such high densities as the SPHNG or GIZMO runs. The gas surface density in the spiral arms increases at a lower rate for the RAMSES simulations compared to the other codes. There is also a greater degree of substructure in the SPHNG and GIZMO runs, and secondary spiral arms are more pronounced. By resolving the Jeans length with a greater number of grid cells, we achieve results more similar to those of the Lagrangian codes used in this study. Other alterations to the refinement scheme (adding extra levels of refinement and refining based on local density gradients) are less successful in reducing the disparity between RAMSES and SPHNG/GIZMO. Although more similar, SPHNG displays different density distributions and vertical mass profiles to all modes of GIZMO (including the smoothed particle hydrodynamics version). This suggests that differences also arise that are not intrinsic to the particular method but are due to its implementation. The discrepancies between codes (in particular, the densities reached in the spiral arms) could potentially result in differences in the locations and time-scales for gravitational collapse, and therefore impact star formation activity in more complex galaxy disc simulations.
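The Jeans-length refinement criterion mentioned above can be made concrete with a quick estimate. The density and sound speed below are assumed illustrative values, not numbers from the paper; the factor of 4 cells per Jeans length follows the commonly used Truelove condition.

```python
import math

# Back-of-envelope Jeans-length estimate: a cell must be several times
# smaller than lambda_J = c_s * sqrt(pi / (G * rho)) for gravitational
# collapse to be resolved rather than artificially seeded.

G = 6.674e-8    # cm^3 g^-1 s^-2
c_s = 2.0e4     # cm/s (~0.2 km/s isothermal sound speed, assumed)
rho = 1e-21     # g/cm^3 (assumed cold-disc gas density)

lam_j = c_s * math.sqrt(math.pi / (G * rho))
pc = 3.086e18   # cm per parsec
print(f"Jeans length ~ {lam_j / pc:.1f} pc; "
      f"cell size should be < {lam_j / 4 / pc:.2f} pc to resolve it")
```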
White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification
NASA Astrophysics Data System (ADS)
Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun
2016-03-01
The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotational forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress, and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected across the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.
NASA Technical Reports Server (NTRS)
Barnsley, Michael F.; Sloan, Alan D.
1989-01-01
Fractals are geometric or data structures which do not simplify under magnification. Fractal Image Compression is a technique which associates a fractal to an image. On the one hand, the fractal can be described in terms of a few succinct rules, while on the other, the fractal contains much or all of the image information. Since the rules are described with fewer bits of data than the image, compression results. Data compression with fractals is an approach to reach high compression ratios for large data streams related to images. The high compression ratios are attained at the cost of large amounts of computation. Both lossless and lossy modes are supported by the technique. The technique is stable in that small errors in codes lead to small errors in image data. Applications to the NASA mission are discussed.
Disruptive Innovation in Numerical Hydrodynamics
Waltz, Jacob I.
2012-09-06
We propose the research and development of a high-fidelity hydrodynamic algorithm for tetrahedral meshes that will lead to a disruptive innovation in the numerical modeling of Laboratory problems. Our proposed innovation has the potential to reduce turnaround time by orders of magnitude relative to Advanced Simulation and Computing (ASC) codes; reduce simulation setup costs by millions of dollars per year; and effectively leverage Graphics Processing Unit (GPU) and future Exascale computing hardware. If successful, this work will lead to a dramatic leap forward in the Laboratory's quest for a predictive simulation capability.
Hydrodynamics of micropipette aspiration.
Drury, J L; Dembo, M
1999-01-01
The dynamics of human neutrophils during micropipette aspiration are frequently analyzed by approximating these cells as simple slippery droplets of viscous fluid. Here, we present computations that reveal the detailed predictions of the simplest and most idealized case of such a scheme; namely, the case where the fluid of the droplet is homogeneous and Newtonian, and the surface tension of the droplet is constant. We have investigated the behavior of this model as a function of surface tension, droplet radius, viscosity, aspiration pressure, and pipette radius. In addition, we have tabulated a dimensionless factor, M, which can be utilized to calculate the apparent viscosity of the slippery droplet. Computations were carried out using a low Reynolds number hydrodynamics transport code based on the finite-element method. Although idealized and simplistic, we find that the slippery droplet model predicts many observed features of neutrophil aspiration. However, there are certain features that are not observed in neutrophils. In particular, the model predicts dilation of the membrane past the point of being continuous, as well as a reentrant jet at high aspiration pressures. PMID:9876128
Ravishankar, C., Hughes Network Systems, Germantown, MD
1998-05-08
coding techniques are equally applicable to any voice signal, whether or not it carries intelligible information, as the term speech implies. Other terms commonly used are speech compression and voice compression, since the fundamental idea behind speech coding is to reduce (compress) the transmission rate (or, equivalently, the bandwidth) and/or reduce storage requirements. In this document the terms speech and voice are used interchangeably.
Scaling supernova hydrodynamics to the laboratory
Kane, J O; Remington, B A; Arnett, D; Fryxell, B A; Drake, R P
1998-11-10
Supernova (SN) 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities, they are attempting to rigorously scale the physics of the supernova to the laboratory. The scaling of hydrodynamics on microscopic laser scales to hydrodynamics on SN-size scales is presented and requirements are established. Initial results were reported in [1]. The appropriate conditions are then generated on the Nova laser: a 10-15 Mbar shock at the interface of a two-layer planar target triggers perturbation growth due to the Richtmyer-Meshkov instability, and to the Rayleigh-Taylor instability as the interface decelerates. This scales the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few ×10³ s. The experiment is modeled using the hydrodynamics codes HYADES and CALE, and the supernova code PROMETHEUS. Results of the experiments and simulations are presented. Analysis of the spike and bubble velocities using potential flow theory and Ott thin-shell theory is presented, as well as a study of 2D vs. 3D differences in growth at the He-H interface of SN 1987A.
ECG data compression by modeling.
Madhukar, B.; Murthy, I. S.
1992-01-01
This paper presents a novel algorithm for data compression of single lead Electrocardiogram (ECG) data. The method is based on Parametric modeling of the Discrete Cosine Transformed ECG signal. Improved high frequency reconstruction is achieved by separately modeling the low and the high frequency regions of the transformed signal. Differential Pulse Code Modulation is applied on the model parameters to obtain a further increase in the compression. Compression ratios up to 1:40 were achieved without significant distortion. PMID:1482940
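The transform-domain idea behind the algorithm can be sketched as follows. This is not the authors' parametric model or DPCM stage; it is a toy that keeps only the leading DCT coefficients of a synthetic smooth signal, showing why energy compaction in the DCT enables compression. The signal and cutoff are illustrative.

```python
import math

# Naive DCT-II and its inverse (DCT-III with standard scaling), applied to a
# synthetic smooth "beat" rather than real ECG data. Keeping 8 of 64
# coefficients corresponds to an 8:1 reduction before any entropy coding.

def dct(x):
    n = len(x)
    return [sum(v * math.cos(math.pi / n * (i + 0.5) * k) for i, v in enumerate(x))
            for k in range(n)]

def idct(c):
    n = len(c)
    return [(c[0] / 2 + sum(c[k] * math.cos(math.pi / n * (i + 0.5) * k)
                            for k in range(1, n))) * 2 / n
            for i in range(n)]

signal = [math.sin(2 * math.pi * i / 64) + 0.3 * math.sin(4 * math.pi * i / 64)
          for i in range(64)]
coeffs = dct(signal)
kept = 8
truncated = coeffs[:kept] + [0.0] * (64 - kept)
recon = idct(truncated)
err = max(abs(a - b) for a, b in zip(signal, recon))
print(f"kept {kept}/64 coefficients, max reconstruction error {err:.3f}")
```

Because the even extension implied by the DCT concentrates the energy of smooth, quasi-periodic signals in the low-frequency coefficients, the truncated reconstruction stays close to the original.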
Niederhaus, John; Ranjan, Devesh; Anderson, Mark; Oakley, Jason; Bonazza, Riccardo; Greenough, Jeff
2005-05-15
Experiments studying the compression and unstable growth of a dense spherical bubble in a gaseous medium subjected to a strong planar shock wave (2.8 < M < 3.4) are performed in a vertical shock tube. The test gas is initially contained in a free-falling spherical soap-film bubble, and the shocked bubble is imaged using planar laser diagnostics. Concurrently, simulations are carried out using a compressible hydrodynamics code in r-z axisymmetric geometry. Experiments and computations indicate the formation of characteristic vortical structures in the post-shock flow, due to Richtmyer-Meshkov and Kelvin-Helmholtz instabilities, and smaller-scale vortices due to secondary effects. Inconsistencies between experimental and computational results are examined, and the usefulness of the current axisymmetric approach is evaluated.
IKT for quantum hydrodynamic equations
NASA Astrophysics Data System (ADS)
Tessarotto, Massimo; Ellero, Marco; Nicolini, Piero
2007-11-01
A striking feature of standard quantum mechanics (SQM) is its analogy with classical fluid dynamics. In fact, it is well-known that the Schrödinger equation is equivalent to a closed set of partial differential equations for suitable real-valued functions of position and time (denoted as quantum fluid fields) [Madelung, 1928]. In particular, the corresponding quantum hydrodynamic equations (QHE) can be viewed as the equations of a classical compressible and non-viscous fluid, endowed with potential velocity and quantized velocity circulation. In this context, an interesting theoretical problem, in its own right, is the construction of an inverse kinetic theory (IKT) for such a type of fluid. In this note we intend to investigate consequences of the IKT recently formulated for the QHE [M. Tessarotto et al., Phys. Rev. A 75, 012105 (2007)]. In particular, a basic issue is related to the definition of the quantum fluid fields.
Hydrodynamic simulations with the Godunov smoothed particle hydrodynamics
NASA Astrophysics Data System (ADS)
Murante, G.; Borgani, S.; Brunino, R.; Cha, S.-H.
2011-10-01
We present results based on an implementation of the Godunov smoothed particle hydrodynamics (GSPH), originally developed by Inutsuka, in the GADGET-3 hydrodynamic code. We first review the derivation of the GSPH discretization of the equations of momentum and energy conservation, starting from the convolution of these equations with the interpolating kernel. The two most important aspects of the numerical implementation of these equations are (a) the appearance of fluid velocity and pressure obtained from the solution of the Riemann problem between each pair of particles, and (b) the absence of an artificial viscosity term. We carry out three different controlled hydrodynamical three-dimensional tests, namely the Sod shock tube, the development of Kelvin-Helmholtz instabilities in a shear-flow test and the 'blob' test describing the evolution of a cold cloud moving against a hot wind. The results of our tests confirm and extend in a number of aspects those recently obtained by Cha, Inutsuka & Nayakshin: (i) GSPH provides a much improved description of contact discontinuities, with respect to smoothed particle hydrodynamics (SPH), thus avoiding the appearance of spurious pressure forces; (ii) GSPH is able to follow the development of gas-dynamical instabilities, such as the Kelvin-Helmholtz and the Rayleigh-Taylor ones; (iii) as a result, GSPH describes the development of curl structures in the shear-flow test and the dissolution of the cold cloud in the 'blob' test. Besides comparing the results of GSPH with those from standard SPH implementations, we also discuss in detail the effect on the performance of GSPH of changing different aspects of its implementation: choice of the number of neighbours, accuracy of the interpolation procedure to locate the interface between two fluid elements (particles) for the solution of the Riemann problem, order of the reconstruction for the assignment of variables at the interface, choice of the limiter to prevent oscillations of
Supernova-relevant hydrodynamic instability experiment on the Nova laser
Kane, J.; Arnett, D.; Remington, B.A.; Glendinning, S.G.; Castor, J.; Rubenchik, A.; Berning, M.
1996-02-12
Supernova 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. On quite a separate front, the detrimental effect of hydrodynamic instabilities in inertial confinement fusion (ICF) has long been known. Tools from both areas are being tested on a common project. At Lawrence Livermore National Laboratory (LLNL), the Nova laser is being used in scaled laboratory experiments of hydrodynamic mixing under supernova-relevant conditions. Numerical simulations of the experiments are being done, using hydrodynamics codes at the Laboratory, and astrophysical codes successfully used to model the hydrodynamics of supernovae. A two-layer package composed of Cu and CH₂ with a single-mode sinusoidal 1D perturbation at the interface, shocked by indirect laser drive from the Cu side of the package, produced significant Rayleigh-Taylor (RT) growth in the nonlinear regime. The scale and gross structure of the growth was successfully modeled by mapping an early-time simulation done with 1D HYADES, a radiation transport code, into 2D CALE, a LLNL hydrodynamics code. The HYADES result was also mapped in 2D into the supernova code PROMETHEUS, which was also able to reproduce the scale and gross structure of the growth.
Shock Propagation and Instability Structures in Compressed Silica Aerogels
Howard, W M; Molitoris, J D; DeHaven, M R; Gash, A E; Satcher, J H
2002-05-30
We have performed a series of experiments examining shock propagation in low density aerogels. High-pressure (~100 kbar) shock waves are produced by detonating high explosives. Radiography is used to obtain time-sequence imaging of the shocks as they enter and traverse the aerogel. We compress the aerogel by impinging shock waves on either one or both sides of an aerogel slab. The shock wave initially transmitted to the aerogel is very narrow and flat, but disperses and curves as it propagates. Optical images of the shock front reveal the initial formation of a hot dense region that cools and evolves into a well-defined microstructure. Structures observed in the shock front are examined in the framework of hydrodynamic instabilities generated as the shock traverses the low-density aerogel. The primary features of shock propagation are compared to simulations, which also include modeling the detonation of the high explosive, with a 2-D Arbitrary Lagrange Eulerian hydrodynamics code. The code includes a detailed thermochemical equation of state and rate law kinetics. We will present an analysis of the data from the time resolved imaging diagnostics and form a consistent picture of the shock transmission, propagation and instability structure.
Castro-Chavez, Fernando
2014-01-01
Objective The objective of this article is to demonstrate that the genetic code can be studied and represented in a 3-D Sphered Cube for bioinformatics and for education by using the graphical help of the ancient “Book of Changes” or I Ching for the comparison, pair by pair, of the three basic characteristics of nucleotides: H-bonds, molecular structure, and their tautomerism. Methods The source of natural biodiversity is the high plasticity of the genetic code, analyzable with a reverse engineering of its 2-D and 3-D representations (here illustrated), but also through the classical 64-hexagrams of the ancient I Ching, as if they were the 64-codons or words of the genetic code. Results In this article, the four elements of the Yin/Yang were found by correlating the 3×2=6 sets of Cartesian comparisons of the mentioned properties of nucleic acids, to the directionality of their resulting blocks of codons grouped according to their resulting amino acids and/or functions, integrating a 384-codon Sphered Cube whose function is illustrated by comparing six brain peptides and a promoter of osteoblasts from Humans versus Neanderthal, as well as to Negadi’s work on the importance of the number 384 within the genetic code. Conclusions Starting with the codon/anticodon correlation of Nirenberg, published in full here for the first time, and by studying the genetic code and its 3-D display, the buffers of reiteration within codons codifying for the same amino acid displayed the two long (binary number one) and older Yin/Yang arrows that travel in opposite directions, mimicking the parental DNA strands, while annealing to the two younger and broken (binary number zero) Yin/Yang arrows, mimicking the new DNA strands; the graphic analysis of the genetic code and its plasticity was helpful to compare compatible sequences (human compatible to human versus neanderthal compatible to neanderthal), while further exploring the wondrous biodiversity of nature for
Circumstellar Hydrodynamics and Spectral Radiation in ALGOLS
NASA Astrophysics Data System (ADS)
Terrell, Dirk Curtis
1994-01-01
Algols are the remnants of binary systems that have undergone large scale mass transfer. This dissertation presents the results of the coupling of a hydrodynamical model and a radiative model of the flow of gas from the inner Lagrangian point. The hydrodynamical model is a fully Lagrangian, three-dimensional scheme with a novel treatment of viscosity and an implementation of the smoothed particle hydrodynamics method to compute pressure gradients. Viscosity is implemented by allowing particles within a specified interaction length to share momentum. The hydrodynamical model includes a provision for computing the self-gravity of the disk material, although it is not used in the present application to Algols. Hydrogen line profiles and equivalent widths computed with a code by Drake and Ulrich are compared with observations of both short and long period Algols. More sophisticated radiative transfer computations are done with the escape probability code of Ko and Kallman, which includes the spectral lines of thirteen elements. The locations and velocities of the gas particles, and the viscous heating from the hydro program, are supplied to the radiative transfer program, which computes the equilibrium temperature of the gas and generates its emission spectrum. Intrinsic line profiles are assumed to be delta functions and are properly Doppler shifted and summed for gas particles that are not eclipsed by either star. Polarization curves are computed by combining the hydro program with the Wilson-Liou polarization program. Although the results are preliminary, they indicate that polarization observations hold great promise for studying circumstellar matter.
RECENT RESULTS OF RADIATION HYDRODYNAMICS AND TURBULENCE EXPERIMENTS IN CYLINDRICAL GEOMETRY.
Magelssen G. R.; Scott, J. M.; Batha, S. H.; Holmes, R. L.; Lanier, N. E.; Tubbs, D. L.; Elliott, N. E.; Dunne, A. M.; Rothman, S.; Parker, K. W.; Youngs, D.
2001-01-01
Cylindrical implosion experiments at the University of Rochester laser facility, OMEGA, were performed to study radiation hydrodynamics and compressible turbulence in convergent geometry. Laser beams were used to directly drive a cylinder with either a gold (Au) or dichloropolystyrene (C6H8Cl2) marker layer placed between a solid CH ablator and a foam cushion. When the cylinder is imploded, the Richtmyer-Meshkov instability and convergence cause the marker layer to increase in thickness. Marker thickness measurements were made by x-ray backlighting along the cylinder axis. Experimental results of the effect of surface roughness will be presented. Computational results with an AMR code are in good agreement with the experimental results from targets with the roughest surface. Computational results suggest that marker layer 'end effects' and bowing increase the effective thickness of the marker layer at lower levels of roughness.
Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding
Wu, Yueying; Jia, Kebin; Gao, Guandong
2016-01-01
In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree also brings extremely high computational complexity. Innovative works for improving the coding tree to further reduce encoding time are stated in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low complexity CU coding tree mechanism is devoted to improving coding performance under various application conditions. PMID:26999741
Fu, C.Y.; Petrich, L.I.
1997-03-25
An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image. 16 figs.
Fu, Chi-Yung; Petrich, Loren I.
1997-01-01
An image is compressed by identifying edge pixels of the image; creating a filled edge array of pixels each of the pixels in the filled edge array which corresponds to an edge pixel having a value equal to the value of a pixel of the image array selected in response to the edge pixel, and each of the pixels in the filled edge array which does not correspond to an edge pixel having a value which is a weighted average of the values of surrounding pixels in the filled edge array which do correspond to edge pixels; and subtracting the filled edge array from the image array to create a difference array. The edge file and the difference array are then separately compressed and transmitted or stored. The original image is later reconstructed by creating a preliminary array in response to the received edge file, and adding the preliminary array to the received difference array. Filling is accomplished by solving Laplace's equation using a multi-grid technique. Contour and difference file coding techniques also are described. The techniques can be used in a method for processing a plurality of images by selecting a respective compression approach for each image, compressing each of the images according to the compression approach selected, and transmitting each of the images as compressed, in correspondence with an indication of the approach selected for the image.
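The fill step these entries describe, solving Laplace's equation so that non-edge pixels become weighted averages of their neighbors, can be sketched in a few lines. This uses plain Jacobi relaxation rather than the patent's multigrid solver (an assumed simplification for clarity; multigrid converges far faster); the grid, boundary values, and function name are illustrative only.

```python
# Sketch of Laplace filling on a small grayscale grid: pixels flagged as
# edges keep their values, and every other pixel relaxes toward the average
# of its four neighbors (Jacobi iteration on Laplace's equation).

def laplace_fill(values, is_edge, iterations=500):
    """values: 2-D list of floats; is_edge: 2-D list of bools (fixed pixels)."""
    h, w = len(values), len(values[0])
    grid = [row[:] for row in values]
    for _ in range(iterations):
        nxt = [row[:] for row in grid]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if not is_edge[y][x]:
                    nxt[y][x] = 0.25 * (grid[y - 1][x] + grid[y + 1][x] +
                                        grid[y][x - 1] + grid[y][x + 1])
        grid = nxt
    return grid

# 5x5 example: the border is fixed to a left-to-right ramp (0..100) and the
# interior starts at 0; the harmonic solution is the same linear ramp.
edge = [[x in (0, 4) or y in (0, 4) for x in range(5)] for y in range(5)]
vals = [[x * 25.0 if (x in (0, 4) or y in (0, 4)) else 0.0 for x in range(5)]
        for y in range(5)]
filled = laplace_fill(vals, edge)
print([round(v, 1) for v in filled[2]])  # → [0.0, 25.0, 50.0, 75.0, 100.0]
```

Because the filled array reproduces the smooth parts of the image, the difference array is mostly small values, which is what makes it compress well.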
Relativistic viscous hydrodynamics for high energy heavy ion collisions
NASA Astrophysics Data System (ADS)
Vredevoogd, Joshua Aaron
It has been over a decade since the first experimental data from gold nuclei collisions at the Relativistic Heavy Ion Collider suggested hydrodynamic behavior. Early ideal hydrodynamical models ignored the large longitudinal gradients that imply that viscosity plays an important role in the dynamics. In addition, at that time, much less was known about the equation of state predicted by lattice calculations of quantum chromodynamics, and the effects of late (dilute) stage rescattering were handled within the hydrodynamic framework. This dissertation presents a three-dimensional viscous hydrodynamics code with a realistic equation of state coupled consistently to a hadron resonance gas calculation. This code is capable of making significant comparisons to experimental data as part of an effort to learn about the structure of experimental constraints on the microscopic interactions of dense, hot quark matter.
Quasinormal modes of the polytropic hydrodynamic vortex
NASA Astrophysics Data System (ADS)
Oliveira, Leandro A.; Cardoso, Vitor; Crispino, Luís C. B.
2015-07-01
Analogue systems are a powerful instrument to investigate and understand in a controlled setting many general-relativistic effects. Here, we focus on superradiant-triggered instabilities and quasinormal modes. We consider a compressible hydrodynamic vortex characterized by a polytropic equation of state, the polytropic hydrodynamic vortex, a purely circulating system with an ergoregion but no event horizon. We compute the quasinormal modes of this system numerically with different methods, finding excellent agreement between them. When the fluid velocity is larger than the speed of sound, an ergoregion appears in the effective spacetime, triggering an "ergoregion instability." We study the details of the instability for the polytropic vortex, and in particular find analytic expressions for the marginally stable configuration.
Making FLASH an Open Code for the Academic High-Energy Density Physics Community
NASA Astrophysics Data System (ADS)
Lamb, D. Q.; Couch, S. M.; Dubey, A.; Gopal, S.; Graziani, C.; Lee, D.; Weide, K.; Xia, G.
2010-11-01
High-energy density physics (HEDP) is an active and growing field of research. DOE has recently decided to make FLASH a code for the academic HEDP community. FLASH is a modular and extensible compressible spatially adaptive hydrodynamics code that incorporates capabilities for a broad range of physical processes, performs well on a wide range of existing advanced computer architectures, and has a broad user base. A rigorous software maintenance process allows the code to operate simultaneously in production and development modes. We summarize the work we are doing to add HEDP capabilities to FLASH. We are adding (1) Spitzer conductivity, (2) super time-stepping to handle the disparity between diffusion and advection time scales, and (3) a description of electrons, ions, and radiation (in the diffusion approximation) by 3 temperatures (3T) to both the hydrodynamics and the MHD solvers. We are also adding (4) ray tracing, (5) laser energy deposition, and (6) a multi-species equation of state incorporating ionization to the hydrodynamics solver; and (7) Hall MHD, and (8) the Biermann battery term to the MHD solver.
Hybrid magneto-hydrodynamic simulation of a driven FRC
Rahman, H. U.; Wessel, F. J.; Binderbauer, M. W.; Qerushi, A.; Rostoker, N.; Conti, F.; Ney, P.
2014-03-15
We simulate a field-reversed configuration (FRC), produced by an “inductively driven” FRC experiment; comprised of a central-flux coil and exterior-limiter coil. To account for the plasma kinetic behavior, a standard 2-dimensional magneto-hydrodynamic code is modified to preserve the azimuthal, two-fluid behavior. Simulations are run for the FRC's full-time history, sufficient to include: acceleration, formation, current neutralization, compression, and decay. At start-up, a net ion current develops that modifies the applied-magnetic field forming closed-field lines and a region of null-magnetic field (i.e., a FRC). After closed-field lines form, ion-electron drag increases the electron current, canceling a portion of the ion current. The equilibrium is lost as the total current eventually dissipates. The time evolution and magnitudes of the computed current, ion-rotation velocity, and plasma temperature agree with the experiments, as do the rigid-rotor-like, radial-profiles for the density and axial-magnetic field [cf. Conti et al. Phys. Plasmas 21, 022511 (2014)].
Scaling supernova hydrodynamics to the laboratory
Kane, J.O.
1999-06-01
Supernova (SN) 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities, we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. Initial results were reported in J. Kane et al., Astrophys. J. 478, L75 (1997). The Nova laser is used to shock two-layer targets, producing Richtmyer-Meshkov (RM) and Rayleigh-Taylor (RT) instabilities at the interfaces between the layers, analogous to instabilities seen at the interfaces of SN 1987A. Because the hydrodynamics in the laser experiments at intermediate times (3-40 ns) and in SN 1987A at intermediate times (5 s to 10⁴ s) are well described by the Euler equations, the hydrodynamics scale between the two regimes. The experiments are modeled using the hydrodynamics codes HYADES and CALE, and the supernova code PROMETHEUS, thus serving as a benchmark for PROMETHEUS. Results of the experiments and simulations are presented. Analysis of the spike and bubble velocities in the experiment using potential flow theory and a modified Ott thin shell theory is presented. A numerical study of 2D vs. 3D differences in instability growth at the O-He and He-H interfaces of SN 1987A, and the design for analogous laser experiments, are presented. We discuss further work to incorporate more features of the SN in the experiments, including spherical geometry, multiple layers and density gradients. Past and ongoing work in laboratory and laser astrophysics is reviewed, including experimental work on supernova remnants (SNRs). A numerical study of RM instability in SNRs is presented.
Scaling supernova hydrodynamics to the laboratory
Kane, J.; Arnett, D.; Remington, B.A.; Glendinning, S.G.; Bazan, G.; Drake, R.P.; Fryxell, B.A.; Teyssier, R.
1999-05-01
Supernova (SN) 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities, we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. Initial results were reported in J. Kane et al. [Astrophys. J. 478, L75 (1997)] and B. A. Remington et al. [Phys. Plasmas 4, 1994 (1997)]. The Nova laser is used to generate a 10-15 Mbar shock at the interface of a two-layer planar target, which triggers perturbation growth due to the Richtmyer-Meshkov instability, and to the Rayleigh-Taylor instability as the interface decelerates. This resembles the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few ×10^3 s. The scaling of hydrodynamics on microscopic laser scales to the SN-size scales is presented. The experiment is modeled using the hydrodynamics codes HYADES [J. T. Larsen and S. M. Lane, J. Quant. Spectrosc. Radiat. Transf. 51, 179 (1994)] and CALE [R. T. Barton, Numerical Astrophysics (Jones and Bartlett, Boston, 1985), pp. 482-497], and the supernova code PROMETHEUS [P. R. Woodward and P. Colella, J. Comput. Phys. 54, 115 (1984)]. Results of the experiments and simulations are presented. Analysis of the spike-and-bubble velocities using potential flow theory and Ott thin-shell theory is presented, as well as a study of 2D versus 3D differences in perturbation growth at the He-H interface of SN 1987A.
Resurgence in extended hydrodynamics
NASA Astrophysics Data System (ADS)
Aniceto, Inês; Spaliński, Michał
2016-04-01
It has recently been understood that the hydrodynamic series generated by the Müller-Israel-Stewart theory is divergent and that this large-order behavior is consistent with the theory of resurgence. Furthermore, it was observed that the physical origin of this is the presence of a purely damped nonhydrodynamic mode. It is very interesting to ask whether this picture persists in cases where the spectrum of nonhydrodynamic modes is richer. We take the first step in this direction by considering the simplest hydrodynamic theory which, instead of the purely damped mode, contains a pair of nonhydrodynamic modes of complex conjugate frequencies. This mimics the pattern of black brane quasinormal modes which appear on the gravity side of the AdS/CFT description of N = 4 supersymmetric Yang-Mills plasma. We find that the resulting hydrodynamic series is divergent in a way consistent with resurgence and precisely encodes information about the nonhydrodynamic modes of the theory.
Consistent Hydrodynamics for Phase Field Crystals.
Heinonen, V; Achim, C V; Kosterlitz, J M; Ying, See-Chen; Lowengrub, J; Ala-Nissila, T
2016-01-15
We use the amplitude expansion in the phase field crystal framework to formulate an approach where the fields describing the microscopic structure of the material are coupled to a hydrodynamic velocity field. The model is shown to reduce to the well-known macroscopic theories in appropriate limits, including compressible Navier-Stokes and wave equations. Moreover, we show that the dynamics proposed allows for long wavelength phonon modes and demonstrate the theory numerically showing that the elastic excitations in the system are relaxed through phonon emission. PMID:26824543
NASA Astrophysics Data System (ADS)
Nicolai, Ph.; Feugeas, J. L.; Touati, M.; Breil, J.; Dubroca, B.; Nguyen-Buy, T.; Ribeyre, X.; Tikhonchuk, V.; Gus'kov, S.
2014-10-01
An issue to be addressed in Inertial Confinement Fusion (ICF) is the detailed description of the kinetic transport of relativistic or non-thermal electrons generated by the laser within the time and space scales of the imploded target hydrodynamics. We have developed at CELIA the M1 model, a fast, reduced kinetic model for relativistic electron transport, which has been implemented into the 2D radiation hydrodynamic code CHIC. In the framework of the Shock Ignition (SI) scheme, it has been shown in simplified conditions that the energy transferred by the non-thermal electrons from the corona to the compressed shell of an ICF target could be an important mechanism for the creation of ablation pressure. Nevertheless, in realistic configurations, taking the density profile and the electron energy spectrum into account, the target has to be carefully designed to avoid deleterious effects on compression efficiency. In addition, the electron energy deposition may modify the laser-driven shock formation and its propagation through the target. The non-thermal electron effects on the shock propagation will be analyzed in a realistic configuration.
Dispersive hydrodynamics: Preface
NASA Astrophysics Data System (ADS)
Biondini, G.; El, G. A.; Hoefer, M. A.; Miller, P. D.
2016-10-01
This Special Issue on Dispersive Hydrodynamics is dedicated to the memory and work of G.B. Whitham who was one of the pioneers in this field of physical applied mathematics. Some of the papers appearing here are related to work reported on at the workshop "Dispersive Hydrodynamics: The Mathematics of Dispersive Shock Waves and Applications" held in May 2015 at the Banff International Research Station. This Preface provides a broad overview of the field and summaries of the various contributions to the Special Issue, placing them in a unified context.
Synchronization via Hydrodynamic Interactions
NASA Astrophysics Data System (ADS)
Kendelbacher, Franziska; Stark, Holger
2013-12-01
An object moving in a viscous fluid creates a flow field that influences the motion of neighboring objects. We review examples from nature in the microscopic world where such hydrodynamic interactions synchronize beating or rotating filaments. Bacteria propel themselves using a bundle of rotating helical filaments called flagella which have to be synchronized in phase. Other micro-organisms are covered with a carpet of smaller filaments called cilia on their surfaces. They beat highly synchronized so that metachronal waves propagate along the cell surfaces. We explore both examples with the help of simple model systems and identify generic properties for observing synchronization by hydrodynamic interactions.
Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics
Lomov, I; Pember, R; Greenough, J; Liu, B
2005-10-18
We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single grid algorithm uses a second-order Godunov scheme with an approximate single fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified because highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion will be solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.
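The recursive advance-then-synchronize cycle described above is compact enough to sketch. The function below is a hypothetical illustration of the control flow only (not code from Geodyn or Raptor), with the per-level refinement ratios supplied as data and the conservation fix-up reduced to a comment.

```python
# Minimal sketch of the recursive AMR time advance: the coarse grid is
# advanced once, then each finer level sub-cycles r smaller steps to
# catch up, after which coarse and fine would be synchronized.

def advance_level(levels, lvl, dt, log):
    """Advance grid level `lvl` by dt, then sub-cycle any finer levels.
    `levels[i]` holds the refinement ratio of level i relative to i-1."""
    log.append((lvl, dt))              # advance this level one step
    if lvl + 1 < len(levels):
        r = levels[lvl + 1]            # refinement ratio of next level
        for _ in range(r):             # fine grid takes r smaller steps
            advance_level(levels, lvl + 1, dt / r, log)
        # here refluxing would remove conservation errors accumulated
        # during the separate coarse and fine advances
    return log

# Two-level hierarchy with refinement ratio 2 between levels 0 and 1:
steps = advance_level([1, 2], 0, 1.0, [])
```

Running this yields one coarse step followed by two half-size fine steps, mirroring the recursive procedure the abstract describes.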
Simple Waves in Ideal Radiation Hydrodynamics
Johnson, B M
2008-09-03
In the dynamic diffusion limit of radiation hydrodynamics, advection dominates diffusion; the latter primarily affects small scales and has negligible impact on the large scale flow. The radiation can thus be accurately regarded as an ideal fluid, i.e., radiative diffusion can be neglected along with other forms of dissipation. This viewpoint is applied here to an analysis of simple waves in an ideal radiating fluid. It is shown that much of the hydrodynamic analysis carries over by simply replacing the material sound speed, pressure and index with the values appropriate for a radiating fluid. A complete analysis is performed for a centered rarefaction wave, and expressions are provided for the Riemann invariants and characteristic curves of the one-dimensional system of equations. The analytical solution is checked for consistency against a finite difference numerical integration, and the validity of neglecting the diffusion operator is demonstrated. An interesting physical result is that for a material component with a large number of internal degrees of freedom and an internal energy greater than that of the radiation, the sound speed increases as the fluid is rarefied. These solutions are an excellent test for radiation hydrodynamic codes operating in the dynamic diffusion regime. The general approach may be useful in the development of Godunov numerical schemes for radiation hydrodynamics.
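The substitution the abstract describes is easiest to see against the standard gas-dynamic result. For one-dimensional isentropic flow of a polytropic gas, the Riemann invariants are (a textbook relation, quoted here for orientation):

```latex
J_{\pm} \;=\; u \,\pm\, \frac{2c}{\gamma - 1},
\qquad \text{constant along the characteristics} \quad
\frac{dx}{dt} = u \pm c ,
```

and, per the abstract, the radiating-fluid analysis carries over with the sound speed c and index γ replaced by the effective values of the combined matter-radiation fluid.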
New TVD Hydro Code for Modeling Disk-Planet Interactions
NASA Astrophysics Data System (ADS)
Mudryk, Lawrence; Murray, Norman
2004-06-01
We present test simulations of a TVD hydrodynamical code designed to use very few calculations per time step. The code is to be used to perform simulations of proto-planet interactions within gas disks in early solar systems.
Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.
2015-08-02
One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental
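The coded-aperture acquisition model lends itself to a short sketch. The snippet below is an illustrative forward model only (the array sizes and random masks are assumptions, not the authors' setup); the reconstruction step would invert this underdetermined system with a sparsity prior.

```python
import numpy as np

# T sub-frames of a dynamic scene are each multiplied by a per-pixel
# binary mask and summed into the single frame the camera reads out.
rng = np.random.default_rng(0)
T, H, W = 8, 4, 4
subframes = rng.random((T, H, W))           # fast dynamics to recover
masks = rng.integers(0, 2, (T, H, W))       # coded aperture patterns
measurement = (masks * subframes).sum(axis=0)   # one camera readout

# T sub-frames are folded into a single (H, W) frame; compressive
# sensing inversion would estimate `subframes` from `measurement`
# and the known `masks`.
```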
Three-dimensional Hybrid Continuum-Atomistic Simulations for Multiscale Hydrodynamics
Wijesinghe, S; Hornung, R; Garcia, A; Hadjiconstantinou, N
2004-04-15
We present an adaptive mesh and algorithmic refinement (AMAR) scheme for modeling multi-scale hydrodynamics. The AMAR approach extends standard conservative adaptive mesh refinement (AMR) algorithms by providing a robust flux-based method for coupling an atomistic fluid representation to a continuum model. The atomistic model is applied locally in regions where the continuum description is invalid or inaccurate, such as near strong flow gradients and at fluid interfaces, or when the continuum grid is refined to the molecular scale. The need for such "hybrid" methods arises from the fact that hydrodynamics modeled by continuum representations are often under-resolved or inaccurate while solutions generated using molecular resolution globally are not feasible. In the implementation described herein, Direct Simulation Monte Carlo (DSMC) provides an atomistic description of the flow and the compressible two-fluid Euler equations serve as our continuum-scale model. The AMR methodology provides local grid refinement while the algorithm refinement feature allows the transition to DSMC where needed. The continuum and atomistic representations are coupled by matching fluxes at the continuum-atomistic interfaces and by proper averaging and interpolation of data between scales. Our AMAR application code is implemented in C++ and is built upon the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) framework developed at Lawrence Livermore National Laboratory. SAMRAI provides the parallel adaptive gridding algorithm and enables the coupling between the continuum and atomistic methods.
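The flux-matching coupling can be illustrated with a one-dimensional toy: a conservative finite-volume update in which the face flux at the hybrid interface is supplied by the particle side. This is a minimal sketch of the conservation argument, not SAMRAI or AMAR code; all names are invented.

```python
def hybrid_step(cells, fluxes, dt, dx):
    """Conservative finite-volume update: fluxes[i] is the flux through
    the left face of cell i. At the continuum-atomistic interface the
    face flux would be tallied from DSMC particle crossings, so both
    descriptions see the identical flux and mass is conserved."""
    return [u - dt / dx * (fluxes[i + 1] - fluxes[i])
            for i, u in enumerate(cells)]

cells = [1.0, 2.0, 3.0]
fluxes = [0.0, 0.5, 0.25, 0.0]   # zero flux at the domain boundaries
new = hybrid_step(cells, fluxes, dt=0.1, dx=1.0)
# interior face fluxes cancel in the sum, so total mass is unchanged
```

Because each interior face flux appears once with each sign, whatever value the atomistic side reports, the hybrid scheme cannot create or destroy mass at the interface.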
Analytical model for ramp compression
NASA Astrophysics Data System (ADS)
Xue, Quanxi; Jiang, Shaoen; Wang, Zhebin; Wang, Feng; Hu, Yun; Ding, Yongkun
2016-08-01
An analytical ramp compression model for condensed matter, which can provide explicit solutions for isentropic compression flow fields, is reported. A ramp compression experiment can be easily designed according to the capability of the loading source using this model. Specifically, important parameters, such as the maximum isentropic region width, material properties, profile of the pressure pulse, and the pressure pulse duration, can be reasonably allocated or chosen. To demonstrate and study this model, laser-direct-driven ramp compression experiments and code simulations are performed, and the factors influencing the accuracy of the model are studied. The application and simulation show that this model can be used as guidance in the design of a ramp compression experiment. However, it is verified that further optimization work is required for a precise experimental design.
Context-Aware Image Compression
Chan, Jacky C. K.; Mahjoubfar, Ata; Chen, Claire L.; Jalali, Bahram
2016-01-01
We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementation of the warped stretch compression, here the decoding can be performed without the need of phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling. PMID:27367904
Supernova Hydrodynamics on the Omega Laser
R. Paul Drake
2004-01-16
The fundamental motivation for our work is that supernovae are not well understood. Recent observations have clarified the depth of our ignorance, by producing observed phenomena that current theory and computer simulations cannot reproduce. Such theories and simulations involve, however, a number of physical mechanisms that have never been studied in isolation. We perform experiments, in compressible hydrodynamics and radiation hydrodynamics, relevant to supernovae and supernova remnants. These experiments produce phenomena in the laboratory that are believed, based on simulations, to be important to astrophysics but that have not been directly observed in either the laboratory or an astrophysical system. During the period of this grant, we have focused on the scaling of an astrophysically relevant, radiative-precursor shock, on preliminary studies of collapsing radiative shocks, and on the multimode behavior and the three-dimensional, deeply nonlinear evolution of the Rayleigh-Taylor (RT) instability at a decelerating, embedded interface. These experiments required strong compression and decompression, strong shocks (Mach ~10 or greater), flexible geometries, and very smooth laser beams, which means that the 60-beam Omega laser is the only facility capable of carrying out this program.
SPHGR: Smoothed-Particle Hydrodynamics Galaxy Reduction
NASA Astrophysics Data System (ADS)
Thompson, Robert
2015-02-01
SPHGR (Smoothed-Particle Hydrodynamics Galaxy Reduction) is a Python-based, open-source framework for analyzing smoothed-particle hydrodynamic simulations. Its basic form can run a baryonic group finder to identify galaxies and a halo finder to identify dark matter halos; it can also assign said galaxies to their respective halos, calculate halo and galaxy global properties, and iterate through previous time steps to identify the most-massive progenitors of each halo and galaxy. Data about each individual halo and galaxy are collated and easy to access. SPHGR supports a wide range of simulation types, including N-body, full cosmological volumes, and zoom-in runs. Support for multiple SPH code outputs is provided by pyGadgetReader (ascl:1411.001), mainly Gadget (ascl:0003.001) and TIPSY (ascl:1111.015).
Protostellar Collapse Using Multigroup Radiation Hydrodynamics
NASA Astrophysics Data System (ADS)
Vaytet, N.; Chabrier, G.; Audit, E.; Commerçon, B.; Masson, J.; González, M.; Ferguson, J.; Delahaye, F.
2015-10-01
Many simulations of protostellar collapse make use of a grey treatment of radiative transfer coupled to the hydrodynamics. However, interstellar gas and dust opacities present large variations as a function of frequency. In this paper, we present multigroup radiation hydrodynamics simulations of the collapse of a spherically symmetric cloud and the formation of the first and second Larson cores. We have used a non-ideal gas equation of state as well as an extensive set of spectral opacities. Small differences between grey and multigroup simulations were observed. The first and second core accretion shocks were found to be super- and sub-critical, respectively. Varying the initial size and mass of the parent cloud had little impact on the core properties (especially for the second core). We finally present early results from 3D simulations that were performed using the RAMSES code.
Hydrodynamically Lubricated Rotary Shaft Having Twist Resistant Geometry
Dietle, Lannie; Gobeli, Jeffrey D.
1993-07-27
A hydrodynamically lubricated squeeze-packing-type rotary shaft seal with a cross-sectional geometry suitable for pressurized lubricant retention is provided which, in the preferred embodiment, incorporates a protuberant static sealing interface that, compared to prior art, dramatically improves the exclusionary action of the dynamic sealing interface in low pressure and unpressurized applications by achieving symmetrical deformation of the seal at the static and dynamic sealing interfaces. In abrasive environments, the improved exclusionary action results in a dramatic reduction of seal and shaft wear, compared to prior art, and provides a significant increase in seal life. The invention also increases seal life by making higher levels of initial compression possible, compared to prior art, without compromising hydrodynamic lubrication; this added compression makes the seal more tolerant of compression set, abrasive wear, mechanical misalignment, dynamic runout, and manufacturing tolerances, and also makes hydrodynamic seals with smaller cross-sections more practical. In alternate embodiments, the benefits enumerated above are achieved by cooperative configurations of the seal and the gland which achieve symmetrical deformation of the seal at the static and dynamic sealing interfaces. The seal may also be configured such that predetermined radial compression deforms it to a desired operative configuration, even though symmetrical deformation is lacking.
Computational brittle fracture using smooth particle hydrodynamics
Mandell, D.A.; Wingate, C.A.; Schwalbe, L.A.
1996-10-01
We are developing statistically based, brittle-fracture models and are implementing them into hydrocodes that can be used for designing systems with components of ceramics, glass, and/or other brittle materials. Because of the advantages it has in simulating fracture, we are working primarily with the smooth particle hydrodynamics code SPBM. We describe a new brittle fracture model that we have implemented into SPBM. To illustrate the code's current capability, we have simulated a number of experiments. We discuss three of these simulations in this paper. The first experiment consists of a brittle steel sphere impacting a plate. The experimental sphere fragment patterns are compared to the calculations. The second experiment is a steel flyer plate in which the recovered steel target crack patterns are compared to the calculated crack patterns. We also briefly describe a simulation of a tungsten rod impacting a heavily confined alumina target, which has been recently reported on in detail.
Reexamination of quantum data compression and relative entropy
NASA Astrophysics Data System (ADS)
Kaltchenko, Alexei
2008-08-01
Schumacher and Westmoreland [Phys. Rev. A 64, 42304 (2001)] have established a quantum analog of a well-known classical information theory result on a role of relative entropy as a measure of nonoptimality in (classical) data compression. In this paper, we provide an alternative simple and constructive proof of this result by constructing quantum compression codes (schemes) from classical data compression codes. Moreover, as the quantum data compression or coding task can be effectively reduced to a (quasi)classical one, we show that relevant results from classical information theory and data compression become applicable and therefore can be extended to the quantum domain.
Syndrome source coding and its universal generalization
NASA Technical Reports Server (NTRS)
Ancheta, T. C., Jr.
1975-01-01
A method of using error-correcting codes to obtain data compression, called syndrome source coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome source coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome source coding is formulated which provides robustly effective, distortionless coding of source ensembles.
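The idea becomes concrete with a tiny example. Using the (7,4) Hamming parity-check matrix, a sparse 7-bit source block is "compressed" to its 3-bit syndrome, and decoding recovers the block exactly whenever it is a correctable error pattern (here, weight at most one). This is an illustrative sketch of the scheme's mechanics, not the paper's construction.

```python
import numpy as np

# Syndrome source coding with a (7,4) Hamming code: a sparse binary
# source block is treated as an "error pattern" and its 3-bit syndrome
# is the compressed representation (7 bits -> 3 bits). Exact recovery
# requires the block to be a correctable pattern (at most one 1 here).

H = np.array([[1, 0, 1, 0, 1, 0, 1],   # parity-check matrix; column i
              [0, 1, 1, 0, 0, 1, 1],   # is the binary expansion of i+1
              [0, 0, 0, 1, 1, 1, 1]])

def compress(x):
    return H @ x % 2                    # the 3-bit syndrome

def decompress(s):
    x = np.zeros(7, dtype=int)
    pos = int(s[0]) + 2 * int(s[1]) + 4 * int(s[2])  # column index + 1
    if pos:
        x[pos - 1] = 1                  # place the single recovered 1
    return x

source = np.array([0, 0, 0, 0, 1, 0, 0])   # sparse source block
assert (decompress(compress(source)) == source).all()
```

The all-zero block maps to the zero syndrome, and each weight-one block maps to a distinct nonzero syndrome, so all eight correctable patterns round-trip losslessly.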
Magnetic field compression of an accelerated compact toroid in a conical drift section
NASA Astrophysics Data System (ADS)
Horton, R. D.; Hwang, D. Q.; Evans, R. W.; Liu, F.; Klauser, R.; Umont, Glenn
2008-11-01
There are numerous applications for spheromak-like compact toroids (SCTs) with high plasma density and internal magnetic field. Previous experiments have demonstrated density and field compression of SCTs using coaxial conical electrodes [1,2]. For some applications, however, use of a central electrode may not be practical, and compression must be performed by tapering the outer electrode alone. A tapered conical electrode has been added to the CTIX device to measure magnetic field compression in this geometry. The absence of a center electrode allows magnetic field to be measured via magnetic probes at an adjustable range of axial positions, or by conventional recessed probes on the outer electrode at fixed positions. The field data serves as a benchmark for a smoothed-particle hydrodynamics (SPH) code currently under development. Results will be used to optimize compression cone geometry for the best conversion of SCT kinetic energy into thermal and magnetic energy. [1] J. H. Hammer, et al., PRL 61, 2843 (1988) [2] A.W. Molvik et al., PRL 66, 165 (1991)
Hydrodynamics of Turning Flocks.
Yang, Xingbo; Marchetti, M Cristina
2015-12-18
We present a hydrodynamic model of flocking that generalizes the familiar Toner-Tu equations to incorporate turning inertia of well-polarized flocks. The continuum equations, controlled by only two dimensionless parameters, orientational inertia and alignment strength, are derived by coarse-graining the inertial spin model recently proposed by Cavagna et al. The interplay between orientational inertia and bend elasticity of the flock yields anisotropic spin waves that mediate the propagation of turning information throughout the flock. The coupling of the spin-current density to the local vorticity field through a nonlinear friction gives rise to a hydrodynamic mode with angular-dependent propagation speed at long wavelengths. This mode becomes unstable as a result of the growth of bend and splay deformations augmented by the spin wave, signaling the transition to complex spatiotemporal patterns of continuously turning and swirling flocks.
Fluctuations in relativistic causal hydrodynamics
NASA Astrophysics Data System (ADS)
Kumar, Avdhesh; Bhatt, Jitesh R.; Mishra, Ananta P.
2014-05-01
Formalism to calculate the hydrodynamic fluctuations by applying the Onsager theory to the relativistic Navier-Stokes equation is already known. In this work, we calculate hydrodynamic fluctuations within the framework of the second order hydrodynamics of Müller, Israel and Stewart and its generalization to the third order. We have also calculated the fluctuations for several other causal hydrodynamical equations. We show that the form of the Onsager coefficients and the form of the correlation functions remain the same as those obtained from the relativistic Navier-Stokes equation and do not depend on any specific model of hydrodynamics. Further, we numerically investigate the evolution of the correlation function using the one-dimensional boost-invariant (Bjorken) flow. We compare the correlation functions obtained using causal hydrodynamics with the correlation function for the relativistic Navier-Stokes equation. We find that the qualitative behavior of the correlation functions remains the same for all the causal hydrodynamics models.
NASA Astrophysics Data System (ADS)
Marcus, Philip S.; Pei, Suyang; Jiang, Chung-Hsiang; Barranco, Joseph A.; Hassanzadeh, Pedram; Lecoanet, Daniel
2015-07-01
There is considerable interest in hydrodynamic instabilities in dead zones of protoplanetary disks as a mechanism for driving angular momentum transport and as a source of particle-trapping vortices to mix chondrules and incubate planetesimal formation. We present simulations with a pseudo-spectral anelastic code and with the compressible code Athena, showing that stably stratified flows in a shearing, rotating box are violently unstable and produce space-filling, sustained turbulence dominated by large vortices with Rossby numbers of order ˜0.2–0.3. This Zombie Vortex Instability (ZVI) is observed in both codes and is triggered by Kolmogorov turbulence with Mach numbers less than ˜0.01. It is a common view that if a given constant density flow is stable, then stable vertical stratification should make the flow even more stable. Yet, we show that sufficient vertical stratification can be unstable to ZVI. ZVI is robust and requires no special tuning of boundary conditions, or initial radial entropy or vortensity gradients (though we have studied ZVI only in the limit of infinite cooling time). The resolution of this paradox is that stable stratification allows for a new avenue to instability: baroclinic critical layers. ZVI has not been seen in previous studies of flows in rotating, shearing boxes because those calculations frequently lacked vertical density stratification and/or sufficient numerical resolution. Although we do not expect appreciable angular momentum transport from ZVI in the small domains in this study, we hypothesize that ZVI in larger domains with compressible equations may lead to angular transport via spiral density waves.
Compressing subbanded image data with Lempel-Ziv-based coders
NASA Technical Reports Server (NTRS)
Glover, Daniel; Kwatra, S. C.
1993-01-01
A method of improving the compression of image data using Lempel-Ziv-based coding is presented. Image data is first processed with a simple transform, such as the Walsh Hadamard Transform, to produce subbands. The subbanded data can be rounded to eight bits or it can be quantized for higher compression at the cost of some reduction in the quality of the reconstructed image. The data is then run-length coded to take advantage of the large runs of zeros produced by quantization. Compression results are presented and contrasted with a subband compression method using quantization followed by run-length coding and Huffman coding. The Lempel-Ziv-based coding in conjunction with run-length coding produces the best compression results at the same reconstruction quality (compared with the Huffman-based coding) on the image data used.
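The pipeline in this abstract (quantize, run-length code the zero runs, then apply a Lempel-Ziv-based coder) can be sketched as follows. zlib's deflate stands in for the Lempel-Ziv stage, and the escape convention assumes nonzero subband symbols; this is a simplification, not the paper's exact format.

```python
import zlib

# Quantized subband data contain long runs of zeros; run-length code
# them as (0, run_length) pairs, then hand the result to an LZ-based
# coder (zlib's deflate here).

def rle_zeros(data):
    """Replace runs of zeros with (0, run_length) pairs (runs capped
    at 255 so each count fits in one byte)."""
    out, i = [], 0
    while i < len(data):
        if data[i] == 0:
            j = i
            while j < len(data) and data[j] == 0 and j - i < 255:
                j += 1
            out += [0, j - i]
            i = j
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

quantized = bytes([7, 0, 0, 0, 0, 0, 0, 0, 0, 3] + [0] * 54)
packed = zlib.compress(rle_zeros(quantized))
```

On this 64-byte block the run-length stage alone shrinks the data to six bytes, and the Lempel-Ziv stage then exploits whatever repetition remains across blocks.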
NASA Technical Reports Server (NTRS)
Hilbert, E. E.; Lee, J.; Rice, R. F.; Schlutsmeyer, A. P.
1981-01-01
Compressing technique calculates activity estimator for each segment of image line. Estimator is used in conjunction with allowable bits per line, N, to determine number of bits necessary to code each segment and which segments can tolerate truncation. Preprocessed line data are then passed to adaptive variable-length coder, which selects optimum transmission code. Method increases capacity of broadcast and cable television transmissions and helps reduce size of storage medium for video and digital audio recordings.
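The activity-estimator idea can be illustrated with a minimal sketch. The abstract does not specify the estimator or the clamping rules, so both are assumptions here: activity is taken as the sum of absolute first differences in each segment, and the per-line bit budget is split roughly in proportion to activity.

```python
def allocate_bits(line, n_segments, total_bits, min_bits=2, max_bits=8):
    """Estimate each segment's activity as the sum of absolute first
    differences (assumed estimator), then split the per-line bit budget
    roughly in proportion to activity, clamped to [min_bits, max_bits]."""
    seg_len = len(line) // n_segments
    segments = [line[k * seg_len:(k + 1) * seg_len] for k in range(n_segments)]
    activity = [sum(abs(s[j + 1] - s[j]) for j in range(len(s) - 1)) + 1
                for s in segments]  # +1 avoids a zero total
    total = sum(activity)
    return [max(min_bits, min(max_bits, round(total_bits * a / total)))
            for a in activity]

# A flat segment tolerates truncation; a busy segment gets more bits.
bits = allocate_bits([0] * 8 + [0, 9, 0, 9, 0, 9, 0, 9], 2, 16)
```

Segments whose allocation hits the lower clamp are the ones that "can tolerate truncation" in the abstract's terms.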
Combining Hydrodynamic and Evolution Calculations of Rotating Stars
NASA Astrophysics Data System (ADS)
Deupree, R. G.
1996-12-01
Rotation has two primary effects on stellar evolutionary models: the direct influence on model structure produced by the rotational terms, and the indirect influence of rotational instabilities that redistribute angular momentum and composition inside the model. Using a two-dimensional, fully implicit finite-difference code, I can follow events on both evolutionary and hydrodynamic timescales, allowing the simulation of both effects. However, several issues concerning how to integrate the results of hydrodynamic runs into evolutionary runs must be examined. The schemes I have devised for integrating the hydrodynamic simulations into evolutionary calculations are outlined, and their positive and negative features are summarized. The practical differences among the various schemes are small, and a successful marriage between hydrodynamic and evolution calculations is possible.
Blaedel, Kenneth L.; Davis, Pete J.; Landram, Charles S.
2000-01-01
A saw having a self-pumped hydrodynamic blade guide or bearing for retaining the saw blade in a centered position in the saw kerf (the width of cut made by the saw). The hydrodynamic blade guide or bearing utilizes pockets or grooves incorporated into the sides of the blade. The saw kerf in the workpiece provides the guide or bearing stator surface. Both sides of the blade entrain cutting fluid as the blade enters the kerf in the workpiece, and the trapped fluid provides pressure between the blade and the workpiece as an inverse function of the gap between the blade surface and the workpiece surface. If the blade wanders from the center of the kerf, one gap will increase and the other will decrease, and the consequent pressure difference between the two sides of the blade will cause the blade to re-center itself in the kerf. Saws using the hydrodynamic blade guide or bearing have particular application in slicing slabs from boules of single-crystal materials, for example, as well as in cutting other difficult-to-saw materials such as ceramics, glass, and brittle composites.
Hydrodynamics of fossil fishes
Fletcher, Thomas; Altringham, John; Peakall, Jeffrey; Wignall, Paul; Dorrell, Robert
2014-01-01
From their earliest origins, fishes have developed a suite of adaptations for locomotion in water, which determine performance and ultimately fitness. Even without data from behaviour, soft tissue and extant relatives, it is possible to infer a wealth of palaeobiological and palaeoecological information. As in extant species, aspects of gross morphology such as streamlining, fin position and tail type are optimized even in the earliest fishes, indicating similar life strategies have been present throughout their evolutionary history. As hydrodynamical studies become more sophisticated, increasingly complex fluid movement can be modelled, including vortex formation and boundary layer control. Drag-reducing riblets ornamenting the scales of fast-moving sharks have been subjected to particularly intense research, but this has not been extended to extinct forms. Riblets are a convergent adaptation seen in many Palaeozoic fishes, and probably served a similar hydrodynamic purpose. Conversely, structures which appear to increase skin friction may act as turbulisors, reducing overall drag while serving a protective function. Here, we examine the diverse adaptations that contribute to drag reduction in modern fishes and review the few attempts to elucidate the hydrodynamics of extinct forms. PMID:24943377
Hydrodynamics of insect spermatozoa
NASA Astrophysics Data System (ADS)
Pak, On Shun; Lauga, Eric
2010-11-01
Microorganism motility plays important roles in many biological processes including reproduction. Many microorganisms propel themselves by propagating traveling waves along their flagella. Depending on the species, propagation of planar waves (e.g. Ceratium) and helical waves (e.g. Trichomonas) has been observed in eukaryotic flagellar motion, and hydrodynamic models for both have been proposed. However, the motility of insect spermatozoa remains largely unexplored. An interesting morphological feature of such cells, first observed in Tenebrio molitor and Bacillus rossius, is the double helical deformation pattern along the flagella, which is characterized by the presence of two superimposed helical flagellar waves (one with a large amplitude and low frequency, and the other with a small amplitude and high frequency). Here we present the first hydrodynamic investigation of the locomotion of insect spermatozoa. The swimming kinematics, trajectories and hydrodynamic efficiency of the swimmer are computed based on the prescribed double helical deformation pattern. We then compare our theoretical predictions with experimental measurements, and explore the dependence of the swimming performance on the geometric and dynamical parameters.
NASA Technical Reports Server (NTRS)
1996-01-01
Optivision developed two PC-compatible boards and associated software under a Goddard Space Flight Center Small Business Innovation Research grant for NASA applications in areas such as telerobotics, telesciences and spaceborne experimentation. From this technology, the company used its own funds to develop commercial products, the OPTIVideo MPEG Encoder and Decoder, which are used for realtime video compression and decompression. They are used in commercial applications including interactive video databases and video transmission. The encoder converts video source material to a compressed digital form that can be stored or transmitted, and the decoder decompresses bit streams to provide high quality playback.
Yaakobi, B.; Boehly, T. R.; Sangster, T. C.; Meyerhofer, D. D.; Remington, B. A.; Allen, P. G.; Pollaine, S. M.; Lorenzana, H. E.; Lorenz, K. T.; Hawreliak, J. A.
2008-06-15
The use of in situ extended x-ray absorption fine structure (EXAFS) for characterizing nanosecond laser-shocked vanadium, titanium, and iron has recently been demonstrated. These measurements are extended to laser-driven, quasi-isentropic compression experiments (ICE). The radiation source (backlighter) for EXAFS in all of these experiments is obtained by imploding a spherical target on the OMEGA laser [T. R. Boehly et al., Rev. Sci. Instrum. 66, 508 (1995)]. Isentropic compression (where the entropy is kept constant) makes it possible to reach high compressions at relatively low temperatures. The absorption spectra are used to determine the temperature and compression in a vanadium sample quasi-isentropically compressed to pressures of up to ~0.75 Mbar. The ability to measure the temperature and compression directly is unique to EXAFS. The drive pressure is calibrated by substituting aluminum for the vanadium and interferometrically measuring the velocity of the back target surface by the velocity interferometer system for any reflector (VISAR). The experimental results obtained by EXAFS and VISAR agree with each other and with the simulations of a hydrodynamic code. The role of a shield to protect the sample from impact heating is studied. It is shown that the shield produces an initial weak shock that is followed by a quasi-isentropic compression at a relatively low temperature. The role of radiation heating from the imploding target as well as from the laser-absorption region is studied. The results show that in laser-driven ICE, as compared with laser-driven shocks, comparable compressions can be achieved at lower temperatures. The EXAFS results show important details not seen in the VISAR results.
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Nonlinear hydrodynamics of cosmological sheets. 1: Numerical techniques and tests
NASA Astrophysics Data System (ADS)
Anninos, Wenbo Y.; Norman, Michael J.
1994-07-01
We present the numerical techniques and tests used to construct and validate a computer code designed to study the multidimensional nonlinear hydrodynamics of large-scale sheet structures in the universe, especially the fragmentation of such structures under various instabilities. This code is composed of two codes: the hydrodynamical code ZEUS-2D and a particle-mesh code. The ZEUS-2D code solves the hydrodynamical equations in two dimensions using explicit Eulerian finite-difference techniques, with modifications made to incorporate the expansion of the universe and the gas cooling due to Compton scattering, bremsstrahlung, and hydrogen and helium cooling. The particle-mesh code solves the equation of motion for the collisionless dark matter. The code uses two-dimensional Cartesian coordinates with a nonuniform grid in one direction to provide high resolution for the sheet structures. A series of one-dimensional and two-dimensional linear perturbation tests are presented which are designed to test the hydro solver and the Poisson solver with and without the expansion of the universe. We also present a radiative shock wave test designed to ensure the code's capability to handle radiative cooling properly. Finally, a series of one-dimensional Zel'dovich pancake tests of the dark matter code and the hydro solver in the nonlinear regime is discussed and compared with the results of Bond et al. (1984) and Shapiro & Struck-Marcell (1985). Overall, the code is shown to produce accurate and stable results, providing a powerful tool for further studies.
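A minimal example of the explicit Eulerian finite-difference approach used by codes like ZEUS-2D, reduced here to first-order upwind advection of a scalar on a periodic 1-D grid. This is only the skeleton of the method: ZEUS itself uses higher-order interpolation, a staggered mesh, and the full hydrodynamic equations.

```python
def upwind_advect(q, u, dx, dt):
    """One explicit first-order upwind step for dq/dt + u*dq/dx = 0
    on a periodic 1-D grid (u > 0 assumed); q[i-1] wraps around via
    Python's negative indexing."""
    c = u * dt / dx  # Courant number; stability requires c <= 1
    return [q[i] - c * (q[i] - q[i - 1]) for i in range(len(q))]

# Advect a top-hat profile; the scheme is conservative and monotone.
q = [1.0 if 4 <= i < 8 else 0.0 for i in range(16)]
for _ in range(8):
    q = upwind_advect(q, u=1.0, dx=1.0, dt=0.5)
```

The explicit update only touches nearest neighbors, which is what makes such schemes straightforward to vectorize and parallelize, at the cost of a CFL-limited time step.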
NASA Technical Reports Server (NTRS)
Shapiro, Wilbur
1991-01-01
The industrial codes will consist of modules of 2-D and simplified 2-D or 1-D codes, intended for expeditious parametric studies, analysis, and design of a wide variety of seals. Integration into a unified system is accomplished by the industrial Knowledge Based System (KBS), which will also provide user-friendly interaction, context-sensitive and hypertext help, design guidance, and an expandable database. The types of analysis to be included with the industrial codes are interfacial performance (leakage, load, stiffness, friction losses, etc.), thermoelastic distortions, and dynamic response to rotor excursions. The first three codes to be completed, which are presently being incorporated into the KBS, include the incompressible cylindrical code, ICYL, and the compressible cylindrical code, GCYL.
Impact modeling with Smooth Particle Hydrodynamics
Stellingwerf, R.F.; Wingate, C.A.
1993-07-01
Smooth Particle Hydrodynamics (SPH) can be used to model hypervelocity impact phenomena via the addition of a strength of materials treatment. SPH is the only technique that can model such problems efficiently due to the combination of 3-dimensional geometry, large translations of material, large deformations, and large void fractions for most problems of interest. This makes SPH an ideal candidate for modeling of asteroid impact, spacecraft shield modeling, and planetary accretion. In this paper we describe the derivation of the strength equations in SPH, show several basic code tests, and present several impact test cases with experimental comparisons.
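The core of any SPH scheme is the kernel-weighted summation interpolant. Below is a minimal 1-D sketch using the standard cubic spline kernel; real impact codes such as the one described are 3-D and add material strength, artificial viscosity, and neighbor search, none of which appear here.

```python
def w_cubic(r, h):
    """Cubic spline SPH smoothing kernel in 1-D (normalization 2/(3h),
    compact support of radius 2h)."""
    q = r / h
    sigma = 2.0 / (3.0 * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(x, m, h):
    """Summation interpolant rho_i = sum_j m_j * W(|x_i - x_j|, h).
    O(N^2) here; production codes use neighbor lists."""
    return [sum(mj * w_cubic(abs(xi - xj), h) for xj, mj in zip(x, m))
            for xi in x]

# Uniformly spaced unit-density particles recover rho = 1 away from edges.
x = [0.1 * i for i in range(40)]
rho = sph_density(x, m=[0.1] * 40, h=0.1)
```

Because every field is carried by particles rather than a mesh, large translations, deformations, and voids cost nothing extra, which is the efficiency argument the abstract makes.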
Hydrodynamic Simulations of Contact Binaries
NASA Astrophysics Data System (ADS)
Kadam, Kundan; Clayton, Geoffrey C.; Frank, Juhan; Marcello, Dominic; Motl, Patrick M.; Staff, Jan E.
2015-01-01
The motivation for our project is the peculiar case of the "red nova" V1309 Sco, which erupted in September 2008. The progenitor was, in fact, a contact binary system. We are developing simulations of contact binaries so that their formation, structural, and merger properties can be studied using hydrodynamics codes. The observed transient event was the disruption of the secondary star by the primary and their subsequent merger into a single star; hence, to replicate this behavior, we need a core-envelope structure for both stars. We achieve this using a combination of the Self-Consistent Field (SCF) technique and composite polytropes, also known as bipolytropes. So far we have been able to generate close binaries with various mass ratios. Another consequence of using bipolytropes is that, according to theoretical calculations, the radius of a star should expand when the core mass fraction exceeds a critical value, with interesting consequences in a binary system. We present some initial results of these simulations.
Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A
2011-11-04
The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV Li+ ion beam, delivered in a bunch with a characteristic pulse duration of 1 ns and a transverse dimension of order 1 mm. NDCX II will be used in studies of material in the warm dense matter (WDM) regime and in ion beam/hydrodynamic coupling experiments relevant to heavy-ion-based inertial fusion energy. We discuss recent efforts to adapt the 3D ALE-AMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCX II will explore the process of bubble and droplet formation (two-phase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beam-to-target energy coupling efficiency. Ion beams allow precise control of local beam energy deposition, providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALE-AMR code has no export control restrictions, is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL, and has been shown to scale well to thousands of CPUs. New surface tension models are being implemented and applied to WDM experiments. Some of the approaches use a diffuse-interface surface tension model based on the advective Cahn-Hilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations; other methods require seeding for droplet breakup. We also briefly discuss the effects of the move to exascale computing and related issues.
The moving mesh code SHADOWFAX
NASA Astrophysics Data System (ADS)
Vandenbroucke, B.; De Rijcke, S.
2016-07-01
We introduce the moving mesh code SHADOWFAX, which can be used to evolve a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. The code is written in C++ and its source code is made available to the scientific community under the GNU Affero General Public Licence. We outline the algorithm and the design of our implementation, and demonstrate its validity through the results of a set of basic test problems, which are also part of the public version. We also compare SHADOWFAX with a number of other publicly available codes using different hydrodynamical integration schemes, illustrating the advantages and disadvantages of the moving mesh technique.
Low torque hydrodynamic lip geometry for rotary seals
Dietle, Lannie L.; Schroeder, John E.
2015-07-21
A hydrodynamically lubricating geometry for the generally circular dynamic sealing lip of rotary seals that are employed to partition a lubricant from an environment. The dynamic sealing lip is provided for establishing compressed sealing engagement with a relatively rotatable surface, and for wedging a film of lubricating fluid into the interface between the dynamic sealing lip and the relatively rotatable surface in response to relative rotation that may occur in the clockwise or the counter-clockwise direction. A wave form incorporating an elongated dimple provides the gradual convergence, efficient impingement angle, and gradual interfacial contact pressure rise that are conducive to efficient hydrodynamic wedging. Skewed elevated contact pressure zones produced by compression edge effects provide for controlled lubricant movement within the dynamic sealing interface between the seal and the relatively rotatable surface, producing enhanced lubrication and low running torque.
Nonlinear Generalized Hydrodynamic Wave Equations in Strongly Coupled Dusty Plasmas
Veeresha, B. M.; Sen, A.; Kaw, P. K.
2008-09-07
A set of nonlinear equations for the study of low frequency waves in a strongly coupled dusty plasma medium is derived using the phenomenological generalized hydrodynamic (GH) model and is used to study the modulational stability of dust acoustic waves to parallel perturbations. Dust compressibility contributions arising from strong Coulomb coupling effects are found to introduce significant modifications in the threshold and range of the instability domain.
MUFASA: galaxy formation simulations with meshless hydrodynamics
NASA Astrophysics Data System (ADS)
Davé, Romeel; Thompson, Robert; Hopkins, Philip F.
2016-11-01
We present the MUFASA suite of cosmological hydrodynamic simulations, which employs the GIZMO meshless finite mass (MFM) code including H2-based star formation, nine-element chemical evolution, two-phase kinetic outflows following scalings from the Feedback in Realistic Environments zoom simulations, and evolving halo mass-based quenching. Our fiducial (50 h-1 Mpc)3 volume is evolved to z = 0 with a quarter billion elements. The predicted galaxy stellar mass functions (GSMFs) reproduce observations from z = 4 → 0 to within ≲1.2σ in cosmic variance, providing an unprecedented match to this key diagnostic. The cosmic star formation history and stellar mass growth show general agreement with data, with a strong archaeological downsizing trend such that dwarf galaxies form the majority of their stars after z ~ 1. We run 25 and 12.5 h-1 Mpc volumes to z = 2 with identical feedback prescriptions, the latter resolving all hydrogen-cooling haloes, and the three runs display fair resolution convergence. The specific star formation rates broadly agree with data at z = 0, but are underpredicted at z ~ 2 by a factor of 3, re-emphasizing a longstanding puzzle in galaxy evolution models. We compare runs using MFM and two flavours of smoothed particle hydrodynamics, and show that the GSMF is sensitive to hydrodynamics methodology at the ~×2 level, which is sub-dominant to choices for parametrizing feedback.
Compression research on the REINAS Project
NASA Technical Reports Server (NTRS)
Rosen, Eric; Macy, William; Montague, Bruce R.; Pi-Sunyer, Carles; Spring, Jim; Kulp, David; Long, Dean; Langdon, Glen, Jr.; Pang, Alex; Wittenbrink, Craig M.
1995-01-01
We present approaches to integrating data compression technology into a database system designed to support research of air, sea, and land phenomena of interest to meteorology, oceanography, and earth science. A key element of the Real-Time Environmental Information Network and Analysis System (REINAS) system is the real-time component: to provide data as soon as acquired. Compression approaches being considered for REINAS include compression of raw data on the way into the database, compression of data produced by scientific visualization on the way out of the database, compression of modeling results, and compression of database query results. These compression needs are being incorporated through client-server, API, utility, and application code development.
Molecular Hydrodynamics from Memory Kernels.
Lesnicki, Dominika; Vuilleumier, Rodolphe; Carof, Antoine; Rotenberg, Benjamin
2016-04-01
The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as t^{-3/2}. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, which is at odds with incompressible hydrodynamics predictions. Lastly, we discuss the various contributions to the friction, the associated time scales, and the crossover between the molecular and hydrodynamic regimes upon increasing the solute radius. PMID:27104730
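The algebraic t^(-3/2) decay can be checked on data by estimating the log-log slope of the kernel's tail. A minimal sketch on synthetic data follows; the synthetic kernel and fitting window are illustrative assumptions, not the paper's analysis.

```python
import math

def tail_exponent(t, k):
    """Least-squares slope of log k versus log t, i.e. the exponent
    alpha in k(t) ~ t^alpha."""
    xs = [math.log(v) for v in t]
    ys = [math.log(v) for v in k]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

# Synthetic kernel tail with the hydrodynamic t^(-3/2) decay.
t = [1.0 + 0.1 * i for i in range(100)]
alpha = tail_exponent(t, [ti ** -1.5 for ti in t])  # close to -1.5
```

On simulation data the fit should of course be restricted to the long-time window, after the molecular-regime contributions have decayed.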
Erlich, Yaniv; Gordon, Assaf; Brand, Michael; Hannon, Gregory J.; Mitra, Partha P.
2011-01-01
Over the past three decades we have steadily increased our knowledge on the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers for severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the 'traditional' compressed sensing and group testing frameworks in order to address biological and technical constraints of our setting. PMID:21451737
MAESTRO: An Adaptive Low Mach Number Hydrodynamics Algorithm for Stellar Flows
NASA Astrophysics Data System (ADS)
Nonaka, Andrew; Almgren, A. S.; Bell, J. B.; Malone, C. M.; Zingale, M.
2010-01-01
Many astrophysical phenomena are highly subsonic, requiring specialized numerical methods suitable for long-time integration. We present MAESTRO, a low Mach number stellar hydrodynamics code that can be used to simulate long-time, low-speed flows that would be prohibitively expensive to model using traditional compressible codes. MAESTRO is based on an equation set that we have derived using low Mach number asymptotics; this equation set does not explicitly track acoustic waves and thus allows a significant increase in the time step. MAESTRO is suitable for two- and three-dimensional local atmospheric flows as well as three-dimensional full-star flows, and uses adaptive mesh refinement (AMR) to locally refine grids in regions of interest. Our initial scientific applications include the convective phase of Type Ia supernovae and Type I X-ray bursts on neutron stars. The work at LBNL was supported by the SciDAC Program of the DOE Office of Advanced Scientific Computing Research under contract No. DE-AC02-05CH11231. The work at Stony Brook was supported by the DOE Office of Nuclear Physics, grant No. DE-FG02-06ER41448. We made use of Jaguar, via a DOE INCITE allocation, at the OLCF at ORNL, and of Franklin at NERSC at LBNL.
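The payoff of filtering acoustic waves can be quantified with a back-of-the-envelope estimate: an explicit compressible code is limited to dt ~ dx/(|u|+c) by the acoustic CFL condition, while a low Mach number method can advance on dt ~ dx/|u|, a gain of roughly 1 + 1/M.

```python
def timestep_gain(mach):
    """Ratio of the advective time step (dt ~ dx/|u|) to the acoustic
    CFL step of an explicit compressible code (dt ~ dx/(|u|+c)):
    (|u| + c) / |u| = 1 + 1/M."""
    return 1.0 + 1.0 / mach

gain = timestep_gain(0.01)  # roughly a factor of 100 at Mach 0.01
```

This simple scaling, not MAESTRO's actual time-step logic, is why "highly subsonic" flows are prohibitively expensive for traditional compressible codes.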
Load responsive hydrodynamic bearing
Kalsi, Manmohan S.; Somogyi, Dezso; Dietle, Lannie L.
2002-01-01
A load responsive hydrodynamic bearing is provided in the form of a thrust bearing or journal bearing for supporting, guiding and lubricating a relatively rotatable member to minimize wear thereof responsive to relative rotation under severe load. In the space between spaced relatively rotatable members and in the presence of a liquid or grease lubricant, one or more continuous ring shaped integral generally circular bearing bodies each define at least one dynamic surface and a plurality of support regions. Each of the support regions defines a static surface which is oriented in generally opposed relation with the dynamic surface for contact with one of the relatively rotatable members. A plurality of flexing regions are defined by the generally circular body of the bearing and are integral with and located between adjacent support regions. Each of the flexing regions has a first beam-like element being connected by an integral flexible hinge with one of the support regions and a second beam-like element having an integral flexible hinge connection with an adjacent support region. At least one local weakening geometry of the flexing region is located intermediate the first and second beam-like elements. In response to application of load from one of the relatively rotatable elements to the bearing, the beam-like elements and the local weakening geometry become flexed, causing the dynamic surface to deform and establish a hydrodynamic geometry for wedging lubricant into the dynamic interface.
Hydrodynamics of pronuclear migration
NASA Astrophysics Data System (ADS)
Nazockdast, Ehssan; Needleman, Daniel; Shelley, Michael
2014-11-01
Microtubule (MT) filaments play a key role in many processes involved in cell division, including spindle formation, chromosome segregation, and pronuclear positioning. We present a direct numerical technique to simulate MT dynamics in such processes. Our method includes hydrodynamically mediated interactions between MTs and other cytoskeletal objects, using singularity methods for Stokes flow. Long-ranged many-body hydrodynamic interactions are computed using a highly efficient and scalable fast multipole method, enabling the simulation of thousands of MTs. Our simulation method also takes into account the flexibility of MTs using Euler-Bernoulli beam theory as well as their dynamic instability. Using this technique, we simulate pronuclear migration in single-celled Caenorhabditis elegans embryos. Two different positioning mechanisms, based on the interactions of MTs with the motor proteins and the cell cortex, are explored: cytoplasmic pulling and cortical pushing. We find that although the pronuclear complex migrates towards the center of the cell in both models, the generated cytoplasmic flows are fundamentally different. This suggests that cytoplasmic flow visualization during pronuclear migration can be used to differentiate between the two mechanisms.
Hydrodynamics of Bacterial Cooperation
NASA Astrophysics Data System (ADS)
Petroff, A.; Libchaber, A.
2012-12-01
Over the course of the last several decades, the study of microbial communities has identified countless examples of cooperation between microorganisms. Generally, as in the case of quorum sensing, cooperation is coordinated by a chemical signal that diffuses through the community. Less well understood is a second class of cooperation that is mediated through physical interactions between individuals. To better understand how bacteria use hydrodynamics to manipulate their environment and coordinate their actions, we study the sulfur-oxidizing bacterium Thiovulum majus. These bacteria live in the diffusive boundary layer just above the muddy bottoms of ponds. As buried organic material decays, sulfide diffuses out of the mud, while oxygen from the pond diffuses into the boundary layer from above. These bacteria form communities, called veils, which are able to transport nutrients through the boundary layer faster than diffusion, thereby increasing their metabolic rate. In these communities, bacteria attach to surfaces and swim in place. As millions of bacteria beat their flagella, the community induces a macroscopic fluid flow, which mixes the boundary layer. Here we present experimental observations and mathematical models that elucidate the hydrodynamics linking the behavior of an individual bacterium to the collective dynamics of the community. We begin by characterizing the flow of water around an individual bacterium swimming in place. We then discuss the flow of water and nutrients around a small number of individuals. Finally, we present observations and models detailing the macroscopic dynamics of a Thiovulum veil.
Effect of Second-Order Hydrodynamics on a Floating Offshore Wind Turbine
Roald, L.; Jonkman, J.; Robertson, A.
2014-05-01
The design of offshore floating wind turbines uses design codes that can simulate the entire coupled system behavior. At present, most codes include only first-order hydrodynamics, which induces forces and motions varying with the same frequency as the incident waves. Effects due to second- and higher-order hydrodynamics are often ignored in the offshore industry, because the induced forces are typically smaller than the first-order forces. In this report, the first- and second-order hydrodynamic analysis used in the offshore oil and gas industry is applied to two different wind turbine concepts--a spar and a tension leg platform.
Hydrodynamic Efficiency of Ablation Propulsion with Pulsed Ion Beam
Buttapeng, Chainarong; Yazawa, Masaru; Harada, Nobuhiro; Suematsu, Hisayuki; Jiang, Weihua; Yatsui, Kiyoshi
2006-05-02
This paper presents the hydrodynamic efficiency of ablation plasma produced by a pulsed ion beam on the basis of the ion beam-target interaction. We used a one-dimensional compressible hydrodynamic fluid model to study the physics involved, namely the ablation acceleration behavior, and analyzed it as a rocket-like model in order to investigate its hydrodynamic variables for propulsion applications. These variables were estimated by the concept of ablation-driven implosion in terms of ablated mass fraction, implosion efficiency, and hydrodynamic energy conversion. Herein, an energy conversion efficiency of 17.5% was achieved. In addition, the results show a maximum energy efficiency of the ablation process (ablation efficiency) of 67%, meaning the efficiency with which pulsed ion beam energy is converted into ablation plasma. The effects of ion beam energy deposition depth on hydrodynamic efficiency are briefly discussed. Further, an evaluation of propulsive force with a high specific impulse of 4000 s, a total impulse of 34 mN, and a momentum-to-energy ratio in the range of μN/W was also presented.
Combustion chamber analysis code
NASA Astrophysics Data System (ADS)
Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.
1993-05-01
A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.
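The MUSCL differencing named above can be illustrated on the simplest possible problem, 1D linear advection with a minmod slope limiter. This is a hedged sketch with assumed names and periodic boundaries, not the code's 3D Favre-averaged implementation:

```python
import numpy as np

def minmod(a, b):
    # Slope limiter: take the smaller slope, zero across extrema (TVD)
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_advect(u, c):
    """One MUSCL step for u_t + a u_x = 0 on a periodic grid, CFL number
    c = a*dt/dx in (0, 1], a > 0 assumed (upwind direction known)."""
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited cell slopes
    ul = u + 0.5 * (1.0 - c) * s    # reconstructed value at the right face
    flux = c * ul                   # upwind flux (already times dt/dx)
    return u - (flux - np.roll(flux, 1))
```

The limiter is what keeps the higher-order scheme from ringing at shocks: advecting a step profile produces no new maxima or minima, while the flux form conserves the total exactly.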
Block adaptive rate controlled image data compression
NASA Technical Reports Server (NTRS)
Rice, R. F.; Hilbert, E.; Lee, J.-J.; Schlutsmeyer, A.
1979-01-01
A block adaptive rate controlled (BARC) image data compression algorithm is described. It is noted that in the algorithm's principal rate controlled mode, image lines can be coded at selected rates by combining practical universal noiseless coding techniques with block adaptive adjustments in linear quantization. Compression of any source data at chosen rates of 3.0 bits/sample and above can be expected to yield visual image quality with imperceptible degradation. Exact reconstruction will be obtained if the one-dimensional difference entropy is below the selected compression rate. It is noted that the compressor can also be operated as a floating rate noiseless coder by simply not altering the input data quantization. Here, the universal noiseless coder ensures that the code rate is always close to the entropy. Application of BARC image data compression to the Galileo orbiter mission of Jupiter is considered.
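The exact-reconstruction condition quoted above can be checked numerically: estimate the one-dimensional difference entropy of an image line and compare it with the selected rate. A back-of-envelope sketch with assumed names, not the BARC coder itself:

```python
import numpy as np

def difference_entropy(line):
    """Entropy in bits/sample of the first differences of one image line.
    Per the abstract, exact (lossless) reconstruction is expected when this
    value falls below the selected compression rate."""
    d = np.diff(np.asarray(line, dtype=np.int64))
    _, counts = np.unique(d, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))
```

For example, a line coded at the 3.0 bits/sample rate mentioned in the abstract reconstructs exactly whenever this entropy stays below 3.0.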
General formulation of transverse hydrodynamics
Ryblewski, Radoslaw; Florkowski, Wojciech
2008-06-15
General formulation of hydrodynamics describing transversally thermalized matter created at the early stages of ultrarelativistic heavy-ion collisions is presented. Similarities and differences with the standard three-dimensionally thermalized relativistic hydrodynamics are discussed. The role of the conservation laws as well as the thermodynamic consistency of two-dimensional thermodynamic variables characterizing transversally thermalized matter is emphasized.
MONTE CARLO RADIATION-HYDRODYNAMICS WITH IMPLICIT METHODS
Roth, Nathaniel; Kasen, Daniel
2015-03-15
We explore the application of Monte Carlo transport methods to solving coupled radiation-hydrodynamics (RHD) problems. We use a time-dependent, frequency-dependent, three-dimensional radiation transport code that is special relativistic and includes some detailed microphysical interactions such as resonant line scattering. We couple the transport code to two different one-dimensional (non-relativistic) hydrodynamics solvers: a spherical Lagrangian scheme and an Eulerian Godunov solver. The gas–radiation energy coupling is treated implicitly, allowing us to take hydrodynamical time-steps that are much longer than the radiative cooling time. We validate the code and assess its performance using a suite of radiation hydrodynamical test problems, including ones in the radiation energy dominated regime. We also develop techniques that reduce the noise of the Monte Carlo estimated radiation force by using the spatial divergence of the radiation pressure tensor. The results suggest that Monte Carlo techniques hold promise for simulating the multi-dimensional RHD of astrophysical systems.
Prototype Mixed Finite Element Hydrodynamics Capability in ARES
Rieben, R N
2008-07-10
This document describes work on a prototype Mixed Finite Element Method (MFEM) hydrodynamics algorithm in the ARES code, and its application to a set of standard test problems. This work is motivated by the need for improvements to the algorithms used in the Lagrange hydrodynamics step to make them more robust. We begin by identifying the outstanding issues with traditional numerical hydrodynamics algorithms followed by a description of the proposed method and how it may address several of these longstanding issues. We give a theoretical overview of the proposed MFEM algorithm as well as a summary of the coding additions and modifications that were made to add this capability to the ARES code. We present results obtained with the new method on a set of canonical hydrodynamics test problems and demonstrate significant improvement in comparison to results obtained with traditional methods. We conclude with a summary of the issues still at hand and motivate the need for continued research to develop the proposed method into maturity.
Data compression for the microgravity experiments
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Whyte, Wayne A., Jr.; Anderson, Karen S.; Shalkhauser, Mary Jo; Summers, Anne M.
1989-01-01
Researchers present the environment and conditions under which data compression is to be performed for the microgravity experiment. Also presented are some coding techniques that would be useful for coding in this environment. It should be emphasized that researchers are currently at the beginning of this program and the toolkit mentioned is far from complete.
A high-speed distortionless predictive image-compression scheme
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Smyth, P.; Wang, H.
1990-01-01
A high-speed distortionless predictive image-compression scheme that is based on differential pulse code modulation output modeling combined with efficient source-code design is introduced. Experimental results show that this scheme achieves compression that is very close to the difference entropy of the source.
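The DPCM stage underlying such a scheme can be sketched with the simplest previous-sample predictor; the residuals would then be fed to the efficient source code the abstract mentions (omitted here). Function names are assumptions:

```python
import numpy as np

def dpcm_encode(samples):
    # Previous-sample predictor: transmit the first sample plus residuals
    x = np.asarray(samples, dtype=np.int64)
    return x[0], np.diff(x)

def dpcm_decode(first, residuals):
    # Distortionless: a cumulative sum restores the samples exactly
    return np.concatenate(([first], first + np.cumsum(residuals)))
```

Because the prediction loop is integer-exact, the compression is distortionless, and the achievable rate is governed by the entropy of the residual distribution, which is what the paper compares against.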
Modeling the Compression of Merged Compact Toroids by Multiple Plasma Jets
NASA Astrophysics Data System (ADS)
Thio, Y. C. Francis; Knapp, Charles E.; Kirkpatrick, Ron
2000-10-01
A fusion propulsion scheme has been proposed that makes use of the merging of a spherical distribution of plasma jets to dynamically form a gaseous liner. The gaseous liner is used to implode a magnetized target to produce the fusion reaction in a standoff manner. In this paper, the merging of the plasma jets to form the gaseous liner is investigated numerically. The Los Alamos SPHINX code, based on the smoothed particle hydrodynamics method, is used to model the interaction of the jets. 2-D and 3-D simulations have been performed to study the characteristics of the resulting flow when these jets collide. The results show that the jets merge to form a plasma liner that converges radially and may be used to compress the central plasma to fusion conditions. Details of the computational model and the SPH numerical methods will be presented together with the numerical results.
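The smoothed particle hydrodynamics interpolation at the heart of codes such as SPHINX can be illustrated with the standard 1D cubic-spline kernel; a generic SPH sketch with assumed names, not the SPHINX implementation:

```python
import numpy as np

def cubic_spline_w(r, h):
    # Standard 1D cubic-spline SPH kernel, compact support |r| < 2h
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return (2.0 / (3.0 * h)) * w   # 1D normalization: integrates to 1

def sph_density(xi, positions, masses, h):
    # SPH density estimate: kernel-weighted sum over neighbouring particles
    return float(np.sum(masses * cubic_spline_w(xi - positions, h)))
```

Every SPH field (density, pressure gradient, etc.) is built from sums of this form, which is what makes the method mesh-free and well suited to the colliding-jet geometry of the paper.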
Hydrodynamics of Peristaltic Propulsion
NASA Astrophysics Data System (ADS)
Athanassiadis, Athanasios; Hart, Douglas
2014-11-01
A curious class of animals called salps live in marine environments and self-propel by ejecting vortex rings much like jellyfish and squid. However, unlike other jetting creatures that siphon and eject water from one side of their body, salps produce vortex rings by pumping water through siphons on opposite ends of their hollow cylindrical bodies. In the simplest cases, it seems like some species of salp can successfully move by contracting just two siphons connected by an elastic body. When thought of as a chain of timed contractions, salp propulsion is reminiscent of peristaltic pumping applied to marine locomotion. Inspired by salps, we investigate the hydrodynamics of peristaltic propulsion, focusing on the scaling relationships that determine flow rate, thrust production, and energy usage in a model system. We discuss possible actuation methods for a model peristaltic vehicle, considering both the material and geometrical requirements for such a system.
Synchronization and hydrodynamic interactions
NASA Astrophysics Data System (ADS)
Powers, Thomas; Qian, Bian; Breuer, Kenneth
2008-03-01
Cilia and flagella commonly beat in a coordinated manner. Examples include the flagella that Volvox colonies use to move, the cilia that sweep foreign particles up out of the human airway, and the nodal cilia that set up the flow that determines the left-right axis in developing vertebrate embryos. In this talk we present an experimental study of how hydrodynamic interactions can lead to coordination in a simple idealized system: two nearby paddles driven with fixed torques in a highly viscous fluid. The paddles attain a synchronized state in which they rotate together with a phase difference of 90 degrees. We discuss how synchronization depends on system parameters and present numerical calculations using the method of regularized stokeslets.
Hydrodynamics, resurgence, and transasymptotics
NASA Astrophysics Data System (ADS)
Başar, Gökçe; Dunne, Gerald V.
2015-12-01
The second order hydrodynamical description of a homogeneous conformal plasma that undergoes a boost-invariant expansion is given by a single nonlinear ordinary differential equation, whose resurgent asymptotic properties we study, developing further the recent work of Heller and Spalinski [Phys. Rev. Lett. 115, 072501 (2015)]. Resurgence clearly identifies the nonhydrodynamic modes that are exponentially suppressed at late times, analogous to the quasinormal modes in gravitational language, organizing these modes in terms of a trans-series expansion. These modes are analogs of instantons in semiclassical expansions, where the damping rate plays the role of the instanton action. We show that this system displays the generic features of resurgence, with explicit quantitative relations between the fluctuations about different orders of these nonhydrodynamic modes. The imaginary part of the trans-series parameter is identified with the Stokes constant, and the real part with the freedom associated with initial conditions.
Reaching the hydrodynamic regime in a Bose-Einstein condensate by suppression of avalanches
Stam, K. M. R. van der; Meppelink, R.; Vogels, J. M.; Straten, P. van der
2007-03-15
We report the realization of a Bose-Einstein condensate (BEC) in the hydrodynamic regime. The hydrodynamic regime is reached by evaporative cooling at a relatively low density, suppressing the effect of avalanches. With the suppression of avalanches a BEC containing more than 10^8 atoms is produced. The collisional opacity can be tuned from the collisionless regime to a collisional opacity of more than 2 by compressing the trap after condensation. In the collisionally opaque regime a significant heating of the cloud at time scales shorter than half of the radial trap period is measured, which is direct proof that the BEC is hydrodynamic.
Lossy Compression of Haptic Data by Using DCT
NASA Astrophysics Data System (ADS)
Tanaka, Hiroyuki; Ohnishi, Kouhei
In this paper, lossy data compression of haptic data is presented and the results of its application to a motion copying system are described. Lossy data compression has been studied and practically applied in audio and image coding, but lossy compression of haptic data has not been studied extensively. Haptic data compression using the discrete cosine transform (DCT) and the modified DCT (MDCT) for haptic data storage is described in this paper. In the lossy compression, the calculated DCT/MDCT coefficients are quantized by a quantization vector. The quantized coefficients are further compressed by lossless coding based on Huffman coding. The compressed haptic data is applied to the motion copying system, and the results are provided.
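The transform-quantize-entropy-code pipeline described above can be sketched with an orthonormal DCT-II and uniform scalar quantization (the Huffman pass omitted); names and the uniform step are assumptions, not the paper's quantization vector:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: rows are cosine basis vectors
    k = np.arange(n)[:, None]
    C = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] *= np.sqrt(0.5)
    return C

def lossy_roundtrip(x, step):
    """Transform, uniformly quantize the coefficients, dequantize, invert."""
    C = dct_matrix(len(x))
    q = np.round((C @ x) / step) * step   # uniform scalar quantization
    return C.T @ q                        # orthonormal: inverse is transpose
```

Because the transform is orthonormal, the reconstruction error is bounded by the quantization step, which is what lets the step (or the quantization vector in the paper) trade fidelity for rate.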
Hydrodynamics of sediment threshold
NASA Astrophysics Data System (ADS)
Ali, Sk Zeeshan; Dey, Subhasish
2016-07-01
A novel hydrodynamic model for the threshold of cohesionless sediment particle motion under a steady unidirectional streamflow is presented. The hydrodynamic forces (drag and lift) acting on a solitary sediment particle resting over a closely packed bed formed by the identical sediment particles are the primary motivating forces. The drag force comprises the form drag and the form-induced drag. The lift force includes the Saffman lift, Magnus lift, centrifugal lift, and turbulent lift. The points of action of the force system are appropriately obtained, for the first time, from the basics of micro-mechanics. The sediment threshold is envisioned as the rolling mode, which is the plausible mode to initiate a particle motion on the bed. The moment balance of the force system on the solitary particle about the pivoting point of rolling yields the governing equation. The conditions of sediment threshold under the hydraulically smooth, transitional, and rough flow regimes are examined. The effects of velocity fluctuations are addressed by applying the statistical theory of turbulence. This study shows that for a hindrance coefficient of 0.3, the threshold curve (threshold Shields parameter versus shear Reynolds number) is in excellent agreement with the experimental data of uniform sediments. However, most of the experimental data are bounded by the upper and lower limiting threshold curves, corresponding to the hindrance coefficients of 0.2 and 0.4, respectively. The threshold curve of this study is compared with those of previous researchers. The present model also agrees satisfactorily with the experimental data of nonuniform sediments.
Structured illumination temporal compressive microscopy
Yuan, Xin; Pang, Shuo
2016-01-01
We present a compressive video microscope based on structured illumination with an incoherent light source. The source-side illumination coding scheme allows the emission photons to be collected by the full aperture of the microscope objective, and thus is suitable for the fluorescence readout mode. A two-step iterative reconstruction algorithm, termed BWISE, has been developed to address the mismatch between the illumination pattern size and the detector pixel size. Image sequences with a temporal compression ratio of 4:1 were demonstrated.
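The forward model behind the 4:1 temporal compression can be sketched directly: four frames, each modulated by a coding pattern, integrate into one detector exposure. This is an illustrative model with assumed shapes and random binary codes; the BWISE reconstruction is not reproduced here:

```python
import numpy as np

def coded_snapshot(frames, masks):
    # Each frame is modulated by its coding pattern, then all frames
    # integrate into a single detector exposure
    return np.sum(frames * masks, axis=0)

rng = np.random.default_rng(1)
frames = rng.random((4, 32, 32))          # 4 temporal frames of the scene
masks = rng.integers(0, 2, (4, 32, 32))   # assumed binary illumination codes
y = coded_snapshot(frames, masks)         # one 32x32 measurement: 4:1 in time
```

Recovering the four frames from the single coded measurement y is the (underdetermined) inverse problem that the reconstruction algorithm solves.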
A hybrid numerical fluid dynamics code for resistive magnetohydrodynamics
Johnson, Jeffrey
2006-04-01
Spasmos is a computational fluid dynamics code that uses two numerical methods to solve the equations of resistive magnetohydrodynamic (MHD) flows in compressible, inviscid, conducting media[1]. The code is implemented as a set of libraries for the Python programming language[2]. It represents conducting and non-conducting gases and materials with uncomplicated (analytic) equations of state. It supports calculations in 1D, 2D, and 3D geometry, though only the 1D configuration has received significant testing to date. Because it uses the Python interpreter as a front end, users can easily write test programs to model systems with a variety of different numerical and physical parameters. Currently, the code includes 1D test programs for hydrodynamics (linear acoustic waves, the Sod weak shock[3], the Noh strong shock[4], the Sedov explosion[5]), magnetic diffusion (decay of a magnetic pulse[6], a driven oscillatory "wine-cellar" problem[7], magnetic equilibrium), and magnetohydrodynamics (an advected magnetic pulse[8], linear MHD waves, a magnetized shock tube[9]). Spasmos currently runs only in a serial configuration. In the future, it will use MPI for parallel computation.
Binary Pulse Compression Techniques for MST Radars
NASA Technical Reports Server (NTRS)
Woodman, R. F.; Sulzer, M. P.; Farley, D. T.
1984-01-01
In most mesosphere-stratosphere-troposphere (MST) applications pulsed radars are peak power limited and have excess average power capability. Short pulses are required for good range resolution, but the problem of range ambiguity (signals received simultaneously from more than one altitude) sets a minimum limit on the interpulse period (IPP). Pulse compression is a technique which allows more of the transmitter average power capacity to be used without sacrificing range resolution. Binary phase coding methods for pulse compression are discussed. Many aspects of codes and decoding and their applications to MST experiments are addressed; this includes Barker codes and longer individual codes, and then complementary codes and other code sets. Software decoding, hardware decoders, and coherent integrators are also discussed.
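Both code families discussed, Barker codes and complementary (Golay) code pairs, are easy to verify numerically. A sketch of their autocorrelation properties, which is what the decoder exploits:

```python
import numpy as np

# Barker-13 biphase code: autocorrelation peak of 13 with every
# sidelobe of magnitude at most 1
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = np.correlate(barker13, barker13, mode='full')

# Complementary (Golay) pair: summing the two autocorrelations cancels
# the sidelobes exactly, leaving a single spike of height 2N
a, b = np.array([1, 1]), np.array([1, -1])
combined = np.correlate(a, a, 'full') + np.correlate(b, b, 'full')
```

This is the sense in which complementary code sets outperform a single code: the Barker sidelobes are merely bounded, while the summed Golay sidelobes vanish identically, at the cost of needing two transmissions.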
Hybrid subband image coding scheme using DWT, DPCM, and ADPCM
NASA Astrophysics Data System (ADS)
Oh, Kyung-Seak; Kim, Sung-Jin; Joo, Chang-Bok
1998-07-01
Subband image coding techniques have received considerable attention as powerful source coding methods. These techniques provide good compression results, and also can be extended for progressive transmission and multiresolution analysis. In this paper, we propose a hybrid subband image coding scheme using DWT (discrete wavelet transform), DPCM (differential pulse code modulation), and ADPCM (adaptive DPCM). This scheme achieves simple but significant image compression and transmission coding.
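The DWT stage of such a hybrid scheme can be illustrated with one level of the Haar transform (the paper does not specify its wavelet; Haar is assumed here for brevity):

```python
import numpy as np

def haar_level(x):
    # One analysis level: orthonormal average (approximation) and detail bands
    x = np.asarray(x, float)
    return (x[0::2] + x[1::2]) / np.sqrt(2.0), (x[0::2] - x[1::2]) / np.sqrt(2.0)

def haar_level_inv(avg, det):
    # Perfect-reconstruction synthesis from the two subbands
    out = np.empty(2 * len(avg))
    out[0::2] = (avg + det) / np.sqrt(2.0)
    out[1::2] = (avg - det) / np.sqrt(2.0)
    return out
```

Recursing on the average band yields the multiresolution pyramid; the hybrid scheme would then code the subbands with DPCM/ADPCM rather than transmit them raw.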
Hydrodynamic Elastic Magneto Plastic
1985-02-01
The HEMP code solves the conservation equations of two-dimensional elastic-plastic flow, in plane x-y coordinates or in cylindrical symmetry around the x-axis. Provisions for calculation of fixed boundaries, free surfaces, pistons, and boundary slide planes have been included, along with other special conditions.
Warm dense matter: another application for pulsed power hydrodynamics
Reinovsky, Robert Emil
2009-01-01
Pulsed Power Hydrodynamics (PPH) is an application of low-impedance pulsed power and high magnetic field technology to the study of advanced hydrodynamic problems, instabilities, turbulence, and material properties. PPH can potentially be applied to the study of the properties of warm dense matter (WDM) as well. Exploration of the properties of warm dense matter, such as equation of state, viscosity, and conductivity, is an emerging area of study focused on the behavior of matter at densities near solid density (from 10% of solid density to slightly above solid density) and modest temperatures (~1-10 eV). Conditions characteristic of WDM are difficult to obtain, and even more difficult to diagnose. One approach to producing WDM uses laser or particle beam heating of very small quantities of matter on timescales short compared to the subsequent hydrodynamic expansion timescales (isochoric heating), and a vigorous community of researchers are applying these techniques. Pulsed power hydrodynamic techniques, such as large-convergence liner compression of a large-volume, modest-density, low-temperature plasma to densities approaching solid density, or multiple shock compression and heating of normal density material between a massive, high-density, energetic liner and a high-density central 'anvil', are possible ways to reach relevant conditions. Another avenue to WDM conditions is through the explosion and subsequent expansion of a conductor (wire) against a high-pressure (density) gas background (isobaric expansion). However, both techniques demand substantial energy, proper power conditioning and delivery, and an understanding of the hydrodynamic and instability processes that limit each technique. In this paper we examine the challenges to pulsed power technology and to pulsed power systems presented by the opportunity to explore this interesting region of parameter space.
Recent development of hydrodynamic modeling
NASA Astrophysics Data System (ADS)
Hirano, Tetsufumi
2014-09-01
In this talk, I give an overview of recent developments in hydrodynamic modeling of high-energy nuclear collisions. First, I briefly discuss the current status of hydrodynamic modeling by showing results from the integrated dynamical approach in which Monte Carlo calculation of initial conditions, quark-gluon fluid dynamics, and hadronic cascading are combined. In particular, I focus on rescattering effects of strange hadrons on final observables. Next I highlight three topics in recent development of hydrodynamic modeling: (1) medium response to jet propagation in di-jet asymmetric events, (2) causal hydrodynamic fluctuations and their application to Bjorken expansion, and (3) the chiral magnetic wave from anomalous hydrodynamic simulations. (1) Recent CMS data suggest the existence of QGP response to propagation of jets. To investigate this phenomenon, we solve hydrodynamic equations with a source term which describes deposition of energy and momentum from jets. We find that a large number of low-momentum particles are emitted at large angles from the jet axis. This gives a novel interpretation of the CMS data. (2) It has been claimed that matter created even in p-p/p-A collisions may behave like a fluid. However, fluctuation effects would be important in such a small system. We formulate relativistic fluctuating hydrodynamics and apply it to Bjorken expansion. We find that the final multiplicity fluctuates around the mean value even if the initial condition is fixed. This effect is relatively important in peripheral A-A collisions and p-p/p-A collisions. (3) Anomalous transport in the quark-gluon fluid is predicted when an extremely high magnetic field is applied. We investigate this possibility by solving anomalous hydrodynamic equations. We find that the difference in the elliptic flow parameter between positive and negative particles appears due to the chiral magnetic wave. Finally, I provide a personal perspective on hydrodynamic modeling of high-energy nuclear collisions.
Hydrodynamic body shape analysis and their impact on swimming performance.
Li, Tian-Zeng; Zhan, Jie-Min
2015-01-01
This study presents the hydrodynamic characteristics of different adult male swimmers' body shapes using a computational fluid dynamics method. The simulation is carried out with the CFD code Fluent, solving the 3D incompressible Navier-Stokes equations with the RNG k-ε turbulence closure. The water free surface is captured by the volume of fluid (VOF) method. A set of full-body models, based on the anthropometrical characteristics of the most common male swimmers, is created with the Computer Aided Industrial Design (CAID) software Rhinoceros. The analysis of the CFD results revealed that a swimmer's body shape has a noticeable effect on hydrodynamic performance. This explains why a male swimmer with an inverted-triangle body shape has good hydrodynamic characteristics for competitive swimming.
Constraining relativistic viscous hydrodynamical evolution
Martinez, Mauricio; Strickland, Michael
2009-04-15
We show that by requiring positivity of the longitudinal pressure it is possible to constrain the initial conditions one can use in second-order viscous hydrodynamical simulations of ultrarelativistic heavy-ion collisions. We demonstrate this explicitly for (0+1)-dimensional viscous hydrodynamics and discuss how the constraint extends to higher dimensions. Additionally, we present an analytic approximation to the solution of (0+1)-dimensional second-order viscous hydrodynamical evolution equations appropriate to describe the evolution of matter in an ultrarelativistic heavy-ion collision.
Spin hydrodynamic generation
NASA Astrophysics Data System (ADS)
Takahashi, R.; Matsuo, M.; Ono, M.; Harii, K.; Chudo, H.; Okayasu, S.; Ieda, J.; Takahashi, S.; Maekawa, S.; Saitoh, E.
2016-01-01
Magnetohydrodynamic generation is the conversion of fluid kinetic energy into electricity. Such conversion, which has been applied to various types of electric power generation, is driven by the Lorentz force acting on charged particles and thus a magnetic field is necessary. On the other hand, recent studies of spintronics have revealed the similarity between the function of a magnetic field and that of spin-orbit interactions in condensed matter. This suggests the existence of an undiscovered route to realize the conversion of fluid dynamics into electricity without using magnetic fields. Here we show electric voltage generation from fluid dynamics free from magnetic fields; we excited liquid-metal flows in a narrow channel and observed longitudinal voltage generation in the liquid. This voltage has nothing to do with electrification or thermoelectric effects, but turned out to follow a universal scaling rule based on a spin-mediated scenario. The result shows that the observed voltage is caused by spin-current generation from a fluid motion: spin hydrodynamic generation. The observed phenomenon allows us to make mechanical spin-current and electric generators, opening a door to fluid spintronics.
Motoyama, Kazutaka; Morata, Oscar; Hasegawa, Tatsuhiko; Shang, Hsien; Krasnopolsky, Ruben
2015-07-20
A two-dimensional hydrochemical hybrid code, KM2, is constructed to deal with astrophysical problems that would require coupled hydrodynamical and chemical evolution. The code assumes axisymmetry in a cylindrical coordinate system and consists of two modules: a hydrodynamics module and a chemistry module. The hydrodynamics module solves hydrodynamics using a Godunov-type finite volume scheme and treats included chemical species as passively advected scalars. The chemistry module implicitly solves nonequilibrium chemistry and change of energy due to thermal processes with transfer of external ultraviolet radiation. Self-shielding effects on photodissociation of CO and H2 are included. In this introductory paper, the adopted numerical method is presented, along with code verifications using the hydrodynamics module and a benchmark on the chemistry module with reactions specific to a photon-dominated region (PDR). Finally, as an example of the expected capability, the hydrochemical evolution of a PDR is presented based on the PDR benchmark.
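The two-module coupling described above, a hydrodynamics step carrying species as passive scalars followed by an implicitly solved chemistry step, can be caricatured by operator splitting on a toy 1D problem. Assumed names and a single decay reaction stand in for KM2's Godunov solver and nonequilibrium network:

```python
import numpy as np

def split_step(n, c, k, dt):
    """One operator-split step: explicit upwind advection (the 'hydrodynamics
    module', species density n as a passive scalar, CFL number c, periodic
    BCs) followed by a backward-Euler reaction stage (the 'chemistry module',
    here the decay n' = -k*n solved implicitly)."""
    n = n - c * (n - np.roll(n, 1))   # hydro stage: first-order upwind
    return n / (1.0 + k * dt)         # chemistry stage: implicit, unconditionally stable
```

The implicit chemistry stage is what allows stiff reaction timescales without shrinking the hydrodynamic time step, the same motivation the abstract gives for solving the chemistry implicitly.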
Data Compression--A Comparison of Methods. Computer Science and Technology.
ERIC Educational Resources Information Center
Aronson, Jules
This report delineates the theory and terminology of data compression. It surveys four data compression methods--null suppression, pattern substitution, statistical encoding, and telemetry compression--and relates them to a standard statistical coding problem, i.e., the noiseless coding problem. The well defined solution to that problem can serve…
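Of the four surveyed methods, null suppression is the simplest to make concrete: runs of zero bytes are replaced by a marker byte and a count. The sketch below is a generic illustration, not the report's specification; the marker value and the 255-byte run cap are our assumptions, and the scheme assumes the marker byte never occurs in the raw data.

```python
def null_suppress(data: bytes, marker: int = 0xFF) -> bytes:
    """Null suppression: replace each run of zero bytes with a
    (marker, run_length) pair. Assumes `marker` never occurs in `data`."""
    out, i = bytearray(), 0
    while i < len(data):
        if data[i] == 0:
            run = 1
            while i + run < len(data) and data[i + run] == 0 and run < 255:
                run += 1
            out += bytes([marker, run])
            i += run
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

def null_expand(comp: bytes, marker: int = 0xFF) -> bytes:
    """Invert null_suppress."""
    out, i = bytearray(), 0
    while i < len(comp):
        if comp[i] == marker:
            out += bytes(comp[i + 1])   # bytes(n) yields n zero bytes
            i += 2
        else:
            out.append(comp[i])
            i += 1
    return bytes(out)
```

Pattern substitution generalizes the same idea from zero runs to any frequently recurring byte string.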
Hydrodynamic synchronization of colloidal oscillators
Kotar, Jurij; Leoni, Marco; Bassetti, Bruno; Lagomarsino, Marco Cosentino; Cicuta, Pietro
2010-01-01
Two colloidal spheres are maintained in oscillation by switching the position of an optical trap when a sphere reaches a limit position, leading to oscillations that are bounded in amplitude but free in phase and period. The interaction between the oscillators is only through the hydrodynamic flow induced by their motion. We prove that in the absence of stochastic noise the antiphase dynamical state is stable, and we show how the period depends on coupling strength. Both features are observed experimentally. As the natural frequencies of the oscillators are made progressively different, the coordination is quickly lost. These results help one to understand the origin of hydrodynamic synchronization and how the dynamics can be tuned. Cilia and flagella are biological systems coupled hydrodynamically, exhibiting dramatic collective motions. We propose that weakly correlated phase fluctuations, with one of the oscillators typically preceding the other, are characteristic of hydrodynamically coupled systems in the presence of thermal noise. PMID:20385848
Reciprocal relations in dissipationless hydrodynamics
Melnikovsky, L. A.
2014-12-15
Hidden symmetry in dissipationless terms of arbitrary hydrodynamics equations is recognized. We demonstrate that all fluxes are generated by a single function and derive conventional Euler equations using the proposed formalism.
Relativistic hydrodynamics on graphic cards
NASA Astrophysics Data System (ADS)
Gerhard, Jochen; Lindenstruth, Volker; Bleicher, Marcus
2013-02-01
We show how to accelerate relativistic hydrodynamics simulations using graphic cards (graphic processing units, GPUs). These improvements are of highest relevance e.g. to the field of high-energy nucleus-nucleus collisions at RHIC and LHC, where (ideal and dissipative) relativistic hydrodynamics is used to calculate the evolution of hot and dense QCD matter. The results reported here are based on the Sharp And Smooth Transport Algorithm (SHASTA), which is employed in many hydrodynamical models and hybrid simulation packages, e.g. the Ultrarelativistic Quantum Molecular Dynamics model (UrQMD). We have redesigned the SHASTA using the OpenCL computing framework to work on accelerators such as GPUs as well as on multi-core processors. With the redesign of the algorithm, the hydrodynamic calculations have been accelerated by a factor of 160, allowing for event-by-event calculations and better statistics in hybrid calculations.
Bridging fluctuating hydrodynamics and molecular dynamics simulations of fluids.
Voulgarakis, Nikolaos K; Chu, Jhih-Wei
2009-04-01
A new multiscale coarse-graining (CG) methodology is developed to bridge molecular and hydrodynamic models of a fluid. The hydrodynamic representation considered in this work is based on the equations of fluctuating hydrodynamics (FH). The essence of this method is a mapping from the position and velocity vectors of a snapshot of a molecular dynamics (MD) simulation to the field variables on Eulerian cells of a hydrodynamic representation. By explicit consideration of the effective lengthscale d(mol) that characterizes the volume of a molecule, the computed density fluctuations from MD via our mapping procedure have volume dependence that corresponds to a grand canonical ensemble of a cold liquid even when a small cell length (5-10 A) is used in a hydrodynamic representation. For TIP3P water at 300 K and 1 atm, d(mol) is found to be 2.4 A, corresponding to the excluded radius of a water molecule as revealed by its center-of-mass radial distribution function. By matching the density fluctuations and autocorrelation functions of momentum fields computed from solving the FH equations with those computed from MD simulation, the sound velocity and shear and bulk viscosities of a CG hydrodynamic model can be determined directly from MD. Furthermore, a novel staggered discretization scheme is developed for solving the FH equations of an isothermal compressible fluid in a three-dimensional space with a central difference method. This scheme demonstrates high accuracy in satisfying the fluctuation-dissipation theorem. Since the causative relationship between field variables and fluxes is captured, we demonstrate that the staggered discretization scheme also predicts correct physical behaviors in simulating transient fluid flows. The techniques presented in this work may also be employed to design multiscale strategies for modeling complex fluids and macromolecules in solution. PMID:19355721
Lossless Compression on MRI Images Using SWT.
Anusuya, V; Raghavan, V Srinivasa; Kavitha, G
2014-10-01
Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression as each pixel information is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The system proposes to implement a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to 2D-stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing on the arithmetic coding stage as it deals with multiple subslices.
Improved Compression of Wavelet-Transformed Images
NASA Technical Reports Server (NTRS)
Kiely, Aaron; Klimesh, Matthew
2005-01-01
A recently developed data-compression method is an adaptive technique for coding quantized wavelet-transformed data, nominally as part of a complete image-data compressor. Unlike some other approaches, this method admits a simple implementation and does not rely on the use of large code tables. A common data compression approach, particularly for images, is to perform a wavelet transform on the input data, and then losslessly compress a quantized version of the wavelet-transformed data. Under this compression approach, it is common for the quantized data to include long sequences, or runs, of zeros. The new coding method uses prefix-free codes for the nonnegative integers as part of an adaptive algorithm for compressing the quantized wavelet-transformed data by run-length coding. In the form of run-length coding used here, the data sequence to be encoded is parsed into strings consisting of some number (possibly 0) of zeros, followed by a nonzero value. The nonzero value and the length of the run of zeros are encoded. For a data stream that contains a sufficiently high frequency of zeros, this method is known to be more effective than using a single variable-length code to encode each symbol. The specific prefix-free codes used are from two classes of variable-length codes: a class known as Golomb codes, and a class known as exponential-Golomb codes. The codes within each class are indexed by a single integer parameter. The present method uses exponential-Golomb codes for the lengths of the runs of zeros, and Golomb codes for the nonzero values. The code parameters within each code class are determined adaptively on the fly as compression proceeds, on the basis of statistics from previously encoded values. In particular, a simple adaptive method has been devised to select the parameter identifying the particular exponential-Golomb code to use. The method tracks the average number of bits used to encode recent run lengths, and takes the difference between this average
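The two prefix-free code families named in the abstract are simple enough to sketch. The following is a generic illustration of order-k exponential-Golomb codewords and of Golomb-Rice codewords (Golomb codes restricted to power-of-two divisors m = 2**k, a common simplification); it is not the adaptive parameter-selection scheme the abstract describes.

```python
def exp_golomb_encode(n: int, k: int = 0) -> str:
    """Order-k exponential-Golomb codeword for a nonnegative integer n:
    a zero prefix giving the length, then the binary value of n + 2**k."""
    x = n + (1 << k)
    b = x.bit_length()
    return "0" * (b - k - 1) + format(x, "b")

def exp_golomb_decode(bits: str, k: int = 0):
    """Decode one codeword from the front of a bit string.
    Returns (value, number_of_bits_consumed)."""
    z = 0
    while bits[z] == "0":      # count leading zeros
        z += 1
    width = z + k + 1          # width of the binary part
    x = int(bits[z:z + width], 2)
    return x - (1 << k), z + width

def golomb_rice_encode(n: int, k: int) -> str:
    """Golomb code with divisor m = 2**k: unary quotient, '0' terminator,
    then the remainder in k bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")
```

Because both families are indexed by the single integer k, an adaptive coder along the abstract's lines only needs to re-estimate k from recent statistics.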
Radiation-hydrodynamic simulations of quasar disk winds
NASA Astrophysics Data System (ADS)
Higginbottom, N.
2015-09-01
Disk winds are a compelling candidate to provide geometrical unification between Broad Absorption Line QSOs (BALQSOs) and Type 1 Quasars. However, the geometry of these winds, and even the driving mechanism, remain largely unknown. Progress has been made through radiative transfer (RT) simulations and theoretical analysis of simplified wind geometries, but there are several outstanding issues, including the problem of shielding the low-ionization BAL gas from the intense X-ray radiation from the central corona, and also how to produce the strong emission lines which exemplify Type 1 Quasars. A complex, clumpy geometry may provide a solution, and a full hydrodynamic model in which such structure may well spontaneously develop is something we wish to investigate. We have already demonstrated that the previous generation of hydrodynamic models of BALQSOs suffer from the fact that RT was necessarily simplified to permit computation, thereby neglecting the effects of multiple scattering and reprocessing of photons within the wind (potentially very important processes). We have therefore embarked upon a project to marry together an RT code with a hydrodynamics code to permit full radiation hydrodynamics simulations to be carried out on QSO disk winds. Here we present details of the project and results to date.
Supernova-relevant hydrodynamic instability experiments on the Nova Laser
Kane, J.; arnett, D.; Remington, B.A.; Glendinning, S.G.; wallace, R.; Mangan, R.; Rubenchik, A.; Fryxell, B.A.
1997-04-18
Supernova 1987A focused attention on the critical role of hydrodynamic instabilities in the evolution of supernovae. To test the modeling of these instabilities we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. The target consists of a two-layer planar package composed of 85 micron Cu backed by 500 micron CH2, having a single-mode sinusoidal perturbation at the interface, with wavelength lambda = 200 microns and initial amplitude eta{sub 0} = 20 microns. The Nova laser is used to generate a 10-15 Mbar (10-15x10{sup 12} dynes/cm2) shock at the interface, which triggers perturbation growth due to the Richtmyer-Meshkov instability, followed by the Rayleigh-Taylor instability as the interface decelerates. This resembles the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few x10{sup 3} s. The experiment is modeled using the hydrodynamic codes HYADES and CALE, and the supernova code PROMETHEUS. We are designing experiments to test the differences in the growth of 2D vs 3D single-mode perturbations; such differences may help explain the high observed velocities of radioactive core material in SN1987A. Results of the experiments and simulations are presented.
Code Verification of the HIGRAD Computational Fluid Dynamics Solver
Van Buren, Kendra L.; Canfield, Jesse M.; Hemez, Francois M.; Sauer, Jeremy A.
2012-05-04
The purpose of this report is to outline code and solution verification activities applied to HIGRAD, a Computational Fluid Dynamics (CFD) solver of the compressible Navier-Stokes equations developed at the Los Alamos National Laboratory, and used to simulate various phenomena such as the propagation of wildfires and atmospheric hydrodynamics. Code verification efforts, as described in this report, are an important first step to establish the credibility of numerical simulations. They provide evidence that the mathematical formulation is properly implemented without significant mistakes that would adversely impact the application of interest. Highly accurate analytical solutions are derived for four code verification test problems that exercise different aspects of the code. These test problems are referred to as: (i) the quiet start, (ii) the passive advection, (iii) the passive diffusion, and (iv) the piston-like problem. These problems are simulated using HIGRAD with different levels of mesh discretization and the numerical solutions are compared to their analytical counterparts. In addition, the rates of convergence are estimated to verify the numerical performance of the solver. The first three test problems produce numerical approximations as expected. The fourth test problem (piston-like) indicates the extent to which the code is able to simulate a 'mild' discontinuity, which is a condition that would typically be better handled by a Lagrangian formulation. The current investigation concludes that the numerical implementation of the solver performs as expected. The quality of solutions is sufficient to provide credible simulations of fluid flows around wind turbines. The main caveat associated with these findings is the low coverage provided by these four problems, and the somewhat limited scope of the verification activities. A more comprehensive evaluation of HIGRAD may be beneficial for future studies.
Multigrid semi-implicit hydrodynamics revisited
Dendy, J.E.
1983-01-01
The multigrid method has for several years been very successful for simple equations like Laplace's equation on a rectangle. For more complicated situations, however, success has been more elusive. Indeed, there are only a few applications in which the multigrid method is now being successfully used in complicated production codes. The application with which we are most familiar is that by Alcouffe to TTDAMG, in which, for a set of test problems, TTDAMG ran seven to twenty times less expensively (on a CRAY-1 computer) than its best competitor. This impressive performance, in a field where a factor of two improvement is considered significant, encourages one to attempt the application of the multigrid method in other complicated situations. The application discussed in this paper was actually attempted several years ago. In that work the multigrid method was applied to the pressure iteration in three Eulerian and Lagrangian codes. The application to the Eulerian codes, both incompressible and compressible, was successful, but the application to the Lagrangian code was less so. The reason given for this lack of success was that the differencing for the pressure equation in the Lagrangian code, SALE, was bad. In this paper, we examine again the application of multigrid to the pressure equation in SALE with the goal of succeeding this time without cheating.
Compression of surface myoelectric signals using MP3 encoding.
Chan, Adrian D C
2011-01-01
The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios). PMID:22255464
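The percent residual difference (PRD) used above to compare compressors is a standard relative-RMS distortion measure. A minimal sketch (the signal values in the usage are illustrative, not myoelectric data):

```python
import math

def percent_residual_difference(original, reconstructed):
    """PRD: square root of residual energy over signal energy, in percent.
    Lower is better; 0 means lossless reconstruction."""
    num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)
```

For example, a reconstruction that zeroes one of two samples, [3, 4] -> [3, 0], has a PRD of 80 percent, while a perfect reconstruction scores 0.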
A modified Henyey method for computing radiative transfer hydrodynamics
NASA Technical Reports Server (NTRS)
Karp, A. H.
1975-01-01
The implicit hydrodynamic code of Kutter and Sparks (1972), which is limited to optically thick regions and employs the diffusion approximation for radiative transfer, is modified to include radiative transfer effects in the optically thin regions of a model star. A modified Henyey method is used to include the solution of the radiative transfer equation in this implicit code, and the convergence properties of this method are proven. A comparison is made between two hydrodynamic models of a classical Cepheid with a 12-day period, one of which was computed with the diffusion approximation and the other with the modified Henyey method. It is found that the two models produce nearly identical light and velocity curves, but differ in the fact that the former never has temperature inversions in the atmosphere while the latter does when sufficiently strong shocks are present.
Detection of the Compressed Primary Stellar Wind in eta Carinae
NASA Technical Reports Server (NTRS)
Teodoro, Mairan Macedo; Madura, Thomas I.; Gull, Theodore R.; Corcoran, Michael F.; Hamaguchi, K.
2014-01-01
A series of three HST/STIS spectroscopic mappings, spaced approximately one year apart, reveal three partial arcs in [Fe II] and [Ni II] emissions moving outward from eta Carinae. We identify these arcs with the shell-like structures, seen in the 3D hydrodynamical simulations, formed by compression of the primary wind by the secondary wind during periastron passages.
A New Approach for Fingerprint Image Compression
Mazieres, Bertrand
1997-12-01
The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence compression is needed, and one as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by a scalar quantization and an entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies only a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate bit assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we will compare the performance of the new encoder to that of the first encoder.
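The scalar-quantization stage of a WSQ-style pipeline is a dead-zone uniform quantizer: a widened zero bin plus uniform bins of width `step`. The sketch below is a generic illustration, not the FBI encoder; the reconstruction fraction C = 0.44 is a value associated with the WSQ decoder convention but should be treated as an assumption here.

```python
def quantize(c, step, z):
    """Dead-zone uniform scalar quantizer: zero bin of width z,
    then uniform bins of width `step` on either side."""
    if c > z / 2:
        return int((c - z / 2) / step) + 1
    if c < -z / 2:
        return int((c + z / 2) / step) - 1
    return 0

def dequantize(q, step, z, C=0.44):
    """Reconstruct a coefficient a fraction C into its bin (assumed
    WSQ-style convention); the zero bin reconstructs to 0."""
    if q > 0:
        return (q - C) * step + z / 2
    if q < 0:
        return (q + C) * step - z / 2
    return 0.0
```

Small wavelet coefficients fall into the dead zone and cost no bits, which is what produces the long zero runs that the entropy coder then exploits.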
Hydrodynamic escape from planetary atmospheres
NASA Astrophysics Data System (ADS)
Tian, Feng
Hydrodynamic escape is an important process in the formation and evolution of planetary atmospheres. Due to the existence of a singularity point near the transonic point, it is difficult to find transonic steady state solutions by solving the time-independent hydrodynamic equations. In addition, most previous works assume that all energy driving the escape flow is deposited in one narrow layer. This assumption not only results in less accurate solutions to the hydrodynamic escape problem, but also makes it difficult to include other chemical and physical processes in the hydrodynamic escape models. In this work, a numerical model describing the transonic hydrodynamic escape from planetary atmospheres is developed. A robust solution technique is used to solve the time dependent hydrodynamic equations. The method has been validated in an isothermal atmosphere where an analytical solution is available. The hydrodynamic model is applied to 3 cases: hydrogen escape from small orbit extrasolar planets, hydrogen escape from a hydrogen rich early Earth's atmosphere, and nitrogen/methane escape from Pluto's atmosphere. Results of simulations on extrasolar planets are in good agreement with the observations of the transiting extrasolar planet HD209458b. Hydrodynamic escape of hydrogen from other hypothetical close-in extrasolar planets is simulated and the influence of hydrogen escape on the long-term evolution of these extrasolar planets is discussed. Simulations on early Earth suggest that hydrodynamic escape of hydrogen from a hydrogen rich early Earth's atmosphere is about two orders of magnitude slower than the diffusion limited escape rate. A hydrogen rich early Earth's atmosphere could have been maintained by the balance between the hydrogen escape and the supply of hydrogen into the atmosphere by volcanic outgassing. Origin of life may have occurred in the organic soup ocean created by the efficient formation of prebiotic molecules in the hydrogen rich early
An analysis of smoothed particle hydrodynamics
Swegle, J.W.; Attaway, S.W.; Heinstein, M.W.; Mello, F.J.; Hicks, D.L.
1994-03-01
SPH (Smoothed Particle Hydrodynamics) is a gridless Lagrangian technique which is appealing as a possible alternative to numerical techniques currently used to analyze high deformation impulsive loading events. In the present study, the SPH algorithm has been subjected to detailed testing and analysis to determine its applicability in the field of solid dynamics. An important result of the work is a rigorous von Neumann stability analysis which provides a simple criterion for the stability or instability of the method in terms of the stress state and the second derivative of the kernel function. Instability, which typically occurs only for solids in tension, results not from the numerical time integration algorithm, but because the SPH algorithm creates an effective stress with a negative modulus. The analysis provides insight into possible methods for removing the instability. Also, SPH has been coupled into the transient dynamics finite element code PRONTO, and a weighted residual derivation of the SPH equations has been obtained.
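The stability criterion described above can be made concrete with the standard 1D cubic-spline kernel: instability grows where the product of the stress and the kernel's second derivative is positive. The sign convention (tension positive) and normalization below are our assumptions for illustration; this sketches the criterion, not the PRONTO implementation.

```python
def cubic_spline_d2(q, h=1.0):
    """Second derivative of the 1D cubic-spline SPH kernel with respect
    to particle separation, at normalized separation q = |x|/h."""
    norm = 2.0 / (3.0 * h)                      # 1D normalization constant
    if q < 1.0:
        return norm * (-3.0 + 4.5 * q) / h**2   # inner branch of the spline
    if q < 2.0:
        return norm * 1.5 * (2.0 - q) / h**2    # outer branch
    return 0.0                                  # compact support ends at q = 2

def swegle_unstable(stress, q, h=1.0):
    """Swegle-type criterion (sketch): perturbation growth when
    stress * W'' > 0, with tension taken as positive stress."""
    return stress * cubic_spline_d2(q, h) > 0
```

Since W'' changes sign at q = 2/3 and typical neighbor spacings sit near q = 1, where W'' > 0, tension (positive stress) there satisfies the growth condition, matching the observation that the instability typically appears for solids in tension.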
SPH simulation of high density hydrogen compression
NASA Astrophysics Data System (ADS)
Ferrel, R.; Romero, V.
1998-07-01
The density dependence of the electronic energy band gap of hydrogen has been studied with respect to the insulator-metal (IM) transition. The valence-conduction band gap of solid hydrogen is about 15 eV at zero pressure; therefore very high pressures are required to close the gap and achieve metallization. We propose to investigate the degree to which a shockless compression of hydrogen can be maintained at a low temperature (close to that of a cold isentrope) and to verify whether it is possible to achieve metallization. Multistage compression will be driven by energetic materials in a cylindrical implosion system, in which we expect a slow compression rate that will maintain the low temperature in the isentropic compression. It is hoped that pressures on the order of 100 Mbar can be achieved while maintaining low temperatures. In order to better understand this multistage compression, a smooth particle hydrodynamics (SPH) analysis has been performed. Since the SPH technique does not use a grid structure, it is well suited to analyzing spatial deformation processes. This analysis will be used to improve the design of possible multistage compression devices.
SPH Simulation of High Density Hydrogen Compression
NASA Astrophysics Data System (ADS)
Ferrel, R.; Romero, Van D.
1997-07-01
The density dependence of the electronic energy band gap of hydrogen has been studied with respect to the insulator-metal (IM) transition. The valence-conduction band gap of solid hydrogen is about 15 eV at zero pressure; therefore very high pressures are required to close the gap and achieve metallization. We are planning to investigate the degree to which shockless compression of hydrogen can be maintained at a low temperature (close to that of a cold isentrope) and to explore the possibility of achieving metallization. Multistage compression will be driven by energetic materials in a cylindrical implosion system, in which we expect a slow compression rate that will maintain the low temperature in the isentropic compression. It is hoped that pressures of the order of 100 Mbar can be achieved while maintaining low temperatures. In order to understand this multistage compression better, a smooth particle hydrodynamics (SPH) analysis has been performed. Since the SPH technique uses a gridless structure, it is well suited to analyzing spatial deformation processes. This paper presents the analysis, which will be used to improve the design of possible multistage compression devices.
Lossless Video Sequence Compression Using Adaptive Prediction
NASA Technical Reports Server (NTRS)
Li, Ying; Sayood, Khalid
2007-01-01
We present an adaptive lossless video compression algorithm based on predictive coding. The proposed algorithm exploits temporal, spatial, and spectral redundancies in a backward adaptive fashion with extremely low side information. The computational complexity is further reduced by using a caching strategy. We also study the relationship between the operational domain for the coder (wavelet or spatial) and the amount of temporal and spatial redundancy in the sequence being encoded. Experimental results show that the proposed scheme provides significant improvements in compression efficiencies.
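A minimal backward-adaptive predictive coder in the spirit of the abstract can be sketched as follows. Per pixel, the coder chooses between a spatial predictor (left neighbor) and a temporal predictor (co-located pixel in the previous frame) based solely on their cumulative past errors, so the decoder can mirror the choice with no side information; only the residual is transmitted. This two-predictor scheme is our simplification for illustration, not the authors' algorithm.

```python
def encode_frames(frames, w, h):
    """Lossless backward-adaptive encoder: emits one residual per pixel.
    Frames are flat lists of length w*h in row-major order."""
    residuals, err, prev = [], [0, 0], [0] * (w * h)
    for f in frames:
        for i, x in enumerate(f):
            spatial = f[i - 1] if i % w else 0     # left neighbor (0 at row start)
            temporal = prev[i]                     # same pixel, previous frame
            p = spatial if err[0] <= err[1] else temporal
            residuals.append(x - p)
            err[0] += abs(x - spatial)             # update both predictors' records
            err[1] += abs(x - temporal)
        prev = f
    return residuals

def decode_frames(residuals, n_frames, w, h):
    """Mirror of encode_frames: reproduces the predictor choice exactly."""
    frames, err, prev = [], [0, 0], [0] * (w * h)
    it = iter(residuals)
    for _ in range(n_frames):
        f = []
        for i in range(w * h):
            spatial = f[i - 1] if i % w else 0
            temporal = prev[i]
            p = spatial if err[0] <= err[1] else temporal
            x = next(it) + p
            f.append(x)
            err[0] += abs(x - spatial)
            err[1] += abs(x - temporal)
        frames.append(f)
        prev = f
    return frames
```

Because both sides update the error tallies identically after each pixel, no side information is needed, which is the property the abstract emphasizes.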
Influence of the equation of state on the compression and heating of hydrogen
NASA Astrophysics Data System (ADS)
Tahir, N. A.; Juranek, H.; Shutov, A.; Redmer, R.; Piriz, A. R.; Temporal, M.; Varentsov, D.; Udrea, S.; Hoffmann, D. H.; Deutsch, C.; Lomonosov, I.; Fortov, V. E.
2003-05-01
This paper presents two-dimensional hydrodynamic simulations of implosion of a multilayered cylindrical target that is driven by an intense heavy ion beam which has an annular focal spot. The target consists of a hollow lead cylinder which is filled with hydrogen at one tenth of the solid density at room temperature. The beam is assumed to be made of 2.7-GeV/u uranium ions, and six different cases for the beam intensity (total number of particles in the beam, N) are considered. In each of these six cases the particles are delivered in single bunches, 20 ns long. The simulations have been carried out using the two-dimensional hydrodynamic computer code BIG-2. A multiple shock reflection scheme is employed in these calculations that leads to very high densities of the compressed hydrogen while the temperature remains relatively low. In this study we have used two different equation-of-state models for hydrogen: the SESAME data, and a model including molecular dissociation that is based on a fluid variational theory in the neutral-fluid region, replaced by a Padé approximation in the fully ionized plasma region. Our calculations show that the latter model predicts higher densities and pressures but lower temperatures compared to the SESAME model. The differences in the results are more pronounced for lower driving energies (lower beam intensities).
Load-Induced Hydrodynamic Lubrication of Porous Films.
Khosla, Tushar; Cremaldi, Joseph; Erickson, Jeffrey S; Pesika, Noshir S
2015-08-19
We present an exploratory study of the tribological properties and mechanisms of porous polymer surfaces under applied loads in aqueous media. We show how it is possible to change the lubrication regime from boundary lubrication to hydrodynamic lubrication even at relatively low shearing velocities by the addition of vertical pores to a compliant polymer. It is hypothesized that the compressed, pressurized liquid in the pores produces a repulsive hydrodynamic force as it extrudes from the pores. The presence of the fluid between two shearing surfaces results in low coefficients of friction (μ ≈ 0.31). The coefficient of friction is reduced further by using a boundary lubricant. The tribological properties are studied for a range of applied loads and shear velocities to demonstrate the potential applications of such materials in total joint replacement devices.
Hydrodynamic growth and mix experiments at National Ignition Facility
NASA Astrophysics Data System (ADS)
Smalyuk, V. A.; Caggiano, J.; Casey, D.; Cerjan, C.; Clark, D. S.; Edwards, J.; Grim, G.; Haan, S. W.; Hammel, B. A.; Hamza, A.; Hsing, W.; Hurricane, O.; Kilkenny, J.; Kline, J.; Knauer, J.; Landen, O.; McNaney, J.; Mintz, M.; Nikroo, A.; Parham, T.; Park, H.-S.; Pino, J.; Raman, K.; Remington, B. A.; Robey, H. F.; Rowley, D.; Tipton, R.; Weber, S.; Yeamans, C.
2016-03-01
Hydrodynamic growth and its effects on implosion performance and mix were studied at the National Ignition Facility (NIF). Spherical shells with pre-imposed 2D modulations were used to measure Rayleigh-Taylor (RT) instability growth in the acceleration phase of implosions using in-flight x-ray radiography. In addition, implosion performance and mix have been studied at peak compression using plastic shells filled with tritium gas and embedding a localized CD diagnostic layer in various locations in the ablator. Neutron yield and ion temperature of the DT fusion reactions were used as a measure of shell-gas mix, while neutron yield of the TT fusion reaction was used as a measure of implosion performance. The results have indicated that the low-mode hydrodynamic instabilities due to surface roughness were the primary causes of yield degradation, with atomic ablator-gas mix playing a secondary role.
Object-Based Image Compression
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.
2003-01-01
Image compression frequently supports reduced storage requirement in a computer system, as well as enhancement of effective channel bandwidth in a communication system, by decreasing the source bit rate through reduction of source redundancy. The majority of image compression techniques emphasize pixel-level operations, such as matching rectangular or elliptical sampling blocks taken from the source data stream, with exemplars stored in a database (e.g., a codebook in vector quantization or VQ). Alternatively, one can represent a source block via transformation, coefficient quantization, and selection of coefficients deemed significant for source content approximation in the decompressed image. This approach, called transform coding (TC), has predominated for several decades in the signal and image processing communities. A further technique that has been employed is the deduction of affine relationships from source properties such as local self-similarity, which supports the construction of adaptive codebooks in a self-VQ paradigm that has been called iterated function systems (IFS). Although VQ, TC, and IFS based compression algorithms have enjoyed varying levels of success for different types of applications, bit rate requirements, and image quality constraints, few of these algorithms examine the higher-level spatial structure of an image, and fewer still exploit this structure to enhance compression ratio. In this paper, we discuss a fourth type of compression algorithm, called object-based compression, which is based on research in joint segmentation and compression, as well as previous research in the extraction of sketch-like representations from digital imagery. Here, large image regions that correspond to contiguous recognizable objects or parts of objects are segmented from the source, then represented compactly in the compressed image. Segmentation is facilitated by source properties such as size, shape, texture, statistical properties, and spectral
Numerical simulations of glass impacts using smooth particle hydrodynamics
Mandell, D.A.; Wingate, C.A.
1996-05-01
As part of a program to develop advanced hydrocode design tools, we have implemented a brittle fracture model for glass into the SPHINX smooth particle hydrodynamics code. We have evaluated this model and the code by predicting data from one-dimensional flyer plate impacts into glass. Since fractured glass properties, which are needed in the model, are not available, we did sensitivity studies of these properties, as well as sensitivity studies to determine the number of particles needed in the calculations. The numerical results are in good agreement with the data. © 1996 American Institute of Physics.
Compressed bitmap indices for efficient query processing
Wu, Kesheng; Otoo, Ekow; Shoshani, Arie
2001-09-30
Many database applications make extensive use of bitmap indexing schemes. In this paper, we study how to improve the efficiencies of these indexing schemes by proposing new compression schemes for the bitmaps. Most compression schemes are designed primarily to achieve good compression. During query processing they can be orders of magnitude slower than their uncompressed counterparts. The new schemes are designed to bridge this performance gap by trading some compression effectiveness for improved operation speed. In a number of tests on both synthetic data and real application data, we found that the new schemes significantly outperform the well-known compression schemes while using only modestly more space. For example, compared to the Byte-aligned Bitmap Code, the new schemes are 12 times faster while using only 50 percent more space. The new schemes use much less space (<30 percent) than the uncompressed scheme and are faster in a majority of the test cases.
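The reason bitmap indices lend themselves to fast query processing is that predicates reduce to bitwise logical operations on the bitmaps. A minimal uncompressed sketch (toy Python, unrelated to the authors' compressed formats):

```python
def build_bitmap_index(column):
    """One bitmap per distinct value; bit r is set when row r holds that value.
    Python ints serve as arbitrary-length bit vectors here."""
    index = {}
    for row, value in enumerate(column):
        index[value] = index.get(value, 0) | (1 << row)
    return index

def rows_matching(bitmap, n_rows):
    """Expand a bitmap back into a list of matching row numbers."""
    return [r for r in range(n_rows) if bitmap >> r & 1]

col = ["a", "b", "a", "c", "b", "a"]
idx = build_bitmap_index(col)
# The query "value = 'a' OR value = 'c'" is a single bitwise OR:
hits = rows_matching(idx["a"] | idx["c"], len(col))
```

Compression schemes such as those studied in the paper must preserve exactly this property: AND/OR/NOT should run directly on the compressed representation.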
A robust coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Y. C.; Sayood, Khalid; Nelson, D. J.
1991-01-01
We present a layered packet video coding algorithm based on a progressive transmission scheme. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.
A robust coding scheme for packet video
NASA Technical Reports Server (NTRS)
Chen, Yun-Chung; Sayood, Khalid; Nelson, Don J.
1992-01-01
A layered packet video coding algorithm based on a progressive transmission scheme is presented. The algorithm provides good compression and can handle significant packet loss with graceful degradation in the reconstruction sequence. Simulation results for various conditions are presented.
Implementation of image compression for printers
NASA Astrophysics Data System (ADS)
Oka, Kenichiro; Onishi, Masaru
1992-05-01
Printers process a large quantity of data when printing. For example, printing an A3 page (297 mm X 420 mm) at 300 dpi resolution requires 17.4 million pixels, or about 66 Mbytes for a 32-bits/pixel color image composed of yellow (Y), magenta (M), cyan (C) and black components. Equipping a printer with such a large capacity of random access memory (RAM) increases both the cost and size of the memory circuits. Thus, image compression techniques are examined in this study to cope with these problems. A still-image coding scheme being standardized by JPEG (Joint Photographic Experts Group) will presumably be utilized for image communications or image databases. The JPEG scheme can compress natural images efficiently, but it is unsuitable for text or computer graphics (CG) images because it degrades the restored images. This scheme, therefore, cannot be implemented for printers, which require good image quality. We studied codings that are more suitable for printers than the JPEG scheme. Two criteria were considered in selecting a coding scheme for printers: (1) no visible degradation of input printer images and (2) capability of image editing. Especially in view of criterion (2), a fixed-length coding was adopted, so that an arbitrary pixel data code can be easily read out of an image memory. We then implemented an image coding scheme in our new sublimation full-color printer. Input image data are compressed by coding before being written into an image memory.
Turbulence in Compressible Flows
NASA Technical Reports Server (NTRS)
1997-01-01
Lecture notes for the AGARD Fluid Dynamics Panel (FDP) Special Course on 'Turbulence in Compressible Flows' have been assembled in this report. The following topics were covered: Compressible Turbulent Boundary Layers, Compressible Turbulent Free Shear Layers, Turbulent Combustion, DNS/LES and RANS Simulations of Compressible Turbulent Flows, and Case Studies of Applications of Turbulence Models in Aerospace.
New equation of state model for hydrodynamic applications
Young, D.A.; Barbee, T.W. III; Rogers, F.J.
1997-07-01
Two new theoretical methods for computing the equation of state of hot, dense matter are discussed. The ab initio phonon theory gives a first-principles calculation of lattice frequencies, which can be used to compare theory and experiment for isothermal and shock compression of solids. The ACTEX dense plasma theory has been improved to allow it to be compared directly with ultrahigh pressure shock data on low-Z materials. The comparisons with experiment are good, suggesting that these models will be useful in generating global EOS tables for hydrodynamic simulations.
Black brane entropy and hydrodynamics
Booth, Ivan; Heller, Michal P.; Spalinski, Michal
2011-03-15
Recent advances in holography have led to the formulation of fluid-gravity duality, a remarkable connection between the hydrodynamics of certain strongly coupled media and dynamics of higher dimensional black holes. This paper introduces a correspondence between phenomenologically defined entropy currents in relativistic hydrodynamics and 'generalized horizons' of near-equilibrium black objects in a dual gravitational description. A general formula is given, expressing the divergence of the entropy current in terms of geometric objects which appear naturally in the gravity dual geometry. The proposed definition is explicitly covariant with respect to boundary diffeomorphisms and reproduces known results when evaluated for the event horizon.
Abnormal pressures as hydrodynamic phenomena
Neuzil, C.E.
1995-01-01
So-called abnormal pressures, subsurface fluid pressures significantly higher or lower than hydrostatic, have excited speculation about their origin since subsurface exploration first encountered them. Two distinct conceptual models for abnormal pressures have gained currency among earth scientists. The static model sees abnormal pressures generally as relict features preserved by a virtual absence of fluid flow over geologic time. The hydrodynamic model instead envisions abnormal pressures as phenomena in which flow usually plays an important role. This paper develops the theoretical framework for abnormal pressures as hydrodynamic phenomena, shows that it explains the manifold occurrences of abnormal pressures, and examines the implications of this approach. -from Author
Transform coding for space applications
NASA Technical Reports Server (NTRS)
Glover, Daniel
1993-01-01
Data compression coding requirements for aerospace applications differ somewhat from the compression requirements for entertainment systems. On the one hand, entertainment applications are bit rate driven with the goal of getting the best quality possible with a given bandwidth. Science applications, on the other hand, are quality driven with the goal of getting the lowest bit rate for a given level of reconstruction quality. In the past, the required quality level has been nothing less than perfect, allowing only the use of lossless compression methods (if that). With the advent of better, faster, cheaper missions, an opportunity has arisen for lossy data compression methods to find a use in science applications as requirements for perfect quality reconstruction run into cost constraints. This paper presents a review of the data compression problem from the space application perspective. Transform coding techniques are described and some simple, integer transforms are presented. The application of these transforms to space-based data compression problems is discussed. Integer transforms have an advantage over conventional transforms in computational complexity. Space applications are different from broadcast or entertainment in that it is desirable to have a simple encoder (in space) and tolerate a more complicated decoder (on the ground) rather than vice versa. Energy compaction with the new transforms is compared with the Walsh-Hadamard (WHT), Discrete Cosine (DCT), and Integer Cosine (ICT) transforms.
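Of the transforms named above, the Walsh-Hadamard transform is the simplest example of an integer transform: it needs only additions and subtractions, which is exactly the low-complexity-encoder property the abstract emphasizes. A generic textbook sketch:

```python
def wht(x):
    """Fast Walsh-Hadamard transform of a sequence whose length is a power
    of two. Uses only integer adds/subtracts (butterfly structure); applying
    it twice returns the input scaled by len(x)."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                # butterfly: sum and difference of a pair
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

# Energy compaction on a flat block: all energy moves to one coefficient.
flat = wht([5, 5, 5, 5])   # -> [20, 0, 0, 0]
```

A smooth source block therefore yields mostly near-zero coefficients, which a quantizer and entropy coder can discard cheaply.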
NASA Astrophysics Data System (ADS)
Henestroza, Enrique; Logan, B. Grant; Perkins, L. John
2011-03-01
The HYDRA radiation-hydrodynamics code [M. M. Marinak et al., Phys. Plasmas 8, 2275 (2001)] is used to explore one-sided axial target illumination with annular and solid-profile uranium ion beams at 60 GeV to compress and ignite deuterium-tritium fuel filling the volume of metal cases with cross sections in the shape of an "X" (X-target). Quasi-three-dimensional, spherical fuel compression of the fuel toward the X-vertex on axis is obtained by controlling the geometry of the case, the timing, power, and radii of three annuli of ion beams for compression, and the hydroeffects of those beams heating the case as well as the fuel. Scaling projections suggest that this target may be capable of assembling large fuel masses resulting in high fusion yields at modest drive energies. Initial two-dimensional calculations have achieved fuel compression ratios of up to 150X solid density, with an areal density ρR of about 1 g/cm2. At these currently modest fuel densities, fast ignition pulses of 3 MJ, 60 GeV, 50 ps, and radius of 300 μm are injected through a hole in the X-case on axis to further heat the fuel to propagating burn conditions. The resulting burn waves are observed to propagate throughout the tamped fuel mass, with fusion yields of about 300 MJ. Tamping is found to be important, but radiation drive to be unimportant, to the fuel compression. Rayleigh-Taylor instability mix is found to have a minor impact on ignition and subsequent fuel burn-up.
Perceptually lossy compression of documents
NASA Astrophysics Data System (ADS)
Beretta, Giordano B.; Bhaskaran, Vasudev; Konstantinides, Konstantinos; Natarajan, Balas R.
1997-06-01
The main cost of owning a facsimile machine consists of the telephone charges for the communications, thus short transmission times are a key feature for facsimile machines. Similarly, on a packet-routed service such as the Internet, a low number of packets is essential to avoid operator wait times. Concomitantly, user expectations have increased considerably. In facsimile, the switch from binary to full color increases the data size by a factor of 24. On the Internet, the switch from plain text American Standard Code for Information Interchange (ASCII) encoded files to files marked up in the Hypertext Markup Language (HTML) with ample embedded graphics has increased the size of transactions by several orders of magnitude. A common compression method for raster files in these applications is the Joint Photographic Experts Group (JPEG) method, because efficient implementations are readily available. In this method the implementors design the discrete quantization tables (DQT) and the Huffman tables (HT) to maximize the compression factor while maintaining the introduced artifacts at the threshold of perceptual detectability. Unfortunately the achieved compression rates are unsatisfactory for applications such as color facsimile and World Wide Web (W3) browsing. We present a design methodology for image-independent DQTs that, while producing perceptually lossy data, do not impair the reading performance of users. Combined with a text sharpening algorithm that compensates for scanning device limitations, the methodology presented in this paper allows us to achieve compression ratios near 1:100.
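The DQT trade-off described above — coarser quantization steps buy compression at the price of artifacts — can be illustrated with the generic quantize/dequantize step used in JPEG-style coders. The table values below are hypothetical, not the paper's tables:

```python
def quantize(coeffs, table, scale=1.0):
    """Divide each transform coefficient by its (scaled) quantization step
    and round to the nearest integer. Larger `scale` -> coarser steps ->
    smaller integers to entropy-code, but larger reconstruction error."""
    steps = [max(1.0, round(t * scale)) for t in table]
    return [round(c / s) for c, s in zip(coeffs, steps)], steps

def dequantize(qcoeffs, steps):
    """Decoder-side inverse: multiply back by the same steps."""
    return [q * s for q, s in zip(qcoeffs, steps)]

# Made-up coefficients and a made-up 4-entry table:
qc, steps = quantize([100, 20, 10, 4], [16, 11, 12, 14])
approx = dequantize(qc, steps)   # lossy reconstruction
```

Designing the table is choosing `steps` per frequency so that the error stays below perceptual (here, reading-performance) thresholds.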
A programmable image compression system
NASA Technical Reports Server (NTRS)
Farrelle, Paul M.
1989-01-01
A programmable image compression system which has the necessary flexibility to address diverse imaging needs is described. It can compress and expand single frame video images (monochrome or color) as well as documents and graphics (black and white or color) for archival or transmission applications. Through software control, the compression mode can be set for lossless or controlled quality coding; the image size and bit depth can be varied; and the image source and destination devices can be readily changed. Despite the large combination of image data types, image sources, and algorithms, the system provides a simple consistent interface to the programmer. This system (OPTIPAC) is based on the TI TMS320C25 digital signal processing (DSP) chip and has been implemented as a co-processor board for an IBM PC-AT compatible computer. The underlying philosophy can readily be applied to different hardware platforms. By using multiple DSP chips or incorporating algorithm specific chips, the compression and expansion times can be significantly reduced to meet performance requirements.
Fluctuating Hydrodynamics Confronts the Rapidity Dependence of Transverse Momentum Fluctuations
NASA Astrophysics Data System (ADS)
Pokharel, Rajendra; Gavin, Sean; Moschelli, George
2012-10-01
Interest in the development of the theory of fluctuating hydrodynamics is growing [1]. Early efforts suggested that viscous diffusion broadens the rapidity dependence of transverse momentum correlations [2]. That work stimulated an experimental analysis by STAR [3]. We attack this new data along two fronts. First, we compute STAR's fluctuation observable using the NeXSPheRIO code, which combines fluctuating initial conditions from a string fragmentation model with deterministic viscosity-free hydrodynamic evolution. We find that NeXSPheRIO produces a longitudinal narrowing, in contrast to the data. Second, we study the hydrodynamic evolution using second order causal viscous hydrodynamics including Langevin noise. We obtain a deterministic evolution equation for the transverse momentum density correlation function. We use the latest theoretical equations of state and transport coefficients to compute STAR's observable. The results are in excellent accord with the measured broadening. In addition, we predict features of the distribution that can distinguish second- and first-order diffusion. [1] J. Kapusta, B. Mueller, M. Stephanov, arXiv:1112.6405 [nucl-th]. [2] S. Gavin and M. Abdel-Aziz, Phys. Rev. Lett. 97, 162302 (2006). [3] H. Agakishiev et al. (STAR), Phys. Lett. B704
Study and simulation of low rate video coding schemes
NASA Technical Reports Server (NTRS)
Sayood, Khalid; Chen, Yun-Chung; Kipp, G.
1992-01-01
The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design.
Hydrodynamic slip in silicon nanochannels
NASA Astrophysics Data System (ADS)
Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.
2016-03-01
Equilibrium and nonequilibrium molecular dynamics simulations were performed to better understand the hydrodynamic behavior of water flowing through silicon nanochannels. The water-silicon interaction potential was calibrated by means of size-independent molecular dynamics simulations of silicon wettability. The wettability of silicon was found to be dependent on the strength of the water-silicon interaction and the structure of the underlying surface. As a result, the anisotropy was found to be an important factor in the wettability of these types of crystalline solids. Using this premise as a fundamental starting point, the hydrodynamic slip in nanoconfined water was characterized using both equilibrium and nonequilibrium calculations of the slip length under low shear rate operating conditions. As was the case for the wettability analysis, the hydrodynamic slip was found to be dependent on the wetted solid surface atomic structure. Additionally, the interfacial water liquid structure was the most significant parameter to describe the hydrodynamic boundary condition. The calibration of the water-silicon interaction potential performed by matching the experimental contact angle of silicon led to the verification of the no-slip condition, experimentally reported for silicon nanochannels at low shear rates.
Topics in fluctuating nonlinear hydrodynamics
Milner, S.T.
1986-01-01
Models of fluctuating nonlinear hydrodynamics have enjoyed much success in explaining the effect of long-wavelength fluctuations in diverse hydrodynamic systems. This thesis explores two such problems; in both, the body of hydrodynamic assumptions powerfully constrains the predictions of a well-posed theory. The effects of layer fluctuations in smectic-A liquid crystals are examined first. The static theory (introduced by Grinstein and Pelcovits) is reviewed. Ward identities, resulting from the arbitrariness of the layering direction, are derived and exploited. The static results motivate an examination of dynamic fluctuation effects. A new sound-damping experiment is proposed that would probe the singular dependence of viscosities on applied stress. A theory of Procaccia and Gitterman (PG) that reaction rates of chemically reacting binary mixtures are drastically reduced near their thermodynamic critical points is then analyzed. Hydrodynamic arguments and Van Hove theory are applied, concluding that the proposed PG reduction does not occur as claimed: spatially varying composition fluctuations are at best slowed down over a narrow range of wavenumbers.
Hydrodynamical approach to transport in nanostructures
NASA Astrophysics Data System (ADS)
D'Agosta, Roberto; di Ventra, Massimiliano
2006-03-01
The electrical resistance induced by the viscous properties of the electron liquid has been recently derived.^1 In addition, it is known that the geometric constriction experienced by electrons flowing in a nanostructure gives rise to a fast ``collisional'' process.^2 These facts allow us to derive Navier-Stokes-type equations, and therefore to describe the electron flow on a par with a viscous and compressible liquid. By using this hydrodynamical approach we study electron transport in nanoscale systems and derive the conditions for the transition from laminar to turbulent flow in quantum point contacts. We also discuss possible experimental tests of these predictions. ^1 N. Sai, M. Zwolak, G. Vignale, and M. Di Ventra, Phys. Rev. Lett. 94, 186810 (2005). ^2 M. Di Ventra and T.N. Todorov, J. Phys. Cond. Matt. 16, 8025 (2004); N. Bushong, N. Sai, and M. Di Ventra, Nano Lett. (in press). Work supported by the Department of Energy (DE-FG02-05ER46204)
Hydrodynamics of diatom chains and semiflexible fibres.
Nguyen, Hoa; Fauci, Lisa
2014-07-01
Diatoms are non-motile, unicellular phytoplankton that have the ability to form colonies in the form of chains. Depending upon the species of diatoms and the linking structures that hold the cells together, these chains can be quite stiff or very flexible. Recently, the bending rigidities of some species of diatom chains have been quantified. In an effort to understand the role of flexibility in nutrient uptake and aggregate formation, we begin by developing a three-dimensional model of the coupled elastic-hydrodynamic system of a diatom chain moving in an incompressible fluid. We find that simple beam theory does a good job of describing diatom chain deformation in a parabolic flow when its ends are tethered, but does not tell the whole story of chain deformations when they are subjected to compressive stresses in shear. While motivated by the fluid dynamics of diatom chains, our computational model of semiflexible fibres illustrates features that apply widely to other systems. The use of an adaptive immersed boundary framework allows us to capture complicated buckling and recovery dynamics of long, semiflexible fibres in shear. PMID:24789565
Hydrodynamic Simulations of Gaseous Argon Shock Experiments
NASA Astrophysics Data System (ADS)
Garcia, Daniel; Dattelbaum, Dana; Goodwin, Peter; Morris, John; Sheffield, Stephen; Burkett, Michael
2015-06-01
The lack of published Argon gas shock data motivated an evaluation of the Argon Equation of State (EOS) in gas phase initial density regimes never before reached. In particular, these regimes include initial pressures in the range of 200-500 psi (0.025-0.056 g/cc) and initial shock velocities around 0.2 cm/μs. The objective of the numerical evaluation was to develop a physical understanding of the EOS behavior of shocked and subsequently multiply re-shocked Argon gas initially pressurized to 200-500 psi through Pagosa numerical hydrodynamic simulations utilizing the SESAME equation of state. Pagosa is a Los Alamos National Laboratory 2-D and 3-D Eulerian hydrocode capable of modeling high velocity compressible flow with multiple materials. The approach involved the use of gas gun experiments to evaluate the shock and multiple re-shock behavior of pressurized Argon gas to validate Pagosa simulations and the SESAME EOS. Additionally, the diagnostic capability within the experiments allowed for the EOS to be fully constrained with measured shock velocity, particle velocity and temperature. The simulations demonstrate excellent agreement with the experiments in the shock velocity/particle velocity space, but note unanticipated differences in the ionization front temperatures.
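The measured shock velocity/particle velocity pairs mentioned above constrain the EOS through the Rankine-Hugoniot jump conditions for a single steady shock. A minimal sketch (the numbers are illustrative values in the abstract's stated regime, not measured data):

```python
def hugoniot_state(rho0, us, up, p0=0.0):
    """Rankine-Hugoniot jump conditions across a steady shock:
      mass:      rho = rho0 * us / (us - up)
      momentum:  p   = p0 + rho0 * us * up
    With rho0 in g/cc and velocities in cm/us, pressure comes out in Mbar."""
    rho = rho0 * us / (us - up)
    p = p0 + rho0 * us * up
    return rho, p

# Hypothetical argon-like state: 500 psi fill (~0.056 g/cc), Us = 0.2 cm/us,
# and an assumed particle velocity of 0.15 cm/us.
rho, p = hugoniot_state(rho0=0.056, us=0.2, up=0.15)
```

Given measured (Us, up) pairs for the first shock and each re-shock, relations like these pin down points on the Hugoniot that a SESAME table must reproduce.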
An image compression technique for use on token ring networks
NASA Astrophysics Data System (ADS)
Gorjala, B.; Sayood, Khalid; Meempat, G.
1992-12-01
A low complexity technique for compression of images for transmission over local area networks is presented. The technique uses the synchronous traffic as a side channel for improving the performance of an adaptive differential pulse code modulation (ADPCM) based coder.
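The differential pulse code modulation core of such a coder transmits only the quantized difference between each sample and the decoder's running reconstruction. A toy sketch with a fixed quantizer step (a real ADPCM coder, like the one in the abstract, adapts the step):

```python
def adpcm_encode(samples, step=4):
    """Toy differential coder: emit round((sample - prediction) / step),
    where the prediction is the decoder's previous reconstruction, so
    encoder and decoder stay in lock-step."""
    recon = 0
    codes, recon_track = [], []
    for s in samples:
        code = round((s - recon) / step)  # quantized prediction residual
        recon += code * step              # what the decoder reconstructs
        codes.append(code)
        recon_track.append(recon)
    return codes, recon_track

codes, recon = adpcm_encode([9, 12, 20])
```

The small residual codes are what get entropy-coded; the side-channel idea in the abstract concerns how those bits are carried over the token ring, not this core loop.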
An Efficient Variable-Length Data-Compression Scheme
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Kiely, Aaron B.
1996-01-01
Adaptive variable-length coding scheme for compression of stream of independent and identically distributed source data involves either Huffman code or alternating run-length Huffman (ARH) code, depending on characteristics of data. Enables efficient compression of output of lossless or lossy precompression process, with speed and simplicity greater than those of older coding schemes developed for same purpose. In addition, scheme suitable for parallel implementation on hardware with modular structure, provides for rapid adaptation to changing data source, compatible with block orientation to alleviate memory requirements, ensures efficiency over wide range of entropy, and easily combined with such other communication schemes as those for containment of errors and for packetization.
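The Huffman half of the scheme can be sketched with the generic textbook construction below; the alternating run-length Huffman (ARH) variant and the adaptive selection between the two codes are not shown:

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman codebook by repeatedly merging the two least
    frequent subtrees; each merge prefixes '0'/'1' to the member codes."""
    freq = Counter(data)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tie = len(heap)  # tiebreaker so the code dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

code = huffman_code("aaabbc")  # frequent symbols get shorter codewords
```

Frequent symbols receive short codewords, which is why the scheme's efficiency tracks the source entropy.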
Multi-dimensional computer simulation of MHD combustor hydrodynamics
Berry, G.F.; Chang, S.L.; Lottes, S.A.; Rimkus, W.A.
1991-04-04
Argonne National Laboratory is investigating the nonreacting jet-gas mixing patterns in an MHD second stage combustor by using a two-dimensional multi-phase hydrodynamics computer program and a three-dimensional single-phase hydrodynamics computer program. The computer simulations are intended to enhance the understanding of flow and mixing patterns in the combustor, which in turn may lead to improvement of the downstream MHD channel performance. A two-dimensional steady state computer model, based on mass and momentum conservation laws for multiple gas species, is used to simulate the hydrodynamics of the combustor in which a jet of oxidizer is injected into an unconfined cross-stream gas flow. A three-dimensional code is used to examine the effects of the side walls and the distributed jet flows on the non-reacting jet-gas mixing patterns. The code solves the conservation equations of mass, momentum, and energy, and a transport equation of a turbulence parameter and allows permeable surfaces to be specified for any computational cell. 17 refs., 25 figs.
Calibrating an updated smoothed particle hydrodynamics scheme within gcd+
NASA Astrophysics Data System (ADS)
Kawata, D.; Okamoto, T.; Gibson, B. K.; Barnes, D. J.; Cen, R.
2013-01-01
We adapt a modern scheme of smoothed particle hydrodynamics (SPH) to our tree N-body/SPH galactic chemodynamics code gcd+. The applied scheme includes implementations of the artificial viscosity switch and artificial thermal conductivity proposed by Morris & Monaghan, Rosswog & Price and Price to model discontinuities and Kelvin-Helmholtz instabilities more accurately. We first present hydrodynamics test simulations and contrast the results to runs undertaken without artificial viscosity switch or thermal conduction. In addition, we also explore the different levels of smoothing by adopting larger or smaller smoothing lengths, i.e. a larger or smaller number of neighbour particles, Nnb. We demonstrate that the new version of gcd+ is capable of modelling Kelvin-Helmholtz instabilities to a similar level as the mesh code, athena. From the Gresho vortex, point-like explosion and self-similar collapse tests, we conclude that setting the smoothing length to keep Nnb as high as ˜58 is preferable to adopting smaller smoothing lengths. We present our optimized parameter sets from the hydrodynamics tests.
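SPH smoothing as discussed above rests on a compact-support kernel evaluated over the Nnb neighbour particles; a common choice is the M4 cubic spline. This is a generic SPH ingredient shown for illustration, not code from gcd+:

```python
import math

def cubic_spline_w(r, h):
    """M4 cubic-spline SPH kernel in 3-D (standard normalization 1/(pi h^3)).
    Compact support: the kernel vanishes for r >= 2h, so each particle
    interacts only with its ~Nnb nearest neighbours."""
    q = r / h
    sigma = 1.0 / (math.pi * h ** 3)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0
```

Choosing a larger smoothing length h (equivalently, a larger Nnb, such as the ~58 favoured in the abstract) averages over more neighbours: smoother fields, less particle noise, at the cost of spatial resolution.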
NASA Technical Reports Server (NTRS)
Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are (1) Show a plan for using uplink coding and describe benefits (2) Define possible solutions and their applicability to different types of uplink, including emergency uplink (3) Concur with our conclusions so we can embark on a plan to use proposed uplink system (4) Identify the need for the development of appropriate technology and infusion in the DSN (5) Gain advocacy to implement uplink coding in flight projects Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).
Compressing bitmap indexes for faster search operations
Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie
2002-04-25
In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using the general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code (BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed.
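The word-aligned idea behind WAH can be caricatured as follows: the bitmap is cut into (w-1)-bit groups, runs of identical all-0 or all-1 groups collapse into a single counted "fill", and mixed groups are kept literally. This is a greatly simplified sketch of the published scheme; real WAH packs fills and literals into actual machine words:

```python
def wah_compress(bits, w=32):
    """Toy word-aligned run-length coder over a '0'/'1' string.
    Returns a list of ('fill', bit, run_length) and ('lit', group, 1)
    entries, one per (w-1)-bit group or run of identical groups."""
    g = w - 1
    groups = [bits[i:i + g].ljust(g, "0") for i in range(0, len(bits), g)]
    words = []
    for grp in groups:
        uniform = grp in (g * "0", g * "1")
        if uniform and words and words[-1][:2] == ("fill", grp[0]):
            kind, bit, n = words[-1]
            words[-1] = (kind, bit, n + 1)      # extend the current run
        elif uniform:
            words.append(("fill", grp[0], 1))   # start a new fill word
        else:
            words.append(("lit", grp, 1))       # mixed group stored literally
    return words

# 62 zero bits then '101': two all-zero groups fuse into one fill word.
words = wah_compress("0" * 62 + "101", w=32)
```

Because fills and literals stay word-aligned, AND/OR can walk two compressed lists directly, run by run, without decompressing, which is the source of WAH's speed advantage over byte-aligned or gzip-style schemes.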
An Analog Processor for Image Compression
NASA Technical Reports Server (NTRS)
Tawel, R.
1992-01-01
This paper describes a novel analog Vector Array Processor (VAP) that was designed for use in real-time and ultra-low power image compression applications. This custom CMOS processor is based architecturally on the Vector Quantization (VQ) algorithm in image coding, and the hardware implementation fully exploits the inherent parallelism built into the VQ algorithm.
Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei
2009-03-01
Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
Volume transport and generalized hydrodynamic equations for monatomic fluids.
Eu, Byung Chan
2008-10-01
In this paper, the effects of volume transport on the generalized hydrodynamic equations for a pure simple fluid are examined from the standpoint of statistical mechanics and, in particular, the kinetic theory of fluids. First, we derive the generalized hydrodynamic equations, namely, the constitutive equations for the stress tensor and heat flux for a single-component monatomic fluid, from the generalized Boltzmann equation in the presence of volume transport. Then their linear steady-state solutions are derived and examined with regard to the effects of volume transport on them. The generalized hydrodynamic equations and linear constitutive relations obtained for nonconserved variables make it possible to assess Brenner's proposition [Physica A 349, 11 (2005); Physica A 349, 60 (2005)] for volume transport and attendant mass and volume velocities, as well as the effects of volume transport on the Newtonian law of viscosity, compression/dilatation (bulk viscosity) phenomena, and Fourier's law of heat conduction. On the basis of this study, it is concluded that the notion of volume transport is sufficiently significant to retain in the irreversible thermodynamics of fluids and fluid mechanics.
Optimality Of Variable-Length Codes
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.
1994-01-01
Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.
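The two-stage structure described above, a predictive preprocessor producing nonnegative integers followed by an adaptive variable-length coder, can be sketched with a Golomb-Rice coder. This is an illustrative reconstruction under assumed conventions, not the coder specified in the report; the `zigzag` mapping is one common choice for turning signed prediction residuals into nonnegative integers:

```python
def zigzag(d):
    # Map a signed prediction residual to a nonnegative integer
    # (an assumed, commonly used preprocessor mapping).
    return 2 * d if d >= 0 else -2 * d - 1

def rice_encode(n, k):
    # Unary-coded quotient (q ones, then a zero) followed by the
    # k low-order remainder bits of n.
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, k):
    # Invert rice_encode: count the unary prefix, then read k bits.
    q = bits.index("0")
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

When the data's entropy matches the chosen parameter k, the unary quotient stays short; an adaptive coder of the kind analyzed in the report would select among several such codes per block.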
Hydrodynamic Simulations of Close and Contact Binary Systems using Bipolytropes
NASA Astrophysics Data System (ADS)
Kadam, Kundan
2016-01-01
I will present the results of hydrodynamic simulations of close and contact bipolytropic binary systems. This project is motivated by the peculiar case of the red nova, V1309 Sco, which is indeed a merger of a contact binary. Both the stars are believed to have evolved off the main sequence by the time of the merger and possess a small helium core. In order to represent the binary accurately, I need a core-envelope structure for both the stars. I have achieved this using bipolytropes or composite polytropes. For the simulations, I use an explicit 3D Eulerian hydrodynamics code in cylindrical coordinates. I will discuss the evolution and merger scenarios of systems with different mass ratios and core mass fractions as well as the effects due to the treatment of the adiabatic exponent.
Development and Implementation of Radiation-Hydrodynamics Verification Test Problems
Marcath, Matthew J.; Wang, Matthew Y.; Ramsey, Scott D.
2012-08-22
Analytic solutions to the radiation-hydrodynamic equations are useful for verifying any large-scale numerical simulation software that solves the same set of equations. The one-dimensional, spherically symmetric Coggeshall No. 9 and No. 11 analytic solutions, cell-averaged over a uniform grid, have been developed to analyze the corresponding solutions from the Los Alamos National Laboratory Eulerian Applications Project radiation-hydrodynamics code xRAGE. These Coggeshall solutions have been shown to be independent of heat conduction, providing a unique opportunity for comparison with xRAGE solutions with and without the heat conduction module. Solution convergence was analyzed based on radial step size. Since no shocks are involved in either problem and the solutions are smooth, second-order convergence was expected for both cases. The global L1 errors were used to estimate the convergence rates with and without the heat conduction module implemented.
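The convergence-rate estimate described above can be sketched as follows; the error norm and refinement factor are generic placeholders, not the exact xRAGE diagnostics:

```python
import math

def l1_error(numeric, exact, cell_volumes):
    # Global (volume-weighted) L1 error between a numerical solution
    # and the cell-averaged analytic one.
    num = sum(v * abs(a - b) for a, b, v in zip(numeric, exact, cell_volumes))
    return num / sum(cell_volumes)

def observed_order(err_coarse, err_fine, refinement=2.0):
    # Observed convergence rate from global errors on two grids whose
    # radial step sizes differ by the factor `refinement`.
    return math.log(err_coarse / err_fine) / math.log(refinement)
```

With second-order convergence, halving the radial step should reduce the global L1 error by roughly a factor of four, giving an observed order near 2.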
Interframe vector wavelet coding technique
NASA Astrophysics Data System (ADS)
Wus, John P.; Li, Weiping
1997-01-01
Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients, which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur when using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very-low-bit-rate coding system is proposed. A coding system using a simple FSVQ scheme, in which the current state is determined only by the previous channel symbol, is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the parent-child relationship inherent in subband analysis. Class A and Class B video sequences from the MPEG-4 testing evaluations are used in the evaluation of this coding method.
3D MHD Simulations of Spheromak Compression
NASA Astrophysics Data System (ADS)
Stuber, James E.; Woodruff, Simon; O'Bryan, John; Romero-Talamas, Carlos A.; Darpa Spheromak Team
2015-11-01
The adiabatic compression of compact tori could lead to a compact and hence low-cost fusion energy system. The critical scientific issues in spheromak compression relate both to confinement properties and to the stability of the configuration undergoing compression. We present results from the NIMROD code modified with the addition of magnetic field coils that allow us to examine the role of rotation on the stability and confinement of the spheromak (extending prior work for the FRC). We present results from a scan in initial rotation, from 0 to 100 km/s. We show that strong rotational shear (10 km/s over 1 cm) occurs. We compare the simulation results with analytic scaling relations for adiabatic compression. Work performed under DARPA grant N66001-14-1-4044.
Universal lossless compression algorithm for textual images
NASA Astrophysics Data System (ADS)
al Zahir, Saif
2012-03-01
In recent years, an unparalleled volume of textual information has been transported over the Internet via email, chatting, blogging, tweeting, digital libraries, and information retrieval systems. As the volume of text data has now exceeded 40% of the total volume of traffic on the Internet, compressing textual data becomes imperative. Many sophisticated algorithms have been introduced and employed for this purpose, including Huffman encoding, arithmetic encoding, the Ziv-Lempel family, Dynamic Markov Compression, and the Burrows-Wheeler Transform. This research presents a novel universal algorithm for compressing textual images. The algorithm comprises two parts: 1. a universal fixed-to-variable codebook; and 2. our row and column elimination coding scheme. Simulation results on a large number of Arabic, Persian, and Hebrew textual images show that this algorithm has a compression ratio of nearly 87%, which exceeds published results, including those for JBIG2.
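The row-and-column elimination idea can be illustrated with a minimal sketch: drop all-background rows and columns of a binary textual image while recording flag vectors so the layout can be restored. The function below is an assumed, simplified flavor of such a scheme, not the published algorithm:

```python
def row_col_eliminate(image):
    # Flag rows/columns that contain any foreground (nonzero) pixel;
    # all-background rows/columns need not be stored explicitly.
    row_keep = [any(row) for row in image]
    col_keep = [any(col) for col in zip(*image)]
    # The surviving "core" plus the two flag vectors suffice to
    # reconstruct the original image losslessly.
    core = [[v for v, keep in zip(row, col_keep) if keep]
            for row, rk in zip(image, row_keep) if rk]
    return row_keep, col_keep, core
```

The core image and the two flag vectors are then candidates for entropy coding with the fixed-to-variable codebook.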
Anomalous hydrodynamics of fractional quantum Hall states
Wiegmann, P.
2013-09-15
We propose a comprehensive framework for quantum hydrodynamics of the fractional quantum Hall (FQH) states. We suggest that the electronic fluid in the FQH regime can be phenomenologically described by the quantized hydrodynamics of vortices in an incompressible rotating liquid. We demonstrate that such hydrodynamics captures all major features of FQH states, including the subtle effect of the Lorentz shear stress. We present a consistent quantization of the hydrodynamics of an incompressible fluid, providing a powerful framework to study the FQH effect and superfluids. We obtain the quantum hydrodynamics of the vortex flow by quantizing the Kirchhoff equations for vortex dynamics.
Microbunching and RF Compression
Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.
2010-05-23
Velocity bunching (or RF compression) represents a promising technique, complementary to magnetic compression, for achieving the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations, which together provide a useful tool for evaluating compression schemes for FEL sources.
Supernova-relevant hydrodynamic instability experiments on the Nova laser
Kane, J.; Arnett, D.; Remington, B. A.; Glendinning, S. G.; Wallace, R.; Managan, R.; Rubenchik, A.; Fryxell, B. A.
1997-04-15
Observations of Supernova 1987A suggest that hydrodynamic instabilities play a critical role in the evolution of supernovae. To test the modeling of these instabilities, and to study instability issues which are difficult to model, we are developing laboratory experiments of hydrodynamic mixing under conditions relevant to supernovae. We use the Nova laser to generate a 10-15 Mbar shock at the interface between an 85 μm thick layer of Cu and a 500 μm layer of CH₂; our first target is planar. We impose a single-mode sinusoidal material perturbation at the interface with λ = 200 μm, η₀ = 20 μm, causing perturbation growth by the RM instability as the shock accelerates the interface, and by the RT instability as the interface decelerates. This resembles the hydrodynamics of the He-H interface of a Type II supernova at intermediate times, up to a few ×10³ s. We use the supernova code PROMETHEUS and the hydrodynamics codes HYADES and CALE to model the experiment. We are designing further experiments to compare results for 2D vs. 3D single-mode perturbations; high-resolution 3D modeling requires prohibitive time and computing resources, but we can perform and study 3D experiments as easily as 2D experiments. Low-resolution simulations suggest that the perturbations grow 50% faster in 3D than in 2D; such a difference may help explain the high observed velocities of radioactive core material in SN1987A. We present the results of the experiments and simulations.
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Parashar, Manish; Zabusky, Norman
2001-11-01
We merge the PPM compressible algorithm (VH-1; J. M. Blondin and J. Hawley, Virginia Hydrodynamics Code, http://wonka.physics.ncsu.edu/pub/VH-1/index.html) with the new Grid Adaptive Computation Engine (GrACE; M. Parashar, Grid Adaptive Computational Engine, 2001, http://www.caip.rutgers.edu/~parashar/TASSL/Projects/GrACE/Gmain.html). The latter environment uses the Berger-Oliger AMR algorithm and has many high-performance computation features such as data parallelism and data and computation locality. We discuss the performance (scaling) resulting from examining the space of four parameters: top coarse-level resolution, number of refinement levels, number of processors, and duration of calculation. We validate the new code by applying it to the 2D shock-curtain interaction problem (N. J. Zabusky and S. Zhang, "Shock - planar curtain interactions in 2D: Emergence of vortex double layers, vortex projectiles and decaying stratified turbulence," revised submitted to Physics of Fluids, July 2001). We discuss the visualization and quantification of AMR data sets.
Hildebrand, Richard J.; Wozniak, John J.
2001-01-01
A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.
Models of Jupiter's growth incorporating thermal and hydrodynamic constraints
NASA Astrophysics Data System (ADS)
Lissauer, Jack J.; Hubickyj, Olenka; D'Angelo, Gennaro; Bodenheimer, Peter
2009-02-01
We model the growth of Jupiter via core nucleated accretion, applying constraints from hydrodynamical processes that result from the disk-planet interaction. We compute the planet's internal structure using a well tested planetary formation code that is based upon a Henyey-type stellar evolution code. The planet's interactions with the protoplanetary disk are calculated using 3-D hydrodynamic simulations. Previous models of Jupiter's growth have taken the radius of the planet to be approximately one Hill sphere radius, R. However, 3-D hydrodynamic simulations show that only gas within ˜0.25R remains bound to the planet, with the more distant gas eventually participating in the shear flow of the protoplanetary disk. Therefore in our new simulations, the planet's outer boundary is placed at the location where gas has the thermal energy to reach the portion of the flow not bound to the planet. We find that the smaller radius increases the time required for planetary growth by ˜5%. Thermal pressure limits the rate at which a planet less than a few dozen times as massive as Earth can accumulate gas from the protoplanetary disk, whereas hydrodynamics regulates the growth rate for more massive planets. Within a moderately viscous disk, the accretion rate peaks when the planet's mass is about equal to the mass of Saturn. In a less viscous disk hydrodynamical limits to accretion are smaller, and the accretion rate peaks at lower mass. Observations suggest that the typical lifetime of massive disks around young stellar objects is ˜3 Myr. To account for the dissipation of such disks, we perform some of our simulations of Jupiter's growth within a disk whose surface gas density decreases on this timescale. In all of the cases that we simulate, the planet's effective radiating temperature rises to well above 1000 K soon after hydrodynamic limits begin to control the rate of gas accretion and the planet's distended envelope begins to contract. According to our simulations
MAFCO: A Compression Tool for MAF Files
Matos, Luís M. O.; Neves, António J. R.; Pratas, Diogo; Pinho, Armando J.
2015-01-01
In the last decade, the cost of genomic sequencing has been decreasing so much that researchers all over the world accumulate huge amounts of data for present and future use. These genomic data need to be efficiently stored, because storage cost is not decreasing as fast as the cost of sequencing. In order to overcome this problem, the most popular general-purpose compression tool, gzip, is usually used. However, general-purpose tools were not specifically designed to compress this kind of data and often fall short when the intention is to reduce the data size as much as possible. There are several compression algorithms available, even for genomic data, but very few have been designed to deal with Whole Genome Alignments, which contain alignments between entire genomes of several species. In this paper, we present a lossless compression tool, MAFCO, specifically designed to compress MAF (Multiple Alignment Format) files. Compared to gzip, the proposed tool attains a compression gain from 34% to 57%, depending on the data set. When compared to a recent dedicated method, which is not compatible with some data sets, the compression gain of MAFCO is about 9%. Both source code and binaries for several operating systems are freely available for non-commercial use at: http://bioinformatics.ua.pt/software/mafco. PMID:25816229
Compressible turbulent mixing: Effects of compressibility
NASA Astrophysics Data System (ADS)
Ni, Qionglin
2016-04-01
We studied by numerical simulations the effects of compressibility on passive scalar transport in stationary compressible turbulence. The turbulent Mach number varied from zero to unity, and the driving forcings differed in the magnitude ratio of compressive to solenoidal modes. In the inertial range, the scalar spectrum followed the k^(-5/3) scaling and suffered negligible influence from the compressibility. The growth of the Mach number showed (1) a first reduction and second enhancement in the transfer of scalar flux; (2) an increase in the skewness and flatness of the scalar derivative and a decrease in the mixed skewness and flatness of the velocity-scalar derivatives; (3) a first stronger and second weaker intermittency of the scalar relative to that of the velocity; and (4) an increase in the intermittency parameter, which measures the intermittency of the scalar in the dissipative range. Furthermore, the growth of the compressive mode of forcing indicated (1) a decrease in the intermittency parameter and (2) less efficiency in enhancing scalar mixing. The visualization of scalar dissipation showed that, in the solenoidal-forced flow, the field was filled with small-scale, highly convoluted structures, while in the compressive-forced flow, the field exhibited regions dominated by the large-scale motions of rarefaction and compression.
An efficient compression scheme for bitmap indices
Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie
2004-04-13
When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are appropriate not only for low-cardinality attributes but also for high-cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices, and B-tree indices. In addition, we also verified that the average query response time
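The word-aligned idea can be sketched in a much-simplified form, assuming 32-bit words that each hold 31 bitmap bits; details such as counter width and tail-group handling differ in the actual WAH scheme:

```python
def wah_encode(bits, w=32):
    g = w - 1  # each word stores w-1 bitmap bits
    groups = [bits[i:i + g] for i in range(0, len(bits), g)]
    words, i = [], 0
    while i < len(groups):
        grp = groups[i]
        if grp == "0" * g or grp == "1" * g:
            # Fill word: MSB set, next bit is the fill value, low
            # bits count how many identical groups the run covers.
            run = 1
            while i + run < len(groups) and groups[i + run] == grp:
                run += 1
            words.append((1 << (w - 1)) | (int(grp[0]) << (w - 2)) | run)
            i += run
        else:
            # Literal word: MSB clear, group stored verbatim
            # (a short tail group is zero-padded here for simplicity).
            words.append(int(grp.ljust(g, "0"), 2))
            i += 1
    return words
```

Because runs collapse into single fill words while mixed groups stay literal, logical AND/OR can operate on whole words at a time, which is the source of the CPU-friendliness discussed above.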
Kubilius, Jonas
2014-01-01
Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but is not tailored to researchers. In comparison, OSF offers a one-stop solution for researchers, but much of its functionality is still under development. I conclude by listing alternative, lesser-known tools for code and materials sharing.
Dynamic code block size for JPEG 2000
NASA Astrophysics Data System (ADS)
Tsai, Ping-Sing; LeCornec, Yann
2008-02-01
Since the standardization of the JPEG 2000, it has found its way into many different applications such as DICOM (digital imaging and communication in medicine), satellite photography, military surveillance, digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical high quality real-time compression possible even in video mode, i.e. motion JPEG 2000. In this paper, we present a study of the compression impact using dynamic code block size instead of fixed code block size as specified in the JPEG 2000 standard. The simulation results show that there is no significant impact on compression if dynamic code block sizes are used. In this study, we also unveil the advantages of using dynamic code block sizes.
Particle hydrodynamics with tessellation techniques
NASA Astrophysics Data System (ADS)
Heß, Steffen; Springel, Volker
2010-08-01
Lagrangian smoothed particle hydrodynamics (SPH) is a well-established approach to model fluids in astrophysical problems, thanks to its geometric flexibility and ability to automatically adjust the spatial resolution to the clumping of matter. However, a number of recent studies have emphasized inaccuracies of SPH in the treatment of fluid instabilities. The origin of these numerical problems can be traced back to spurious surface effects across contact discontinuities, and to SPH's inherent prevention of mixing at the particle level. We here investigate a new fluid particle model where the density estimate is carried out with the help of an auxiliary mesh constructed as the Voronoi tessellation of the simulation particles instead of an adaptive smoothing kernel. This Voronoi-based approach improves the ability of the scheme to represent sharp contact discontinuities. We show that this eliminates the spurious surface tension effects that are present in SPH and that play a role in suppressing certain fluid instabilities. We find that the new 'Voronoi Particle Hydrodynamics' (VPH) described here produces comparable results to SPH in shocks, and better ones in turbulent regimes of pure hydrodynamical simulations. We also discuss formulations of the artificial viscosity needed in this scheme and how judiciously chosen correction forces can be derived in order to maintain a high degree of particle order and hence a regular Voronoi mesh. This is especially helpful in simulating self-gravitating fluids with existing gravity solvers used for N-body simulations.
The Gulf of Lions' hydrodynamics
NASA Astrophysics Data System (ADS)
Millot, Claude
1990-09-01
From a hydrodynamical point of view, the Gulf of Lions can be considered a very complex region, because several intense and highly variable phenomena compete simultaneously. These processes include the powerful general circulation along the continental slope, the formation of dense water both on the shelf and offshore, a seasonal variation of stratification, and the extreme energies associated with meteorological conditions. The generally cloudless atmospheric conditions in the northwestern Mediterranean Sea have enabled us to make extensive use of various satellite imagery over more than 10 years. The large space and time variability of the hydrodynamical features, a complex topography, and considerable fishing activity pose certain difficulties for the collection of in situ observations. We have therefore obtained only a few current time series on the slope; those obtained on the shelf cover only the summer period. Models have been elaborated to help us understand the reasons for the general circulation. Observational programmes to be carried out in the forthcoming years will probably provide more definitive results on the Gulf of Lions' hydrodynamics.
Computation of Thermally Perfect Compressible Flow Properties
NASA Technical Reports Server (NTRS)
Witte, David W.; Tatum, Kenneth E.; Williams, S. Blake
1996-01-01
A set of compressible flow relations for a thermally perfect, calorically imperfect gas is derived for a value of c_p (specific heat at constant pressure) expressed as a polynomial function of temperature, and developed into a computer program referred to as the Thermally Perfect Gas (TPG) code. The code is available free from the NASA Langley Software Server at URL http://www.larc.nasa.gov/LSS. The code produces tables of compressible flow properties similar to those found in NACA Report 1135. Unlike the NACA Report 1135 tables, which are valid only in the calorically perfect temperature regime, the TPG code results are also valid in the thermally perfect, calorically imperfect temperature regime, giving the TPG code a considerably larger range of temperature application. The accuracy of the TPG code in the calorically perfect and in the thermally perfect, calorically imperfect temperature regimes is verified by comparison with the methods of NACA Report 1135. The advantages of the TPG code compared to the thermally perfect, calorically imperfect method of NACA Report 1135 are its applicability to any type of gas (monatomic, diatomic, triatomic, or polyatomic) or any specified mixture of gases, its ease of use, and its tabulated results.
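The thermally perfect model rests on c_p varying with temperature while c_p - c_v = R still holds. A minimal sketch of flow properties built on a polynomial c_p(T) follows; the polynomial coefficients and gas constant are illustrative placeholders for air, not the TPG code's actual data:

```python
import math

R = 287.05  # J/(kg*K), specific gas constant for air (assumed value)

def cp(T, coeffs=(1022.0, -0.176, 4.02e-4, -4.87e-8)):
    # Specific heat at constant pressure as a polynomial in T.
    # Coefficients are illustrative, not NACA 1135 or TPG values.
    return sum(c * T**i for i, c in enumerate(coeffs))

def gamma(T):
    # Ratio of specific heats for a thermally perfect gas:
    # cp varies with T, but cp - cv = R still holds.
    return cp(T) / (cp(T) - R)

def speed_of_sound(T):
    # Local speed of sound with the temperature-dependent gamma.
    return math.sqrt(gamma(T) * R * T)
```

At room temperature this placeholder polynomial yields gamma near the calorically perfect value 1.4; at higher temperatures the two regimes diverge, which is the regime where the TPG tables differ from NACA Report 1135.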
A Microfluidic-based Hydrodynamic Trap for Single Particles
Johnson-Chavarria, Eric M.; Tanyeri, Melikhan; Schroeder, Charles M.
2011-01-01
The ability to confine and manipulate single particles in free solution is a key enabling technology for fundamental and applied science. Methods for particle trapping based on optical, magnetic, electrokinetic, and acoustic techniques have led to major advancements in physics and biology ranging from the molecular to cellular level. In this article, we introduce a new microfluidic-based technique for particle trapping and manipulation based solely on hydrodynamic fluid flow. Using this method, we demonstrate trapping of micro- and nano-scale particles in aqueous solutions for long time scales. The hydrodynamic trap consists of an integrated microfluidic device with a cross-slot channel geometry where two opposing laminar streams converge, thereby generating a planar extensional flow with a fluid stagnation point (zero-velocity point). In this device, particles are confined at the trap center by active control of the flow field to maintain particle position at the fluid stagnation point. In this manner, particles are effectively trapped in free solution using a feedback control algorithm implemented with a custom-built LabVIEW code. The control algorithm consists of image acquisition for a particle in the microfluidic device, followed by particle tracking, determination of particle centroid position, and active adjustment of fluid flow by regulating the pressure applied to an on-chip pneumatic valve using a pressure regulator. In this way, the on-chip dynamic metering valve functions to regulate the relative flow rates in the outlet channels, thereby enabling fine-scale control of stagnation point position and particle trapping. The microfluidic-based hydrodynamic trap exhibits several advantages as a method for particle trapping. Hydrodynamic trapping is possible for any arbitrary particle without specific requirements on the physical or chemical properties of the trapped object. In addition, hydrodynamic trapping enables confinement of a "single" target object in
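The feedback loop described above (image acquisition, particle tracking, centroid determination, valve-pressure adjustment) can be caricatured as a proportional controller; the function name, signature, and gain below are hypothetical illustrations, not the authors' LabVIEW implementation:

```python
def trap_step(y_particle, y_setpoint, p_valve, gain=0.05,
              p_min=0.0, p_max=1.0):
    # One iteration of a hypothetical proportional feedback loop:
    # nudge the on-chip valve pressure so the stagnation point
    # moves toward the tracked particle centroid, clamped to the
    # regulator's allowed pressure range.
    error = y_particle - y_setpoint
    return min(p_max, max(p_min, p_valve + gain * error))
```

Calling this once per acquired frame keeps the particle near the stagnation point; the clamp models the finite range of the pressure regulator.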
DualSPHysics: Open-source parallel CFD solver based on Smoothed Particle Hydrodynamics (SPH)
NASA Astrophysics Data System (ADS)
Crespo, A. J. C.; Domínguez, J. M.; Rogers, B. D.; Gómez-Gesteira, M.; Longshaw, S.; Canelas, R.; Vacondio, R.; Barreiro, A.; García-Feal, O.
2015-02-01
DualSPHysics is a hardware-accelerated Smoothed Particle Hydrodynamics code developed to solve free-surface flow problems. DualSPHysics is an open-source code developed and released under the terms of the GNU General Public License (GPLv3). Along with the source code, complete documentation that makes compilation and execution of the source files straightforward is also distributed. The code has been shown to be efficient and reliable. The parallel computing power of Graphics Processing Units (GPUs) is used to accelerate DualSPHysics by up to two orders of magnitude compared to the performance of the serial version.
Simulating Rayleigh-Taylor (RT) instability using PPM hydrodynamics @scale on Roadrunner (u)
Woodward, Paul R; Dimonte, Guy; Rockefeller, Gabriel M; Fryer, Christopher L; Dai, W; Kares, R. J.
2011-01-05
The effect of initial conditions on the self-similar growth of the RT instability is investigated using a hydrodynamics code based on the piecewise parabolic method (PPM). The PPM code was converted to the hybrid architecture of Roadrunner in order to perform the simulations at extremely high speed and spatial resolution. This paper describes the code conversion to the Cell processor, the scaling studies to 12 CUs on Roadrunner, and results on the dependence of the RT growth rate on initial conditions. The relevance of the Roadrunner implementation of this PPM code to other existing and anticipated computer architectures is also discussed.
Cosmological Hydrodynamics on a Moving Mesh
NASA Astrophysics Data System (ADS)
Hernquist, Lars
We propose to construct a model for the visible Universe using cosmological simulations of structure formation. Our simulations will include both dark matter and baryons, and will employ two entirely different schemes for evolving the gas: smoothed particle hydrodynamics (SPH) and a moving mesh approach as incorporated in the new code, AREPO. By performing simulations that are otherwise nearly identical, except for the hydrodynamics solver, we will isolate and understand differences in the properties of galaxies, galaxy groups and clusters, and the intergalactic medium caused by the computational approach that have plagued efforts to understand galaxy formation for nearly two decades. By performing simulations at different levels of resolution and with increasingly complex treatments of the gas physics, we will identify the results that are converged numerically and that are robust with respect to variations in unresolved physical processes, especially those related to star formation, black hole growth, and related feedback effects. In this manner, we aim to undertake a research program that will redefine the state of the art in cosmological hydrodynamics and galaxy formation. In particular, we will focus our scientific efforts on understanding: 1) the formation of galactic disks in a cosmological context; 2) the physical state of diffuse gas in galaxy clusters and groups so that they can be used as high-precision probes of cosmology; 3) the nature of gas inflows into galaxy halos and the subsequent accretion of gas by forming disks; 4) the co-evolution of galaxies and galaxy clusters with their central supermassive black holes and the implications of related feedback for galaxy evolution and the dichotomy between blue and red galaxies; 5) the physical state of the intergalactic medium (IGM) and the evolution of the metallicity of the IGM; and 6) the reaction of dark matter around galaxies to galaxy formation. Our proposed work will be of immediate significance for
Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems
Anderson, S R; Bihari, B L; Salari, K; Woodward, C S
2006-12-29
As scientific codes become more complex and involve larger numbers of developers and algorithms, the chances of algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper presents first results of a new code verification effort within LLNL's B Division. In particular, we show results of code verification of the LLNL ASC ARES code on the following test problems: the Su-Olson non-equilibrium radiation diffusion problem, the Sod shock tube, the Sedov point blast modeled with shock hydrodynamics, and the Noh implosion.
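The verification methodology described above can be illustrated with a toy example: solve a problem with a known exact solution at two resolutions and check that the error shrinks at the scheme's theoretical convergence rate. This sketch (not the ARES code; a minimal first-order upwind advection solver chosen because its order of accuracy is known) demonstrates the observed-order calculation:

```python
import math

def solve_advection(n, c=0.5, t_final=0.5):
    """First-order upwind solution of u_t + u_x = 0 on [0, 1], periodic,
    with initial condition u(x, 0) = sin(2*pi*x)."""
    dx = 1.0 / n
    dt = c * dx                      # CFL number c
    steps = round(t_final / dt)
    u = [math.sin(2 * math.pi * (i + 0.5) * dx) for i in range(n)]
    for _ in range(steps):
        # upwind difference; u[i-1] wraps periodically for i = 0
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]
    return u, steps * dt

def l1_error(n):
    """L1 norm of the difference between numerical and exact solutions."""
    u, t = solve_advection(n)
    dx = 1.0 / n
    exact = [math.sin(2 * math.pi * ((i + 0.5) * dx - t)) for i in range(n)]
    return sum(abs(a - b) for a, b in zip(u, exact)) * dx

# code verification: the observed order should match the design order (1)
e_coarse, e_fine = l1_error(100), l1_error(200)
order = math.log(e_coarse / e_fine, 2)
```

Running this yields an observed order close to 1, confirming the implementation; a bug in the stencil would typically show up as a degraded or zeroth-order convergence rate.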
Moving mesh cosmology: the hydrodynamics of galaxy formation
NASA Astrophysics Data System (ADS)
Sijacki, Debora; Vogelsberger, Mark; Kereš, Dušan; Springel, Volker; Hernquist, Lars
2012-08-01
We present a detailed comparison between the well-known smoothed particle hydrodynamics (SPH) code GADGET and the new moving-mesh code AREPO on a number of hydrodynamical test problems. Through a variety of numerical experiments with increasing complexity we establish a clear link between simple test problems with known analytic solutions and systematic numerical effects seen in cosmological simulations of galaxy formation. Our tests demonstrate deficiencies of the SPH method in several sectors. These accuracy problems not only manifest themselves in idealized hydrodynamical tests, but also propagate to more realistic simulation set-ups of galaxy formation, ultimately affecting local and global gas properties in the full cosmological framework, as highlighted in companion papers by Vogelsberger et al. and Keres et al. We find that an inadequate treatment of fluid instabilities in GADGET suppresses entropy generation by mixing, underestimates vorticity generation in curved shocks and prevents efficient gas stripping from infalling substructures. Moreover, in idealized tests of inside-out disc formation, the convergence rate of gas disc sizes is much slower in GADGET due to spurious angular momentum transport. In simulations where we follow the interaction between a forming central disc and orbiting substructures in a massive halo, the final disc morphology is strikingly different in the two codes. In AREPO, gas from infalling substructures is readily depleted and incorporated into the host halo atmosphere, facilitating the formation of an extended central disc. Conversely, gaseous sub-clumps are more coherent in GADGET simulations, morphologically transforming the central disc as they impact it. The numerical artefacts of the SPH solver are particularly severe for poorly resolved flows, and thus inevitably affect cosmological simulations due to their inherently hierarchical nature. Taken together, our numerical experiments clearly demonstrate that AREPO delivers a
Hydrodynamics and Material Properties Experiments Using Pulsed Power Techniques*
NASA Astrophysics Data System (ADS)
Reinovsky, Robert; Trainor, R. James
1999-06-01
Within the last few years a new approach for exploring dynamic material properties and advanced hydrodynamics at extreme conditions has joined the traditional techniques of high velocity guns and explosives. The principal tool is the high precision, magnetically imploded, near-solid density liner. The most attractive pulsed power system for driving such experiments is an ultra-high-current, low impedance, microsecond time-scale source that is economical both to build and to operate. Liner specifications vary but in general share requirements for a high degree of symmetry and uniformity after implosion. When imploded in free flight to velocities of 10-30 km/s and kinetic energies of 1-25 MJ/cm of height, liners are attractive impactors for producing strong (>10 Mbar) shocks in a target. Simple geometries can, in principle, produce multi-shock environments to reach off-Hugoniot states. When filled with a compressible material, liners can deliver almost adiabatic compression to the target. When the liner surrounds a small, nearly incompressible target material, for example a condensed noble gas, it can deliver enormous pressure to the target almost isentropically. When the compressible material is a magnetic field, flux compression can result in compressed fields above 1000 tesla in macroscopic volumes for materials studies. In this paper we review basic scaling arguments that set the scale of the environments available. We mention the pulsed power technology under development at Los Alamos and provide a summary of results from experiments testing solid metal liners under magnetic drive, along with a few examples of experiments performed with interim systems. Other papers in this conference will provide specific proposals for pulsed-power-driven shock-wave experiments.
Image and video compression for HDR content
NASA Astrophysics Data System (ADS)
Zhang, Yang; Reinhard, Erik; Agrafiotis, Dimitris; Bull, David R.
2012-10-01
High Dynamic Range (HDR) technology can offer high levels of immersion with a dynamic range meeting and exceeding that of the Human Visual System (HVS). A primary drawback with HDR images and video is that memory and bandwidth requirements are significantly higher than for conventional images and video. Many bits can be wasted coding redundant imperceptible information. The challenge is therefore to develop means for efficiently compressing HDR imagery to a manageable bit rate without compromising perceptual quality. In this paper, we build on our previous work and propose a compression method for both HDR images and video, based on an HVS-optimised wavelet subband weighting method. The method has been fully integrated into a JPEG 2000 codec for HDR image compression and implemented as a pre-processing step for HDR video coding (an H.264 codec is used as the host codec for video compression). Experimental results indicate that the proposed method outperforms previous approaches and operates in accordance with characteristics of the HVS, tested objectively using an HDR Visible Difference Predictor (VDP). Aiming to further improve the compression performance of our method, we additionally present the results of a psychophysical experiment, carried out with the aid of a high dynamic range display, to determine the difference in the noise visibility threshold between HDR and Standard Dynamic Range (SDR) luminance edge masking. Our findings show that noise has increased visibility on the bright side of a luminance edge. Masking is more consistent on the darker side of the edge.
Response of a supersonic boundary layer to a compression corner
NASA Technical Reports Server (NTRS)
Vandromme, D.; Zeman, O.
1992-01-01
On the basis of direct numerical simulations of rapidly compressed turbulence, Zeman and Coleman have developed a model to represent the rapid directional compression contribution to the pressure dilatation term in the turbulent kinetic energy equation. The model has been implemented in the CFD code for simulation of supersonic compression corner flow with an extended separated region. The computational results have shown a significant improvement with respect to the baseline solution given by the standard k-epsilon turbulence model, which does not contain any compressibility corrections.
NASA Astrophysics Data System (ADS)
Nagakura, H.; Richers, S.; Ott, C. D.; Iwakami, W.; Furusawa, S.; Sumiyoshi, K.; Yamada, S.; Matsufuru, H.; Imakura, A.
2016-10-01
We have developed a seven-dimensional full Boltzmann neutrino-radiation-hydrodynamics code and carried out ab initio axisymmetric core-collapse supernova (CCSN) simulations. I will present the main results of our simulations and also discuss current ongoing projects.
Mathematical models for the EPIC code
Buchanan, H.L.
1981-06-03
EPIC is a fluid/envelope type computer code designed to study the energetics and dynamics of a high energy, high current electron beam passing through a gas. The code is essentially two dimensional (x, r, t) and assumes an axisymmetric beam whose r.m.s. radius is governed by an envelope model. Electromagnetic fields, background gas chemistry, and gas hydrodynamics (density channel evolution) are all calculated self-consistently as functions of r, x, and t. The code is a collection of five major subroutines, each of which is described in some detail in this report.
The environmental fluid dynamics code (EFDC) was used to study the three dimensional (3D) circulation, water quality, and ecology in Narragansett Bay, RI. Predictions of the Bay hydrodynamics included the behavior of the water surface elevation, currents, salinity, and temperatur...
Generating optimal initial conditions for smooth particle hydrodynamics (SPH) simulations
Diehl, Steven; Rockefeller, Gabriel M; Fryer, Christopher L
2008-01-01
We present a new optimal method to set up initial conditions for smoothed particle hydrodynamics (SPH) simulations, which may also be of interest for N-body simulations. This new method is based on weighted Voronoi tessellations (WVTs) and can meet arbitrarily complex spatial resolution requirements. We conduct a comprehensive review of existing SPH setup methods and outline their advantages, limitations and drawbacks. A serial version of our WVT setup method is publicly available, and we give detailed instructions on how to easily implement the new method on top of an existing parallel SPH code.
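The core idea of a tessellation-based setup can be sketched with a Lloyd-style relaxation, a close unweighted relative of the WVT method above: each particle repeatedly moves to the centroid of its Voronoi cell, which drives an initially clumpy distribution toward the glass-like uniformity SPH needs. The cell centroids below are approximated by brute-force sampling on a grid; a real WVT setup would bias the distance metric with per-particle weights to impose a resolution profile.

```python
import random

def relax(points, iters=15, grid=32):
    """Lloyd-style relaxation in the unit square: move every point to the
    centroid of its (discretely sampled) Voronoi cell.  An unweighted
    stand-in for the weighted Voronoi tessellation (WVT) idea."""
    pts = [list(p) for p in points]
    for _ in range(iters):
        sums = [[0.0, 0.0, 0] for _ in pts]
        for gx in range(grid):
            for gy in range(grid):
                x, y = (gx + 0.5) / grid, (gy + 0.5) / grid
                # nearest seed; a WVT would scale this distance by a
                # per-particle weight to control local resolution
                k = min(range(len(pts)),
                        key=lambda j: (pts[j][0] - x) ** 2 + (pts[j][1] - y) ** 2)
                sums[k][0] += x; sums[k][1] += y; sums[k][2] += 1
        for j, (sx, sy, n) in enumerate(sums):
            if n:  # guard against a seed whose cell captured no samples
                pts[j][0], pts[j][1] = sx / n, sy / n
    return pts

random.seed(1)
seeds = [(random.random(), random.random()) for _ in range(16)]
relaxed = relax(seeds)
```

After relaxation the minimum inter-particle spacing grows markedly, which is exactly the property that suppresses the start-up noise of an SPH run.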
Low torque hydrodynamic lip geometry for bi-directional rotation seals
Dietle, Lannie L.; Schroeder, John E.
2011-11-15
A hydrodynamically lubricating geometry for the generally circular dynamic sealing lip of rotary seals that are employed to partition a lubricant from an environment. The dynamic sealing lip is provided for establishing compressed sealing engagement with a relatively rotatable surface, and for wedging a film of lubricating fluid into the interface between the dynamic sealing lip and the relatively rotatable surface in response to relative rotation that may occur in the clockwise or the counter-clockwise direction. A wave form incorporating an elongated dimple provides the gradual convergence, efficient impingement angle, and gradual interfacial contact pressure rise that are conducive to efficient hydrodynamic wedging. Skewed elevated contact pressure zones produced by compression edge effects provide for controlled lubricant movement within the dynamic sealing interface between the seal and the relatively rotatable surface, producing enhanced lubrication and low running torque.
Low torque hydrodynamic lip geometry for bi-directional rotation seals
Dietle, Lannie L.; Schroeder, John E.
2009-07-21
A hydrodynamically lubricating geometry for the generally circular dynamic sealing lip of rotary seals that are employed to partition a lubricant from an environment. The dynamic sealing lip is provided for establishing compressed sealing engagement with a relatively rotatable surface, and for wedging a film of lubricating fluid into the interface between the dynamic sealing lip and the relatively rotatable surface in response to relative rotation that may occur in the clockwise or the counter-clockwise direction. A wave form incorporating an elongated dimple provides the gradual convergence, efficient impingement angle, and gradual interfacial contact pressure rise that are conducive to efficient hydrodynamic wedging. Skewed elevated contact pressure zones produced by compression edge effects provide for controlled lubricant movement within the dynamic sealing interface between the seal and the relatively rotatable surface, producing enhanced lubrication and low running torque.
Stochastic Hard-Sphere Dynamics for Hydrodynamics of Non-Ideal Fluids
Donev, A; Alder, B J; Garcia, A L
2008-02-26
A novel stochastic fluid model is proposed with a nonideal structure factor consistent with compressibility, and adjustable transport coefficients. This stochastic hard-sphere dynamics (SHSD) algorithm is a modification of the direct simulation Monte Carlo algorithm and has several computational advantages over event-driven hard-sphere molecular dynamics. Surprisingly, SHSD results in an equation of state and a pair correlation function identical to that of a deterministic Hamiltonian system of penetrable spheres interacting with linear core pair potentials. The fluctuating hydrodynamic behavior of the SHSD fluid is verified for the Brownian motion of a nanoparticle suspended in a compressible solvent.
Annual Report: Hydrodynamics and Radiative Hydrodynamics with Astrophysical Applications
R. Paul Drake
2005-12-01
We report the ongoing work of our group in hydrodynamics and radiative hydrodynamics with astrophysical applications. During the period of the existing grant, we have carried out two types of experiments at the Omega laser. One set of experiments has studied radiatively collapsing shocks, obtaining high-quality scaling data using a backlit pinhole and obtaining the first (ever, anywhere) Thomson-scattering data from a radiative shock. Other experiments have studied the deeply nonlinear development of the Rayleigh-Taylor (RT) instability from complex initial conditions, obtaining the first (ever, anywhere) dual-axis radiographic data using backlit pinholes and ungated detectors. All these experiments have applications to astrophysics, discussed in the corresponding papers either in print or in preparation. We also have obtained preliminary radiographs of experimental targets using our x-ray source. The targets for the experiments have been assembled at Michigan, where we also prepare many of the simple components. The above activities, in addition to a variety of data analysis and design projects, provide good experience for graduate and undergraduate students. In the process of doing this research we have built a research group that uses such work to train junior scientists.
DETECTION OF THE COMPRESSED PRIMARY STELLAR WIND IN η CARINAE
Teodoro, M.; Madura, T. I.; Gull, T. R.; Corcoran, M. F.; Hamaguchi, K.
2013-08-10
A series of three Hubble Space Telescope/Space Telescope Imaging Spectrograph spectroscopic mappings, spaced approximately one year apart, reveal three partial arcs in [Fe II] and [Ni II] emissions moving outward from η Carinae. We identify these arcs with the shell-like structures, seen in the three-dimensional hydrodynamical simulations, formed by compression of the primary wind by the secondary wind during periastron passages.
Detection of the Compressed Primary Stellar Wind in eta Carinae*
NASA Technical Reports Server (NTRS)
Teodoro, M.; Madura, T. I.; Gull, T. R.; Corcoran, M. F.; Hamaguchi, K.
2013-01-01
A series of three Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) spectroscopic mappings, spaced approximately one year apart, reveal three partial arcs in [Fe II] and [Ni II] emissions moving outward from η Carinae. We identify these arcs with the shell-like structures, seen in the 3D hydrodynamical simulations, formed by compression of the primary wind by the secondary wind during periastron passages.
Optimal lift force on vesicles near a compressible substrate
NASA Astrophysics Data System (ADS)
Beaucourt, J.; Biben, T.; Misbah, C.
2004-08-01
The dynamics of vesicles near a compressible substrate mimicking the glycocalyx layer of the internal part of blood vessels reveals the existence of an optimal lift force due to an elasto-hydrodynamic coupling between the counter flow and the deformation of the wall. An estimate of the order of magnitude of the optimal elastic modulus reveals that it lies within the physiological range, which may have important consequences for the dynamics of blood cells (leucocytes or red blood cells).
Hydrodynamics of maneuvering bodies: LDRD final report
Kempka, S.N.; Strickland, J.H.
1994-01-01
The objective of the "Hydrodynamics of Maneuvering Bodies" LDRD project was to develop a Lagrangian, vorticity-based numerical simulation of the fluid dynamics associated with a maneuvering submarine. Three major tasks were completed. First, a vortex model to simulate the wake behind a maneuvering submarine was completed, assuming the flow to be inviscid and of constant density. Several simulations were performed for a dive maneuver, each requiring less than 20 cpu seconds on a workstation. The technical details of the model and the simulations are described in a separate document, but are reviewed herein. Second, a gridless method to simulate diffusion processes was developed that has significant advantages over previous Lagrangian diffusion models. In this model, viscous diffusion of vorticity is represented by moving vortices at a diffusion velocity, and expanding the vortices as specified by the kinematics for a compressible velocity field. This work has also been documented previously, and is only reviewed herein. The third major task completed was the development of a vortex model to describe inviscid internal wave phenomena, and is the focus of this document. Internal wave phenomena in the stratified ocean can affect an evolving wake, and thus must be considered for naval applications. The vortex model for internal wave phenomena includes a new formulation for the generation of vorticity due to fluid density variations, and a vortex adoption algorithm that allows solutions to be carried to much longer times than previous investigations. Since many practical problems require long-time solutions, this new adoption algorithm is a significant step toward making vortex methods applicable to practical problems. Several simulations are described and compared with previous results to validate and show the advantages of the new model. An overview of this project is also included.
Hydrodynamical Simulations of Unevenly Irradiated Jovian Planets
NASA Astrophysics Data System (ADS)
Langton, Jonathan
2007-05-01
We discuss a series of two-dimensional hydrodynamical simulations which model the global time-dependent radiative responses and surface flow patterns of Jovian planets subject to strongly variable atmospheric irradiation. We treat the planetary atmosphere as a thin compressible fluid layer subject to time-dependent radiative heating and cooling. We consider planets in several environments, including hot Jupiters on circular orbits, short-period planets on eccentric orbits such as HD 118203 b (in which libration effects are important), and planets on highly eccentric orbits. Particular attention is given to HD 80606 b, which has the highest known eccentricity (e = 0.932) of any planet. Its orbital period is P = 111.4 d, and at periastron it passes within 7 R_Sun of its parent star. As a result of spin pseudo-synchronization, the rotation period of the planet is expected to be 36.8 hours, allowing the initial conditions for the simulation to be determined with confidence. We show that the atmospheric response during the periastron passage of HD 80606 b will likely be observable by the Spitzer Space Telescope in all infrared bands. We show that photometric observations taken during periastron passage can determine the effective radiative time constant in the planet's atmosphere, and that a direct measurement of the radiative time constant can be used to clarify the interpretation of infrared observations of other short-period planets. This research has been supported by the NSF through CAREER Grant AST-0449986, and by the NASA Planetary Geology and Geophysics Program through Grant NNG04GK19G.
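The idea of extracting an effective radiative time constant from periastron photometry can be illustrated with a toy Newtonian-cooling model. The numbers below (a time constant of 8 hours, temperatures of 1400 K relaxing to 800 K) are hypothetical placeholders, not values from the paper; the point is only the recovery of τ from an exponential decline:

```python
import math

tau = 8.0                   # "true" radiative time constant, hours (hypothetical)
t_eq, t0 = 800.0, 1400.0    # equilibrium and peak temperatures, K (illustrative)

# synthetic post-periastron cooling curve: T(t) = T_eq + (T0 - T_eq) * e^(-t/tau)
times = [i * 0.5 for i in range(40)]
temps = [t_eq + (t0 - t_eq) * math.exp(-t / tau) for t in times]

# recover tau from a least-squares line fit to ln(T - T_eq) versus t:
# the slope of that line is -1/tau
ys = [math.log(T - t_eq) for T in temps]
n = len(times)
mx, my = sum(times) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys)) /
         sum((x - mx) ** 2 for x in times))
tau_fit = -1.0 / slope
```

With noiseless synthetic data the fit recovers τ exactly; real light curves would add photometric scatter and an uncertain T_eq, both of which propagate into the inferred time constant.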
Early hydrodynamic evolution of a stellar collision
Kushnir, Doron; Katz, Boaz
2014-04-20
The early phase of the hydrodynamic evolution following the collision of two stars is analyzed. Two strong shocks propagate from the contact surface and move toward the center of each star at a velocity that is a small fraction of the velocity of the approaching stars. The shocked region near the contact surface has a planar symmetry and a uniform pressure. The density vanishes at the (Lagrangian) surface of contact, and the speed of sound diverges there. The temperature, however, reaches a finite value, since as the density vanishes, the finite pressure is radiation dominated. For carbon-oxygen white dwarf (CO WD) collisions, this temperature is too low for any appreciable nuclear burning shortly after the collision, which allows for a significant fraction of the mass to be highly compressed to the density required for efficient ⁵⁶Ni production in the detonation wave that follows. This property is crucial for the viability of collisions of typical CO WDs as progenitors of type Ia supernovae, since otherwise only massive (>0.9 M_☉) CO WDs would have led to such explosions (as required by all other progenitor models). The divergence of the speed of sound limits numerical studies of stellar collisions, as it makes convergence tests exceedingly expensive unless dedicated schemes are used. We provide a new one-dimensional Lagrangian numerical scheme to achieve this. A self-similar planar solution is derived for zero-impact parameter collisions between two identical stars, under some simplifying assumptions (including a power-law density profile), which is the planar version of previous piston problems that were studied in cylindrical and spherical symmetries.
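The finite-temperature argument rests on the pressure becoming radiation dominated, P ≈ aT⁴/3, so that a finite pressure at vanishing density implies a finite T = (3P/a)^(1/4). A minimal sketch of that inversion, with a purely illustrative pressure value (not taken from the paper):

```python
A_RAD = 7.5657e-15   # radiation constant a, erg cm^-3 K^-4

def radiation_temperature(p):
    """Temperature of a radiation-dominated plasma with pressure p
    (erg/cm^3), obtained by inverting p = a * T**4 / 3."""
    return (3.0 * p / A_RAD) ** 0.25

# illustrative shocked-region pressure (hypothetical value, cgs units)
p_shock = 1.0e22
T = radiation_temperature(p_shock)
```

Because T scales only as P^(1/4), even order-of-magnitude changes in the pressure shift the temperature by modest factors, which is why the contact-surface temperature stays bounded as the density vanishes.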
Forced wetting and hydrodynamic assist
NASA Astrophysics Data System (ADS)
Blake, Terence D.; Fernandez-Toledano, Juan-Carlos; Doyen, Guillaume; De Coninck, Joël
2015-11-01
Wetting is a prerequisite for coating a uniform layer of liquid onto a solid. Wetting failure and air entrainment set the ultimate limit to coating speed. It is well known in the coating art that this limit can be postponed by manipulating the coating flow to generate what has been termed "hydrodynamic assist," but the underlying mechanism is unclear. Experiments have shown that the conditions that postpone air entrainment also reduce the apparent dynamic contact angle, suggesting a direct link, but how the flow might affect the contact angle remains to be established. Here, we use molecular dynamics to compare the outcome of steady forced wetting with previous results for the spontaneous spreading of liquid drops and apply the molecular-kinetic theory of dynamic wetting to rationalize our findings and place them on a quantitative footing. The forced wetting simulations reveal significant slip at the solid-liquid interface and details of the flow immediately adjacent to the moving contact line. Our results confirm that the local, microscopic contact angle is dependent not only on the velocity of wetting but also on the nature of the flow that drives it. In particular, they support an earlier suggestion that during forced wetting, an intense shear stress in the vicinity of the contact line can assist surface tension forces in promoting dynamic wetting, thus reducing the velocity-dependence of the contact angle. Hydrodynamic assist then appears as a natural consequence of wetting that emerges when the contact line is driven by a strong and highly confined flow. Our theoretical approach also provides a self-consistent model of molecular slip at the solid-liquid interface that enables its magnitude to be estimated from dynamic contact angle measurements. In addition, the model predicts how hydrodynamic assist and slip may be influenced by liquid viscosity and solid-liquid interactions.
Integer cosine transform for image compression
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Pollara, F.; Shahshahani, M.
1991-01-01
This article describes a recently introduced transform algorithm called the integer cosine transform (ICT), which is used in transform-based data compression schemes. The ICT algorithm requires only integer operations on small integers and at the same time gives a rate-distortion performance comparable to that offered by the floating-point discrete cosine transform (DCT). The article addresses the issue of implementation complexity, which is of prime concern for source coding applications of interest in deep-space communications. Complexity reduction in the transform stage of the compression scheme is particularly relevant, since this stage accounts for most (typically over 80 percent) of the computational load.
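The transform-quantize-reconstruct pipeline the ICT plugs into can be sketched with an ordinary floating-point 8-point DCT-II; this is not the ICT itself (whose basis entries are small integers precisely to avoid these floating-point multiplies), but it shows the stage the ICT replaces and why quantization, not the transform, is the lossy step:

```python
import math

N = 8
# orthonormal DCT-II basis (floating point; the ICT substitutes small
# integers for these entries while keeping near-orthogonality)
C = [[math.sqrt((1 if k == 0 else 2) / N) *
      math.cos(math.pi * (2 * n + 1) * k / (2 * N))
      for n in range(N)] for k in range(N)]

def forward(x):
    return [sum(C[k][n] * x[n] for n in range(N)) for k in range(N)]

def inverse(y):
    # orthonormal basis: the inverse is the transpose
    return [sum(C[k][n] * y[k] for k in range(N)) for n in range(N)]

x = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of pixel samples
q = 10                                  # uniform quantizer step size
coeffs = forward(x)
quantized = [round(c / q) for c in coeffs]   # lossy step: many become 0
restored = inverse([v * q for v in quantized])
err = max(abs(a - b) for a, b in zip(x, restored))
```

Without the quantizer the round trip is exact to machine precision; with it, the reconstruction error stays small because the energy of natural image rows concentrates in a few low-frequency coefficients.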
Wavelet compression techniques for hyperspectral data
NASA Technical Reports Server (NTRS)
Evans, Bruce; Ringer, Brian; Yeates, Mathew
1994-01-01
Hyperspectral sensors are electro-optic sensors which typically operate in visible and near infrared bands. Their characteristic property is the ability to resolve a relatively large number (i.e., tens to hundreds) of contiguous spectral bands to produce a detailed profile of the electromagnetic spectrum. In contrast, multispectral sensors measure relatively few non-contiguous spectral bands. Like multispectral sensors, hyperspectral sensors are often also imaging sensors, measuring spectra over an array of spatial resolution cells. The data produced may thus be viewed as a three dimensional array of samples in which two dimensions correspond to spatial position and the third to wavelength. Because they multiply the already large storage/transmission bandwidth requirements of conventional digital images, hyperspectral sensors generate formidable torrents of data. Their fine spectral resolution typically results in high redundancy in the spectral dimension, so that hyperspectral data sets are excellent candidates for compression. Although there have been a number of studies of compression algorithms for multispectral data, we are not aware of any published results for hyperspectral data. Three algorithms for hyperspectral data compression are compared. They were selected as representatives of three major approaches for extending conventional lossy image compression techniques to hyperspectral data. The simplest approach treats the data as an ensemble of images and compresses each image independently, ignoring the correlation between spectral bands. The second approach transforms the data to decorrelate the spectral bands, and then compresses the transformed data as a set of independent images. The third approach directly generalizes two-dimensional transform coding by applying a three-dimensional transform as part of the usual transform-quantize-entropy code procedure. The algorithms studied all use the discrete wavelet transform. In the first two cases, a wavelet
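The spectral-redundancy argument above can be made concrete with a single level of the Haar wavelet transform, the simplest member of the discrete wavelet family the paper uses, applied along a smooth synthetic "spectrum" (the reflectance values below are hypothetical):

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform: pairwise
    scaled sums (low-pass band) and differences (high-pass band)."""
    s = 1.0 / math.sqrt(2.0)
    avg = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    dif = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return avg, dif

# a smooth, highly correlated spectral profile (hypothetical values)
spectrum = [math.exp(-((b - 16) / 8.0) ** 2) for b in range(32)]
low, high = haar_step(spectrum)

energy = lambda v: sum(c * c for c in v)
total, detail = energy(low) + energy(high), energy(high)
```

The transform preserves total energy but pushes almost all of it into the low-pass band; the near-zero detail band is what a quantizer and entropy coder can then compress aggressively, which is the mechanism behind all three algorithms compared in the study.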
On-line structure-lossless digital mammogram image compression
NASA Astrophysics Data System (ADS)
Wang, Jun; Huang, H. K.
1996-04-01
This paper proposes a novel on-line structure lossless compression method for digital mammograms during the film digitization process. The structure-lossless compression segments the breast and the background, compresses the former with a predictive lossless coding method and discards the latter. This compression scheme is carried out during the film digitization process and no additional time is required for the compression. Digital mammograms are compressed on-the-fly while they are created. During digitization, lines of scanned data are first acquired into a small temporary buffer in the scanner, then they are transferred to a large image buffer in an acquisition computer which is connected to the scanner. The compression process, running concurrently with the digitization process in the acquisition computer, constantly checks the image buffer and compresses any newly arrived data. Since compression is faster than digitization, data compression is completed as soon as digitization is finished. On-line compression during digitization does not increase overall digitizing time. Additionally, it reduces the mammogram image size by a factor of 3 to 9 with no loss of information. This algorithm has been implemented in a film digitizer. Statistics were obtained based on digitizing 46 mammograms at four sampling distances from 50 to 200 microns.
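The predictive lossless idea at the heart of this scheme can be sketched in a few lines: predict each pixel from its left neighbor, store only the residuals, and reconstruct exactly. This is a generic left-neighbor predictor for illustration, not the paper's specific coder, and the scanline values are synthetic:

```python
def encode(row):
    """Left-neighbor predictive coding: keep the first sample, then store
    residuals.  Residuals of smooth image data cluster near zero, so a
    downstream entropy coder (not shown) packs them into few bits."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def decode(residuals):
    """Invert the predictor by cumulative summation; exactly lossless."""
    row = [residuals[0]]
    for r in residuals[1:]:
        row.append(row[-1] + r)
    return row

scanline = [100, 101, 103, 103, 104, 108, 110, 109]   # synthetic pixel values
res = encode(scanline)
```

Because decoding is an exact inverse, no diagnostic information is lost; the compression gain comes entirely from the residuals' small dynamic range, and in the on-line scheme above this encoding runs on each buffered line while the scanner digitizes the next.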
Hydrodynamic Synchronisation of Model Microswimmers
NASA Astrophysics Data System (ADS)
Putz, V. B.; Yeomans, J. M.
2009-12-01
We define a model microswimmer with a variable cycle time, thus allowing the possibility of phase locking driven by hydrodynamic interactions between swimmers. We find that, for extensile or contractile swimmers, phase locking does occur, with the relative phase of the two swimmers being, in general, close to 0 or π, depending on their relative position and orientation. We show that, as expected on grounds of symmetry, self T-dual swimmers, which are time-reversal covariant, do not phase-lock. We also discuss the phase behaviour of a line of tethered swimmers, or pumps. These show oscillations in their relative phases reminiscent of the metachronal waves of cilia.
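Phase locking of two coupled oscillators can be caricatured by an Adler-type equation for the relative phase, dφ/dt = Δω − K sin φ; this is a generic sketch, not the swimmers' hydrodynamic model, but it shows the locking criterion K > |Δω| that governs such systems:

```python
import math

def relative_phase(d_omega, coupling, dt=0.01, steps=20000, phi0=1.0):
    """Euler-integrate the Adler equation
    d(phi)/dt = d_omega - coupling * sin(phi),
    a minimal caricature of two hydrodynamically coupled oscillators."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (d_omega - coupling * math.sin(phi))
    return phi

locked = relative_phase(d_omega=0.1, coupling=1.0)    # K > |detuning|: locks
drifting = relative_phase(d_omega=1.0, coupling=0.1)  # K < |detuning|: drifts
```

In the locked case the phase settles at the fixed point sin φ* = Δω/K, close to 0 here, echoing the abstract's finding that the relative phase locks near 0 or π; in the drifting case φ grows without bound, and a time-reversal-covariant coupling (the self T-dual case) corresponds to the coupling term vanishing altogether.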
Ergoregion instability: The hydrodynamic vortex
NASA Astrophysics Data System (ADS)
Oliveira, Leandro A.; Cardoso, Vitor; Crispino, Luís C. B.
2014-06-01
Four-dimensional, asymptotically flat spacetimes with an ergoregion but no horizon have been shown to be linearly unstable against a superradiant-triggered mechanism. This result has wide implications in the search for astrophysically viable alternatives to black holes, but also in the understanding of black holes and Hawking evaporation. Here we investigate this instability in detail for a particular setup that can be realized in the laboratory: the hydrodynamic vortex, an effective geometry for sound waves, with ergoregion and without an event horizon.
Hydrodynamic instability modeling for ICF
Haan, S.W.
1993-03-31
The intent of this paper is to review how instability growth is modeled in ICF targets, and to identify the principal issues. Most of the material has been published previously, but is not familiar to a wide audience. Hydrodynamic instabilities are a key issue in ICF. Along with laser-plasma instabilities, they determine the regime in which ignition is possible. At higher laser energies, the same issues determine the achievable gain. Quantitative predictions are therefore of the utmost importance to planning the ICF program, as well as to understanding current Nova results. The key fact that underlies all this work is the stabilization of short wavelengths.
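The short-wavelength stabilization mentioned above is commonly summarized by dispersion relations of the Takabe form, γ(k) = α√(gk) − βkv_a, in which ablation cuts off growth above some wavenumber. The coefficients and drive parameters below are purely illustrative, not fits to any particular target:

```python
import math

ALPHA, BETA = 0.9, 3.0   # illustrative Takabe-type coefficients
G = 1.0e16               # ablation-front acceleration, cm/s^2 (hypothetical)
V_ABL = 2.0e5            # ablation velocity, cm/s (hypothetical)

def growth_rate(k):
    """Ablative Rayleigh-Taylor dispersion relation of the Takabe form:
    gamma(k) = alpha*sqrt(g*k) - beta*k*v_abl.
    The linear ablative term overtakes the sqrt(k) buoyancy term at
    large k, stabilizing short wavelengths."""
    return ALPHA * math.sqrt(G * k) - BETA * k * V_ABL

# cutoff wavenumber where ablative stabilization wins: gamma(k_cut) = 0
k_cut = ALPHA ** 2 * G / (BETA ** 2 * V_ABL ** 2)
lambda_cut = 2.0 * math.pi / k_cut   # shortest unstable wavelength, cm
```

Modes with k below the cutoff grow while shorter wavelengths are damped, which is why quantitative instability modeling concentrates on the band of intermediate wavelengths that both grow and can feed through the shell.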
Effective actions for anomalous hydrodynamics
NASA Astrophysics Data System (ADS)
Haehl, Felix M.; Loganayagam, R.; Rangamani, Mukund
2014-03-01
We argue that an effective field theory of local fluid elements captures the constraints on hydrodynamic transport stemming from the presence of quantum anomalies in the underlying microscopic theory. Focussing on global current anomalies for an arbitrary flavour group, we derive the anomalous constitutive relations in arbitrary even dimensions. We demonstrate that our results agree with the constraints on anomaly governed transport derived hitherto using a local version of the second law of thermodynamics. The construction crucially uses the anomaly inflow mechanism and involves a novel thermofield double construction. In particular, we show that the anomalous Ward identities necessitate non-trivial interaction between the two parts of the Schwinger-Keldysh contour.
Hydrodynamic loading of tensegrity structures
NASA Astrophysics Data System (ADS)
Wroldsen, Anders S.; Johansen, Vegar; Skelton, Robert E.; Sørensen, Asgeir J.
2006-03-01
This paper introduces hydrodynamic loads for tensegrity structures, to examine their behavior in marine environments. Wave compliant structures are of general interest when considering large marine structures, and we are motivated by the aquaculture industry where new concepts are investigated in order to make offshore installations for seafood production. This paper adds to the existing models and software simulations of tensegrity structures exposed to environmental loading from waves and current. A number of simulations are run to show behavior of the structure as a function of pretension level and string stiffness for a given loading condition.
Hydrodynamical models of young SNRs.
NASA Astrophysics Data System (ADS)
Kosenko, D. I.; Blinnikov, S. I.; Postnov, K. A.; Sorokina, E. I.
X-ray observations of the Tycho supernova (SN) remnant by the XMM-Newton telescope provide radial profiles of the remnant in emission lines of silicon and iron \citep{decour}. To reproduce the observed spectrum and X-ray profiles, hydrodynamical modelling of the remnant was performed by \citet{elka}. Standard computational SN models cannot reproduce the observed spatial behavior of the remnant's X-ray profiles in these emission lines. We analyze these numerical models and find conditions under which the observed profiles can be reproduced.
Image compression using wavelet transform and multiresolution decomposition.
Averbuch, A; Lazar, D; Israeli, M
1996-01-01
Schemes for image compression of black-and-white images based on the wavelet transform are presented. The multiresolution nature of the discrete wavelet transform is shown to be a powerful tool for representing images decomposed along the vertical and horizontal directions using the pyramidal multiresolution scheme. The wavelet transform decomposes the image into a set of subimages, called shapes, with different resolutions corresponding to different frequency bands. Accordingly, different bit allocations are tested, on the assumption that details at high resolution and in diagonal directions are less visible to the human eye. The resulting coefficients are vector quantized (VQ) using the LBG algorithm. By using an error-correction method that approximates the quantization error of the reconstructed coefficients, we minimize distortion for a given compression rate at low computational cost. Several compression techniques are tested. In the first experiment, several 512x512 images are trained together and common code tables are created. Using these tables, the black-and-white images in the training sequence achieve compression ratios of 60-65 and PSNRs of 30-33. To investigate compression of images outside the training set, many 480x480 images of uncalibrated faces are trained together to yield global code tables. Images of faces outside the training set are compressed and reconstructed using the resulting tables. The compression ratio is 40; PSNRs are 30-36. Images from the training set have similar compression values and quality. Finally, another compression method, based on end-vector bit allocation, is examined.
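The pyramidal decomposition described above can be illustrated with a single level of the 2D Haar wavelet transform. This minimal numpy sketch (the `haar_dwt2` helper is illustrative, not the paper's code) splits an image into one approximation subimage and three detail subimages:

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2D Haar wavelet transform: split the image into
    approximation (LL) and detail (LH, HL, HH) subimages, as in the
    pyramidal multiresolution scheme."""
    a = img.astype(float)
    # Rows: average and difference of adjacent pixel pairs.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Columns: repeat the split on both row outputs.
    ll = (lo[0::2, :] + lo[1::2, :]) / 2.0
    lh = (lo[0::2, :] - lo[1::2, :]) / 2.0
    hl = (hi[0::2, :] + hi[1::2, :]) / 2.0
    hh = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return ll, lh, hl, hh

# A constant image concentrates all its energy in the LL band,
# which is why the detail bands compress so well.
flat = np.full((8, 8), 7.0)
ll, lh, hl, hh = haar_dwt2(flat)
```

Repeating the transform on the LL band yields the multiresolution pyramid; the detail bands (especially the diagonal HH band) are then quantized more coarsely, as the abstract describes.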
Noiseless compression using non-Markov models
NASA Technical Reports Server (NTRS)
Blumer, Anselm
1989-01-01
Adaptive data compression techniques can be viewed as consisting of a model specified by a database common to the encoder and decoder, an encoding rule, and a rule for updating the model to ensure that the encoder and decoder always agree on the interpretation of the next transmission. The techniques that fit this framework range from run-length coding, to adaptive Huffman and arithmetic coding, to the string-matching techniques of Lempel and Ziv. The compression obtained by arithmetic coding depends on the generality of the source model. For many sources, an independent-letter model is clearly insufficient. Unfortunately, a straightforward implementation of a Markov model requires an amount of space exponential in the number of letters remembered. The Directed Acyclic Word Graph (DAWG) can be constructed in time and space proportional to the text encoded, and can be used to estimate the probabilities required for arithmetic coding based on an amount of memory that varies naturally with the encoded text. The context is the longest suffix of the already-encoded text that has occurred previously; the frequencies of the letters following those previous occurrences are used to estimate the probability distribution of the next letter. Experimental results indicate that compression is often far better than that obtained using independent-letter models, and sometimes also significantly better than other non-independent techniques.
Microscale hydrodynamics near moving contact lines
NASA Technical Reports Server (NTRS)
Garoff, Stephen; Chen, Q.; Rame, Enrique; Willson, K. R.
1994-01-01
The hydrodynamics governing the fluid motions on a microscopic scale near moving contact lines are different from those governing motion far from the contact line. We explore these unique hydrodynamics by detailed measurement of the shape of a fluid meniscus very close to a moving contact line. The validity of present models of the hydrodynamics near moving contact lines as well as the dynamic wetting characteristics of a family of polymer liquids are discussed.
Type I X-ray burst simulation code
Fisker, J. L.; Hix, W. R.; Liebendoerfer, M.
2007-07-01
dAGILE is an astrophysical code that simulates accretion of matter onto a neutron star and the subsequent X-ray burst. It is a one-dimensional, time-dependent, spherically symmetric code with generalized nuclear reaction networks, diffusive radiation/conduction, realistic boundary conditions, and general relativistic hydrodynamics. The code is described in more detail in Astrophysical Journal 650 (2006) 332 and Astrophysical Journal Supplement 174 (2008) 261.
A comparison of SPH schemes for the compressible Euler equations
NASA Astrophysics Data System (ADS)
Puri, Kunal; Ramachandran, Prabhu
2014-01-01
We review the current state-of-the-art Smoothed Particle Hydrodynamics (SPH) schemes for the compressible Euler equations. We identify three prototypical schemes and apply them to a suite of test problems in one and two dimensions. The schemes are, in order: standard SPH with the adaptive density kernel estimation (ADKE) technique introduced by Sigalotti et al. (2008) [44]; the variational SPH formulation of Price (2012) [33] (referred to herein as the MPM scheme); and the Godunov-type SPH (GSPH) scheme of Inutsuka (2002) [12]. The tests investigate the accuracy of the inviscid discretizations, shock-capturing ability, and particle settling behavior. The schemes are found to produce nearly identical results for the 1D shock tube problems, with the MPM and GSPH schemes being the most robust. The ADKE scheme requires parameter values which must be tuned to the problem at hand. We propose the addition of an artificial heating term to the GSPH scheme to eliminate unphysical spikes in the thermal energy at the contact discontinuity. The resulting modification is simple and can be readily incorporated into existing codes. In two dimensions, the differences between the schemes are more evident, with the quality of results determined by the particle distribution. In particular, the ADKE scheme shows signs of particle clumping and irregular motion for the 2D strong shock and Sedov point explosion tests. The noise in the particle data is linked with the particle distribution, which remains regular for the Hamiltonian formulations (MPM and GSPH) and becomes irregular for the ADKE scheme. In the interest of reproducibility, we make available our implementation of the algorithms and test problems discussed in this work.
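The summation density estimate shared by all such SPH schemes can be sketched with a standard 1D cubic spline kernel. The helper names below are illustrative and not taken from the paper's released implementation:

```python
import numpy as np

def cubic_spline_1d(q, h):
    """Standard 1D cubic spline SPH kernel, q = |x_i - x_j| / h,
    with normalization sigma = 2 / (3 h) so the kernel integrates to 1."""
    sigma = 2.0 / (3.0 * h)
    w = np.zeros_like(q)
    m1 = q < 1.0
    m2 = (q >= 1.0) & (q < 2.0)
    w[m1] = 1.0 - 1.5 * q[m1]**2 + 0.75 * q[m1]**3
    w[m2] = 0.25 * (2.0 - q[m2])**3
    return sigma * w

def sph_density(x, m, h):
    """Summation density rho_i = sum_j m_j W(|x_i - x_j|, h)."""
    q = np.abs(x[:, None] - x[None, :]) / h
    return (m[None, :] * cubic_spline_1d(q, h)).sum(axis=1)

# Uniformly spaced particles carrying a unit-density fluid:
dx = 0.1
x = np.arange(50) * dx
m = np.full(50, 1.0 * dx)      # mass = rho0 * dx with rho0 = 1
rho = sph_density(x, m, h=1.5 * dx)
```

Away from the domain edges the estimate recovers the uniform density to well under a percent, which is the regular-particle-distribution regime the review identifies as favorable.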
Thermal transport in a noncommutative hydrodynamics
Geracie, M.; Son, D. T.
2015-03-15
We find the hydrodynamic equations of a system of particles constrained to be in the lowest Landau level. We interpret the hydrodynamic theory as a Hamiltonian system with the Poisson brackets between the hydrodynamic variables determined from the noncommutativity of space. We argue that the most general hydrodynamic theory can be obtained from this Hamiltonian system by allowing the Righi-Leduc coefficient to be an arbitrary function of thermodynamic variables. We compute the Righi-Leduc coefficient at high temperatures and show that it satisfies the requirements of particle-hole symmetry, which we outline.
NASA Technical Reports Server (NTRS)
Reif, John H.
1987-01-01
A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described, along with how they are combined to form a new strategy for performing dynamic on-line lossy compression. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
NASA Astrophysics Data System (ADS)
Alaia, Alessandro; Puppo, Gabriella
2012-06-01
In this work we present a non-stationary domain decomposition algorithm for multiscale hydrodynamic-kinetic problems, in which the Knudsen number may span from equilibrium to highly rarefied regimes. Our approach is characterized by using the full Boltzmann equation for the kinetic regime and the compressible Euler equations for equilibrium, with a buffer zone in which the BGK-ES equation is used to represent the transition from fully kinetic to equilibrium flows. In this fashion, the Boltzmann solver is used only when the collision integral is non-stiff and the mean free path is of the same order as the mesh size needed to capture variations in macroscopic quantities. Thus, in principle, the same mesh size and time steps can be used in the whole computation. Moreover, the time step is limited only by the convective terms. Since the Boltzmann solver is applied only in fully kinetic regimes, we use the reduced-noise DSMC scheme proposed in Part I of the present work. This ensures a smooth exchange of information across the different domains, with a natural way to construct interface numerical fluxes. Several tests comparing our hybrid scheme with full Boltzmann DSMC computations show good agreement between the two solutions over a wide range of Knudsen numbers.
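A per-cell solver selection of the kind described above might be sketched as follows; the threshold values and the function name are illustrative assumptions, not the paper's actual switching criteria:

```python
def select_regime(knudsen, kn_kinetic=0.05, kn_buffer=0.005):
    """Illustrative solver selection for a hybrid kinetic-hydrodynamic
    domain decomposition: full Boltzmann where the flow is rarefied,
    BGK-ES in a buffer band, compressible Euler near equilibrium.
    Thresholds are placeholder values, not the paper's criteria."""
    if knudsen > kn_kinetic:
        return "boltzmann"
    if knudsen > kn_buffer:
        return "bgk-es"
    return "euler"
```

The buffer band is what lets information pass smoothly between the kinetic and hydrodynamic subdomains instead of forcing an abrupt model switch at a single interface.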
Active and driven hydrodynamic crystals.
Desreumaux, N; Florent, N; Lauga, E; Bartolo, D
2012-08-01
Motivated by the experimental ability to produce monodisperse particles in microfluidic devices, we study theoretically the hydrodynamic stability of driven and active crystals. We first recall the theoretical tools that allow us to quantify the dynamics of elongated particles in a confined fluid. In this regime hydrodynamic interactions between particles arise from a superposition of potential dipolar singularities. We exploit this feature to derive the equations of motion for the particle positions and orientations. After showing that all five planar Bravais lattices are stationary solutions of the equations of motion, we consider separately the case where the particles are passively driven by an external force, and the situation where they are self-propelling. We first demonstrate that phonon modes propagate in driven crystals, which are always marginally stable. The spatial structures of the eigenmodes depend solely on the symmetries of the lattices, and on the orientation of the driving force. For active crystals, the stability of the particle positions and orientations depends not only on the symmetry of the crystals but also on the perturbation wavelengths and on the crystal density. Unlike unconfined fluids, the stability of active crystals is independent of the nature of the propulsion mechanism at the single-particle level. The square and rectangular lattices are found to be linearly unstable at short wavelengths provided the volume fraction of the crystals is high enough. In contrast, hexagonal, oblique, and face-centered crystals are always unstable. Our work provides a theoretical basis for future experimental work on flowing microfluidic crystals.
Hydromechanical transmission with hydrodynamic drive
Orshansky, Jr., deceased, Elias; Weseloh, William E.
1979-01-01
This transmission has a first planetary gear assembly having first input means connected to an input shaft, first output means, and first reaction means, and a second planetary gear assembly having second input means connected to the first input means, second output means, and second reaction means connected directly to the first reaction means by a reaction shaft. First clutch means, when engaged, connect the first output means to an output shaft in a high driving range. A hydrodynamic drive is used; for example, a torque converter, which may or may not have a stationary case, has a pump connected to the second output means, a stator grounded by an overrunning clutch to the case, and a turbine connected to an output member, and may be used in a starting phase. Alternatively, a fluid coupling or other type of hydrodynamic drive may be used. Second clutch means, when engaged, connect the output member to the output shaft in a low driving range. A variable-displacement hydraulic unit is mechanically connected to the input shaft, and a fixed-displacement hydraulic unit is mechanically connected to the reaction shaft. The hydraulic units are hydraulically connected together so that when one operates as a pump the other acts as a motor, and vice versa. Both clutch means are connected to the output shaft through a forward-reverse shift arrangement. It is possible to lock out the torque converter after the starting phase is over.
The hydrodynamics of lamprey locomotion
NASA Astrophysics Data System (ADS)
Leftwich, Megan C.
The lamprey, an anguilliform swimmer, propels itself by undulating most of its body. This type of swimming produces flow patterns that are highly three-dimensional in nature and not well understood. However, substantial previous work has been done to understand two-dimensional unsteady propulsion, the possible wake structures, and thrust performance. Limited studies of three-dimensional propulsors with simple geometries have displayed the importance of the third dimension in designing unsteady swimmers. Some of the results of those studies, primarily the ways in which vorticity is organized in the wake region, are seen in lamprey swimming as well. In the current work, the third dimension is not the only important factor; complex geometry and body undulations also contribute to the hydrodynamics. Through dye flow visualization, particle image velocimetry, and pressure measurements, the hydrodynamics of anguilliform swimming are studied using a custom-built robotic lamprey. These studies all indicate that the undulations of the body are not producing thrust. Instead, it is the tail which acts to propel the animal. This conclusion led to further investigation of the tail, specifically the role of varying tail flexibility on the hydrodynamics. It is found that by making the tail more flexible, one decreases the coherence of the vorticity in the lamprey's wake. Additional flexibility also yields less thrust.
Real-Time Digital Compression Of Television Image Data
NASA Technical Reports Server (NTRS)
Barnes, Scott P.; Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.
1990-01-01
Digital encoding/decoding system compresses color television image data in real time for transmission at lower data rates and, consequently, lower bandwidths. Implements predictive coding process, in which each picture element (pixel) is predicted from the values of prior neighboring pixels, and the coded transmission expresses the difference between the actual and predicted current values. Combines differential pulse-code modulation process with nonlinear, nonadaptive predictor, nonuniform quantizer, and multilevel Huffman encoder.
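The predictive-coding loop can be sketched with a simplified previous-pixel DPCM. The system above uses a nonlinear, nonadaptive predictor over several prior neighbors and a nonuniform quantizer, so this uniform-quantizer, single-neighbor version is only an illustrative stand-in:

```python
import numpy as np

def dpcm_line(pixels, quantizer_step=4):
    """Illustrative 1D DPCM along a scan line: predict each pixel from
    its reconstructed left neighbor, quantize the prediction error, and
    track the reconstruction a matched decoder would produce."""
    recon = np.zeros(len(pixels), dtype=float)
    codes = []
    prev = 128.0                              # agreed-upon initial prediction
    for i, p in enumerate(pixels):
        err = p - prev
        q = int(round(err / quantizer_step))  # transmitted symbol
        codes.append(q)
        recon[i] = prev + q * quantizer_step  # decoder's reconstruction
        prev = recon[i]                       # predict from reconstruction,
    return codes, recon                       # so encoder/decoder stay in sync

codes, recon = dpcm_line(np.array([128.0, 130.0, 135.0, 135.0]))
```

The small difference symbols are what the (here omitted) entropy coder, e.g. a Huffman encoder, then compresses; predicting from the *reconstructed* neighbor rather than the original prevents encoder/decoder drift.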
Experiments of cylindrical isentropic compression by ultrahigh magnetic field
NASA Astrophysics Data System (ADS)
Gu, Zhuowei; Zhou, Zhongyu; Zhang, Chunbo; Tang, Xiaosong; Tong, Yanjin; Zhao, Jianheng; Sun, Chengwei
2015-09-01
The high-explosive magnetic flux implosion compression generator (EMFICG) is a unique high-energy-density dynamic technique, characterized by ultrahigh pressure and low temperature rise, which makes it suitable for cylindrical isentropic compression. The Institute of Fluid Physics, Chinese Academy of Engineering Physics (IFP, CAEP) has developed the EMFICG technique and realized cylindrical isentropic compression. In the experiments, a seed magnetic field of 5-6 Tesla was established first and then compressed by a stainless steel liner driven by high explosive. The inner free-surface velocity of the sample was measured by PDV. Isentropic compression of a copper sample was verified, with an isentropic pressure of over 100 GPa. The cylindrical isentropic compression process was numerically simulated with a 1D MHD code, and the simulation results were compared with the experiments. Compared with traditional X-ray flash radiography, this method will probably improve data accuracy.
ICER-3D Hyperspectral Image Compression Software
NASA Technical Reports Server (NTRS)
Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh
2010-01-01
Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received
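The error-containment partitioning can be sketched as follows: the spatial extent of the hyperspectral cube is split into rectangular regions that extend through all wavelength bands, and each region would then be compressed independently. Function and parameter names are illustrative, not from the ICER-3D software:

```python
import numpy as np

def partition_cube(cube, nx=2, ny=2):
    """Illustrative ICER-3D-style error containment: split a
    hyperspectral cube (bands, rows, cols) spatially into nx * ny
    rectangular partitions, each extending through all bands, so that
    loss of data in one partition cannot corrupt the others."""
    bands, rows, cols = cube.shape
    parts = []
    for i in range(ny):
        for j in range(nx):
            r0, r1 = i * rows // ny, (i + 1) * rows // ny
            c0, c1 = j * cols // nx, (j + 1) * cols // nx
            parts.append(cube[:, r0:r1, c0:c1])  # all bands included
    return parts

cube = np.arange(32).reshape(2, 4, 4)  # toy cube: 2 bands, 4x4 spatial
parts = partition_cube(cube)
```

Because compression is also progressive within each partition, truncating any one partition's stream degrades only that spatial region, not the whole image.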
NASA Astrophysics Data System (ADS)
Santarius, J. F.
2012-07-01
Converging plasma jets may be able to reach the regime of high energy density plasmas (HEDP). The successful application of plasma jets to magneto-inertial fusion (MIF) would heat the plasma by fusion products and should increase the plasma energy density. This paper reports the results of using the University of Wisconsin's 1-D Lagrangian, radiation-hydrodynamics, fusion code BUCKY to investigate two MIF converging plasma jet test cases originally analyzed by Samulyak et al. [Physics of Plasmas 17, 092702 (2010)]. In these cases, 15 cm or 5 cm radially thick deuterium-tritium (DT) plasma jets merge at 60 cm from the origin and converge radially onto a DT target magnetized to 2 T and of radius 5 cm. The BUCKY calculations reported here model these cases, starting from the time of initial contact of the jets and target. Compared to the one-temperature Samulyak et al. calculations, the one-temperature BUCKY results show similar behavior, except that the plasma radius near maximum compression remains about twice as large. One-temperature and two-temperature BUCKY results differ, reflecting the sensitivity of the calculations to timing and plasma parameter details, with the two-temperature case giving a more sustained compression.
Low Mach number fluctuating hydrodynamics of multispecies liquid mixtures
Donev, Aleksandar; Bhattacharjee, Amit Kumar; Nonaka, Andy; Bell, John B.; Garcia, Alejandro L.
2015-03-15
We develop a low Mach number formulation of the hydrodynamic equations describing transport of mass and momentum in a multispecies mixture of incompressible miscible liquids at specified temperature and pressure, which generalizes our prior work on ideal mixtures of ideal gases [Balakrishnan et al., “Fluctuating hydrodynamics of multispecies nonreactive mixtures,” Phys. Rev. E 89 013017 (2014)] and binary liquid mixtures [Donev et al., “Low mach number fluctuating hydrodynamics of diffusively mixing fluids,” Commun. Appl. Math. Comput. Sci. 9(1), 47-105 (2014)]. In this formulation, we combine and extend a number of existing descriptions of multispecies transport available in the literature. The formulation applies to non-ideal mixtures of arbitrary number of species, without the need to single out a “solvent” species, and includes contributions to the diffusive mass flux due to gradients of composition, temperature, and pressure. Momentum transport and advective mass transport are handled using a low Mach number approach that eliminates fast sound waves (pressure fluctuations) from the full compressible system of equations and leads to a quasi-incompressible formulation. Thermal fluctuations are included in our fluctuating hydrodynamics description following the principles of nonequilibrium thermodynamics. We extend the semi-implicit staggered-grid finite-volume numerical method developed in our prior work on binary liquid mixtures [Nonaka et al., “Low mach number fluctuating hydrodynamics of binary liquid mixtures,” http://arxiv.org/abs/1410.2300 (2015)] and use it to study the development of giant nonequilibrium concentration fluctuations in a ternary mixture subjected to a steady concentration gradient. We also numerically study the development of diffusion-driven gravitational instabilities in a ternary mixture and compare our numerical results to recent experimental measurements [Carballido-Landeira et al., “Mixed-mode instability of a
Hydrodynamic models of a Cepheid atmosphere. I - Deep envelope models
NASA Technical Reports Server (NTRS)
Karp, A. H.
1975-01-01
The implicit hydrodynamic code of Kutter and Sparks has been modified to include radiative transfer effects. This modified code has been used to compute deep envelope models of a classical Cepheid with a period of 12 days. It is shown that in this particular model the hydrogen ionization region plays only a small role in producing the observed phase lag between the light and velocity curves. The cause of the bumps on the model's light curve is examined, and a mechanism is presented to explain those Cepheids with two secondary features on their light curves. This mechanism is shown to be consistent with the Hertzsprung sequence only if the evolutionary mass-luminosity law is used.
CoGI: Towards Compressing Genomes as an Image.
Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong
2015-01-01
Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transfer. It is desirable to compress data to reduce storage and transfer cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM, a state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip, a general-purpose and widely used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
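The genome-to-image transform at the heart of CoGI can be sketched as a 2-bit encoding laid out as a binary bitmap; the particular base-to-bits assignment below is an illustrative assumption, not necessarily CoGI's:

```python
import numpy as np

def genome_to_bitmap(seq, width=8):
    """Map each base to a fixed 2-bit code and lay the bits out as a
    two-dimensional binary image, which a rectangular partition coder
    (or any bi-level image coder) could then compress.
    The 2-bit assignment here is illustrative."""
    code = {'A': (0, 0), 'C': (0, 1), 'G': (1, 0), 'T': (1, 1)}
    bits = [b for base in seq for b in code[base]]
    # Pad with zeros to fill a whole rectangle of the requested width.
    bits += [0] * ((-len(bits)) % width)
    return np.array(bits, dtype=np.uint8).reshape(-1, width)

bitmap = genome_to_bitmap("ACGTACGTACGT", width=8)
```

Repetitive sequence structure shows up as repeated rows or rectangular blocks of identical bits in the bitmap, which is what a rectangular partition coder exploits.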
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1994-01-01
When data is transmitted through a noisy channel, errors are produced within the data rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, (7, 1/2) convolutional code as an inner code and CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
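The interleaving step between the outer Reed-Solomon code and the channel can be sketched with a depth-5 block interleaver; this is a generic sketch, not the simulator's code:

```python
def block_interleave(symbols, depth=5):
    """Write symbols row by row into `depth` rows, read out column by
    column, so that a burst of adjacent channel errors is spread across
    `depth` separate Reed-Solomon codewords."""
    n = len(symbols)
    assert n % depth == 0
    cols = n // depth
    rows = [symbols[r * cols:(r + 1) * cols] for r in range(depth)]
    return [rows[r][c] for c in range(cols) for r in range(depth)]

def block_deinterleave(symbols, depth=5):
    """Inverse: write column by column, read row by row."""
    cols = len(symbols) // depth
    col_data = [symbols[c * depth:(c + 1) * depth] for c in range(cols)]
    return [col_data[c][r] for r in range(depth) for c in range(cols)]

inter = block_interleave(list(range(10)), depth=5)
```

Any run of up to `depth` consecutive channel symbols lands in `depth` different codewords, which is why the burst-correction capability grows in proportion to the interleave depth, as the abstract states.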
Application of CHAD hydrodynamics to shock-wave problems
Trease, H.E.; O`Rourke, P.J.; Sahota, M.S.
1997-12-31
CHAD is the latest in a sequence of continually evolving computer codes written to effectively utilize massively parallel computer architectures and the latest grid generators for unstructured meshes. Its applications range from automotive design issues such as in-cylinder and manifold flows of internal combustion engines, vehicle aerodynamics, underhood cooling, and passenger compartment heating, ventilation, and air conditioning, to shock hydrodynamics and materials modeling. CHAD solves the full unsteady Navier-Stokes equations with the k-epsilon turbulence model in three space dimensions. The code has four major features that distinguish it from the earlier KIVA code, also developed at Los Alamos. First, it is based on a node-centered, finite-volume method in which, like finite element methods, all fluid variables are located at computational nodes. The computational mesh efficiently and accurately handles all element shapes ranging from tetrahedra to hexahedra. Second, it is written in standard Fortran 90 and relies on automatic domain decomposition and a universal communication library, written in standard C and MPI for unstructured grids, to effectively exploit distributed-memory parallel architectures. Thus the code is fully portable to a variety of computing platforms such as uniprocessor workstations, symmetric multiprocessors, clusters of workstations, and massively parallel platforms. Third, CHAD utilizes a variable explicit/implicit upwind method for convection that improves computational efficiency in flows that have large velocity Courant number variations due to velocity or mesh-size variations. Fourth, CHAD is designed to also simulate shock hydrodynamics involving multimaterial anisotropic behavior under high shear. The authors will discuss CHAD capabilities and show several sample calculations illustrating the strengths and weaknesses of CHAD.
A moving frame algorithm for high Mach number hydrodynamics
NASA Astrophysics Data System (ADS)
Trac, Hy; Pen, Ue-Li
2004-07-01
We present a new approach to Eulerian computational fluid dynamics that is designed to work at the high Mach numbers encountered in astrophysical hydrodynamic simulations. Standard Eulerian schemes that strictly conserve total energy suffer from the high Mach number problem, and proposed solutions that additionally solve an entropy or thermal energy equation still have their limitations. In our approach, the Eulerian conservation equations are solved in an adaptive frame moving with the fluid where Mach numbers are minimized. The moving frame approach uses a velocity decomposition technique to define local kinetic variables while storing the bulk kinetic components in a smoothed background velocity field that is associated with the grid velocity. Gravitationally induced accelerations are added to the grid, thereby minimizing the spurious heating problem encountered in cold gas flows. Separately tracking local and bulk flow components allows thermodynamic variables to be accurately calculated in both subsonic and supersonic regions. A main feature of the algorithm, one not possible in previous Eulerian implementations, is the ability to resolve shocks and prevent spurious heating where both the pre-shock and post-shock fluid are supersonic. The hybrid algorithm combines the high-resolution shock capturing ability of the second-order accurate Eulerian TVD scheme with a low-diffusion Lagrangian advection scheme. We have implemented a cosmological code where the hydrodynamic evolution of the baryons is captured using the moving frame algorithm while the gravitational evolution of the collisionless dark matter is tracked using a particle-mesh N-body algorithm. Hydrodynamic and cosmological tests are described and results presented. The current code is fast, memory-friendly, and parallelized for shared-memory machines.
Tecolote: An Object-Oriented Framework for Hydrodynamics Physics
Holian, K.S.; Ankeny, L.A.; Clancy, S.P.; Hall, J.H.; Marshall, J.C.; McNamara, G.R.; Painter, J.W.; Zander, M.E.
1997-12-31
Tecolote is an object-oriented framework for both developing and accessing a variety of hydrodynamics models. It is written in C++, and is in turn built on another framework, Parallel Object-Oriented Methods and Applications (POOMA). The Tecolote framework is meant to provide modules (or building blocks) for assembling hydrodynamics applications that can encompass a wide variety of physics models, numerical solution options, and underlying data storage schemes, with only the necessary modules activated at runtime. Tecolote has been designed to separate physics from computer science, as much as humanly possible. The POOMA framework provides C++ fields to Tecolote that are used analogously to Fortran-90 arrays but, in addition, have underlying load balancing, message passing, and a special scheme for compact data storage. The POOMA fields can also have unique meshes associated with them, allowing more options than just the normal regularly spaced Cartesian mesh. They also make one, two, and three dimensions immediately accessible to the code developer and code user.
Update on Thermal and Hydrodynamic Simulations on LMJ Cryogenic Targets
Moll, G.; Charton, S.
2004-03-15
The temperature of the cryogenic target inside the hohlraum has been studied with a computational fluid dynamics code (FLUENT). Specific models have been developed and used for both thermal and hydrodynamic calculations. With thermal calculations only, we first found the optimum heat flux required to counteract the effect of the laser entrance windows. This heat flux is centered on the hohlraum wall along the axis of revolution. With this heat flux, the temperature nonuniformity over the surfaces of the capsule and the DT ice layer is significantly reduced. Second, the sensitivity of the target temperature profiles (capsule and DT layer) to capsule displacement has been determined. Third, the effect of extracting the shield surrounding the cryogenic structure has been studied, indicating that the target lifetime before the laser shot is less than 1 s. Meanwhile, with hydrodynamic simulations, we have investigated the alteration of the surface temperature profiles due to convection of the He and H2 mixture within the hohlraum. In order to resolve the variations between different configurations, results of these studies are given to seven significant digits. These results only indicate a trend, because of uncertainties in material properties and approximations in the code.
Inelastic response of silicon to shock compression.
Higginbotham, A; Stubley, P G; Comley, A J; Eggert, J H; Foster, J M; Kalantar, D H; McGonegle, D; Patel, S; Peacock, L J; Rothman, S D; Smith, R F; Suggit, M J; Wark, J S
2016-01-01
The elastic and inelastic response of [001] oriented silicon to laser compression has been a topic of considerable discussion for well over a decade, yet there has been little progress in understanding the basic behaviour of this apparently simple material. We present experimental x-ray diffraction data showing complex elastic strain profiles in laser compressed samples on nanosecond timescales. We also present molecular dynamics and elasticity code modelling which suggests that a pressure induced phase transition is the cause of the previously reported 'anomalous' elastic waves. Moreover, this interpretation allows for measurement of the kinetic timescales for transition. This model is also discussed in the wider context of reported deformation of silicon to rapid compression in the literature.
Compression of color-mapped images
NASA Technical Reports Server (NTRS)
Hadenfeldt, A. C.; Sayood, Khalid
1992-01-01
In a standard image coding scenario, pixel-to-pixel correlation nearly always exists in the data, especially if the image is a natural scene. This correlation is what allows predictive coding schemes (e.g., DPCM) to perform efficient compression. In a color-mapped image, the values stored in the pixel array are no longer directly related to the pixel intensity. Two color indices which are numerically adjacent (close) may point to two very different colors. The correlation still exists, but only via the colormap. This fact can be exploited by sorting the color map to reintroduce the structure. The sorting of colormaps is studied and it is shown how the resulting structure can be used in both lossless and lossy compression of images.
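The colormap-sorting idea can be illustrated in a few lines: reorder the palette by a proxy for perceptual similarity and remap the index image, so that numerically close indices again point to similar colors and a DPCM-style predictor becomes useful. Sorting by luminance is an illustrative simplification, not necessarily the sorting criterion studied in the paper.

```python
import numpy as np

def sort_colormap(palette, indexed):
    """Reorder a palette by luminance and remap the index image so that
    numerically adjacent indices map to perceptually similar colors."""
    palette = np.asarray(palette, dtype=float)          # shape (K, 3), RGB rows
    luma = palette @ np.array([0.299, 0.587, 0.114])    # BT.601 luminance proxy
    order = np.argsort(luma)                            # new palette ordering
    inverse = np.empty_like(order)
    inverse[order] = np.arange(len(order))              # old index -> new index
    return palette[order], inverse[indexed]
```

The remapping is lossless: every pixel still refers to exactly the same color, only through a reordered map.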
Renormalization and universality of blowup in hydrodynamic flows.
Mailybaev, Alexei A
2012-06-01
We consider self-similar solutions describing intermittent bursts in shell models of turbulence and study their relationship with blowup phenomena in continuous hydrodynamic models. First, we show that these solutions are very close to the self-similar solution of the Fourier-transformed inviscid Burgers equation corresponding to shock formation from smooth initial data. Then, the result is generalized to hyperbolic conservation laws in one space dimension describing compressible flows. It is shown that the renormalized wave profile tends to a universal function, which is independent both of initial conditions and of the specific form of the conservation law. This phenomenon can be viewed as a new manifestation of renormalization group theory. Finally, we discuss possibilities for applying the developed theory to detect and describe blowup in incompressible flows.
Unsteady non-Newtonian hydrodynamics in granular gases.
Astillero, Antonio; Santos, Andrés
2012-02-01
The temporal evolution of a dilute granular gas, both in a compressible flow (uniform longitudinal flow) and in an incompressible flow (uniform shear flow), is investigated by means of the direct simulation Monte Carlo method to solve the Boltzmann equation. Emphasis is laid on the identification of a first "kinetic" stage (where the physical properties are strongly dependent on the initial state) subsequently followed by an unsteady "hydrodynamic" stage (where the momentum fluxes are well-defined non-Newtonian functions of the rate of strain). The simulation data are seen to support this two-stage scenario. Furthermore, the rheological functions obtained from simulation are well described by an approximate analytical solution of a model kinetic equation.
NASA Technical Reports Server (NTRS)
Akkerman, J. W.
1982-01-01
New mechanism alters compression ratio of internal-combustion engine according to load so that engine operates at top fuel efficiency. Ordinary gasoline, diesel, and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. Mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.
Relativistic Hydrodynamics for Heavy-Ion Collisions
ERIC Educational Resources Information Center
Ollitrault, Jean-Yves
2008-01-01
Relativistic hydrodynamics is essential to our current understanding of nucleus-nucleus collisions at ultrarelativistic energies (current experiments at the Relativistic Heavy Ion Collider, forthcoming experiments at the CERN Large Hadron Collider). This is an introduction to relativistic hydrodynamics for graduate students. It includes a detailed…
Hydrodynamic description for ballistic annihilation systems
Garcia de Soria, Maria Isabel; Trizac, Emmanuel; Maynar, Pablo; Schehr, Gregory; Barrat, Alain
2009-01-21
The problem of the validity of a hydrodynamic description for a system in which there are no collisional invariants is addressed. Hydrodynamic equations have been derived and successfully tested against simulation data for a system where particles annihilate with a probability p, or collide elastically otherwise. The response of the system to a linear perturbation is analyzed as well.
Combined data encryption and compression using chaos functions
NASA Astrophysics Data System (ADS)
Bose, Ranjan; Pathak, Saumitr
2004-10-01
Past research in the field of cryptography has not given much consideration to arithmetic coding as a feasible encryption technique, with studies proving compression-specific arithmetic coding to be largely unsuitable for encryption. Nevertheless, adaptive modelling, which offers a huge model, variable in structure, and as completely as possible a function of the entire text that has been transmitted since the time the model was initialised, is a suitable candidate for a possible combined encryption-compression scheme. The focus of the work presented in this paper has been to incorporate recent results of chaos theory, proven to be cryptographically secure, into arithmetic coding: to devise a convenient method to make the structure of the model unpredictable and variable in nature, and yet to retain, as far as is possible, statistical harmony, so that compression remains possible. A chaos-based adaptive arithmetic coding-encryption technique has been designed, developed and tested, and its implementation is discussed. For typical text files, the proposed encoder gives compression between 67.5% and 70.5%, the zero-order compression suffering by about 6% due to encryption, and is not susceptible to previously reported attacks on arithmetic coding algorithms.
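The chaotic ingredient can be sketched with the logistic map, a standard example of a cryptographically motivated chaotic iteration. The paper's actual coupling of a chaotic map into the adaptive coding model is more involved; the function name and parameters below are illustrative only.

```python
def logistic_keystream(x0: float, r: float = 3.99, skip: int = 100):
    """Generate pseudo-random bytes from the logistic map x -> r*x*(1-x).

    The pair (x0, r) acts as a key: the same key reproduces the same
    stream, while a tiny change in x0 diverges after a few iterations
    (sensitive dependence on initial conditions)."""
    x = x0
    for _ in range(skip):            # discard the initial transient
        x = r * x * (1.0 - x)
    while True:
        x = r * x * (1.0 - x)
        yield int(x * 256) & 0xFF    # quantize the orbit to one byte
```

Such a stream could, for instance, perturb the order or weights of an adaptive model so that decoding without the key fails, while the statistics the coder relies on are largely preserved.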
Comparative Hydrodynamics of Bacterial Polymorphism
NASA Astrophysics Data System (ADS)
Spagnolie, Saverio E.; Lauga, Eric
2011-02-01
Most bacteria swim through fluids by rotating helical flagella which can take one of 12 distinct polymorphic shapes, the most common of which is the normal form used during forward swimming runs. To shed light on the prevalence of the normal form in locomotion, we gather all available experimental measurements of the various polymorphic forms and compute their intrinsic hydrodynamic efficiencies. The normal helical form is found to be the most efficient of the 12 polymorphic forms by a significant margin—a conclusion valid for both the peritrichous and polar flagellar families, and robust to a change in the effective flagellum diameter or length. Hence, although energetic costs of locomotion are small for bacteria, fluid mechanical forces may have played a significant role in the evolution of the flagellum.
Hydrodynamic enhanced dielectrophoretic particle trapping
Miles, Robin R.
2003-12-09
Hydrodynamic enhanced dielectrophoretic particle trapping is carried out by introducing a side stream into the main stream to squeeze the fluid containing particles close to the electrodes producing the dielectrophoretic forces. The region of strongest force in the manipulating fields is close to the electrodes, within 100 μm of them. The particle trapping arrangement uses a series of electrodes with an AC field placed between pairs of electrodes, which causes trapping of particles along the edges of the electrodes. Forcing an incoming flow stream containing, for example, cells and DNA close to the electrodes using another flow stream improves the efficiency of DNA trapping.
Radiation hydrodynamics in solar flares
Fisher, G.H.
1985-10-18
Solar flares are rather violent and extremely complicated phenomena, and it should be made clear at the outset that a physically complete picture describing all aspects of flares does not exist. From the wealth of data which is available, it is apparent that many different types of physical processes are involved during flares: energetic particle acceleration, rapid magnetohydrodynamic motion of complex field structures, magnetic reconnection, violent mass motion along magnetic field lines, and the heating of plasma to tens of millions of degrees, to name a few. The goal of this paper is to explore just one aspect of solar flares, namely, the interaction of hydrodynamics and radiation processes in fluid being rapidly heated along closed magnetic field lines. The models discussed are therefore necessarily restrictive, and will address only a few of the observed or observable phenomena. 46 refs., 6 figs.
Integration of quantum hydrodynamical equation
NASA Astrophysics Data System (ADS)
Ulyanova, Vera G.; Sanin, Andrey L.
2007-04-01
Quantum hydrodynamics (QFD) equations describing the dynamics of a quantum fluid are the subject of this report. These equations can be used to solve a wide class of problems, but they present the computational difficulties typical of nonlinear hyperbolic systems. It is therefore necessary to impose additional restrictions that assure the existence and uniqueness of solutions. As a test sample, we use the free wave packet and study its behavior under different initial and boundary conditions. In the numerical algorithm, calculations of wave packet propagation give rise to division by zero; to overcome this problem we sew together the discrete numerical solution and the analytical continuous solution on the boundary. We demonstrate for the free wave packet that the numerical solution corresponds to the analytical solution.
Hydrodynamic assembly for Fast Ignition
NASA Astrophysics Data System (ADS)
Tabak, Max; Clark, Daniel; Town, Richard; Hatchett, Stephen
2007-11-01
We present directly and indirectly driven implosion designs for Fast Ignition. Directly driven designs using various laser illumination wavelengths are described, and we compare these designs with simple hydrodynamic efficiency models. Capsules illuminated with less than 1 MJ of light, with perfect zooming at low intensity and low contrast ratio in power, can assemble 4 mg of fuel to a column density in excess of 3 g/cm^2. We contrast these designs with more optimized designs that lead to Guderley-style self-similar implosions. Indirectly driven capsules absorbing 75 kJ of x-rays can assemble 0.7 mg to a column density of 2.7 g/cm^2 in 1D simulations. We describe 2-D simulations including both capsules and attached cones driven by radiation, and discuss issues in assembling fuel near the cone tip and in cone disruption.
Hydrodynamic model for drying emulsions.
Feng, Huanhuan; Sprakel, Joris; van der Gucht, Jasper
2015-08-01
We present a hydrodynamic model for film formation in a dense oil-in-water emulsion under a unidirectional drying stress. Water flow through the plateau borders towards the drying end leads to the buildup of a pressure gradient. When the local pressure exceeds the critical disjoining pressure, the water films between droplets break and the droplets coalesce. We show that, depending on the critical pressure and the evaporation rate, the coalescence can occur in two distinct modes. At low critical pressures and low evaporation rates, coalescence occurs throughout the sample, whereas at high critical pressures and high evaporation rate, coalescence occurs only at the front. In the latter case, an oil layer develops on top of the film, which acts as a diffusive barrier and slows down film formation. Our findings, which are summarized in a state diagram for film formation, are in agreement with recent experimental findings.
Anomalous hydrodynamics kicks neutron stars
NASA Astrophysics Data System (ADS)
Kaminski, Matthias; Uhlemann, Christoph F.; Bleicher, Marcus; Schaffner-Bielich, Jürgen
2016-09-01
Observations show that, at the beginning of their existence, neutron stars are accelerated briskly to velocities of up to a thousand kilometers per second. We argue that this remarkable effect can be explained as a manifestation of quantum anomalies on astrophysical scales. To theoretically describe the early stage in the life of neutron stars we use hydrodynamics as a systematic effective-field-theory framework. Within this framework, anomalies of the Standard Model of particle physics as underlying microscopic theory imply the presence of a particular set of transport terms, whose form is completely fixed by theoretical consistency. The resulting chiral transport effects in proto-neutron stars enhance neutrino emission along the internal magnetic field, and the recoil can explain the order of magnitude of the observed kick velocities.
Particle Hydrodynamics with Material Strength for Multi-Layer Orbital Debris Shield Design
NASA Technical Reports Server (NTRS)
Fahrenthold, Eric P.
1999-01-01
Three dimensional simulation of oblique hypervelocity impact on orbital debris shielding places extreme demands on computer resources. Research to date has shown that particle models provide the most accurate and efficient means for computer simulation of shield design problems. In order to employ a particle based modeling approach to the wall plate impact portion of the shield design problem, it is essential that particle codes be augmented to represent strength effects. This report describes augmentation of a Lagrangian particle hydrodynamics code developed by the principal investigator, to include strength effects, allowing for the entire shield impact problem to be represented using a single computer code.
Effect of Surface Roughness on Hydrodynamic Bearings
NASA Technical Reports Server (NTRS)
Majumdar, B. C.; Hamrock, B. J.
1981-01-01
A theoretical analysis of the performance of hydrodynamic oil bearings is made considering surface roughness effects. The hydrodynamic as well as asperity contact load is found. The contact pressure was calculated with the assumption that the surface height distribution was Gaussian. The average Reynolds equation of partially lubricated surfaces was used to calculate the hydrodynamic load. An analytical expression for the average gap was found and introduced to modify the average Reynolds equation. The resulting boundary value problem was then solved numerically by finite difference methods using successive over-relaxation. The pressure distribution and hydrodynamic load capacity of plane slider and journal bearings were calculated for various design data. The effects of attitude and surface roughness on bearing performance are shown, and the results are compared with similar available solutions for rough-surface bearings. It is shown that: (1) the contribution of the contact load is not significant; and (2) the hydrodynamic and contact loads increase with surface roughness.
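The numerical core described above, a Reynolds equation discretized by finite differences and iterated with successive over-relaxation (SOR), can be sketched for the smooth-surface 1-D slider bearing. All parameter values are illustrative, and the roughness-averaged flow terms of the paper are omitted.

```python
import numpy as np

def slider_pressure(n=51, h1=2e-5, h2=1e-5, L=0.1, U=1.0, mu=0.05,
                    omega=1.8, sweeps=5000, tol=1e-9):
    """Solve the 1-D incompressible Reynolds equation
        d/dx( h^3 dp/dx ) = 6*mu*U*dh/dx
    for a linear slider film h(x) by finite differences with SOR.
    Smooth surfaces only; illustrative parameters, not the paper's."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]
    h = h1 + (h2 - h1) * x / L                 # converging film when h2 < h1
    h3e = 0.5 * (h[1:] ** 3 + h[:-1] ** 3)     # h^3 averaged to half nodes
    rhs = 6.0 * mu * U * (h2 - h1) / L         # dh/dx is constant here
    p = np.zeros(n)                            # ambient pressure at both ends
    for _ in range(sweeps):
        change = 0.0
        for i in range(1, n - 1):
            gs = (h3e[i] * p[i + 1] + h3e[i - 1] * p[i - 1]
                  - rhs * dx * dx) / (h3e[i] + h3e[i - 1])
            new = (1.0 - omega) * p[i] + omega * gs   # over-relaxed update
            change = max(change, abs(new - p[i]))
            p[i] = new
        if change < tol:
            break
    return x, p
```

For a converging wedge (h2 < h1) the solution is the familiar positive pressure hump between the ambient-pressure ends; its integral gives the load capacity.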
Stellar Explosions: Hydrodynamics and Nucleosynthesis
NASA Astrophysics Data System (ADS)
Jose, Jordi
2016-01-01
Stars are the main factories of element production in the universe through a suite of complex and intertwined physical processes. Such stellar alchemy is driven by multiple nuclear interactions that through eons have transformed the pristine, metal-poor ashes leftover by the Big Bang into a cosmos with 100 distinct chemical species. The products of stellar nucleosynthesis frequently get mixed inside stars by convective transport or through hydrodynamic instabilities, and a fraction of them is eventually ejected into the interstellar medium, thus polluting the cosmos with gas and dust. The study of the physics of the stars and their role as nucleosynthesis factories owes much to cross-fertilization of different, somehow disconnected fields, ranging from observational astronomy, computational astrophysics, and cosmochemistry to experimental and theoretical nuclear physics. Few books have simultaneously addressed the multidisciplinary nature of this field in an engaging way suitable for students and young scientists. Providing the required multidisciplinary background in a coherent way has been the driving force for Stellar Explosions: Hydrodynamics and Nucleosynthesis. Written by a specialist in stellar astrophysics, this book presents a rigorous but accessible treatment of the physics of stellar explosions from a multidisciplinary perspective at the crossroads of computational astrophysics, observational astronomy, cosmochemistry, and nuclear physics. Basic concepts from all these different fields are applied to the study of classical and recurrent novae, type I and II supernovae, X-ray bursts and superbursts, and stellar mergers. The book shows how a multidisciplinary approach has been instrumental in our understanding of nucleosynthesis in stars, particularly during explosive events.
Stellar Explosions: Hydrodynamics and Nucleosynthesis
NASA Astrophysics Data System (ADS)
José, Jordi
2015-12-01
Stars are the main factories of element production in the universe through a suite of complex and intertwined physical processes. Such stellar alchemy is driven by multiple nuclear interactions that through eons have transformed the pristine, metal-poor ashes leftover by the Big Bang into a cosmos with 100 distinct chemical species. The products of stellar nucleosynthesis frequently get mixed inside stars by convective transport or through hydrodynamic instabilities, and a fraction of them is eventually ejected into the interstellar medium, thus polluting the cosmos with gas and dust. The study of the physics of the stars and their role as nucleosynthesis factories owes much to cross-fertilization of different, somehow disconnected fields, ranging from observational astronomy, computational astrophysics, and cosmochemistry to experimental and theoretical nuclear physics. Few books have simultaneously addressed the multidisciplinary nature of this field in an engaging way suitable for students and young scientists. Providing the required multidisciplinary background in a coherent way has been the driving force for Stellar Explosions: Hydrodynamics and Nucleosynthesis. Written by a specialist in stellar astrophysics, this book presents a rigorous but accessible treatment of the physics of stellar explosions from a multidisciplinary perspective at the crossroads of computational astrophysics, observational astronomy, cosmochemistry, and nuclear physics. Basic concepts from all these different fields are applied to the study of classical and recurrent novae, type I and II supernovae, X-ray bursts and superbursts, and stellar mergers. The book shows how a multidisciplinary approach has been instrumental in our understanding of nucleosynthesis in stars, particularly during explosive events.
The hydrodynamics of dolphin drafting
Weihs, Daniel
2004-01-01
Background Drafting in cetaceans is defined as the transfer of forces between individuals without actual physical contact between them. This behavior has long been surmised to explain how young dolphin calves keep up with their rapidly moving mothers. It has recently been observed that a significant number of calves become permanently separated from their mothers during chases by tuna vessels. A study of the hydrodynamics of drafting, initiated in the hope of understanding the mechanisms causing the separation of mothers and calves during fishing-related activities, is reported here. Results Quantitative results are shown for the forces and moments around a pair of unequally sized dolphin-like slender bodies. These include two major effects. First, the so-called Bernoulli suction, which stems from the fact that the local pressure drops in areas of high speed, results in an attractive force between mother and calf. Second is the displacement effect, in which the motion of the mother causes the water in front to move forwards and radially outwards, and water behind the body to move forwards to replace the animal's mass. Thus, the calf can gain a 'free ride' in the forward-moving areas. Utilizing these effects, the neonate can gain up to 90% of the thrust needed to move alongside the mother at speeds of up to 2.4 m/sec. A comparison with observations of eastern spinner dolphins (Stenella longirostris) is presented, showing savings of up to 60% in the thrust that calves require if they are to keep up with their mothers. Conclusions A theoretical analysis, backed by observations of free-swimming dolphin schools, indicates that hydrodynamic interactions with mothers play an important role in enabling dolphin calves to keep up with rapidly moving adult school members.
Cosmological Structure Formation Shocks and Cosmic Rays in Hydrodynamical Simulations
NASA Astrophysics Data System (ADS)
Pfrommer, C.; Springel, V.; Enßlin, T. A.; Jubelgas, M.
Cosmological shock waves during structure formation not only play a decisive role for the thermalization of gas in virializing structures but also for the acceleration of relativistic cosmic rays (CRs) through diffusive shock acceleration. We discuss a novel numerical treatment of the physics of cosmic rays in combination with a formalism for identifying and measuring the shock strength on-the-fly during a smoothed particle hydrodynamics simulation. In our methodology, the non-thermal CR population is treated self-consistently in order to assess its dynamical impact on the thermal gas as well as other implications for cosmological observables. Using this formalism, we study the history of the thermalization process in high-resolution hydrodynamic simulations of the Lambda cold dark matter model. Collapsed cosmological structures are surrounded by shocks with high Mach numbers up to 1000, but these play only a minor role in the energy balance of thermalization. However, this finding has important consequences for our understanding of the spatial distribution of CRs in the large-scale structure. In high-resolution simulations of galaxy clusters, we find a low contribution of the averaged CR pressure, due to the small acceleration efficiency at the lower Mach numbers of flow shocks inside halos and the softer adiabatic index of CRs. These effects disfavour CRs when a composite of thermal gas and CRs is adiabatically compressed. However, within cool core regions, the CR pressure reaches equipartition with the thermal pressure, leading to a lower effective adiabatic index and thus an enhanced compressibility of the central intracluster medium. This effect increases the central density and pressure of the cluster, and thus the resulting X-ray emission and the central Sunyaev-Zel'dovich flux decrement. The integrated Sunyaev-Zel'dovich effect, however, is only slightly changed.
Compression and Predictive Distributions for Large Alphabets
NASA Astrophysics Data System (ADS)
Yang, Xiao
Data generated from large alphabets exist almost everywhere in our life, for example, texts, images and videos. Traditional universal compression algorithms mostly involve small alphabets and implicitly assume an asymptotic condition under which the extra bits induced in the compression process vanish as the amount of data grows without bound. In this thesis, we put the main focus on compression and prediction for large alphabets, with the alphabet size comparable to or larger than the sample size. We first consider sequences of random variables independently and identically generated from a large alphabet. In particular, the size of the sample is allowed to be variable. A product distribution based on Poisson sampling and tiling is proposed as the coding distribution, which greatly simplifies the implementation and analysis through independence. Moreover, we characterize the behavior of the coding distribution through a condition on the tail sum of the ordered counts, and apply it to sequences satisfying this condition. Further, we apply this method to envelope classes. This coding distribution provides a convenient method to approximately compute Shtarkov's normalized maximum likelihood (NML) distribution, and the extra price paid for this convenience is small compared to the total cost. Furthermore, we find this coding distribution can also be used to calculate the NML distribution exactly, and the calculation remains simple due to the independence of the coding distribution. Finally, we consider a more realistic class, the Markov class, and in particular tree sources. A context-tree-based algorithm is designed to describe the dependencies among the contexts. It is a greedy algorithm that seeks the greatest savings in codelength when constructing the tree. Compression and prediction of the individual counts associated with the contexts use the same coding distribution as in the i.i.d. case. Combining these two procedures, we demonstrate a compression algorithm based
An investigation of dehazing effects on image and video coding.
Gibson, Kristofor B; Võ, Dung T; Nguyen, Truong Q
2012-02-01
This paper investigates the effects of dehazing on image and video coding for surveillance systems. The goal is to achieve good dehazed images and videos at the receiver while sustaining low bitrates (using compression) in the transmission pipeline. First, this paper proposes a novel method for single-image dehazing, which is used for the investigation. It operates faster than current methods and avoids halo effects by using the median operation. We then consider the dehazing effects in compression by investigating the coding artifacts and motion estimation when a dehazing method is applied before or after compression. We conclude that better dehazing performance, with fewer artifacts and better coding efficiency, is achieved when dehazing is applied before compression. Simulations for Joint Photographic Experts Group (JPEG) images, in addition to subjective and objective tests with H.264 compressed sequences, validate our conclusion. PMID:21896391
Lossless compression for 3D PET
Macq, B.; Sibomana, M.; Coppens, A.; Bol, A.; Michel, C.; Baker, K.; Jones, B.
1994-12-01
A new adaptive scheme is proposed for the lossless compression of positron emission tomography (PET) sinogram data. The algorithm uses an adaptive differential pulse code modulator (ADPCM) followed by a universal variable length coder (UVLC). In contrast to Lempel-Ziv (LZ), which operates on a whole sinogram, UVLC operates very efficiently on short data blocks. This is a major advantage for real-time implementation. The algorithm is adaptive and codes data after some on-line estimation of the statistics inside each block. Its efficiency is tested when coding dynamic and static scans from two PET scanners, and it asymptotically reaches the entropy limit for long frames. For very short 3D frames, the new algorithm is twice as efficient as LZ. Since an application-specific integrated circuit (ASIC) implementing a similar UVLC scheme is available today, a similar one should be able to sustain PET data lossless compression and decompression at a rate of 27 MBytes/sec. This algorithm is consequently a good candidate for the next generation of lossless compression engines.
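The DPCM-plus-variable-length-coder pipeline can be sketched in miniature. The code below is a hedged illustration: it pairs plain (non-adaptive) DPCM with an Elias gamma code as a stand-in for the paper's UVLC, whose exact codeword table the abstract does not specify.

```python
def zigzag(d):
    # Map a signed residual to an unsigned index: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4
    return 2 * d if d >= 0 else -2 * d - 1

def elias_gamma(n):
    # Elias gamma codeword for n >= 1, as a bit string (stand-in for the UVLC)
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def dpcm_vlc_encode(samples):
    """DPCM: code the difference between each sample and its predecessor;
    each residual is zigzag-mapped, shifted to >= 1, and gamma-coded."""
    prev, bits = 0, []
    for s in samples:
        bits.append(elias_gamma(zigzag(s - prev) + 1))
        prev = s
    return "".join(bits)

def dpcm_vlc_decode(bits):
    """Invert the gamma code, undo the zigzag map, and integrate residuals."""
    out, prev, i = [], 0, 0
    while i < len(bits):
        z = 0
        while bits[i] == "0":   # count the unary prefix
            z += 1
            i += 1
        n = int(bits[i:i + z + 1], 2)
        i += z + 1
        u = n - 1
        d = u // 2 if u % 2 == 0 else -(u + 1) // 2
        prev += d
        out.append(prev)
    return out
```

Because neighboring sinogram samples are correlated, the residuals cluster near zero, where the variable-length code spends the fewest bits; the real scheme additionally adapts its statistics per block.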
Cramer, S.N.
1984-01-01
The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States that is available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids.
ERIC Educational Resources Information Center
Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien
2013-01-01
This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…
Zingale, M; Howell, L H
2010-03-17
The motivation for this work is to gain experience in the methodology of verification and validation (V&V) of astrophysical radiation hydrodynamics codes. In the first period of this work, we focused on building the infrastructure to test a single astrophysical application code, Castro, developed in collaboration between Lawrence Livermore National Laboratory (LLNL) and Lawrence Berkeley Laboratory (LBL). We delivered several hydrodynamic test problems, in the form of coded initial conditions and documentation for verification, routines to perform data analysis, and a generalized regression test suite to allow for continued automated testing. Astrophysical simulation codes aim to model phenomena that elude direct experimentation. Our only direct information about these systems comes from what we observe, which may be transient. Simulation can help further our understanding by allowing virtual experimentation on these systems. However, to have confidence in our simulations, we must have confidence in the tools we use. Verification and validation is a process by which we build confidence that a simulation code is accurately representing reality. V&V is a multistep process and is never really complete. Once a single test problem is working as desired (i.e., that problem is verified), one wants to ensure that subsequent code changes do not break that test. At the same time, one must also search for new verification problems that test the code in a new way. It can be rather tedious to manually retest each of the problems, so before going too far with V&V, it is desirable to have an automated test suite. Our project aims to provide these basic tools for astrophysical radiation hydrodynamics codes.
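The core of a regression suite of the kind described is simple: rerun each verified problem and compare its outputs against stored benchmark values. A minimal sketch follows; the metric names and tolerance are illustrative, not Castro's actual test harness.

```python
import math

def regression_check(current, benchmark, rtol=1e-6):
    """Compare a new run's scalar outputs (dict of name -> value) against
    stored benchmark values. Returns a list of (name, new, reference)
    tuples for every metric that drifted beyond the tolerance; an empty
    list means the verified problem still passes."""
    failures = []
    for name, value in current.items():
        ref = benchmark[name]
        if not math.isclose(value, ref, rel_tol=rtol):
            failures.append((name, value, ref))
    return failures

# Hypothetical example: a shock-tube problem tracked by two scalar metrics
benchmark = {"shock_position": 0.8251, "peak_density": 3.997}
new_run = {"shock_position": 0.8251, "peak_density": 3.997}
print(regression_check(new_run, benchmark))  # empty list: no regression
```

Automating this comparison after every code change is what turns a one-time verification into continued confidence.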
Arithmetic coding as a non-linear dynamical system
NASA Astrophysics Data System (ADS)
Nagaraj, Nithin; Vaidya, Prabhakar G.; Bhat, Kishor G.
2009-04-01
In order to perform source coding (data compression), we treat messages emitted by independent and identically distributed sources as imprecise measurements (symbolic sequences) of a chaotic, ergodic, Lebesgue-measure-preserving, non-linear dynamical system known as the Generalized Lüroth Series (GLS). GLS achieves Shannon's entropy bound and turns out to be a generalization of arithmetic coding, a popular source coding algorithm used in international compression standards such as JPEG2000 and H.264. We further generalize GLS to piecewise non-linear maps (Skewed-nGLS). We motivate the use of Skewed-nGLS as a framework for joint source coding and encryption.
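The correspondence between arithmetic coding and an iterated piecewise-linear map can be shown in a few lines. This is a minimal binary sketch with exact rational arithmetic, intended only to illustrate the GLS viewpoint; the authors' construction and generalizations differ in detail. Encoding narrows an interval of points whose symbolic sequence matches the message; decoding iterates the map and reads the sequence back off.

```python
from fractions import Fraction

def encode(symbols, p0):
    """Arithmetic encoding: each symbol selects a sub-interval of [low, high)
    (the inverse branches of a piecewise-linear map); return the midpoint
    of the final interval as the codeword."""
    low, high = Fraction(0), Fraction(1)
    for s in symbols:
        w = high - low
        if s == 0:
            high = low + w * p0   # left piece, length proportional to P(0)
        else:
            low = low + w * p0    # right piece, length proportional to P(1)
    return (low + high) / 2

def decode(x, p0, n):
    """Decoding = iterating the map: note which piece x lies in, emit that
    symbol, then stretch the piece back to [0, 1) and repeat."""
    out = []
    for _ in range(n):
        if x < p0:
            out.append(0)
            x = x / p0                    # map on the left piece
        else:
            out.append(1)
            x = (x - p0) / (1 - p0)       # map on the right piece
    return out

p0 = Fraction(3, 4)        # P(symbol 0), assumed known to both ends
msg = [0, 1, 0]
print(decode(encode(msg, p0), p0, len(msg)))  # recovers [0, 1, 0]
```

With exact rationals the round trip is lossless; the skewness of the two pieces is what matches codelength to the source probabilities and attains the entropy bound.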
Coded continuous wave meteor radar
NASA Astrophysics Data System (ADS)
Vierinen, Juha; Chau, Jorge L.; Pfeffer, Nico; Clahsen, Matthias; Stober, Gunter
2016-03-01
The concept of a coded continuous wave specular meteor radar (SMR) is described. The radar uses a continuously transmitted pseudorandom phase-modulated waveform, which has several advantages compared to conventional pulsed SMRs. The coding avoids range and Doppler aliasing, which are in some cases problematic with pulsed radars. Continuous transmission maximizes pulse compression gain, allowing operation at lower peak power than a pulsed system. With continuous coding, the temporal and spectral resolution are not dependent on the transmit waveform, and they can be fairly flexibly changed after performing a measurement. The low signal-to-noise ratio before pulse compression, combined with independent pseudorandom transmit waveforms, allows multiple geographically separated transmitters to be used in the same frequency band simultaneously without significantly interfering with each other. Because the same frequency band can be used by multiple transmitters, the same interferometric receiver antennas can be used to receive multiple transmitters at the same time. The principles of the signal processing are discussed, along with several practical ways to increase computation speed and to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large-scale multi-static network of meteor radar transmitters and receivers. Such a system would be useful for increasing the number of meteor detections to obtain improved meteor radar data products.
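The pulse-compression step at the heart of this scheme is a cross-correlation of the received samples with the known pseudorandom code; the correlation peak locates the echo delay (range). The following is a toy sketch with a single synthetic point-target echo; the code length, amplitude, and noise level are illustrative, not values from the paper.

```python
import random

def pulse_compress(rx, code):
    """Cross-correlate the received samples with the known transmit code;
    the lag with the largest correlation magnitude estimates the echo delay."""
    n = len(rx) - len(code) + 1
    corr = [sum(rx[lag + i] * c for i, c in enumerate(code)) for lag in range(n)]
    return max(range(n), key=lambda k: abs(corr[k]))

random.seed(1)
code = [random.choice((-1, 1)) for _ in range(200)]  # pseudorandom +/-1 phase code
delay = 37                                           # true echo delay in samples
rx = [0.0] * 300
for i, c in enumerate(code):                         # weak, delayed copy of the code
    rx[delay + i] += 0.05 * c
rx = [s + random.gauss(0, 0.05) for s in rx]         # receiver noise
print(pulse_compress(rx, code))                      # estimate of the delay
```

Note that each received sample is far below the noise floor (amplitude 0.05 against noise of the same scale), yet the 200-chip correlation gain pulls the echo well clear of it; this is the low pre-compression SNR regime the abstract describes, and independent codes at different transmitters keep their cross-correlations similarly noise-like.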
Design Point for a Spheromak Compression Experiment
NASA Astrophysics Data System (ADS)
Woodruff, Simon; Romero-Talamas, Carlos A.; O'Bryan, John; Stuber, James; Darpa Spheromak Team
2015-11-01
Two principal issues for the spheromak concept remain to be addressed experimentally: formation efficiency and confinement scaling. We are therefore developing a design point for a spheromak experiment that will be heated by adiabatic compression, utilizing the CORSICA and NIMROD codes as well as analytic modeling, with target parameters R_initial = 0.3 m, R_final = 0.1 m, T_initial = 0.2 keV, T_final = 1.8 keV, n_initial = 10^19 m^-3, and n_final = 10^21 m^-3, with a radial convergence of C = 3. This low convergence differentiates the concept from MTF, with C = 10 or more, since the plasma will be held in equilibrium throughout compression. We present results from CORSICA showing the placement of coils and passive structure to ensure stability during compression, and the design of the capacitor bank needed both to form the target plasma and to compress it. We specify target parameters for the compression in terms of plasma beta, formation efficiency, and energy confinement. Work performed under DARPA grant N66001-14-1-4044.
The New CCSDS Image Compression Recommendation
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Armbruster, Philippe; Kiely, Aaron; Masschelein, Bart; Moury, Gilles; Schaefer, Christoph
2005-01-01
The Consultative Committee for Space Data Systems (CCSDS) data compression working group has recently adopted a recommendation for image data compression, with a final release expected in 2005. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard will be accompanied by free software sources on a future web site. An Application-Specific Integrated Circuit (ASIC) implementation of the compressor is currently under development. This paper describes the compression algorithm along with the requirements that drove the selection of the algorithm. Performance results and comparisons with other compressors are given for a test set of space images.
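The transform-then-bit-plane structure of the recommendation can be illustrated with a deliberately simplified stand-in: a one-level Haar transform in place of the CCSDS wavelet, and raw magnitude bit-planes in place of the recommendation's entropy-coded planes. This sketch only shows the data flow, not the actual standard.

```python
def haar_1d(x):
    """One level of the Haar wavelet transform (illustrative stand-in for
    the CCSDS 2D wavelet): pairwise averages (low-pass) then differences
    (high-pass). Input length must be even."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    diff = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg + diff

def bit_planes(coeffs, nplanes=4):
    """Emit coefficient magnitudes one bit-plane at a time, most significant
    plane first. Truncating the resulting stream anywhere yields a coarser
    (lossy) reconstruction -- the property that lets a user trade volume
    against fidelity."""
    mags = [abs(int(round(c))) for c in coeffs]
    return [[(m >> b) & 1 for m in mags] for b in range(nplanes - 1, -1, -1)]
```

Sending the most significant planes first is what gives progressive decoding: the largest coefficients, which carry most of the image energy, are refined before the small ones.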
Compression of hyperspectral data for automated analysis
NASA Astrophysics Data System (ADS)
Linderhed, Anna; Wadströmer, Niclas; Stenborg, K.-G.; Nautsch, Harald
2009-09-01
State-of-the-art and coming hyperspectral optical sensors generate large amounts of data, and automatic analysis is necessary. One example is Automatic Target Recognition (ATR), frequently used in military applications and a coming technique for civilian surveillance applications. When sensors communicate in networks, the capacity of the communication channel defines the limit on data transferred without compression. Automated analysis may have different demands on data quality than a human observer, and thus standard compression methods may not be optimal. This paper presents results from testing how the performance of detection methods is affected by compressing input data with COTS coders. A standard video coder has been used to compress hyperspectral data. A video is a sequence of still images; a hybrid video coder exploits the correlation in time by performing block-based motion-compensated prediction between images, so that in principle only the differences are transmitted. This method of coding can be used on hyperspectral data if we treat one of the three dimensions as the time axis. Spectral anomaly detection is used as the detection method on mine data. This method finds every pixel in the image that is abnormal, an anomaly compared to its surroundings. The purpose of anomaly detection is to identify objects (samples, pixels) that differ significantly from the background, without any a priori explicit knowledge about the signature of the sought-after targets. Thus the role of the anomaly detector is to identify "hot spots" on which subsequent analysis can be performed. We have used data from Imspec, a hyperspectral sensor. The hyperspectral image, or spectral cube, consists of consecutive frames of spatial-spectral images. Each pixel contains a spectrum with 240 measurement points. Hyperspectral sensor data were coded with hybrid coding using a variant of MPEG2. Only I- and P-frames were used; every 10th frame was coded as an I-frame. 14 hyperspectral images were coded in 3
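Treating the spectral axis as the time axis reduces, in its simplest lossless form, to I-frames (bands kept intact) and P-frames (differences from the previous band). The sketch below shows only that idea; the MPEG2 coder actually used adds motion-compensated prediction and lossy quantization, which this illustration omits.

```python
def encode_cube(bands, gop=10):
    """bands: list of spectral bands, each a flat list of pixel values.
    Every `gop`-th band is kept intact (an I-frame); the rest are stored
    as pixel-wise differences from the previous band (P-frames),
    mirroring hybrid video coding applied along the spectral axis."""
    out = []
    for k, band in enumerate(bands):
        if k % gop == 0:
            out.append(("I", list(band)))
        else:
            out.append(("P", [b - p for b, p in zip(band, bands[k - 1])]))
    return out

def decode_cube(stream):
    """Rebuild each band: I-frames directly, P-frames by adding the stored
    residuals to the previously reconstructed band."""
    bands = []
    for kind, payload in stream:
        if kind == "I":
            bands.append(list(payload))
        else:
            bands.append([p + d for p, d in zip(bands[-1], payload)])
    return bands
```

Because adjacent spectral bands are strongly correlated, the P-frame residuals are small and compress well with any entropy coder; the periodic I-frames bound error propagation, matching the every-10th-frame choice in the paper.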
HEMP. Hydrodynamic Elastic Magneto Plastic
Wilkins, M.L.; Levatin, J.A.
1985-02-01
The HEMP code solves the conservation equations of two-dimensional elastic-plastic flow, in plane x-y coordinates or in cylindrical symmetry around the x-axis. Provisions for calculation of fixed boundaries, free surfaces, pistons, and boundary slide planes have been included, along with other special conditions.
Modeling Compressed Turbulence
Israel, Daniel M.
2012-07-13
From ICE to ICF, the effect of mean compression or expansion is important for predicting the state of the turbulence. When developing combustion models, we would like to know the mix state of the reacting species; this involves density and concentration fluctuations. To date, research has focused on the effect of compression on the turbulent kinetic energy. The current work provides constraints to help the development and calibration of models of species-mixing effects in compressed turbulence. The Cambon et al. re-scaling has been extended to buoyancy-driven turbulence, including the fluctuating density, concentration, and temperature equations. The new scalings give us helpful constraints for developing and validating RANS turbulence models.
Local compressibilities in crystals
NASA Astrophysics Data System (ADS)
Martín Pendás, A.; Costales, Aurora; Blanco, M. A.; Recio, J. M.; Luaña, Víctor
2000-12-01
An application of the atoms in molecules theory to the partitioning of static thermodynamic properties in condensed systems is presented. Attention is focused on the definition and the behavior of atomic compressibilities. Inverses of bulk moduli are found to be simple weighted averages of atomic compressibilities. Two kinds of systems are investigated as examples: four related oxide spinels and the alkali halide family. Our analyses show that the puzzling constancy of the bulk moduli of these spinels is a consequence of the value of the compressibility of an oxide ion. A functional dependence between ionic bulk moduli and ionic volume is also proposed.
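The weighted-average statement in the abstract can be written explicitly. Assuming V_i denotes the volume of atomic basin i with V = Σ_i V_i (notation introduced here for illustration), the relation follows from differentiating the volume sum with respect to pressure:

```latex
\kappa \equiv \frac{1}{B}
  = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_T
  = \sum_i \frac{V_i}{V}\,\kappa_i,
\qquad
\kappa_i \equiv -\frac{1}{V_i}\left(\frac{\partial V_i}{\partial p}\right)_T
```

That is, the crystal compressibility is the volume-weighted average of the atomic compressibilities, which is why a nearly constant oxide-ion compressibility can pin the bulk moduli of a whole family of spinels.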
Proposed generation and compression of a target plasma for MTF
Kirkpatrick, R.C.; Thurston, R.S.; Chrien, R.E.
1995-09-01
Magnetized target fusion (MTF), in which a magnetothermally insulated plasma is hydrodynamically compressed to fusion conditions, represents an approach to controlled fusion which avoids difficulties of both traditional inertial confinement and magnetic confinement approaches. The authors are proposing to demonstrate the feasibility of magnetized target fusion by: (1) creating a suitable magnetized target plasma, (2) performing preliminary liner compression experiments using existing pulsed power facilities and demonstrated liner performance. Once the target plasma and the means for its generation have been optimized, the authors plan to conduct preliminary liner compression experiments aimed at demonstrating the near-adiabatic compression of the target plasma desired for MTF. Relevant liner compression experiments have been performed at Los Alamos in the Scyllac Fast Liner Program and, more recently, in the Pegasus facility and the Procyon explosive pulsed power program. In a series of liner experiments they plan to map out the dependence of temperature and neutron production as functions of the initial plasma conditions and the liner compression achieved. With the above research program, they intend to demonstrate most of the key principles involved in magnetized target fusion, and develop the experimental and theoretical tools needed to design and execute fully integrated MTF ignition experiments.