Science.gov

Sample records for adaptive grid code

  1. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    NASA Technical Reports Server (NTRS)

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
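
    The one-dimensional step described above lends itself to a compact illustration. The Python sketch below shows the general tension-spring idea on a single coordinate line: spring stiffnesses grow with the local solution gradient, and the equilibrium point locations follow from a tridiagonal system (solved here with a dense solver for brevity). It is an illustration under these assumptions, not the SAGE implementation; the names adapt_line and alpha are hypothetical.

      # Sketch of 1-D tension-spring point redistribution (illustrative only).
      import numpy as np

      def adapt_line(x, f, alpha=2.0):
          """Redistribute points x along one coordinate line.

          The spring stiffness between neighbours is proportional to the local
          gradient |df/dx|, so the equilibrium condition
          w[i-1]*(x[i]-x[i-1]) = w[i]*(x[i+1]-x[i]) clusters points where the
          gradient is strong.  The interior rows form a tridiagonal system.
          """
          n = len(x)
          grad = np.abs(np.diff(f) / np.diff(x))          # gradient on each interval
          w = 1.0 + alpha * grad / (grad.max() + 1e-12)   # tension weights >= 1
          A = np.zeros((n, n))                            # dense stand-in for a tridiagonal solver
          b = np.zeros(n)
          A[0, 0] = A[-1, -1] = 1.0                       # endpoints stay fixed
          b[0], b[-1] = x[0], x[-1]
          for i in range(1, n - 1):
              A[i, i - 1] = w[i - 1]
              A[i, i] = -(w[i - 1] + w[i])
              A[i, i + 1] = w[i]
          return np.linalg.solve(A, b)

      # usage: cluster points around a smeared shock at x = 0.5
      x = np.linspace(0.0, 1.0, 41)
      f = np.tanh(40.0 * (x - 0.5))
      x_new = adapt_line(x, f)

    Applying such a one-dimensional pass sequentially in each computational direction mimics the multi-directional adaption described above.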

  2. The multidimensional Self-Adaptive Grid code, SAGE, version 2

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1995-01-01

    This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.

  3. SAGE: The Self-Adaptive Grid Code. 3

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1999-01-01

    The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.

  4. The multidimensional self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1992-01-01

    This report describes the multidimensional self-adaptive grid code SAGE. A two-dimensional version of this code was described in an earlier report by the authors. The formulation of the multidimensional version is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code and provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simplified input options make this a flexible and user-friendly code. The new SAGE code can accommodate both two-dimensional and three-dimensional flow problems.

  5. JPEG 2000 coding of image data over adaptive refinement grids

    NASA Astrophysics Data System (ADS)

    Gamito, Manuel N.; Dias, Miguel S.

    2003-06-01

    An extension of the JPEG 2000 standard is presented for non-conventional images resulting from an adaptive subdivision process. Samples, generated through adaptive subdivision, can have different sizes, depending on the amount of subdivision that was locally introduced in each region of the image. The subdivision principle allows each individual sample to be recursively subdivided into sets of four progressively smaller samples. Image datasets generated through adaptive subdivision find application in Computational Physics where simulations of natural processes are often performed over adaptive grids. It is also found that compression gains can be achieved for non-natural imagery, like text or graphics, if they first undergo an adaptive subdivision process. The representation of adaptive subdivision images is performed by first coding the subdivision structure into the JPEG 2000 bitstream, in a lossless manner, followed by the entropy coded and quantized transform coefficients. Due to the irregular distribution of sample sizes across the image, the wavelet transform must be applied on irregular image subsets that are nested across all the resolution levels. Using the conventional JPEG 2000 coding standard, adaptive subdivision images would first have to be upsampled to the smallest sample size in order to attain a uniform resolution. The proposed method for coding adaptive subdivision images is shown to perform better than conventional JPEG 2000 for medium to high bitrates.
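
    The adaptive-subdivision idea is easy to sketch apart from the JPEG 2000 machinery: blocks whose content is nearly uniform are kept as single large samples, while non-uniform blocks are split recursively into four smaller ones. The Python snippet below illustrates only that quadtree principle; the function name subdivide and the simple tolerance criterion are hypothetical, and the paper's bitstream coding is not reproduced.

      # Quadtree-style adaptive subdivision of an image (illustrative sketch).
      import numpy as np

      def subdivide(img, x0, y0, size, tol, leaves):
          """Recursively split a square block until it is nearly uniform."""
          block = img[y0:y0 + size, x0:x0 + size]
          if size == 1 or block.max() - block.min() <= tol:
              leaves.append((x0, y0, size, float(block.mean())))  # one variable-size sample
              return
          half = size // 2
          for dy in (0, half):
              for dx in (0, half):
                  subdivide(img, x0 + dx, y0 + dy, half, tol, leaves)

      # usage: the image side must be a power of two for this simple sketch
      img = np.zeros((64, 64))
      img[20:40, 20:40] = 1.0          # a feature that forces local refinement
      leaves = []
      subdivide(img, 0, 0, 64, tol=0.1, leaves=leaves)
      print(len(leaves), "variable-size samples instead of", 64 * 64, "pixels")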

  6. FLAG: A multi-dimensional adaptive free-Lagrange code for fully unstructured grids

    SciTech Connect

    Burton, D.E.; Miller, D.S.; Palmer, T.

    1995-07-01

    The authors describe FLAG, a 3D adaptive free-Lagrange method for unstructured grids. The grid elements are 3D polygons, which move with the flow and are refined or reconnected as necessary to achieve uniform accuracy. The authors stress that they were able to construct a 3D hydro version of this code in 3 months, using an object-oriented FORTRAN approach.

  7. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.

  8. The development and application of the self-adaptive grid code, SAGE

    NASA Astrophysics Data System (ADS)

    Davies, Carol B.

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  9. The development and application of the self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.

    1993-01-01

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  10. Adaptive EAGLE dynamic solution adaptation and grid quality enhancement

    NASA Technical Reports Server (NTRS)

    Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.

    1992-01-01

    In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.

  11. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.

  12. 3D Structured Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Banks, D. W.; Hafez, M. M.

    1996-01-01

    Grid adaptation for structured meshes is the art of using information from an existing, but poorly resolved, solution to automatically redistribute the grid points in such a way as to improve the resolution in regions of high error, and thus the quality of the solution. This involves: (1) generating a grid via some standard algorithm, (2) calculating a solution on this grid, (3) adapting the grid to this solution, (4) recalculating the solution on this adapted grid, and (5) repeating steps 3 and 4 to satisfaction. Steps 3 and 4 can be repeated until some 'optimal' grid is converged to, but typically this is not worth the effort and just two or three repeat calculations are necessary. They may also be repeated every 5-10 time steps for unsteady calculations.
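
    The five steps read naturally as a short driver loop. The sketch below simply encodes that cycle; generate_grid, flow_solver, and adapt_grid are hypothetical placeholders for whatever grid generator, flow solver, and adaption routine are actually used.

      # Minimal sketch of the solve/adapt cycle described above.

      def adaptive_solution(generate_grid, flow_solver, adapt_grid, n_cycles=3):
          grid = generate_grid()                     # step 1: generate an initial grid
          solution = flow_solver(grid)               # step 2: solve on it
          for _ in range(n_cycles):                  # typically two or three cycles suffice
              grid = adapt_grid(grid, solution)      # step 3: adapt the grid to the solution
              solution = flow_solver(grid)           # step 4: re-solve on the adapted grid
          return grid, solution                      # step 5: stop when satisfied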

  13. Grid quality improvement by a grid adaptation technique

    NASA Technical Reports Server (NTRS)

    Lee, K. D.; Henderson, T. L.; Choo, Y. K.

    1991-01-01

    A grid adaptation technique is presented which improves grid quality. The method begins with an assessment of grid quality by defining an appropriate grid quality measure. Then, undesirable grid properties are eliminated by a grid-quality-adaptive grid generation procedure. The same concept has been used for geometry-adaptive and solution-adaptive grid generation. The difference lies in the definition of the grid control sources; here, they are extracted from the distribution of a particular grid property. Several examples are presented to demonstrate the versatility and effectiveness of the method.

  14. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
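
    The solution strategy named above, a preconditioned Krylov subspace method embedded in an inexact Newton method, can be sketched generically. The Python snippet below is a toy matrix-free Newton-GMRES loop using SciPy, not the TranAir implementation; the small algebraic system stands in for the discretized full-potential equations and the name newton_krylov is hypothetical.

      # Sketch of an inexact Newton iteration with a Krylov (GMRES) inner solve.
      import numpy as np
      from scipy.sparse.linalg import gmres, LinearOperator

      def newton_krylov(residual, x0, tol=1e-10, max_newton=20):
          x = x0.copy()
          for _ in range(max_newton):
              r = residual(x)
              if np.linalg.norm(r) < tol:
                  break
              eps = 1e-7
              def jv(v, x=x, r=r):                 # matrix-free Jacobian-vector product
                  v = np.asarray(v).ravel()        # via a finite-difference approximation
                  return (residual(x + eps * v) - r) / eps
              J = LinearOperator((x.size, x.size), matvec=jv)
              dx, _ = gmres(J, -r)                 # a loose tolerance keeps the inner solve "inexact"
              x = x + dx
          return x

      # usage on a small nonlinear system F(x) = 0 with solution (1, 2)
      F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
      print(newton_krylov(F, np.array([1.0, 1.0])))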

  15. LAPS Grid generation and adaptation

    NASA Astrophysics Data System (ADS)

    Pagliantini, Cecilia; Delzanno, Gia Luca; Guo, Zehua; Srinivasan, Bhuvana; Tang, Xianzhu; Chacon, Luis

    2011-10-01

    LAPS uses a common-data framework in which a general purpose grid generation and adaptation package in toroidal and simply connected domains is implemented. The initial focus is on implementing the Winslow/Laplace-Beltrami method for generating non-overlapping block structured grids. This is to be followed by a grid adaptation scheme based on the Monge-Kantorovich optimal transport method [Delzanno et al., J. Comput. Phys., 227 (2008), 9841-9864], which equidistributes an application-specified error. As an initial set of applications, we will lay out grids for an axisymmetric mirror, a field reversed configuration, and an entire poloidal cross section of a tokamak plasma reconstructed from a CMOD experimental shot. These grids will then be used for computing the plasma equilibrium and transport in accompanying presentations. A key issue for Monge-Kantorovich grid optimization is the choice of error or monitor function for equidistribution. We will compare the Operator Recovery Error Source Detector (ORESD) [Lapenta, Int. J. Num. Meth. Eng., 59 (2004), 2065-2087], the Tau method, and a strategy based on grid coarsening [Zhang et al., AIAA J., 39 (2001), 1706-1715] to find an "optimal" grid. Work supported by DOE OFES.

  16. AEST: Adaptive Eigenvalue Stability Code

    NASA Astrophysics Data System (ADS)

    Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.

    2002-11-01

    An adaptive eigenvalue linear stability code is developed. The aim is, on one hand, to include non-ideal MHD effects in the global MHD stability calculation for both low and high n modes and, on the other hand, to resolve the numerical difficulty involving the MHD singularity on the rational surfaces at marginal stability. Our code follows parts of the philosophy of DCON by abandoning relaxation methods based on radial finite element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of independent solutions is employed, the rank of the matrices involved is just a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as plasma rotation, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue ω problem, as in the GS2 code, will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed to study, as an application, transport barrier physics in tokamak discharges.
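
    The shooting idea itself can be illustrated on a much simpler problem than the MHD equations. The sketch below (not the AEST model) finds the first eigenvalue of -u'' = λu on [0, 1] with u(0) = u(1) = 0 by integrating from one boundary with an adaptive ODE solver and driving the mismatch at the other boundary to zero with a root finder; all names and parameters are illustrative.

      # Shooting method for a 1-D eigenvalue problem (illustrative sketch).
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import brentq

      def boundary_mismatch(lam):
          """Integrate -u'' = lam*u with u(0)=0, u'(0)=1 and return u(1);
          an eigenvalue makes this boundary value vanish."""
          rhs = lambda x, y: [y[1], -lam * y[0]]
          sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
          return sol.y[0, -1]

      # the adaptive ODE integrator plays the role of the adaptive grid;
      # bracket and refine the first eigenvalue (exact value is pi**2)
      lam1 = brentq(boundary_mismatch, 5.0, 15.0)
      print(lam1, np.pi**2)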

  17. Growth and evolution of small porous icy bodies with an adaptive-grid thermal evolution code. I. Application to Kuiper belt objects and Enceladus

    NASA Astrophysics Data System (ADS)

    Prialnik, Dina; Merk, Rainer

    2008-09-01

    We present a new 1-dimensional thermal evolution code suited for small icy bodies of the Solar System, based on modern adaptive grid numerical techniques, and suited for multiphase flow through a porous medium. The code is used for evolutionary calculations spanning 4.6×10^9 yr of a growing body made of ice and rock, starting with a 10 km radius seed and ending with an object 250 km in radius. Initial conditions are chosen to match two different classes of objects: a Kuiper belt object, and Saturn's moon Enceladus. Heating by the decay of 26Al, as well as by long-lived radionuclides, is taken into account. Several values of the thermal conductivity and accretion laws are tested. We find that in all cases the melting point of ice is reached in a central core. Evaporation and flow of water and vapor gradually remove the water from the core and the final (present) structure is differentiated, with a rocky, highly porous core of 80 km radius (and up to 160 km for very low conductivities). Outside the core, due to refreezing of water and vapor, a compact, ice-rich layer forms, a few tens of km thick (except in the case of very high conductivity). If the ice is initially amorphous, as expected in the Kuiper belt, the amorphous ice is preserved in an outer layer about 20 km thick. We conclude by suggesting various ways in which the code may be extended.

  18. Workshop on adaptive grid methods for fusion plasmas

    SciTech Connect

    Wiley, J.C.

    1995-07-01

    The author describes a general `hp` finite element method with adaptive grids. The code was based on the work of Oden, et al. The term `hp` refers to the method of spatial refinement (h), in conjunction with the order of polynomials used as a part of the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.

  19. Telescope Adaptive Optics Code

    SciTech Connect

    Phillion, D.

    2005-07-28

    The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.

  20. A generic efficient adaptive grid scheme for rocket propulsion modeling

    NASA Technical Reports Server (NTRS)

    Mo, J. D.; Chow, Alan S.

    1993-01-01

    The objective of this research is to develop an efficient, time-accurate numerical algorithm to discretize the Navier-Stokes equations for the prediction of internal one-dimensional, two-dimensional, and axisymmetric flows. A generic, efficient, elliptic adaptive grid generator is implicitly coupled with the Lower-Upper factorization scheme in the development of the ALUNS computer code. The calculations of one-dimensional shock tube wave propagation and two-dimensional shock wave capture, wave-wave interactions, and shock wave-boundary interactions show that the developed scheme is stable, accurate and extremely robust. The adaptive grid generator produced a very favorable grid network by a grid speed technique. This generic adaptive grid generator is also applied in the PARC and FDNS codes, and the computational results for solid rocket nozzle flowfield and crystal growth modeling by those codes will be presented at the conference as well. This research work is being supported by NASA/MSFC.

  1. Interactive solution-adaptive grid generation

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Henderson, Todd L.

    1992-01-01

    TURBO-AD is an interactive solution-adaptive grid generation program under development. The program combines an interactive algebraic grid generation technique and a solution-adaptive grid generation technique into a single interactive solution-adaptive grid generation package. The control point form uses a sparse collection of control points to algebraically generate a field grid. This technique provides local grid control capability and is well suited to interactive work due to its speed and efficiency. A mapping from the physical domain to a parametric domain was used to alleviate difficulties that had been encountered near outwardly concave boundaries in the control point technique. Therefore, all grid modifications are performed on a unit square in the parametric domain, and the new adapted grid in the parametric domain is then mapped back to the physical domain. The grid adaptation is achieved by first adapting the control points to a numerical solution in the parametric domain using control sources obtained from flow properties. Then a new modified grid is generated from the adapted control net. This solution-adaptive grid generation process is efficient because the number of control points is much less than the number of grid points and the generation of a new grid from the adapted control net is an efficient algebraic process. TURBO-AD provides the user with both local and global grid controls.

  2. Telescope Adaptive Optics Code

    2005-07-28

    The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.

  3. IGB grid: User's manual (A turbomachinery grid generation code)

    NASA Technical Reports Server (NTRS)

    Beach, T. A.; Hoffman, G.

    1992-01-01

    A grid generation code called IGB is presented for use in computational investigations of turbomachinery flowfields. It contains a combination of algebraic and elliptic techniques coded for use on an interactive graphics workstation. The instructions for use and a test case are included.

  4. Interactive solution-adaptive grid generation procedure

    NASA Technical Reports Server (NTRS)

    Henderson, Todd L.; Choo, Yung K.; Lee, Ki D.

    1992-01-01

    TURBO-AD is an interactive solution adaptive grid generation program under development. The program combines an interactive algebraic grid generation technique and a solution adaptive grid generation technique into a single interactive package. The control point form uses a sparse collection of control points to algebraically generate a field grid. This technique provides local grid control capability and is well suited to interactive work due to its speed and efficiency. A mapping from the physical domain to a parametric domain was used to alleviate difficulties encountered near outwardly concave boundaries in the control point technique. Therefore, all grid modifications are performed on the unit square in the parametric domain, and the new adapted grid is then mapped back to the physical domain. The grid adaption is achieved by adapting the control points to a numerical solution in the parametric domain using control sources obtained from the flow properties. Then a new modified grid is generated from the adapted control net. This process is efficient because the number of control points is much less than the number of grid points and the generation of the grid is an efficient algebraic process. TURBO-AD provides the user with both local and global controls.

  5. An adaptive grid with directional control

    NASA Technical Reports Server (NTRS)

    Brackbill, J. U.

    1993-01-01

    An adaptive grid generator for adaptive node movement is here derived by combining a variational formulation of Winslow's (1981) variable-diffusion method with a directional control functional. By applying harmonic-function theory, it becomes possible to define conditions under which there exist unique solutions of the resulting elliptic equations. The results obtained by applying the grid generator to the complex problem posed by fluid instability-driven magnetic field reconnection demonstrate one-tenth the computational cost of either an Eulerian grid or an adaptive grid without directional control.

  6. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  7. The fundamentals of adaptive grid movement

    NASA Technical Reports Server (NTRS)

    Eiseman, Peter R.

    1990-01-01

    Basic grid point movement schemes are studied. The schemes are referred to as adaptive grids. Weight functions and equidistribution in one dimension are treated. The specification of coefficients in the linear weight, attraction to a given grid or a curve, and evolutionary forces are considered. Curve-by-curve and finite volume methods are described. The temporal coupling of partial differential equation solvers and grid generators is also discussed.
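
    Equidistribution with a weight function in one dimension, the first topic listed above, can be stated in a few lines: place the new points so that equal increments of the integrated weight fall between neighbours. The sketch below is illustrative only; the function name equidistribute and the sample weight are assumptions, not Eiseman's code.

      # One-dimensional equidistribution of a weight function (sketch).
      import numpy as np

      def equidistribute(x, w, n_new):
          """Return n_new points that equidistribute the weight w sampled at x."""
          W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
          targets = np.linspace(0.0, W[-1], n_new)   # equal increments of the cumulative weight
          return np.interp(targets, W, x)            # invert the cumulative map

      # usage: a constant background weight plus attraction to a feature at x = 0.7
      x = np.linspace(0.0, 1.0, 201)
      w = 1.0 + 25.0 * np.exp(-200.0 * (x - 0.7) ** 2)
      x_new = equidistribute(x, w, 41)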

  8. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2006-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.

  9. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2004-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reduction in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.

  10. Adaptive entropy coded subband coding of images.

    PubMed

    Kim, Y H; Modestino, J W

    1992-01-01

    The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive buffer instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to the corresponding ideal 2-D ECSBC system. PMID:18296138

  11. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.

  12. Structured adaptive grid generation using algebraic methods

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.

    1993-01-01

    The accuracy of the numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively time-consuming, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach where a function which contains a measure of grid smoothness, orthogonality and volume variation is minimized by using a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm where the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring the points in the large error regions to attract other points and points in the low error region to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial step, is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and the last step is to reevaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration

  13. Grid adaptation using chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1994-01-01

    The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.
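
    The trilinear inter-grid communication step mentioned above is straightforward to sketch: the solution at a fringe point of one grid is interpolated from the eight corners of the donor cell that contains it in the other grid. The snippet below is a minimal illustration, not the Chimera code; the local coordinates (xi, eta, zeta) are assumed to be already known from the donor-cell search.

      # Trilinear interpolation from a donor cell to an overset fringe point (sketch).
      import numpy as np

      def trilinear(corner_values, xi, eta, zeta):
          """corner_values has shape (2, 2, 2): the solution at the 8 donor-cell corners;
          (xi, eta, zeta) are the receiver point's local coordinates in [0, 1]."""
          c = corner_values
          c00 = c[0, 0, 0] * (1 - xi) + c[1, 0, 0] * xi
          c10 = c[0, 1, 0] * (1 - xi) + c[1, 1, 0] * xi
          c01 = c[0, 0, 1] * (1 - xi) + c[1, 0, 1] * xi
          c11 = c[0, 1, 1] * (1 - xi) + c[1, 1, 1] * xi
          c0 = c00 * (1 - eta) + c10 * eta
          c1 = c01 * (1 - eta) + c11 * eta
          return c0 * (1 - zeta) + c1 * zeta

      # usage: a linear field is recovered exactly by trilinear interpolation
      corners = np.fromfunction(lambda i, j, k: 2*i + 3*j + 5*k, (2, 2, 2))
      print(trilinear(corners, 0.25, 0.5, 0.75))   # 2*0.25 + 3*0.5 + 5*0.75 = 5.75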

  14. Grid adaptation using Chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.

  15. Grid adaption using Chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.

  16. Adaptive Dynamic Event Tree in RAVEN code

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Kinoshita, Robert Arthur

    2014-11-01

    RAVEN is a software tool that is focused on performing statistical analysis of stochastic dynamic systems. RAVEN has been designed in a highly modular and pluggable way in order to enable easy integration of different programming languages (i.e., C++, Python) and coupling with other applications (system codes). Among the several capabilities currently present in RAVEN, there are five different sampling strategies: Monte Carlo, Latin Hyper Cube, Grid, Adaptive and Dynamic Event Tree (DET) sampling methodologies. The scope of this paper is to present a new sampling approach, currently under definition and implementation: an evolution of the DET methodology.

  17. Dynamic Load Balancing for Adaptive Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Saini, Subhash (Technical Monitor)

    1998-01-01

    Dynamic mesh adaptation on unstructured grids is a powerful tool for computing unsteady three-dimensional problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture phenomena of interest, such procedures make standard computational methods more cost effective. Highly refined meshes are required to accurately capture shock waves, contact discontinuities, vortices, and shear layers in fluid flow problems. Adaptive meshes have also proved to be useful in several other areas of computational science and engineering like computer vision and graphics, semiconductor device modeling, and structural mechanics. Local mesh adaptation provides the opportunity to obtain solutions that are comparable to those obtained on globally-refined grids but at a much lower cost. Additional information is contained in the original extended abstract.

  18. Adaptive refinement tools for tetrahedral unstructured grids

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul (Inventor); Abdol-Hamid, Khaled S. (Inventor)

    2011-01-01

    An exemplary embodiment providing one or more improvements includes software which is robust, efficient, and has a very fast run time for user directed grid enrichment and flow solution adaptive grid refinement. All user selectable options (e.g., the choice of functions, the choice of thresholds, etc.), other than a pre-marked cell list, can be entered on the command line. The ease of application is an asset for flow physics research and preliminary design CFD analysis where fast grid modification is often needed to deal with unanticipated development of flow details.

  19. Driver Code for Adaptive Optics

    NASA Technical Reports Server (NTRS)

    Rao, Shanti

    2007-01-01

    A special-purpose computer code for a deformable-mirror adaptive-optics control system transmits pixel-registered control from (1) a personal computer running software that generates the control data to (2) a circuit board with 128 digital-to-analog converters (DACs) that generate voltages to drive the deformable-mirror actuators. This program reads control-voltage codes from a text file, then sends them, via the computer's parallel port, to a circuit board with four AD5535 (or equivalent) chips. Whereas a similar prior computer program was capable of transmitting data to only one chip at a time, this program can send data to four chips simultaneously. This program is in the form of C-language code that can be compiled and linked into an adaptive-optics software system. The program as supplied includes source code for integration into the adaptive-optics software, documentation, and a component that provides a demonstration of loading DAC codes from a text file. On a standard Windows desktop computer, the software can update 128 channels in 10 ms. On Real-Time Linux with a digital I/O card, the software can update 1024 channels (8 boards in parallel) every 8 ms.

  20. A Grid Sourcing and Adaptation Study Using Unstructured Grids for Supersonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Deere, Karen A.

    2008-01-01

    NASA created the Supersonics Project as part of the NASA Fundamental Aeronautics Program to advance technology that will make supersonic flight over land viable. Computational flow solvers have lacked the ability to accurately predict sonic boom from the near to far field. The focus of this investigation was to establish gridding and adaptation techniques to predict near-to-mid-field (<10 body lengths below the aircraft) boom signatures at supersonic speeds using the USM3D unstructured grid flow solver. The study began by examining sources along the body of the aircraft, far field sourcing and far field boundaries. The study then examined several techniques for grid adaptation. During the course of the study, volume sourcing was introduced as a new way to source grids using the grid generation code VGRID. Two different methods of using the volume sources were examined. The first method, based on manual insertion of the numerous volume sources, made great improvements in the prediction capability of USM3D for boom signatures. The second method (SSGRID), which uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid and pressure waves, showed similar results with a more automated approach. Due to SSGRID's results and ease of use, the rest of the study focused on developing a best practice using SSGRID. The best practice created by this study for boom predictions using the CFD code USM3D involved: 1) creating a small cylindrical outer boundary either 1 or 2 body lengths in diameter (depending on how far below the aircraft the boom prediction is required), 2) using a single volume source under the aircraft, and 3) using SSGRID to stretch and shear the grid to the desired length.

  1. Load Balancing Sequences of Unstructured Adaptive Grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid

    1997-01-01

    Mesh adaption is a powerful tool for efficient unstructured grid computations but causes load imbalance on multiprocessor systems. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. This paper makes several important additions to our previous work. First, a new remapping cost model is presented and empirically validated on an SP2. Next, our load balancing strategy is applied to sequences of dynamically adapted unstructured grids. Results indicate that our framework is effective on many processors for both steady and unsteady problems with several levels of adaption. Additionally, we demonstrate that a coarse starting mesh produces high quality load balancing, at a fraction of the cost required for a fine initial mesh. Finally, we show that the data remapping overhead can be significantly reduced by applying our heuristic processor reassignment algorithm.

  2. Adaptive decoding of convolutional codes

    NASA Astrophysics Data System (ADS)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
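
    For reference, the baseline Viterbi decoder discussed above can be sketched for the common rate-1/2, constraint-length-3 code with generators (7, 5) octal. The snippet below is a hard-decision illustration of that standard decoder, not the syndrome-based decoder proposed in the paper, and the chosen code parameters are an assumption made for the example.

      # Hard-decision Viterbi decoding of a rate-1/2, K=3 convolutional code (sketch).

      def encode(bits):
          """Rate-1/2 encoder with generators 111 and 101 (octal 7 and 5)."""
          s1 = s2 = 0
          out = []
          for u in bits:
              out += [u ^ s1 ^ s2, u ^ s2]
              s1, s2 = u, s1
          return out

      def viterbi_decode(received):
          """Minimum-Hamming-distance sequence decoding over the 4-state trellis."""
          INF = float("inf")
          metrics = [0.0, INF, INF, INF]          # path metric per state (s1, s2)
          paths = [[], [], [], []]                # input bits of each survivor
          for t in range(len(received) // 2):
              r0, r1 = received[2 * t], received[2 * t + 1]
              new_metrics, new_paths = [INF] * 4, [None] * 4
              for state in range(4):
                  if metrics[state] == INF:
                      continue
                  s1, s2 = state >> 1, state & 1
                  for u in (0, 1):
                      branch = ((u ^ s1 ^ s2) != r0) + ((u ^ s2) != r1)
                      nxt = (u << 1) | s1
                      if metrics[state] + branch < new_metrics[nxt]:
                          new_metrics[nxt] = metrics[state] + branch
                          new_paths[nxt] = paths[state] + [u]
              metrics, paths = new_metrics, new_paths
          return paths[0]                         # tail bits terminate the path in state 0

      # usage: a single channel error is corrected by the maximum likelihood decoder
      message = [1, 0, 1, 1, 0, 0, 1, 0]
      coded = encode(message + [0, 0])            # two zero tail bits flush the encoder
      coded[3] ^= 1                               # inject one bit error
      assert viterbi_decode(coded)[:len(message)] == message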

  3. Local intensity adaptive image coding

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1989-01-01

    The objective of preprocessing for machine vision is to extract intrinsic target properties. The most important properties ordinarily are structure and reflectance. Illumination in space, however, is a significant problem as the extreme range of light intensity, stretching from deep shadow to highly reflective surfaces in direct sunlight, impairs the effectiveness of standard approaches to machine vision. To overcome this critical constraint, an image coding scheme is being investigated which combines local intensity adaptivity, image enhancement, and data compression. It is very effective under the highly variant illumination that can exist within a single frame or field of view, and it is very robust to noise at low illuminations. Some of the theory and salient features of the coding scheme are reviewed. Its performance is characterized in a simulated space application, and the research and development activities are described.

  4. Elliptic Solvers for Adaptive Mesh Refinement Grids

    SciTech Connect

    Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.

    1999-06-03

    We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and other more AMR specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.

  5. Fast transport simulation with an adaptive grid refinement.

    PubMed

    Haefner, Frieder; Boy, Siegrun

    2003-01-01

    One of the main difficulties in transport modeling and calibration is the extraordinarily long computing times necessary for simulation runs. Improved execution time is a prerequisite for calibration in transport modeling. In this paper we investigate the problem of code acceleration using an adaptive grid refinement, neglecting subdomains, and devising a method by which the Courant condition can be ignored while maintaining accurate solutions. Grid refinement is based on dividing selected cells into regular subcells and including the balance equations of subcells in the equation system. The connection of coarse and refined cells satisfies the mass balance with an interpolation scheme that is implicitly included in the equation system. The refined subdomain can move with the average transport velocity of the subdomain. Very small time steps are required on a fine or a refined grid, because of the combined effect of the Courant and Peclet conditions. Therefore, we have developed a special upwind technique in small grid cells with high velocities (velocity suppression). We have neglected grid subdomains with very small concentration gradients (zero suppression). The resulting software, MODCALIF, is a three-dimensional, modularly constructed FORTRAN code. For convenience, the package names used by the well-known MODFLOW and MT3D computer programs are adopted, and the same input file structure and format is used, but the program presented here is separate and independent. Also, MODCALIF includes algorithms for variable density modeling and model calibration. The method is tested by comparison with an analytical solution, and illustrated by means of a two-dimensional theoretical example and three-dimensional simulations of the variable-density Cape Cod and SALTPOOL experiments. Crossing from fine to coarse grid produces numerical dispersion when the whole subdomain of interest is refined; however, we show that accurate solutions can be obtained using a fraction of the

  6. An Adaptive VOF Method on Unstructured Grid

    NASA Astrophysics Data System (ADS)

    Wu, L. L.; Huang, M.; Chen, B.

    2011-09-01

    In order to improve the accuracy of interface capturing while keeping the computational efficiency, an adaptive VOF method on unstructured grids is proposed in this paper. The volume fraction in each cell is regarded as the criterion to locally refine the interface cells. With the movement of the interface, new interface cells (0 ≤ f ≤ 1) are subdivided into child cells, while those child cells that no longer contain the interface are merged back into the original parent cell. In order to avoid the complicated redistribution of volume fraction during the subdivision and amalgamation procedure, a predictor-corrector algorithm is proposed to implement the subdivision and amalgamation procedures only in empty or full cells (f = 0 or 1). Thus the volume fraction in a new cell can take the value from the original cell directly, and interpolation of the interface is avoided. The advantage of this method is that re-generation of the whole grid system is not necessary, so its implementation is very efficient. Moreover, an advection flow test of a hollow square was performed, and the relative shape error of the result obtained on the adaptive mesh is smaller than that obtained on the non-refined grid, which verifies the validity of our method.

  7. TIGGERC: Turbomachinery interactive grid generator energy distributor and restart code

    NASA Technical Reports Server (NTRS)

    Miller, David P.

    1992-01-01

    A two-dimensional multi-block grid generator was developed for a new design and analysis system for studying multi-blade-row turbomachinery problems with an axisymmetric viscous/inviscid 'average passage' through-flow code. TIGGERC is a mouse-driven, fully interactive grid generation program which can be used to modify boundary coordinates and grid packing. TIGGERC generates grids using a hyperbolic tangent or algebraic distribution of grid points on the block boundaries, and the interior points of each block grid are distributed using a transfinite interpolation approach. TIGGERC generates a blocked axisymmetric H grid, C grid, I grid, or O grid for studying turbomachinery flow problems. TIGGERC was developed for operation on small, high-speed graphics workstations.

  8. OMEGA: The operational multiscale environment model with grid adaptivity

    SciTech Connect

    Bacon, D.P.

    1995-07-01

    This review talk describes the OMEGA code, used for weather simulation and the modeling of aerosol transport through the atmosphere. OMEGA employs a 3D mesh of wedge-shaped elements (triangles when viewed from above) that adapt with time. Because the wedges are laid out in layers of triangular elements, the scheme can utilize structured storage and differencing techniques along the elevation coordinate, and is thus a hybrid of structured and unstructured methods. The utility of adaptive gridding in this model, near geographic features such as coastlines where material properties change discontinuously, is illustrated. Temporal adaptivity was additionally used to track moving internal fronts, such as clouds of aerosol contaminants. The author also discusses limitations specific to this problem, including the manipulation of huge databases and fixed turn-around times. In practice, the latter requires a carefully tuned optimization between accuracy and computation speed.

  9. Shape optimization including finite element grid adaptation

    NASA Technical Reports Server (NTRS)

    Kikuchi, N.; Taylor, J. E.

    1984-01-01

    The prediction of optimal shape design for structures depends on having a sufficient level of precision in the computation of structural response. These requirements become critical in situations where the region to be designed includes stress concentrations or unilateral contact surfaces, for example. In the approach to shape optimization discussed here, a means to obtain grid adaptation is incorporated into the finite element procedures. This facility makes it possible to maintain a level of quality in the computational estimate of response that is surely adequate for the shape design problem.

  10. Cartesian Off-Body Grid Adaption for Viscous Time-Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second-difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load-balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.
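
    The undivided second-difference sensor named above is simple to state; the sketch below is generic Python, not OVERFLOW source, and the threshold and test field are invented. It flags interior points of a 1-D field for refinement.

        import numpy as np

        def undivided_second_difference(q):
            """Sensor |q[i-1] - 2*q[i] + q[i+1]| at interior points; 'undivided' means no division by dx."""
            q = np.asarray(q, dtype=float)
            return np.abs(q[:-2] - 2.0 * q[1:-1] + q[2:])

        def flag_for_refinement(q, threshold):
            """Boolean refinement flags for interior points whose sensor exceeds the threshold."""
            return undivided_second_difference(q) > threshold

        x = np.linspace(0.0, 1.0, 101)
        q = np.tanh((x - 0.5) / 0.02)          # smooth field with a sharp internal layer
        print("points flagged near the layer:", np.count_nonzero(flag_for_refinement(q, 0.05)))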

  11. Adaptive differential pulse-code modulation with adaptive bit allocation

    NASA Astrophysics Data System (ADS)

    Frangoulis, E. D.; Yoshida, K.; Turner, L. F.

    1984-08-01

    Studies have been conducted regarding the possibility of obtaining good-quality speech at data rates in the range from 16 kbit/s to 32 kbit/s. The techniques considered are related to adaptive predictive coding (APC) and adaptive differential pulse-code modulation (ADPCM). At 16 kbit/s, adaptive transform coding (ATC) has also been used. The present investigation is concerned with a new method of speech coding. The described method employs adaptive bit allocation, similar to that used in adaptive transform coding, together with adaptive differential pulse-code modulation employing first-order prediction. The new method aims to improve speech quality over that obtainable with conventional ADPCM employing a fourth-order predictor. Attention is given to the ADPCM-AB system, the design of a subjective test, and the application of switched preemphasis to ADPCM.
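
    A minimal sketch of the two ingredients, first-order prediction and per-block adaptive bit allocation, is given below. The block size, predictor coefficient, allocation rule, and the use of the block maximum as the quantizer scale are all illustrative choices, not those of the ADPCM-AB system described in the paper.

        import numpy as np

        def adpcm_block(x, bits, a=0.9):
            """Encode/decode one block with first-order prediction and a uniform quantizer of 'bits' bits."""
            step = 2.0 * np.max(np.abs(x)) / (2 ** bits) + 1e-12   # scale assumed sent as side information
            prev, rec = 0.0, np.empty_like(x)
            for n, s in enumerate(x):
                e = s - a * prev                                   # prediction error
                q = np.clip(np.round(e / step), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
                prev = a * prev + q * step                         # decoder-side reconstruction
                rec[n] = prev
            return rec

        def allocate_bits(blocks, total_bits):
            """Toy adaptive bit allocation: each block gets a share that grows with its log energy."""
            e = np.array([np.log2(np.var(b) + 1e-12) for b in blocks])
            w = e - e.min() + 1.0
            return np.maximum(2, np.round(total_bits * w / w.sum()).astype(int))

        rng = np.random.default_rng(0)
        signal = np.cumsum(rng.standard_normal(512))               # stand-in for a speech segment
        blocks = np.split(signal, 8)
        bits = allocate_bits(blocks, total_bits=32)
        decoded = np.concatenate([adpcm_block(b, nb) for b, nb in zip(blocks, bits)])
        snr = 10.0 * np.log10(np.var(signal) / np.var(signal - decoded))
        print("bits per block:", bits, " SNR (dB):", round(float(snr), 1))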

  12. Conservative Smoothing on an Adaptive Quadrilateral Grid

    NASA Astrophysics Data System (ADS)

    Sun, M.; Takayama, K.

    1999-03-01

    The Lax-Wendroff scheme can be freed of spurious oscillations by introducing conservative smoothing. In this paper the approach is first tested in 1-D modeling equations and then extended to multidimensional flows by the finite volume method. The scheme is discretized by a space-splitting method on an adaptive quadrilateral grid. The artificial viscosity coefficients in the conservative smoothing step are specially designed to capture slipstreams and vortices. Algorithms are programmed using a vectorizable data structure, under which not only the flow solver but also the adaptation procedure is well vectorized. The good resolution and high efficiency of the approach are demonstrated in calculating both unsteady and steady compressible flows with either weak or strong shock waves.
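
    In one dimension the idea reads as follows: advance with Lax-Wendroff, then apply a smoothing step written in flux form so that the total of the conserved variable is unchanged. The switch used to size the smoothing coefficient below is an illustrative stand-in, not the slipstream- and vortex-tuned coefficients of the paper.

        import numpy as np

        def lax_wendroff_step(u, c):
            """One Lax-Wendroff step for u_t + a*u_x = 0 on a periodic grid; c = a*dt/dx."""
            up, um = np.roll(u, -1), np.roll(u, 1)
            return u - 0.5 * c * (up - um) + 0.5 * c ** 2 * (up - 2.0 * u + um)

        def conservative_smoothing(u, eps=0.2):
            """Smoothing written as face fluxes, so sum(u) is conserved exactly (periodic grid)."""
            jump = np.roll(u, -1) - u                                   # jump across face i+1/2
            curv = np.abs(np.roll(u, -1) - 2.0 * u + np.roll(u, 1))     # oscillation sensor at cells
            switch = np.maximum(curv, np.roll(curv, -1))                # sensor centred on the face
            d = np.minimum(eps * switch / (np.abs(jump) + 1e-12), 0.5)  # illustrative coefficient
            flux = d * jump
            return u + flux - np.roll(flux, 1)

        u = np.where(np.linspace(0.0, 1.0, 200) < 0.3, 1.0, 0.0)        # step (contact) profile
        for _ in range(100):
            u = conservative_smoothing(lax_wendroff_step(u, c=0.8))
        print("total conserved:", bool(np.isclose(u.sum(), 60.0)))      # 60 cells start at u = 1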

  13. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.

  14. A Lagrangian-Eulerian finite element method with adaptive gridding for advection-dispersion problems

    SciTech Connect

    Ijiri, Y.; Karasaki, K.

    1994-02-01

    In the present paper, a Lagrangian-Eulerian finite element method with adaptive gridding for solving advection-dispersion equations is described. The code creates new grid points in the vicinity of sharp fronts at every time step in order to reduce numerical dispersion. The code yields quite accurate solutions for a wide range of mesh Peclet numbers and for mesh Courant numbers well in excess of 1.
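
    The core operation, creating new grid points in the vicinity of sharp fronts at each time step, can be illustrated in one dimension. The gradient threshold and test profile below are invented; this is not the paper's code.

        import numpy as np

        def refine_near_fronts(x, u, grad_tol):
            """Insert a midpoint in every interval whose solution gradient marks a sharp front."""
            x, u = np.asarray(x, float), np.asarray(u, float)
            steep = np.abs(np.diff(u) / np.diff(x)) > grad_tol
            new_x = 0.5 * (x[:-1] + x[1:])[steep]            # midpoints of the steep intervals
            x_ref = np.sort(np.concatenate([x, new_x]))
            return x_ref, np.interp(x_ref, x, u)

        x = np.linspace(0.0, 1.0, 21)
        u = 0.5 * (1.0 - np.tanh((x - 0.4) / 0.02))          # advected concentration front
        x_ref, u_ref = refine_near_fronts(x, u, grad_tol=5.0)
        print("points before/after refinement:", len(x), len(x_ref))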

  15. Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockhard, David P.

    2013-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the far-field noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine grid solutions and experimental data. In addition, the grid adaption strategy discussed here simplifies the grid generation process and results in improved computational efficiency of CFD simulations.

  16. FUN3D Grid Refinement and Adaptation Studies for the Ares Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Vatsa, Veer; Carlson, Jan-Renee; Park, Mike; Mineck, Raymond E.

    2010-01-01

    This paper presents grid refinement and adaptation studies performed in conjunction with computational aeroelastic analyses of the Ares crew launch vehicle (CLV). The unstructured grids used in this analysis were created with GridTool and VGRID, while the adaptation was performed using the Computational Fluid Dynamics (CFD) code FUN3D with a feature-based adaptation software tool. GridTool was developed by ViGYAN, Inc., while the last three software suites were developed by NASA Langley Research Center. The feature-based adaptation software used here operates by aligning control volumes with shock and Mach line structures and by refining/de-refining where necessary. It does not redistribute node points on the surface. This paper assesses the sensitivity of the complex flow field about a launch vehicle to grid refinement. It also assesses the potential of feature-based grid adaptation to improve the accuracy of CFD analysis for a complex launch vehicle configuration. The feature-based adaptation shows the potential to improve the resolution of shocks and shear layers. Further development of the capability to adapt the boundary layer and surface grids of a tetrahedral grid is required for significant improvements in modeling the flow field.

  17. Dynamic mesh adaption for triangular and tetrahedral grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger

    1993-01-01

    The following topics are discussed: requirements for dynamic mesh adaption; linked-list data structure; edge-based data structure; adaptive-grid data structure; three types of element subdivision; mesh refinement; mesh coarsening; additional constraints for coarsening; anisotropic error indicator for edges; unstructured-grid Euler solver; inviscid 3-D wing; and mesh quality for solution-adaptive grids. The discussion is presented in viewgraph form.

  18. Cosmos++: Relativistic Magnetohydrodynamics on Unstructured Grids with Local Adaptive Refinement

    SciTech Connect

    Anninos, P; Fragile, P C; Salmonson, J D

    2005-05-06

    A new code and methodology are introduced for solving the fully general relativistic magnetohydrodynamic (GRMHD) equations using time-explicit, finite-volume discretization. The code has options for solving the GRMHD equations using traditional artificial-viscosity (AV) or non-oscillatory central difference (NOCD) methods, or a new extended AV (eAV) scheme using artificial viscosity together with a dual energy-flux-conserving formulation. The dual energy approach allows for accurate modeling of highly relativistic flows at boost factors well beyond what has been achieved to date by standard artificial viscosity methods. It provides the benefit of Godunov methods in capturing high Lorentz boosted flows but without complicated Riemann solvers, and the advantages of traditional artificial viscosity methods in their speed and flexibility. Additionally, the GRMHD equations are solved on an unstructured grid that supports local adaptive mesh refinement using a fully threaded oct-tree (in three dimensions) network to traverse the grid hierarchy across levels and immediate neighbors. A number of tests are presented to demonstrate robustness of the numerical algorithms and adaptive mesh framework over a wide spectrum of problems, boosts, and astrophysical applications, including relativistic shock tubes, shock collisions, magnetosonic shocks, Alfven wave propagation, blast waves, magnetized Bondi flow, and the magneto-rotational instability in Kerr black hole spacetimes.

  19. Techniques for grid manipulation and adaptation. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.

    1992-01-01

    Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.

  20. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
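
    One of the building blocks in that decomposition, measuring a grid against a prescribed metric field, reduces to evaluating edge lengths in the metric. The sketch below uses an invented diagonal metric as the analytic field; it is not any of the workshop implementations.

        import numpy as np

        def metric_edge_length(p0, p1, M):
            """Length of the edge p0 -> p1 measured in the symmetric positive-definite metric M."""
            e = np.asarray(p1, float) - np.asarray(p0, float)
            return float(np.sqrt(e @ M @ e))

        # Analytic metric requesting spacing 0.1 in x and 0.01 in y: M = diag(1/hx^2, 1/hy^2).
        # A "unit" mesh under this metric would have edge lengths close to 1.
        M = np.diag([1.0 / 0.1 ** 2, 1.0 / 0.01 ** 2])
        print(metric_edge_length((0.0, 0.0), (0.1, 0.0), M))   # 1.0  -> correctly sized in x
        print(metric_edge_length((0.0, 0.0), (0.0, 0.1), M))   # 10.0 -> ten times too long in y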

  1. 3D Structured Grid Generation Codes for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Loellbach, James; Tsung, Fu-Lin

    1999-01-01

    This report describes the research tasks during the past year. The research was mainly in the area of computational grid generation in support of CFD analyses of turbomachinery components. In addition to the grid generation work, a numerical simulation was obtained for the flow through a centrifugal gas compressor using an unstructured Navier-Stokes solver. Other tasks involved many different turbomachinery component analyses. These analyses were performed for NASA projects or for industrial applications. The work includes both centrifugal and axial machines, single and multiple blade rows, and steady and unsteady analyses. Over the past five years, a set of structured grid generation codes was developed that allows grids to be obtained fairly quickly for the large majority of configurations we encounter. These codes do not comprise a generalized grid generation package; they are noninteractive codes specifically designed for turbomachinery blade row geometries. But because of this limited scope, the codes are small, fast, and portable, and they can be run in batch mode on small workstations. During the past year, these programs were used to generate computational grids for a wide variety of configurations. In particular, the existing codes were modified, and supplementary codes were written, to improve our grid generation capabilities for multiple blade row configurations. This involves generating separate grids for each blade row, and then making them match and overlap by a few grid points at their common interface so that fluid properties are communicated across the interface. Unsteady rotor/stator analyses were performed for an axial turbine, a centrifugal compressor, and a centrifugal pump. Steady-state single-blade-row analyses were made for a study of blade sweep in transonic compressors. There was also cooperation on the application of an unstructured Navier-Stokes solver for turbomachinery flow simulations. In particular, the unstructured solver was used to analyze the

  2. Moving and adaptive grid methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Trepanier, Jean-Yves; Camarero, Ricardo

    1995-01-01

    This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.

  3. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
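
    The flattening step can be sketched as follows, under the simplifying assumption that a block's pooled perceptual error scales roughly linearly with its multiplier; the sensitivity weights, pooling, and clipping range are invented for illustration and are not the values from the paper.

        import numpy as np

        def perceptual_block_error(dct_error, sensitivity):
            """Pooled perceptual error of one 8x8 block: quantization error weighted by visual sensitivity."""
            return float(np.sqrt(np.sum((dct_error * sensitivity) ** 2)))

        def flatten_multipliers(block_errors, target=None):
            """Per-block multipliers chosen so the (assumed linear) perceptual error becomes flat."""
            block_errors = np.asarray(block_errors, dtype=float)
            target = block_errors.mean() if target is None else target
            return np.clip(target / (block_errors + 1e-12), 0.25, 4.0)   # clipping range is illustrative

        rng = np.random.default_rng(1)
        sens = 1.0 / (1.0 + np.arange(64).reshape(8, 8))      # stand-in contrast-sensitivity weights
        errors = [perceptual_block_error(rng.uniform(-0.5, 0.5, (8, 8)), sens) for _ in range(16)]
        print("multipliers:", np.round(flatten_multipliers(errors), 2))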

  4. Motion-adaptive compressive coded apertures

    NASA Astrophysics Data System (ADS)

    Harmany, Zachary T.; Oh, Albert; Marcia, Roummel; Willett, Rebecca

    2011-09-01

    This paper describes an adaptive compressive coded aperture imaging system for video based on motion-compensated video sparsity models. In particular, motion models based on optical flow and sparse deviations from optical flow (i.e. salient motion) can be used to (a) predict future video frames from previous compressive measurements, (b) perform reconstruction using efficient online convex programming techniques, and (c) adapt the coded aperture to yield higher reconstruction fidelity in the vicinity of this salient motion.

  5. Euler Technology Assessment program for preliminary aircraft design employing SPLITFLOW code with Cartesian unstructured grid method

    NASA Technical Reports Server (NTRS)

    Finley, Dennis B.

    1995-01-01

    This report documents results from the Euler Technology Assessment program. The objective was to evaluate the efficacy of Euler computational fluid dynamics (CFD) codes for use in preliminary aircraft design. Both the accuracy of the predictions and the rapidity of calculations were to be assessed. This portion of the study was conducted by Lockheed Fort Worth Company, using a recently developed in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages for this study, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaptation of the volume grid during the solution convergence to resolve high-gradient flow regions. This proved beneficial in resolving the large vortical structures in the flow for several configurations examined in the present study. The SPLITFLOW code predictions of the configuration forces and moments are shown to be adequate for preliminary design analysis, including predictions of sideslip effects and the effects of geometry variations at low and high angles of attack. The time required to generate the results from initial surface definition is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  6. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library that permits efficient development of architecture-independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmer's work is reduced primarily to specifying the serial single-grid application; the parallel and self-adaptive mesh refinement code is then obtained with minimal additional effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  7. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.

  8. Solving Fluid Flow Problems on Moving and Adaptive Overlapping Grids

    SciTech Connect

    Henshaw, W

    2005-07-28

    Solution of fluid dynamics problems on overlapping grids will be discussed. An overlapping grid consists of a set of structured component grids that cover a domain and overlap where they meet. Overlapping grids provide an effective approach for developing efficient and accurate approximations for complex, possibly moving geometry. Topics to be addressed include the reactive Euler equations, the incompressible Navier-Stokes equations and elliptic equations solved with a multigrid algorithm. Recent developments coupling moving grids and adaptive mesh refinement and preliminary parallel results will also be presented.

  9. An interactive grid generator for TOUGH family code

    2004-01-09

    WinGridder has been developed for designing, generating, and visualizing (at various spatial scales) numerical grids used in reservoir simulations and groundwater modeling studies. It can save mesh files for TOUGH family codes and output additional grid information for various purposes in either graphic or plain text format. Many important features, such as inclined faults and offsets, layering structure, local refinements, and embedded engineering structures, can be represented in the grid. The main advantages of this grid-generation software are its user-friendly graphical interfaces, flexible grid design capabilities, efficient grid generation, and powerful searching and post-processing capability, especially for large grids (e.g., a grid of a million cells). The main improvements in version 2.0 are (1) the added capability of handling a repository with multiple sub-regions and specified drifts, (2) the use of an interpolation method, instead of picking the nearest point, in calculating the geological data from the given digital geological model, and (3) enhanced searching and other capabilities.

  10. Grid-Adapted FUN3D Computations for the Second High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Rumsey, C. L.; Park, M. A.

    2014-01-01

    Contributions of the unstructured Reynolds-averaged Navier-Stokes code FUN3D to the 2nd AIAA CFD High Lift Prediction Workshop are described, and detailed comparisons are made with experimental data. Using workshop-supplied grids, results for the clean wing configuration are compared with results from the structured code CFL3D. Using the same turbulence model, both codes compare reasonably well in terms of total forces and moments, and the maximum lift is similarly over-predicted for both codes compared to experiment. By including more representative geometry features such as slat and flap brackets and slat pressure tube bundles, FUN3D captures the general effects of the Reynolds number variation, but under-predicts maximum lift on workshop-supplied grids in comparison with the experimental data, due to excessive separation. However, when output-based, off-body grid adaptation in FUN3D is employed, results improve considerably. In particular, when the geometry includes both brackets and the pressure tube bundles, grid adaptation results in a more accurate prediction of lift near stall in comparison with the wind-tunnel data. Furthermore, a rotation-corrected turbulence model shows improved pressure predictions on the outboard span when using adapted grids.

  11. A chimera grid scheme. [multiple overset body-conforming mesh system for finite difference adaptation to complex aircraft configurations

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Dougherty, F. C.; Benek, J. A.

    1983-01-01

    A mesh system composed of multiple overset body-conforming grids is described for adapting finite-difference procedures to complex aircraft configurations. In this so-called 'chimera mesh,' a major grid is generated about a main component of the configuration and overset minor grids are used to resolve all other features. Methods for connecting overset multiple grids and modifications of flow-simulation algorithms are discussed. Computational tests in two dimensions indicate that the use of multiple overset grids can simplify the task of grid generation without an adverse effect on flow-field algorithms and computer code complexity.

  12. Adaptive grid embedding for the two-dimensional Euler equations

    NASA Technical Reports Server (NTRS)

    Warren, Gary P.

    1990-01-01

    A numerical algorithm is presented for solving the two-dimensional flux-split Euler equations using a multigrid method with adaptive grid embedding. The method uses an unstructured data set along with a system of pointers for communication on the irregularly shaped grid topologies. An explicit two-stage time advancement scheme is implemented. A multigrid algorithm is used to provide grid level communication and to accelerate the convergence of the solution to steady state. Results are presented for an NACA 0012 airfoil in a freestream with Mach numbers of 0.95 and 1.054. Excellent resolution of the shock structures is obtained with the adaptive grid embedding method with significantly fewer grid points than the comparable structured grid.

  13. Development of a dynamically adaptive grid method for multidimensional problems

    NASA Astrophysics Data System (ADS)

    Holcomb, J. E.; Hindman, R. G.

    1984-06-01

    An approach to solution adaptive grid generation for use with finite difference techniques, previously demonstrated on model problems in one space dimension, has been extended to multidimensional problems. The method is based on the popular elliptic steady grid generators, but is 'dynamically' adaptive in the sense that a grid is maintained at all times satisfying the steady grid law driven by a solution-dependent source term. Testing has been carried out on Burgers' equation in one and two space dimensions. Results appear encouraging both for inviscid wave propagation cases and viscous boundary layer cases, suggesting that application to practical flow problems is now possible. In the course of the work, obstacles relating to grid correction, smoothing of the solution, and elliptic equation solvers have been largely overcome. Concern remains, however, about grid skewness, boundary layer resolution and the need for implicit integration methods. Also, the method in 3-D is expected to be very demanding of computer resources.
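
    A 1-D stand-in for a solution-driven steady grid law is weight equidistribution: points are moved until each interval carries an equal share of a weight built from the solution gradient. The sketch below uses Burgers-like data as in the tests described above, but the equidistribution rule itself is a generic illustration, not the authors' elliptic generator.

        import numpy as np

        def equidistribute(x, u_of_x, n_iter=30):
            """Move interior points of x until the weight w = sqrt(1 + u_x^2) is equidistributed."""
            for _ in range(n_iter):
                u = u_of_x(x)
                ux = np.gradient(u, x)
                w = np.sqrt(1.0 + ux ** 2)
                wm = 0.5 * (w[:-1] + w[1:])                        # weight on each interval
                s = np.concatenate(([0.0], np.cumsum(wm * np.diff(x))))
                x = np.interp(np.linspace(0.0, s[-1], len(x)), s, x)
            return x

        x0 = np.linspace(-1.0, 1.0, 41)
        x_adapt = equidistribute(x0, lambda x: np.tanh(x / 0.05))  # steep internal layer
        print("smallest spacing near the layer:", float(np.min(np.diff(x_adapt))))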

  14. Conservative treatment of boundary interfaces for overlaid grids and multi-level grid adaptations

    NASA Technical Reports Server (NTRS)

    Moon, Young J.; Liou, Meng-Sing

    1989-01-01

    Conservative algorithms for boundary interfaces of overlaid grids are presented. The basic method is zeroth order, and is extended to a higher order method using interpolation and subcell decomposition. The present method, strictly based on a conservative constraint, is tested with overlaid grids for various applications of unsteady and steady supersonic inviscid flows with strong shock waves. The algorithm is also applied to a multi-level grid adaptation in which the next level finer grid is overlaid on the coarse base grid with an arbitrary orientation.

  15. Conservative treatment of boundary interfaces for overlaid grids and multi-level grid adaptations

    NASA Technical Reports Server (NTRS)

    Moon, Young J.; Liou, Meng-Sing

    1989-01-01

    Conservative algorithms for boundary interfaces of overlaid grids are presented. The basic method is zeroth order, and is extended to a higher order method using interpolation and subcell decomposition. The present method, strictly based on a conservative constraint, is tested with overlaid grids for various applications of unsteady and steady supersonic inviscid flows with strong shock waves. The algorithm is also applied to a multi-level grid adaptation in which the next level finer grid is overlaid on the coarse base grid with an arbitrary orientation.

  16. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time-dependent linear problems in one and two space dimensions are presented.

  17. Application of a solution adaptive grid scheme, SAGE, to complex three-dimensional flows

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1991-01-01

    A new three-dimensional (3D) adaptive grid code based on the algebraic, solution-adaptive scheme of Nakahashi and Deiwert is developed and applied to a variety of problems. The new computer code, SAGE, is an extension of the same-named two-dimensional (2D) solution-adaptive program that has already proven to be a powerful tool in computational fluid dynamics applications. The new code has been applied to a range of complex three-dimensional, supersonic and hypersonic flows. Examples discussed are a tandem-slot fuel injector, the hypersonic forebody of the Aeroassist Flight Experiment (AFE), the 3D base flow behind the AFE, the supersonic flow around a 3D swept ramp and a generic, hypersonic, 3D nozzle-plume flow. The associated adapted grids and the solution enhancements resulting from the grid adaption are presented for these cases. Three-dimensional adaption is more complex than its 2D counterpart, and the complexities unique to the 3D problems are discussed.

  18. Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David; Colella, Phillip

    1995-01-01

    To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.

  19. Topology and grid adaption for high-speed flow computations

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Tiwari, Surendra N.

    1989-01-01

    This study investigates the effects of grid topology and grid adaptation on numerical solutions of the Navier-Stokes equations. In the first part of this study, a general procedure is presented for computation of high-speed flow over complex three-dimensional configurations. The flow field is simulated on the surface of a Butler wing in a uniform stream. Results are presented for a Mach number of 3.5 and a Reynolds number of 2,000,000. The O-type and H-type grids have been used for this study, and the results are compared with each other and with other theoretical and experimental results. The results demonstrate that while the H-type grid is suitable for the leading and trailing edges, a more accurate solution can be obtained for the middle part of the wing with an O-type grid. In the second part of this study, methods of grid adaption are reviewed and a method is developed with the capability of adapting to several variables. This method is based on a variational approach and is an algebraic method. Also, the method has been formulated in such a way that there is no need for any matrix inversion. This method is used in conjunction with the calculation of hypersonic flow over a blunt-nose body. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.

  20. Adaptive grid generation in a patient-specific cerebral aneurysm.

    PubMed

    Hodis, Simona; Kallmes, David F; Dragomir-Daescu, Dan

    2013-11-01

    Adapting grid density to flow behavior provides the advantage of increasing solution accuracy while decreasing the number of grid elements in the simulation domain, therefore reducing the computational time. One method for grid adaptation requires successive refinement of grid density based on observed solution behavior until the numerical errors between successive grids are negligible. However, such an approach is time consuming and it is often neglected by the researchers. We present a technique to calculate the grid size distribution of an adaptive grid for computational fluid dynamics (CFD) simulations in a complex cerebral aneurysm geometry based on the kinematic curvature and torsion calculated from the velocity field. The relationship between the kinematic characteristics of the flow and the element size of the adaptive grid leads to a mathematical equation to calculate the grid size in different regions of the flow. The adaptive grid density is obtained such that it captures the more complex details of the flow with locally smaller grid size, while less complex flow characteristics are calculated on locally larger grid size. The current study shows that kinematic curvature and torsion calculated from the velocity field in a cerebral aneurysm can be used to find the locations of complex flow where the computational grid needs to be refined in order to obtain an accurate solution. We found that the complexity of the flow can be adequately described by velocity and vorticity and the angle between the two vectors. For example, inside the aneurysm bleb, at the bifurcation, and at the major arterial turns the element size in the lumen needs to be less than 10% of the artery radius, while at the boundary layer, the element size should be smaller than 1% of the artery radius, for accurate results within a 0.5% relative approximation error. This technique of quantifying flow complexity and adaptive remeshing has the potential to improve results accuracy and reduce
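
    The kinematic curvature that drives the grid sizing can be evaluated pointwise from the velocity and acceleration vectors as kappa = |v x a| / |v|^3. The mapping from curvature to element size below, and the sample vectors, are illustrative stand-ins rather than the equation derived in the paper; only the 1% to 10% of artery radius bounds echo the abstract.

        import numpy as np

        def kinematic_curvature(v, a):
            """Streamline curvature kappa = |v x a| / |v|^3 from velocity and acceleration samples."""
            cross = np.cross(v, a)
            return np.linalg.norm(cross, axis=-1) / (np.linalg.norm(v, axis=-1) ** 3 + 1e-30)

        def target_element_size(kappa, artery_radius, h_min_frac=0.01, h_max_frac=0.10):
            """Illustrative sizing rule: smaller elements where curvature is high, bounded by 1%-10% of radius."""
            h = 1.0 / (kappa + 1.0 / (h_max_frac * artery_radius))
            return np.clip(h, h_min_frac * artery_radius, h_max_frac * artery_radius)

        v = np.array([[1.0, 0.0, 0.0], [0.5, 0.5, 0.0]])     # made-up velocity samples (m/s)
        a = np.array([[0.0, 2.0, 0.0], [0.0, 0.0, 5.0]])     # made-up acceleration samples (m/s^2)
        print(target_element_size(kinematic_curvature(v, a), artery_radius=2e-3))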

  1. Adaptive grid generation in a patient-specific cerebral aneurysm

    NASA Astrophysics Data System (ADS)

    Hodis, Simona; Kallmes, David F.; Dragomir-Daescu, Dan

    2013-11-01

    Adapting grid density to flow behavior provides the advantage of increasing solution accuracy while decreasing the number of grid elements in the simulation domain, therefore reducing the computational time. One method for grid adaptation requires successive refinement of grid density based on observed solution behavior until the numerical errors between successive grids are negligible. However, such an approach is time consuming and it is often neglected by the researchers. We present a technique to calculate the grid size distribution of an adaptive grid for computational fluid dynamics (CFD) simulations in a complex cerebral aneurysm geometry based on the kinematic curvature and torsion calculated from the velocity field. The relationship between the kinematic characteristics of the flow and the element size of the adaptive grid leads to a mathematical equation to calculate the grid size in different regions of the flow. The adaptive grid density is obtained such that it captures the more complex details of the flow with locally smaller grid size, while less complex flow characteristics are calculated on locally larger grid size. The current study shows that kinematic curvature and torsion calculated from the velocity field in a cerebral aneurysm can be used to find the locations of complex flow where the computational grid needs to be refined in order to obtain an accurate solution. We found that the complexity of the flow can be adequately described by velocity and vorticity and the angle between the two vectors. For example, inside the aneurysm bleb, at the bifurcation, and at the major arterial turns the element size in the lumen needs to be less than 10% of the artery radius, while at the boundary layer, the element size should be smaller than 1% of the artery radius, for accurate results within a 0.5% relative approximation error. This technique of quantifying flow complexity and adaptive remeshing has the potential to improve results accuracy and reduce

  2. Methods for prismatic/tetrahedral grid generation and adaptation

    NASA Astrophysics Data System (ADS)

    Kallinderis, Y.

    1995-10-01

    The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.

  3. Variational method for adaptive grid generation

    SciTech Connect

    Brackbill, J.U.

    1983-01-01

    A variational method for generating adaptive meshes is described. Functionals measuring smoothness, skewness, orientation, and the Jacobian are minimized to generate a mapping from a rectilinear domain in natural coordinate to an arbitrary domain in physical coordinates. From the mapping, a mesh is easily constructed. In using the method to adaptively zone computational problems, as few as one third the number of mesh points are required in each coordinate direction compared with a uniformly zoned mesh.

  4. An adaptive grid algorithm for one-dimensional nonlinear equations

    NASA Technical Reports Server (NTRS)

    Gutierrez, William E.; Hills, Richard G.

    1990-01-01

    Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements in solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems are studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusive and convective-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For the Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and
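
    The combination of Picard iteration with a tridiagonal solve can be illustrated on viscous Burgers' equation, the test problem named above. The discretization, step sizes, and boundary treatment below are generic illustrations, not the thesis code.

        import numpy as np

        def thomas(a, b, c, d):
            """Solve a tridiagonal system with sub-, main-, and super-diagonals a, b, c and right-hand side d."""
            n = len(d)
            cp, dp, x = np.empty(n), np.empty(n), np.empty(n)
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):
                m = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / m
                dp[i] = (d[i] - a[i] * dp[i - 1]) / m
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        def burgers_picard_step(u, dx, dt, nu, n_picard=5):
            """One backward-Euler step of u_t + u*u_x = nu*u_xx with Picard-linearized advection."""
            u_old, u_new = u.copy(), u.copy()
            for _ in range(n_picard):
                a = -dt * (u_new / (2.0 * dx) + nu / dx ** 2)            # sub-diagonal
                b = 1.0 + 2.0 * dt * nu / dx ** 2 * np.ones_like(u)      # main diagonal
                c = dt * (u_new / (2.0 * dx) - nu / dx ** 2)             # super-diagonal
                d = u_old.copy()
                a[0] = c[0] = a[-1] = c[-1] = 0.0                        # Dirichlet ends held fixed
                b[0] = b[-1] = 1.0
                d[0], d[-1] = u_old[0], u_old[-1]
                u_new = thomas(a, b, c, d)                               # tridiagonal solve per Picard pass
            return u_new

        x = np.linspace(0.0, 1.0, 101)
        u = np.sin(np.pi * x)
        for _ in range(50):
            u = burgers_picard_step(u, dx=x[1] - x[0], dt=0.005, nu=0.01)
        print("max(u) after 50 steps:", round(float(u.max()), 4))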

  5. Parallel Implementation of an Adaptive Scheme for 3D Unstructured Grids on the SP2

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Strawn, Roger C.

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10% of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  6. Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2

    NASA Technical Reports Server (NTRS)

    Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  7. ICASE/LaRC Workshop on Adaptive Grid Methods

    NASA Technical Reports Server (NTRS)

    South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)

    1995-01-01

    Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.

  8. Fast adaptive composite grid methods on distributed parallel architectures

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Quinlan, Daniel

    1992-01-01

    The fast adaptive composite (FAC) grid method is compared with the asynchronous fast adaptive composite (AFAC) method under a variety of conditions, including vectorization and parallelization. Results are given for distributed memory multiprocessor architectures (SUPRENUM, Intel iPSC/2 and iPSC/860). It is shown that the good performance of AFAC and its superiority over FAC in a parallel environment is a property of the algorithm and not dependent on peculiarities of any machine.

  9. Adaptive hybrid prismatic-tetrahedral grids for viscous flows

    NASA Astrophysics Data System (ADS)

    Kallinderis, Yannis; Khawaja, Aly; McMorris, Harlan

    1995-03-01

    The paper presents generation of adaptive hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is an Automatic Receding Method (ARM) for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples division of tetrahedra, as well as 2-D directional division of prisms.

  10. Adaptive hybrid prismatic-tetrahedral grids for viscous flows

    NASA Technical Reports Server (NTRS)

    Kallinderis, Yannis; Khawaja, Aly; Mcmorris, Harlan

    1995-01-01

    The paper presents generation of adaptive hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is an Automatic Receding Method (ARM) for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples division of tetrahedra, as well as 2-D directional division of prisms.

  11. A novel hyperbolic grid generation procedure with inherent adaptive dissipation

    SciTech Connect

    Tai, C.H.; Yin, S.L.; Soong, C.Y.

    1995-01-01

    This paper reports a novel hyperbolic grid-generation procedure with inherent adaptive dissipation (HGAD), which is capable of reducing the oscillation and overlapping of grid lines. In the present work upwind differencing is applied to discretize the hyperbolic system and, thereby, to develop the adaptive dissipation coefficient. Complex configurations with the features of geometric discontinuity, exceptional concavity and convexity are used as test cases for comparison of the present HGAD procedure with conventional hyperbolic and elliptic ones. The results reveal that the HGAD method is superior in orthogonality and smoothness of the grid system. In addition, the computational efficiency of the flow solver may be improved by using the present HGAD procedure. 15 refs., 8 figs.

  12. The emergence of grid cells: Intelligent design or just adaptation?

    PubMed

    Kropff, Emilio; Treves, Alessandro

    2008-01-01

    Individual medial entorhinal cortex (mEC) 'grid' cells provide a representation of space that appears to be essentially invariant across environments, modulo simple transformations, in contrast to multiple, rapidly acquired hippocampal maps; it may therefore be established gradually during rodent development. We explore with a simplified mathematical model the possibility that the self-organization of multiple grid fields into a triangular grid pattern may be a single-cell process, driven by firing rate adaptation and slowly varying spatial inputs. A simple analytical derivation indicates that triangular grids are favored asymptotic states of the self-organizing system, and computer simulations confirm that such states are indeed reached during a model learning process, provided it is sufficiently slow to effectively average out fluctuations. The interactions among local ensembles of grid units serve solely to stabilize a common grid orientation. Spatial information, in the real mEC network, may be provided by any combination of feedforward cortical afferents and feedback hippocampal projections from place cells, since either input alone is likely sufficient to yield grid fields. PMID:19021261

  13. RHALE: A 3-D MMALE code for unstructured grids

    SciTech Connect

    Peery, J.S.; Budge, K.G.; Wong, M.K.W.; Trucano, T.G.

    1993-08-01

    This paper describes RHALE, a multi-material arbitrary Lagrangian-Eulerian (MMALE) shock physics code. RHALE is the successor to CTH, Sandia's 3-D Eulerian shock physics code, and will be capable of solving problems that CTH cannot adequately address. We discuss the Lagrangian solid mechanics capabilities of RHALE, which include arbitrary mesh connectivity, superior artificial viscosity, and improved material models. We discuss the MMALE algorithms that have been extended for arbitrary grids in both two and three dimensions. The MMALE addition to RHALE provides the accuracy of a Lagrangian code while allowing a calculation to proceed under very large material distortions. Coupling an arbitrary quadrilateral or hexahedral grid to the MMALE solution facilitates modeling of complex shapes with a greatly reduced number of computational cells. RHALE allows regions of a problem to be modeled with Lagrangian, Eulerian or ALE meshes. In addition, regions can switch from Lagrangian to ALE to Eulerian based on user input or mesh distortion. For ALE meshes, new node locations are determined with a variety of element based equipotential schemes. Element quantities are advected with donor, van Leer, or Super-B algorithms. Nodal quantities are advected with the second order SHALE or HIS algorithms. Material interfaces are determined with a modified Young's high resolution interface tracker or the SLIC algorithm. RHALE has been used to model many problems of interest to the mechanics, hypervelocity impact, and shock physics communities. Results of a sampling of these problems are presented in this paper.

  14. Self-Avoiding Walks Over Adaptive Triangular Grids

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)

    1999-01-01

    Space-filling curves are a popular approach, based on a geometric embedding, for linearizing computational meshes. We present a new O(n log n) combinatorial algorithm for constructing a self-avoiding walk through a two-dimensional mesh containing n triangles. We show that for hierarchical adaptive meshes, the algorithm can be locally adapted and easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the runtime partitioning and load balancing of adaptive unstructured grids.

  15. Efficient Load Balancing and Data Remapping for Adaptive Grid Calculations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak

    1997-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method to dynamically balance the processor workloads with a global view. This paper presents, for the first time, the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. Previous results indicated that mesh repartitioning and data remapping are potential bottlenecks for performing large-scale scientific calculations. We resolve these issues and demonstrate that our framework remains viable on a large number of processors.

  16. Hierarchy-Direction Selective Approach for Locally Adaptive Sparse Grids

    SciTech Connect

    Stoyanov, Miroslav K

    2013-09-01

    We consider the problem of multidimensional adaptive hierarchical interpolation. We use sparse grid points and functions that are induced from a one-dimensional hierarchical rule via tensor products. The classical locally adaptive sparse grid algorithm uses an isotropic refinement from the coarser to the denser levels of the hierarchy. However, the multidimensional hierarchy provides a more complex structure that allows for various anisotropic and hierarchy-selective refinement techniques. We consider the more advanced refinement techniques and apply them to a number of simple test functions chosen to demonstrate the various advantages and disadvantages of each method. While there is no refinement scheme that is optimal for all functions, the fully adaptive family-direction-selective technique is usually more stable and requires fewer samples.
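
    As a point of reference, the following is a minimal sketch of the one-dimensional building block behind such methods: piecewise-linear hierarchical hat functions on a dyadic hierarchy, with local refinement driven by the size of the hierarchical surpluses. The multidimensional sparse grids in the record arise from tensorizing rules of exactly this kind; the target function, the tolerance tol, and the omission of boundary basis functions are illustrative assumptions, not details of the paper.

      import numpy as np

      def hat(x, center, level):
          """Piecewise-linear hierarchical basis function at a dyadic node."""
          h = 0.5 ** level
          return max(0.0, 1.0 - abs(x - center) / h)

      def adaptive_hierarchical_interp(f, tol=1e-3, max_level=12):
          """Locally adaptive 1-D hierarchical interpolation on (0, 1).

          A node whose hierarchical surplus exceeds tol spawns its two dyadic
          children; small surpluses terminate refinement locally.  The basis
          omits boundary nodes, so f should (nearly) vanish at 0 and 1."""
          surplus = {}                     # (level, center) -> surplus coefficient
          active = [(1, 0.5)]              # coarsest interior node
          while active:
              children = []
              for level, center in active:
                  # surplus = function value minus the coarser interpolant at the node
                  interp = sum(s * hat(center, c, l) for (l, c), s in surplus.items())
                  s = f(center) - interp
                  surplus[(level, center)] = s
                  if abs(s) > tol and level < max_level:
                      h = 0.5 ** (level + 1)
                      children += [(level + 1, center - h), (level + 1, center + h)]
              active = children
          return surplus

      # usage: refinement concentrates around the narrow peak at x = 0.5
      f = lambda x: np.exp(-200.0 * (x - 0.5) ** 2)
      surp = adaptive_hierarchical_interp(f)
      x0 = 0.47
      approx = sum(s * hat(x0, c, l) for (l, c), s in surp.items())
      print(len(surp), abs(approx - f(x0)))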

  17. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2003-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  18. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2001-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  19. Digital breast tomosynthesis reconstruction with an adaptive voxel grid

    NASA Astrophysics Data System (ADS)

    Claus, Bernhard; Chan, Heang-Ping

    2014-03-01

    In digital breast tomosynthesis (DBT), volume datasets are typically reconstructed with an anisotropic voxel size, where the in-plane voxel size usually reflects the detector pixel size (e.g., 0.1 mm) and the slice separation is generally between 0.5 and 1.0 mm. Increasing the tomographic angle is expected to give better 3D image quality; however, the slice spacing in the reconstruction should then be reduced, otherwise one risks losing fine-scale image detail (e.g., small microcalcifications). An alternative strategy consists of reconstructing on an adaptive voxel grid, where the voxel height at each location is adapted based on the backprojected data at that location, with the goal of improving image quality for microcalcifications. In this paper we present an approach for generating such an adaptive voxel grid. This approach is based on an initial reconstruction step that is performed at a finer slice spacing, combined with the selection of an "optimal" height for each voxel. This initial step is followed by a (potentially iterative) reconstruction acting on the adaptive grid only.
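
    The record's key step, selecting an "optimal" slice height per in-plane location from a finely spaced initial reconstruction, can be illustrated with a small numpy sketch. The contrast measure (squared deviation from the through-plane mean), the array shapes, and the fine slice spacing are assumptions for illustration, not the authors' algorithm.

      import numpy as np

      def adaptive_voxel_heights(fine_recon, fine_dz=0.1):
          """Pick a per-(x, y) slice height from a finely spaced reconstruction.

          fine_recon : array of shape (nz_fine, ny, nx), initial reconstruction
                       computed at fine slice spacing fine_dz (mm).
          Returns an (ny, nx) array of heights (mm) snapped to the fine grid."""
          # illustrative sharpness measure per fine slice and in-plane position
          contrast = (fine_recon - fine_recon.mean(axis=0, keepdims=True)) ** 2
          best_slice = contrast.argmax(axis=0)          # (ny, nx) slice indices
          return best_slice * fine_dz                   # adapted voxel heights

      # usage with synthetic data: a bright detail placed at fine slice 23
      recon = np.random.default_rng(0).normal(0.0, 0.01, size=(40, 64, 64))
      recon[23, 30, 30] += 1.0
      heights = adaptive_voxel_heights(recon)
      print(heights[30, 30])    # 2.3 mm: the fine slice where the detail lives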

  20. ICAN Computer Code Adapted for Building Materials

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  1. An Adaptive Motion Estimation Scheme for Video Coding

    PubMed Central

    Gao, Yuan; Jia, Kebin

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, including prediction of the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is performed with the new search patterns. Experimental results show that the number of total search points is reduced by more than 50% compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised. PMID:24672313
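
    The general idea of letting a motion-vector prediction steer the search, which the record develops much further for UMHexagonS, can be sketched with a plain block matcher whose search window shrinks when the predicted motion is small. The window radii, the SAD cost, and the test frames are illustrative assumptions only.

      import numpy as np

      def sad(a, b):
          return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

      def adaptive_block_match(cur, ref, bx, by, n=16, predicted_mv=(0, 0)):
          """Estimate the MV of the n x n block at (bx, by) in cur by searching
          ref in a window centered on, and sized by, the predicted MV."""
          # small predicted motion -> small search radius (illustrative thresholds)
          r = 4 if max(abs(predicted_mv[0]), abs(predicted_mv[1])) <= 4 else 16
          block = cur[by:by + n, bx:bx + n]
          best, best_mv = None, (0, 0)
          for dy in range(-r, r + 1):
              for dx in range(-r, r + 1):
                  y, x = by + predicted_mv[1] + dy, bx + predicted_mv[0] + dx
                  if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                      continue
                  cost = sad(block, ref[y:y + n, x:x + n])
                  if best is None or cost < best:
                      best, best_mv = cost, (x - bx, y - by)
          return best_mv, best

      # usage: each block's best match in ref is offset by (dx, dy) = (3, 2);
      # the MV prediction is accurate, so only the small +/-4 window is searched
      rng = np.random.default_rng(2)
      ref = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
      cur = np.roll(ref, shift=(-2, -3), axis=(0, 1))
      print(adaptive_block_match(cur, ref, bx=24, by=24, predicted_mv=(3, 2)))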

  2. Euler Technology Assessment for Preliminary Aircraft Design: Compressibility Predictions by Employing the Cartesian Unstructured Grid SPLITFLOW Code

    NASA Technical Reports Server (NTRS)

    Finley, Dennis B.; Karman, Steve L., Jr.

    1996-01-01

    The objective of the second phase of the Euler Technology Assessment program was to evaluate the ability of Euler computational fluid dynamics codes to predict compressible flow effects over a generic fighter wind tunnel model. This portion of the study was conducted by Lockheed Martin Tactical Aircraft Systems, using an in-house Cartesian-grid code called SPLITFLOW. The Cartesian grid technique offers several advantages, including ease of volume grid generation and reduced number of cells compared to other grid schemes. SPLITFLOW also includes grid adaption of the volume grid during the solution to resolve high-gradient regions. The SPLITFLOW code predictions of configuration forces and moments are shown to be adequate for preliminary design, including predictions of sideslip effects and the effects of geometry variations at low and high angles-of-attack. The transonic pressure prediction capabilities of SPLITFLOW are shown to be improved over subsonic comparisons. The time required to generate the results from initial surface data is on the order of several hours, including grid generation, which is compatible with the needs of the design environment.

  3. Unstructured Adaptive Grid Computations on an Array of SMPs

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Pramanick, Ira; Sohn, Andrew; Simon, Horst D.

    1996-01-01

    Dynamic load balancing is necessary for parallel adaptive methods to solve unsteady CFD problems on unstructured grids. We have presented such a dynamic load balancing framework, called JOVE, in this paper. Results on a four-POWERnode POWER CHALLENGEarray demonstrated that load balancing gives significant performance improvements over no load balancing for such adaptive computations. The parallel speedup of JOVE, implemented using MPI on the POWER CHALLENGEarray, was significant, being as high as 31 for 32 processors. An implementation of JOVE that exploits 'an array of SMPs' architecture was also studied; this hybrid JOVE outperformed flat JOVE by up to 28% on the meshes and adaption models tested. With large, realistic meshes and actual flow-solver and adaption phases incorporated into JOVE, hybrid JOVE can be expected to yield significant advantage over flat JOVE, especially as the number of processors is increased, thus demonstrating the scalability of an array of SMPs architecture.

  4. Adaptive grid methods for RLV environment assessment and nozzle analysis

    NASA Technical Reports Server (NTRS)

    Thornburg, Hugh J.

    1996-01-01

    Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to efficiently meet these requirements a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surface, temporally varying geometries, and fluid structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect. Thus excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaption. The most widely used involve grid point redistribution, local grid point enrichment/derefinement or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine solution domain regions which require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation
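
    The weight-function idea described here can be made concrete with a short sketch under assumptions of my own (a quantity sampled along one grid line, simple finite differences): first- and second-difference magnitudes are each normalized before blending, so the weight remains comparable across blocks with disparate scales; c1 and c2 are the user-tunable coefficients the record alludes to.

      import numpy as np

      def adaption_weight(u, c1=1.0, c2=1.0, eps=1e-12):
          """Weight function mimicking local truncation error for grid adaptation.

          u : 1-D array of a flow quantity sampled along a grid line.
          Returns w >= 1, large where gradients or curvature are large."""
          g = np.abs(np.gradient(u))                    # first-difference magnitude
          k = np.abs(np.gradient(np.gradient(u)))       # second-difference magnitude
          # normalize each contribution so differently scaled blocks are comparable
          return 1.0 + c1 * g / (g.max() + eps) + c2 * k / (k.max() + eps)

      # usage: the weight peaks around the steep feature near x = 0.5
      x = np.linspace(0.0, 1.0, 101)
      u = np.tanh(40.0 * (x - 0.5))
      w = adaption_weight(u)
      print(x[w.argmax()])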

  5. Large-Scale Liquid Simulation on Adaptive Hexahedral Grids.

    PubMed

    Ferstl, Florian; Westermann, Rudiger; Dick, Christian

    2014-10-01

    Regular grids are attractive for numerical fluid simulations because they give rise to efficient computational kernels. However, for simulating high resolution effects in complicated domains they are only of limited suitability due to memory constraints. In this paper we present a method for liquid simulation on an adaptive octree grid using a hexahedral finite element discretization, which reduces memory requirements by coarsening the elements in the interior of the liquid body. To impose free surface boundary conditions with second order accuracy, we incorporate a particular class of Nitsche methods enforcing the Dirichlet boundary conditions for the pressure in a variational sense. We then show how to construct a multigrid hierarchy from the adaptive octree grid, so that a time efficient geometric multigrid solver can be used. To improve solver convergence, we propose a special treatment of liquid boundaries via composite finite elements at coarser scales. We demonstrate the effectiveness of our method for liquid simulations that would require hundreds of millions of simulation elements in a non-adaptive regime. PMID:26357387

  6. A geometry-based adaptive unstructured grid generation algorithm for complex geological media

    NASA Astrophysics Data System (ADS)

    Bahrainian, Seyed Saied; Dezfuli, Alireza Daneh

    2014-07-01

    In this paper a novel unstructured grid generation algorithm is presented that considers the effect of geological features and well locations in grid resolution. The proposed grid generation algorithm presents a strategy for definition and construction of an initial grid based on the geological model, geometry adaptation of geological features, and grid resolution control. The algorithm is applied to seismotectonic map of the Masjed-i-Soleiman reservoir. Comparison of grid results with the “Triangle” program shows a more suitable permeability contrast. Immiscible two-phase flow solutions are presented for a fractured porous media test case using different grid resolutions. Adapted grid on the fracture geometry gave identical results with that of a fine grid. The adapted grid employed 88.2% less CPU time when compared to the solutions obtained by the fine grid.

  7. Design of Pel Adaptive DPCM coding based upon image partition

    NASA Astrophysics Data System (ADS)

    Saitoh, T.; Harashima, H.; Miyakawa, H.

    1982-01-01

    A Pel Adaptive DPCM coding system based on image partition is developed which possesses coding characteristics superior to those of the Block Adaptive DPCM coding system. This method uses multiple DPCM coding loops and nonhierarchical cluster analysis. It is found that the coding performances of the Pel Adaptive DPCM coding method differ depending on the subject images. The Pel Adaptive DPCM designed using these methods is shown to yield a maximum performance advantage of 2.9 dB for the Girl and Couple images and 1.5 dB for the Aerial image, although no advantage was obtained for the moon image. These results show an improvement over the optimally designed Block Adaptive DPCM coding method proposed by Saito et al. (1981).
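
    A toy sketch in the spirit of pel-adaptive DPCM (not the partition-and-cluster design of the record): two causal predictors compete at every pixel, and the choice is made from previously reconstructed samples only, so a decoder applying the same rule stays in sync without side information. The uniform quantizer step and the two predictors are illustrative assumptions.

      import numpy as np

      def pel_adaptive_dpcm(img, step=8):
          """Toy pel-adaptive DPCM: per pixel, pick the causal predictor (left or
          top reconstructed neighbor) that tracked the previous pixel better."""
          img = img.astype(np.float64)
          rec = np.zeros_like(img)
          codes = np.zeros_like(img, dtype=np.int64)
          err_left, err_top = 0.0, 0.0            # running scores of the predictors
          for y in range(img.shape[0]):
              for x in range(img.shape[1]):
                  left = rec[y, x - 1] if x > 0 else 128.0
                  top = rec[y - 1, x] if y > 0 else 128.0
                  pred = left if err_left <= err_top else top
                  q = int(round((img[y, x] - pred) / step))   # quantized residual
                  codes[y, x] = q                             # symbol to entropy-code
                  rec[y, x] = pred + q * step                 # decoder-identical value
                  err_left = abs(rec[y, x] - left)            # scores use reconstructed
                  err_top = abs(rec[y, x] - top)              # values only (decodable)
          return codes, rec

      # usage: reconstruction error stays within half a quantizer step
      img = np.random.default_rng(1).integers(0, 256, size=(32, 32))
      codes, rec = pel_adaptive_dpcm(img)
      print(float(np.abs(rec - img).max()))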

  8. Load Balancing Unstructured Adaptive Grids for CFD Problems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid

    1996-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.
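
    The accept/reject logic described here can be sketched in a few lines, under assumptions about how the costs are measured: repartition only when the imbalance exceeds a tolerance, and accept the new mapping only when the compute time saved over the remaining iterations outweighs the one-time remapping cost. The tolerance and cost model are illustrative, not the paper's.

      def should_remap(cur_loads, new_loads, remap_cost, iters_left,
                       time_per_unit_load, imbalance_tol=1.10):
          """Decide whether to accept a repartitioning of an adapted mesh.

          cur_loads, new_loads : per-processor element counts before/after.
          remap_cost           : estimated one-time data migration time (s)."""
          avg = sum(cur_loads) / len(cur_loads)
          if max(cur_loads) <= imbalance_tol * avg:
              return False                 # not unbalanced enough to repartition
          # a parallel step runs at the pace of the most loaded processor
          gain_per_iter = (max(cur_loads) - max(new_loads)) * time_per_unit_load
          return gain_per_iter * iters_left > remap_cost

      # usage: 4 processors, badly unbalanced, 200 solver iterations remaining
      print(should_remap([120, 80, 260, 100], [140, 140, 140, 140],
                         remap_cost=3.0, iters_left=200, time_per_unit_load=1e-3))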

  9. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one-to-one mapping of grids to systolic-style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  10. A Solution Adaptive Technique Using Tetrahedral Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2000-01-01

    An adaptive unstructured grid refinement technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.

  11. Adaptive grid embedding for the two-dimensional flux-split Euler equations. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Warren, Gary Patrick

    1990-01-01

    A numerical algorithm is presented for solving the 2-D flux-split Euler equations using a multigrid method with adaptive grid embedding. The method uses an unstructured data set along with a system of pointers for communication on the irregularly shaped grid topologies. An explicit two-stage time advancement scheme is implemented. A multigrid algorithm is used to provide grid level communication and to accelerate the convergence of the solution to steady state. Results are presented for a subcritical airfoil and a transonic airfoil with 3 levels of adaptation. Comparisons are made with a structured upwind Euler code which uses the same flux integration techniques of the present algorithm. Good agreement is obtained with converged surface pressure coefficients. The lift coefficients of the adaptive code are within 2 1/2 percent of the structured code for the sub-critical case and within 4 1/2 percent of the structured code for the transonic case using approximately one-third the number of grid points.

  12. Automation of assertion testing - Grid and adaptive techniques

    NASA Technical Reports Server (NTRS)

    Andrews, D. M.

    1985-01-01

    Assertions can be used to automate the process of testing software. Two methods for automating the generation of input test data are described in this paper. One method selects the input values of variables at regular intervals in a 'grid'. The other, adaptive testing, uses assertion violations as a measure of errors detected and generates new test cases based on test results. The important features of assertion testing are that: it can be used throughout the entire testing cycle; it provides automatic notification of error conditions; and it can be used with automatic input generation techniques which eliminate the subjectivity in choosing test data.
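
    Both input-generation strategies in the record can be illustrated with a short sketch, using a made-up routine under test and a made-up assertion: an exhaustive pass over a regular grid of input values, followed by an adaptive pass that seeds new test cases near inputs that violated the assertion.

      import itertools, random

      def grid_then_adaptive_test(func, assertion, ranges, steps=5, rounds=3):
          """Generate test inputs on a regular grid, then adaptively near violations."""
          axes = [[lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
                  for lo, hi in ranges]
          violations = []
          for point in itertools.product(*axes):            # grid phase
              if not assertion(point, func(*point)):
                  violations.append(point)
          rng = random.Random(0)
          for _ in range(rounds):                           # adaptive phase
              new = []
              for p in violations:                          # perturb known violations
                  cand = tuple(v + rng.uniform(-0.1, 0.1) * (hi - lo)
                               for v, (lo, hi) in zip(p, ranges))
                  if not assertion(cand, func(*cand)):
                      new.append(cand)
              violations += new
          return violations

      # usage with a hypothetical routine and an assertion on its output
      shaky = lambda a, b: a / (abs(b - 1.0) + 1e-9)        # blows up near b = 1
      bounded = lambda inputs, out: abs(out) < 1e6
      found = grid_then_adaptive_test(shaky, bounded, [(0.0, 10.0), (0.0, 2.0)])
      print(len(found), found[0])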

  13. INITIAL APPLICATION OF THE ADAPTIVE GRID AIR POLLUTION MODEL

    EPA Science Inventory

    The paper discusses an adaptive-grid algorithm used in air pollution models. The algorithm reduces errors related to insufficient grid resolution by automatically refining the grid scales in regions of high interest. Meanwhile the grid scales are coarsened in other parts of the d...

  14. The Quantum Workings of the Rotating 64-Grid Genetic Code

    PubMed Central

    Castro-Chavez, Fernando

    2011-01-01

    In this article, the pattern learned from the classic or conventional rotating circular genetic code is transferred to a 64-grid model. In this non-static representation, the codons for the same amino acid within each quadrant could be exchanged, wobbling or rotating in a quantic way similar to the electrons within an atomic orbit. Represented in this 64-grid format are the three rules of variation encompassing 4, 2, or 1 quadrant, respectively: 1) same position in four quadrants for the essential hydrophobic amino acids that have U at the center, 2) same or contiguous position for the same or related amino acids in two quadrants, and 3) equivalent amino acids within one quadrant. Also represented is the mathematical balance of the odd and even codons, and the most used codons per amino acid in humans compared to one diametrically opposed organism: the plant Arabidopsis thaliana, a comparison that depicts the difference in third nucleotide preferences: a C/U exchange for 11 amino acids, a G/A and a G/U exchange for 2 amino acids, respectively, and a C/A exchange for one amino acid; by studying these codon usage preferences per amino acid we present our two hypotheses: 1) a slower translation in vertebrates and 2) a faster translation in invertebrates, possibly due to the aqueous environments where they live. These codon usage preferences may also be able to determine genomic compatibility by comparing individual mRNAs and their functional third dimensional structure, transport and translation within cells and organisms. These observations are aimed at the design of bioinformatics computational tools to compare human genomes and to determine the exchange between compatible codons and amino acids, to preserve and/or to bring back extinct biodiversity, and for the early detection of incompatible changes that lead to genetic diseases. PMID:22308074

  15. BGRID: A block-structured grid generation code for wing sections

    NASA Technical Reports Server (NTRS)

    Chen, H. C.; Lee, K. D.

    1981-01-01

    The operation of the BGRID computer program is described for generating block-structured grids. Examples are provided to illustrate the code input and output. The application of a fully implicit AF (approximation factorization)-based computer code, called TWINGB (Transonic WING), for solving the 3D transonic full potential equation in conservation form on block-structured grids is also discussed.

  16. Adaptation of gasdynamical codes to the modern supercomputers

    NASA Astrophysics Data System (ADS)

    Kaygorodov, P. V.

    2016-02-01

    During the last decades, supercomputer architecture has changed significantly, and it is now impossible to achieve peak performance without adapting the numerical codes to modern supercomputer architectures. In this paper, I want to share my experience in adapting astrophysical gasdynamical numerical codes to multi-node computing clusters with multi-CPU and multi-GPU nodes.

  17. Self-Avoiding Walks over Adaptive Triangular Grids

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)

    1998-01-01

    In this paper, we present a new approach to constructing a "self-avoiding" walk through a triangular mesh. Unlike the popular approach of visiting mesh elements using space-filling curves which is based on a geometric embedding, our approach is combinatorial in the sense that it uses the mesh connectivity only. We present an algorithm for constructing a self-avoiding walk which can be applied to any unstructured triangular mesh. The complexity of the algorithm is O(n x log(n)), where n is the number of triangles in the mesh. We show that for hierarchical adaptive meshes, the algorithm can be easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the run-time partitioning and load balancing of adaptive unstructured grids.

  18. Modeling scramjet combustor flowfields with a grid adaptation scheme

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, R.; Singh, D. J.

    1994-01-01

    The accurate description of flow features associated with the normal injection of fuel into supersonic primary flows is essential in the design of efficient engines for hypervelocity aerospace vehicles. The flow features in such injections are complex, with multiple interactions between shocks and between shocks and boundary layers. Numerical studies of perpendicular sonic N2 injection and mixing in a Mach 3.8 scramjet combustor environment are discussed. A dynamic grid adaptation procedure based on the equilibration of a spring-mass system is employed to enhance the description of the complicated flow features. Numerical results are compared with experimental measurements and indicate that the adaptation procedure enhances the capability of the modeling procedure to describe the flow features associated with scramjet combustor components.

  19. Adaptation of bit error rate by coding

    NASA Astrophysics Data System (ADS)

    Marguinaud, A.; Sorton, G.

    1984-07-01

    The use of coding in spacecraft wideband communication to reduce power transmission, save bandwidth, and lower antenna specifications was studied. The feasibility of a coder-decoder functioning at a bit rate of 10 Mb/sec with a raw bit error rate (BER) of 0.001 and an output BER of 0.000000001 is demonstrated. A single-level block code protection and a two-level coding protection are examined. A single-level BCH code with a 5-error correction capacity, 16% redundancy, and interleaving depth 4, giving a coded block of 1020 bits, is simple to implement, but has BER = 0.000000007. A single-level BCH code with a 7-error correction capacity and 12% redundancy meets the specifications, but is more difficult to implement. Two-level protection, with 9% BCH outer and 10% BCH inner codes, both levels with a 3-error correction capacity and 8% redundancy for a coded block of 7050 bits, is the most complex, but offers performance advantages.

  20. SIMULATING MAGNETOHYDRODYNAMICAL FLOW WITH CONSTRAINED TRANSPORT AND ADAPTIVE MESH REFINEMENT: ALGORITHMS AND TESTS OF THE AstroBEAR CODE

    SciTech Connect

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2009-06-15

    A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code are demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.
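
    The staggered-field property the record emphasizes can be checked in a few lines of numpy: initializing the face-centered field from an edge-centered vector potential (in 2-D, Az on cell corners) makes the finite-volume divergence of every cell vanish to round-off, which is exactly the invariant a constrained-transport update preserves. The grid size and the choice of Az are illustrative.

      import numpy as np

      # 2-D staggered grid: Az on cell corners, Bx on x-faces, By on y-faces
      nx, ny, dx, dy = 64, 48, 0.1, 0.1
      xc = np.arange(nx + 1) * dx                  # corner coordinates
      yc = np.arange(ny + 1) * dy
      Az = np.sin(2 * np.pi * xc)[None, :] * np.cos(2 * np.pi * yc)[:, None]

      # B = curl(Az z_hat): Bx = dAz/dy on x-faces, By = -dAz/dx on y-faces
      Bx = (Az[1:, :] - Az[:-1, :]) / dy           # shape (ny, nx + 1)
      By = -(Az[:, 1:] - Az[:, :-1]) / dx          # shape (ny + 1, nx)

      # finite-volume divergence of each cell, built from the face fields
      divB = (Bx[:, 1:] - Bx[:, :-1]) / dx + (By[1:, :] - By[:-1, :]) / dy
      print(np.abs(divB).max())                    # zero to round-off by construction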

  1. Adaptive sparse grid expansions of the vibrational Hamiltonian

    NASA Astrophysics Data System (ADS)

    Strobusch, D.; Scheurer, Ch.

    2014-02-01

    The vibrational Hamiltonian involves two high dimensional operators, the kinetic energy operator (KEO), and the potential energy surface (PES). Both must be approximated for systems involving more than a few atoms. Adaptive approximation schemes are not only superior to truncated Taylor or many-body expansions (MBE), they also allow for error estimates, and thus operators of predefined precision. To this end, modified sparse grids (SG) are developed that can be combined with adaptive MBEs. This MBE/SG hybrid approach yields a unified, fully adaptive representation of the KEO and the PES. Refinement criteria, based on the vibrational self-consistent field (VSCF) and vibrational configuration interaction (VCI) methods, are presented. The combination of the adaptive MBE/SG approach and the VSCF plus VCI methods yields a black box like procedure to compute accurate vibrational spectra. This is demonstrated on a test set of molecules, comprising water, formaldehyde, methanimine, and ethylene. The test set is first employed to prove convergence for semi-empirical PM3-PESs and subsequently to compute accurate vibrational spectra from CCSD(T)-PESs that agree well with experimental values.

  2. Adaptive sparse grid expansions of the vibrational Hamiltonian

    SciTech Connect

    Strobusch, D.; Scheurer, Ch.

    2014-02-21

    The vibrational Hamiltonian involves two high dimensional operators, the kinetic energy operator (KEO), and the potential energy surface (PES). Both must be approximated for systems involving more than a few atoms. Adaptive approximation schemes are not only superior to truncated Taylor or many-body expansions (MBE), they also allow for error estimates, and thus operators of predefined precision. To this end, modified sparse grids (SG) are developed that can be combined with adaptive MBEs. This MBE/SG hybrid approach yields a unified, fully adaptive representation of the KEO and the PES. Refinement criteria, based on the vibrational self-consistent field (VSCF) and vibrational configuration interaction (VCI) methods, are presented. The combination of the adaptive MBE/SG approach and the VSCF plus VCI methods yields a black box like procedure to compute accurate vibrational spectra. This is demonstrated on a test set of molecules, comprising water, formaldehyde, methanimine, and ethylene. The test set is first employed to prove convergence for semi-empirical PM3-PESs and subsequently to compute accurate vibrational spectra from CCSD(T)-PESs that agree well with experimental values.

  3. The use of solution adaptive grids in solving partial differential equations

    NASA Technical Reports Server (NTRS)

    Anderson, D. A.; Rai, M. M.

    1982-01-01

    The grid point distribution used in solving a partial differential equation using a numerical method has a substantial influence on the quality of the solution. An adaptive grid which adjusts as the solution changes provides the best results when the number of grid points available for use during the calculation is fixed. Basic concepts used in generating and applying adaptive grids are reviewed in this paper, and examples illustrating applications of these concepts are presented.
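
    A minimal sketch of the basic redistribution idea reviewed here, using a common (and here merely illustrative) gradient-based weight: the fixed set of grid points is moved so that the weight is equidistributed, i.e. the integral of w between consecutive points is the same everywhere, which clusters points where the solution changes rapidly.

      import numpy as np

      def equidistribute(x, u, alpha=10.0):
          """Redistribute grid points x so that w = 1 + alpha*|du/dx| is
          equidistributed between neighboring points."""
          w = 1.0 + alpha * np.abs(np.gradient(u, x))
          # cumulative adapted "arc length" along the old grid (trapezoidal rule)
          s = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
          # new points: equal increments of s, mapped back to x by interpolation
          s_new = np.linspace(0.0, s[-1], len(x))
          return np.interp(s_new, s, x)

      # usage: points cluster around the steep layer at x = 0.5
      x = np.linspace(0.0, 1.0, 41)
      u = np.tanh(50.0 * (x - 0.5))
      x_new = equidistribute(x, u)
      print(np.diff(x_new).min(), np.diff(x_new).max())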

  4. An Adaptive Code for Radial Stellar Model Pulsations

    NASA Astrophysics Data System (ADS)

    Buchler, J. Robert; Kolláth, Zoltán; Marom, Ariel

    1997-09-01

    We describe an implicit 1-D adaptive mesh hydrodynamics code that is specially tailored for radial stellar pulsations. In the Lagrangian limit the code reduces to the well-tested Fraley scheme. The code has the useful feature that unwanted, long-lasting transients can be avoided by smoothly switching on the adaptive mesh features starting from the Lagrangian code. Thus, a limit cycle pulsation that can readily be computed with the relaxation method of Stellingwerf will converge in a few tens of pulsation cycles when put into the adaptive mesh code. The code has been checked with two shock problems, viz. Noh and Sedov, for which analytical solutions are known, and it has been found to be both accurate and stable. Superior results were obtained through the solution of the total energy (gravitational + kinetic + internal) equation rather than that of the internal energy only.

  5. DRAGON Grid: A Three-Dimensional Hybrid Grid Generation Code Developed

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    2000-01-01

    Because grid generation can consume 70 percent of the total analysis time for a typical three-dimensional viscous flow simulation for a practical engineering device, payoffs from research and development could reduce costs and increase throughputs considerably. In this study, researchers at the NASA Glenn Research Center at Lewis Field developed a new hybrid grid approach with the advantages of flexibility, high-quality grids suitable for an accurate resolution of viscous regions, and a low memory requirement. These advantages will, in turn, reduce analysis time and increase accuracy. They result from an innovative combination of structured and unstructured grids to represent the geometry and the computation domain. The present approach makes use of the respective strengths of both the structured and unstructured grid methods, while minimizing their weaknesses. First, the Chimera grid generates high-quality, mostly orthogonal meshes around individual components. This process is flexible and can be done easily. Normally, these individual grids are required to overlap each other so that the solution on one grid can communicate with another. However, when this communication is carried out via a nonconservative interpolation procedure, a spurious solution can result. Current research is aimed at entirely eliminating this undesired interpolation by directly replacing arbitrary grid overlapping with a nonstructured grid called a DRAGON grid, which uses the same set of conservation laws over the entire region, thus ensuring conservation everywhere. The DRAGON grid is shown for a typical film-cooled turbine vane with 33 holes and 3 plenum compartments. There are structured grids around each geometrical entity and unstructured grids connecting them. In fiscal year 1999, Glenn researchers developed and tested the three-dimensional DRAGON grid-generation tools. A flow solver suitable for the DRAGON grid has been developed, and a series of validation tests are underway.

  6. Micro Benchmarking, Performance Assertions and Sensitivity Analysis: A Technique for Developing Adaptive Grid Applications

    SciTech Connect

    Corey, I R; Johnson, J R; Vetter, J S

    2002-02-25

    This study presents a technique that can significantly improve the performance of a distributed application by allowing the application to locally adapt to architectural characteristics of distinct resources in a distributed system. Application performance is sensitive to application parameter--system architecture pairings. In a distributed or Grid-enabled application, a single parameter configuration for the whole application will not always be optimal for every participating resource. In particular, some configurations can significantly degrade performance. Furthermore, the behavior of a system may change during the course of the run. The technique described here provides an automated mechanism for run-time adaptation of application parameters to the local system architecture. Using a simulation of a Monte Carlo physics code, the authors demonstrate that this technique can achieve speedups of 18%-37% on individual resources in a distributed environment.
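
    The run-time adaptation mechanism described here reduces, at its core, to micro-benchmarking a few candidate parameter values on the local resource and keeping the fastest, so each node of a heterogeneous Grid can choose its own configuration. The kernel and the candidate block sizes below are hypothetical stand-ins, not the authors' Monte Carlo application.

      import time
      import numpy as np

      def autotune(kernel, candidates, repeats=3):
          """Time kernel(param) for each candidate and return the fastest param."""
          timings = {}
          for p in candidates:
              t0 = time.perf_counter()
              for _ in range(repeats):
                  kernel(p)
              timings[p] = (time.perf_counter() - t0) / repeats
          return min(timings, key=timings.get), timings

      # hypothetical kernel whose best block size depends on the local cache sizes
      def blocked_sum(block):
          a = np.arange(2_000_000, dtype=np.float64)
          return sum(a[i:i + block].sum() for i in range(0, a.size, block))

      best, times = autotune(blocked_sum, [1_000, 10_000, 100_000])
      print("best block size on this resource:", best)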

  7. TDIGG - TWO-DIMENSIONAL, INTERACTIVE GRID GENERATION CODE

    NASA Technical Reports Server (NTRS)

    Vu, B. T.

    1994-01-01

    TDIGG is a fast and versatile program for generating two-dimensional computational grids for use with finite-difference flow-solvers. Both algebraic and elliptic grid generation systems are included. The method for grid generation by algebraic transformation is based on an interpolation algorithm and the elliptic grid generation is established by solving the partial differential equation (PDE). Non-uniform grid distributions are carried out using a hyperbolic tangent stretching function. For algebraic grid systems, interpolations in one direction (univariate) and two directions (bivariate) are considered. These interpolations are associated with linear or cubic Lagrangian/Hermite/Bezier polynomial functions. The algebraic grids can subsequently be smoothed using an elliptic solver. For elliptic grid systems, the PDE can be in the form of Laplace (zero forcing function) or Poisson. The forcing functions in the Poisson equation come from the boundary or the entire domain of the initial algebraic grids. A graphics interface procedure using the Silicon Graphics (GL) Library is included to allow users to visualize the grid variations at each iteration. This will allow users to interactively modify the grid to match their applications. TDIGG is written in FORTRAN 77 for Silicon Graphics IRIS series computers running IRIX. This package requires either MIT's X Window System, Version 11 Revision 4 or SGI (Motif) Window System. A sample executable is provided on the distribution medium. It requires 148K of RAM for execution. The standard distribution medium is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. This program was developed in 1992.
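
    The hyperbolic-tangent stretching mentioned in the record can be written down in a few lines; the one-sided form below (points clustered toward the xi = 0 end) and the clustering parameter beta are a common textbook choice used here for illustration, not necessarily TDIGG's exact formulation.

      import numpy as np

      def tanh_stretch(n, beta=2.5):
          """Non-uniform distribution of n points on [0, 1], clustered near 0.

          beta > 0 controls the clustering strength; beta -> 0 tends to
          uniform spacing."""
          xi = np.linspace(0.0, 1.0, n)         # uniform computational coordinate
          return 1.0 + np.tanh(beta * (xi - 1.0)) / np.tanh(beta)

      x = tanh_stretch(21)
      print(x[:3], x[-3:])    # small spacings near 0, larger spacings near 1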

  8. Anisotropic Solution Adaptive Unstructured Grid Generation Using AFLR

    NASA Technical Reports Server (NTRS)

    Marcum, David L.

    2007-01-01

    An existing volume grid generation procedure, AFLR3, was successfully modified to generate anisotropic tetrahedral elements using a directional metric transformation defined at source nodes. The procedure can be coupled with a solver and an error estimator as part of an overall anisotropic solution adaptation methodology. It is suitable for use with an error estimator based on an adjoint, optimization, sensitivity derivative, or related approach. This offers many advantages, including more efficient point placement along with robust and efficient error estimation. It also serves as a framework for true grid optimization wherein error estimation and computational resources can be used as cost functions to determine the optimal point distribution. Within AFLR3 the metric transformation is implemented using a set of transformation vectors and associated aspect ratios. The modified overall procedure is presented along with details of the anisotropic transformation implementation. Multiple two- and three-dimensional examples are also presented that demonstrate the capability of the modified AFLR procedure to generate anisotropic elements using a set of source nodes with anisotropic transformation metrics. The example cases presented use moderate levels of anisotropy and result in usable element quality. Future testing with various flow solvers and methods for obtaining transformation metric information is needed to determine practical limits and evaluate the efficacy of the overall approach.

  9. Results of investigation of adaptive speech codes

    NASA Astrophysics Data System (ADS)

    Nekhayev, A. L.; Pertseva, V. A.; Sitnyakovskiy, I. V.

    1984-06-01

    A search for ways of increasing the effectiveness of speech signals in digital form led to the appearance of various methods of encoding that reduce the redundancy of specific properties of the speech signal. It is customary to divide speech codes into two large classes: codes of signal parameters (or vocoders) and codes of the signal form (CSF). In telephony, preference is given to the second class of systems, which maintains naturalness of sound. The class of CSF expanded considerably because of the development of codes based on the frequency representation of a signal. The greatest interest is given to such methods of encoding as pulse-code modulation (PCM), differential PCM (DPCM), and delta modulation (DM). However, developers of digital transmission systems find it difficult to compile a complete picture of the applicability of specific types of codes. The best known versions of the codes are evaluated by means of subjective-statistical measurements of their characteristics. The results obtained help developers to draw conclusions regarding the applicability of the codes considered in various communication systems.

  10. Volumetric Rendering of Geophysical Data on Adaptive Wavelet Grid

    NASA Astrophysics Data System (ADS)

    Vezolainen, A.; Erlebacher, G.; Vasilyev, O.; Yuen, D. A.

    2005-12-01

    Numerical modeling of geological phenomena frequently involves processes across a wide range of spatial and temporal scales. In the last several years, transport phenomena governed by the Navier-Stokes equations have been simulated in wavelet space using second generation wavelets [1], and most recently on fully adaptive meshes. Our objective is to visualize this time-dependent data using volume rendering while capitalizing on the available sparse data representation. We present a technique for volumetric ray casting of multi-scale datasets in wavelet space. Rather than working with the wavelets at the finest possible resolution, we perform a partial inverse wavelet transform as a preprocessing step to obtain scaling functions on a uniform grid at a user-prescribed resolution. As a result, a function in physical space is represented by a superposition of scaling functions on a coarse regular grid and wavelets on an adaptive mesh. An efficient and accurate ray casting algorithm is based just on these scaling functions. Additional detail is added during the ray tracing by taking an appropriate number of wavelets into account based on support overlap with the interpolation point, wavelet amplitude, and other characteristics, such as opacity accumulation (front to back ordering) and deviation from frontal viewing direction. Strategies for hardware implementation will be presented if available, inspired by the work in [2]. We will present error measures as a function of the number of scaling and wavelet functions used for interpolation. Data from mantle convection will be used to illustrate the method. [1] Vasilyev, O.V. and Bowman, C., Second Generation Wavelet Collocation Method for the Solution of Partial Differential Equations. J. Comp. Phys., 165, pp. 660-693, 2000. [2] Guthe, S., Wand, M., Gonser, J., and Straßer, W. Interactive rendering of large volume data sets. In Proceedings of the Conference on Visualization '02 (Boston, Massachusetts, October 27 - November

  11. A wavelet-optimized, very high order adaptive grid and order numerical method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data followed by differentiation of this polynomial and finally evaluation of the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator construction. Next, the issue of proper grids for high order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid. That is, at each level of refinement the grid is a Chebyshev grid and this grid is refined locally based on wavelet analysis.
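
    A minimal sketch of the two ingredients combined here, using numpy's polynomial utilities: Chebyshev-Gauss-Lobatto points as the base grid, and a differencing operator obtained by interpolating an algebraic polynomial through a local stencil and differentiating it. The stencil width, the polynomial order, and the test function are illustrative choices.

      import numpy as np

      def chebyshev_points(n):
          """Chebyshev-Gauss-Lobatto points on [-1, 1]."""
          return np.cos(np.pi * np.arange(n + 1) / n)

      def poly_derivative(xs, us, x0, order=4):
          """Approximate du/dx at x0: fit a degree-`order` polynomial through the
          order+1 grid points nearest to x0, then differentiate and evaluate it."""
          idx = np.argsort(np.abs(xs - x0))[:order + 1]
          coeffs = np.polyfit(xs[idx], us[idx], order)
          return np.polyval(np.polyder(coeffs), x0)

      x = chebyshev_points(32)
      u = np.exp(np.sin(np.pi * x))
      exact = np.pi * np.cos(np.pi * 0.3) * np.exp(np.sin(np.pi * 0.3))
      print(abs(poly_derivative(x, u, 0.3) - exact))   # small interpolation error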

  12. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.

  13. Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis

    SciTech Connect

    Perkó, Zoltán Gilli, Luca Lathouwers, Danny Kloosterman, Jan Leen

    2014-03-01

    The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time it retains a similar accuracy as the original method. More importantly the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to further reduction in computational time since the high order grids necessary for accurately estimating the near zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistent good performance both

  14. The PLUTO Code for Adaptive Mesh Computations in Astrophysical Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Mignone, A.; Zanni, C.; Tzeferacos, P.; van Straalen, B.; Colella, P.; Bodo, G.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  15. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    SciTech Connect

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  16. Three-dimensional adaptive grid generation for body-fitted coordinate system

    NASA Astrophysics Data System (ADS)

    Chen, S. C.

    1988-08-01

    This report describes a numerical method for generating 3-D grids for general configurations. The basic method involves the solution of a set of quasi-linear elliptic partial differential equations via pointwise relaxation with a local relaxation factor. It allows specification of the grid spacing off the boundary surfaces and the grid orthogonality at the boundary surfaces. It includes adaptive mechanisms to improve smoothness, orthogonality, and flow resolution in the grid interior.
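
    The record's basic mechanism, relaxing interior grid-point coordinates toward the solution of elliptic equations while the boundary points stay fixed, can be sketched in 2-D; the version below is simplified to plain Laplacian smoothing of the physical coordinates by point-Jacobi sweeps, without the report's quasi-linear system, orthogonality control, or spacing control. The bumped-channel boundary and the sweep count are illustrative.

      import numpy as np

      def laplace_grid(xb, yb, sweeps=500):
          """Relax interior grid-point coordinates, keeping boundary points fixed.

          xb, yb : (nj, ni) arrays whose boundary rows/columns hold the desired
                   boundary locations; interior values are the initial guess."""
          x, y = xb.copy(), yb.copy()
          for _ in range(sweeps):       # point-Jacobi averaging of the 4 neighbors
              x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                      + x[1:-1, 2:] + x[1:-1, :-2])
              y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                      + y[1:-1, 2:] + y[1:-1, :-2])
          return x, y

      # usage: a channel with a bumped lower wall, started from a sheared
      # algebraic (interpolated) grid
      ni, nj = 41, 21
      s = np.linspace(0.0, 1.0, ni)
      lower = 0.15 * np.exp(-40.0 * (s - 0.5) ** 2)       # bump on the lower wall
      eta = np.linspace(0.0, 1.0, nj)[:, None]
      x0 = np.tile(s, (nj, 1))
      y0 = lower[None, :] * (1.0 - eta) + 1.0 * eta
      x, y = laplace_grid(x0, y0)
      print(float(y[5, ni // 2]))       # interior point smoothed above the bump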

  17. Three-dimensional adaptive grid generation for body-fitted coordinate system

    NASA Astrophysics Data System (ADS)

    Chen, S. C.

    This report describes a numerical method for generating 3-D grids for general configurations. The basic method involves the solution of a set of quasi-linear elliptic partial differential equations via pointwise relaxation with a local relaxation factor. It allows specification of the grid spacing off the boundary surfaces and the grid orthogonality at the boundary surfaces. It includes adaptive mechanisms to improve smoothness, orthogonality, and flow resolution in the grid interior.

  18. Adaptive Coding and Modulation Scheme for Ka Band Space Communications

    NASA Astrophysics Data System (ADS)

    Lee, Jaeyoon; Yoon, Dongweon; Lee, Wooju

    2010-06-01

    Rain attenuation can seriously reduce the availability of a space communication link in the Ka band. To reduce the effect of rain attenuation on the error performance of space communications in the Ka band, an adaptive coding and modulation (ACM) scheme is required. In this paper, to achieve reliable telemetry data transmission, we propose an adaptive coding and modulation scheme using the turbo code recommended by the Consultative Committee for Space Data Systems (CCSDS) and various modulation methods (QPSK, 8PSK, 4+12 APSK, and 4+12+16 APSK) adopted in the second-generation Digital Video Broadcasting standard for satellite (DVB-S2).
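
    The selection logic at the heart of any such ACM scheme is simple to sketch: estimate the link Es/N0 after rain attenuation and pick the highest-throughput modulation/coding pair whose required Es/N0 is still met, falling back to the most robust mode otherwise. The threshold table below is a placeholder for illustration and does not reproduce the CCSDS turbo-code or DVB-S2 operating points.

      # (modulation, code rate, spectral efficiency [bit/s/Hz], required Es/N0 [dB])
      # thresholds are illustrative placeholders, not standardized operating points
      MODCODS = [
          ("QPSK",         1 / 2, 1.0,  1.0),
          ("QPSK",         3 / 4, 1.5,  4.0),
          ("8PSK",         2 / 3, 2.0,  6.6),
          ("4+12 APSK",    3 / 4, 3.0, 10.2),
          ("4+12+16 APSK", 3 / 4, 3.7, 13.6),
      ]

      def select_modcod(clear_sky_esn0_db, rain_attenuation_db, margin_db=1.0):
          """Return the most efficient mode whose required Es/N0 is satisfied."""
          esn0 = clear_sky_esn0_db - rain_attenuation_db - margin_db
          feasible = [m for m in MODCODS if m[3] <= esn0]
          return max(feasible, key=lambda m: m[2]) if feasible else MODCODS[0]

      print(select_modcod(15.0, rain_attenuation_db=0.0))   # clear sky: densest mode
      print(select_modcod(15.0, rain_attenuation_db=9.0))   # heavy rain: falls back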

  19. Numerical Simulation of Two-grid Ion Optics Using a 3D Code

    NASA Technical Reports Server (NTRS)

    Anderson, John R.; Katz, Ira; Goebel, Dan

    2004-01-01

    A three-dimensional ion optics code has been developed under NASA's Project Prometheus to model two grid ion optics systems. The code computes the flow of positive ions from the discharge chamber through the ion optics and into the beam downstream of the thruster. The rate at which beam ions interact with background neutral gas to form charge exchange ions is also computed. Charge exchange ion trajectories are computed to determine where they strike the ion optics grid surfaces and to determine the extent of sputter erosion they cause. The code has been used to compute predictions of the erosion pattern and wear rate on the NSTAR ion optics system; the code predicts the shape of the eroded pattern but overestimates the initial wear rate by about 50%. An example of use of the code to estimate the NEXIS thruster accelerator grid life is also presented.

  20. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  1. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  2. Analysis of a Major Electric Grid -- Stability and Adaptive Protection

    NASA Astrophysics Data System (ADS)

    Alanzi, Sultan

Protective systems of the electric grid are designed to detect and mitigate the effects of faults and other disturbances that may occur. Distance relays are used extensively for the detection of faults on transmission lines. Out-of-step relays are used for generator protection to detect loss of synchronism conditions that result from disturbances on the electric grid. Also, when a disturbance occurs and generators tend to lose synchronism with each other, it is beneficial to separate the overall system into several independent systems that can remain stable. Unfortunately there have been cases, such as the 2003 Northeast blackout, where the operation of protective relays, namely the zone 3 distance relay used for transmission line protection, contributed to the cascading effect of the blackout. It is the objective of this dissertation to propose adaptive relays for both distance protection of transmission lines and out-of-step protection of generators. By being adaptive, the relays are made aware of the system operating conditions and can adjust their settings accordingly. Inputs to the adaptive logic can come from system or environmental conditions. As a result of this effort, a new distance relay operating characteristic is proposed, referred to as a mushroom relay, which is a combination of a quadrilateral relay and a Mho relay. Also, a new criterion for determining if a power swing following a disturbance is stable or unstable is proposed. Distance protection of transmission lines is very important when discussing system responses to faults and disturbances. Distance relays are very common worldwide and although they offer great protection, there are limitations that need to be addressed. Parallel line operations (infeed effect) and the loadability limits are among the limitations that lead to improper relay response. Adaptive Distance Relays (ADRs) offer great benefits to the protection scheme as their settings can be changed in accordance with prefault
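
    The dissertation's exact settings logic is not given in the abstract; the sketch below only illustrates the general idea of a combined ("mushroom") characteristic, modelled here as the union of a mho circle and a simple quadrilateral region in the R-X plane, with the reach scaled by a hypothetical load-dependent factor. All function names and numbers are illustrative.

    ```python
    # Sketch of a combined distance-relay characteristic: trip if the measured
    # apparent impedance Z = R + jX falls inside either a mho circle or a simple
    # quadrilateral zone.  The adaptive part is modelled only by scaling the
    # reach with a load-dependent factor; the dissertation's actual logic is not
    # reproduced here.

    def in_mho(z: complex, reach: complex) -> bool:
        # mho circle through the origin with diameter along the reach impedance
        center = reach / 2.0
        return abs(z - center) <= abs(center)

    def in_quadrilateral(z: complex, r_max: float, x_max: float) -> bool:
        # rectangular (quadrilateral) zone in the first quadrant of the R-X plane
        return 0.0 <= z.real <= r_max and 0.0 <= z.imag <= x_max

    def mushroom_trip(z: complex, line_impedance: complex, load_factor: float = 1.0) -> bool:
        reach = 0.8 * line_impedance * load_factor     # adaptively scaled zone-1 reach (illustrative)
        return in_mho(z, reach) or in_quadrilateral(z, r_max=reach.real, x_max=reach.imag)

    # example: fault partway down a line of impedance 2 + 10j ohms
    print(mushroom_trip(z=0.9 + 4.5j, line_impedance=2 + 10j))
    ```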

  4. Adaptive data management in the ARC Grid middleware

    NASA Astrophysics Data System (ADS)

    Cameron, D.; Gholami, A.; Karpenko, D.; Konstantinov, A.

    2011-12-01

    The Advanced Resource Connector (ARC) Grid middleware was designed almost 10 years ago, and has proven to be an attractive distributed computing solution and successful in adapting to new data management and storage technologies. However, with an ever-increasing user base and scale of resources to manage, along with the introduction of more advanced data transfer protocols, some limitations in the current architecture have become apparent. The simple first-in first-out approach to data transfer leads to bottlenecks in the system, as does the built-in assumption that all data is immediately available from remote data storage. We present an entirely new data management architecture for ARC which aims to alleviate these problems, by introducing a three-layer structure. The top layer accepts incoming requests for data transfer and directs them to the middle layer, which schedules individual transfers and negotiates with various intermediate catalog and storage systems until the physical file is ready to be transferred. The lower layer performs all operations which use large amounts of bandwidth, i.e. the physical data transfer. Using such a layered structure allows more efficient use of the available bandwidth as well as enabling late-binding of jobs to data transfer slots based on a priority system. Here we describe in full detail the design and implementation of the new system.
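
    A minimal sketch of the three-layer idea described above, with invented class and method names (this is not the ARC API): an acceptance layer queues requests, a scheduling layer orders them by priority and late-binds them to transfer slots once a physical replica is resolved, and a delivery layer performs the bandwidth-heavy transfer.

    ```python
    import heapq

    # Three-layer data-management sketch in the spirit of the architecture above.
    # All names are illustrative; this is not the ARC implementation.

    class DataStaging:
        def __init__(self):
            self._queue, self._seq = [], 0

        def accept(self, logical_name: str, priority: int):
            """Top layer: accept an incoming transfer request."""
            heapq.heappush(self._queue, (priority, self._seq, logical_name))
            self._seq += 1

        def schedule(self, resolve, transfer, slots: int):
            """Middle layer: late-bind the highest-priority ready requests to transfer slots."""
            deferred = []
            while slots > 0 and self._queue:
                priority, _, name = heapq.heappop(self._queue)
                physical_url = resolve(name)        # catalog/storage negotiation
                if physical_url is None:            # not yet available from remote storage
                    deferred.append((name, priority))
                    continue
                transfer(physical_url)              # lower layer: physical data movement
                slots -= 1
            for name, priority in deferred:         # keep unready requests for a later pass
                self.accept(name, priority)

    stager = DataStaging()
    stager.accept("lfn://example/file1", priority=0)
    stager.accept("lfn://example/file2", priority=5)
    stager.schedule(resolve=lambda n: n.replace("lfn", "gsiftp"),
                    transfer=lambda url: print("transferring", url),
                    slots=1)
    ```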

  5. Adaptive-grid methods for time-dependent partial differential equations

    SciTech Connect

    Hedstrom, G.W.; Rodrique, G.H.

    1981-01-01

    This paper contains a survey of recent developments of adaptive-grid algorithms for time-dependent partial differential equations. Two lines of research are discussed. One involves the automatic selection of moving grids to follow propagating waves. The other is based on stationary grids but uses local mesh refinement in both space and time. Advantages and disadvantages of both approaches are discussed. The development of adaptive-grid schemes shows promise of greatly increasing our ability to solve problems in several spatial dimensions.

  6. Generation and adaptation of 3-D unstructured grids for transient problems

    NASA Technical Reports Server (NTRS)

    Loehner, Rainald

    1990-01-01

Grid generation and adaptive refinement techniques suitable for the simulation of strongly unsteady flows past geometrically complex bodies in 3-D are described. The grids are generated using the advancing front technique. Emphasis is placed on not generating elements that are too small, as this would severely increase the cost of simulations with explicit flow solvers. The grids are adapted to an evolving flowfield using simple h-refinement. A grid change is performed every 5 to 10 timesteps, and only one level of refinement/coarsening is allowed per mesh change.
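
    A one-dimensional stand-in for the adaption cycle described above is sketched below; the solution-jump indicator, the thresholds, and the every-5-steps cadence are illustrative assumptions, and the actual method operates on 3-D tetrahedral grids built with the advancing front technique.

    ```python
    import numpy as np

    # 1-D stand-in for the adaption cycle: every few time steps the mesh is
    # refined or coarsened by at most one level wherever a solution-jump
    # indicator crosses a threshold.  Thresholds and cadence are illustrative.

    def adapt_once(x, u, refine_tol=0.2, coarsen_tol=0.02, h_min=1e-3):
        """One adaption pass: refine/coarsen each cell by at most one level."""
        new_x, i = [x[0]], 0
        while i < len(x) - 1:
            jump, h = abs(u[i + 1] - u[i]), x[i + 1] - x[i]
            if jump > refine_tol and h > 2 * h_min:                 # refine: split the cell once
                new_x += [0.5 * (x[i] + x[i + 1]), x[i + 1]]
                i += 1
            elif (i + 2 < len(x) and jump < coarsen_tol
                  and abs(u[i + 2] - u[i + 1]) < coarsen_tol):      # coarsen: merge two quiet cells
                new_x.append(x[i + 2])
                i += 2
            else:
                new_x.append(x[i + 1])
                i += 1
        return np.array(new_x)

    x = np.linspace(0.0, 1.0, 41)
    for step in range(1, 51):
        u = np.tanh(40.0 * (x - 0.02 * step))      # mock solution: a front sweeping across the domain
        if step % 5 == 0:                          # grid change every 5 to 10 time steps
            x = adapt_once(x, u)
    print(len(x), "points after the final adaption")
    ```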

  7. Adaptive Modulation and Coding for LTE Wireless Communication

    NASA Astrophysics Data System (ADS)

    Hadi, S. S.; Tiong, T. C.

    2015-04-01

Long Term Evolution (LTE) is the new upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE aims to become the first global mobile phone standard despite the barrier posed by the different LTE frequencies and bands used in different countries. Adaptive Modulation and Coding (AMC) is used to increase the network capacity or downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for a 4×4 MIMO antenna configuration are studied. With channel state information fed back from the mobile receiver to the base station transmitter, adaptive modulation and coding can be applied to adapt to the mobile wireless channel conditions, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose the modulation type and forward error correction (FEC) coding rate.

  8. Automated Grid Disruption Response System: Robust Adaptive Topology Control (RATC)

    SciTech Connect

    2012-03-01

    GENI Project: The RATC research team is using topology control as a mechanism to improve system operations and manage disruptions within the electric grid. The grid is subject to interruption from cascading faults caused by extreme operating conditions, malicious external attacks, and intermittent electricity generation from renewable energy sources. The RATC system is capable of detecting, classifying, and responding to grid disturbances by reconfiguring the grid in order to maintain economically efficient operations while guaranteeing reliability. The RATC system would help prevent future power outages, which account for roughly $80 billion in losses for businesses and consumers each year. Minimizing the time it takes for the grid to respond to expensive interruptions will also make it easier to integrate intermittent renewable energy sources into the grid.

  9. Enhancement of surface definition and gridding in the EAGLE code

    NASA Technical Reports Server (NTRS)

    Thompson, Joe F.

    1991-01-01

    Algorithms for smoothing of curves and surfaces for the EAGLE grid generation program are presented. The method uses an existing automated technique which detects undesirable geometric characteristics by using a local fairness criterion. The geometry entity is then smoothed by repeated removal and insertion of spline knots in the vicinity of the geometric irregularity. The smoothing algorithm is formulated for use with curves in Beta spline form and tensor product B-spline surfaces.

  10. Adaptive error correction codes for face identification

    NASA Astrophysics Data System (ADS)

    Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.

    2012-06-01

Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performances. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECCs) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from different image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode each window with an appropriate ECC. The proposed approach is tested for binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs results in significantly improved recognition performance.
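
    The block-specific BCH codes of the paper are not reproduced here; the sketch below uses a simple 3-fold repetition code as a stand-in to show the encode/corrupt/correct flow on a binary template, under the assumption that correcting a bounded number of bit flips per code block is what restores the match.

    ```python
    import numpy as np

    # Error-correction flow for binary biometric templates, sketched with a
    # 3-fold repetition code in place of the block-specific BCH codes used in
    # the paper.  The idea is the same: tolerate a bounded number of bit flips
    # so that intra-class variation no longer breaks the match.

    def encode(bits, n_rep=3):
        return np.repeat(bits, n_rep)

    def decode(noisy, n_rep=3):
        return (noisy.reshape(-1, n_rep).sum(axis=1) > n_rep // 2).astype(np.uint8)  # majority vote

    rng = np.random.default_rng(0)
    template = rng.integers(0, 2, 128, dtype=np.uint8)          # enrolled binary face template
    codeword = encode(template)

    probe = codeword.copy()
    flips = rng.choice(codeword.size, size=20, replace=False)   # acquisition noise (illumination, pose, ...)
    probe[flips] ^= 1

    recovered = decode(probe)
    print("residual Hamming distance after correction:", int(np.count_nonzero(recovered ^ template)))
    ```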

  11. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.

  12. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  13. A multigrid method for steady Euler equations on unstructured adaptive grids

    NASA Technical Reports Server (NTRS)

    Riemslagh, Kris; Dick, Erik

    1993-01-01

A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction performed only on the finest grid is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured, a Jacobi-type smoother is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removal. The adaption cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.
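
    As a minimal illustration of the defect-correction form (a stable first-order operator drives the iteration while the second-order operator supplies the residual), the sketch below applies it to one-dimensional steady advection-diffusion; the multigrid cycling and multi-stage Jacobi smoothing on unstructured triangles are not reproduced, and the problem parameters are illustrative.

    ```python
    import numpy as np

    # Defect-correction iteration on 1-D steady advection-diffusion
    #   a u_x - nu u_xx = 1,  u(0) = u(1) = 0.
    # The stable first-order upwind operator L1 is solved each cycle, while the
    # second-order central operator L2 only supplies the residual.
    n, a, nu = 64, 1.0, 0.01
    h = 1.0 / (n + 1)
    f = np.ones(n)

    def build(scheme):
        A = np.zeros((n, n))
        for i in range(n):
            if scheme == "upwind":      # L1: first-order accurate, diagonally dominant
                conv = (-a / h, a / h, 0.0)
            else:                       # L2: second-order central convection
                conv = (-a / (2 * h), 0.0, a / (2 * h))
            diff = (-nu / h**2, 2 * nu / h**2, -nu / h**2)
            if i > 0:
                A[i, i - 1] = conv[0] + diff[0]
            A[i, i] = conv[1] + diff[1]
            if i < n - 1:
                A[i, i + 1] = conv[2] + diff[2]
        return A

    L1, L2 = build("upwind"), build("central")
    u = np.zeros(n)
    for _ in range(20):
        r = f - L2 @ u                      # residual of the high-order discretization
        u = u + np.linalg.solve(L1, r)      # correction from the low-order operator
    print("high-order residual norm after 20 cycles:", np.linalg.norm(f - L2 @ u))
    ```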

  14. Simulation of the dispersion of nuclear contamination using an adaptive Eulerian grid model.

    PubMed

    Lagzi, I; Kármán, D; Turányi, T; Tomlin, A S; Haszpra, L

    2004-01-01

Application of an Eulerian model using layered adaptive unstructured grids coupled to a meso-scale meteorological model is presented for modelling the dispersion of nuclear contamination following an accidental release from a single but strong source to the atmosphere. The model automatically places a finer resolution grid, adaptively in time, in regions where high spatial numerical error is expected. The high-resolution grid region follows the movement of the contaminated air over time. Using this method, grid resolutions of the order of 6 km can be achieved in a computationally effective way. The concept is illustrated by the simulation of hypothetical nuclear accidents at the Paks NPP, in Central Hungary. The paper demonstrates that the adaptive model can achieve accuracy comparable to that of a high-resolution Eulerian model using significantly fewer grid points and less computer simulation time. PMID:15149762

  15. A trellis-searched APC (adaptive predictive coding) speech coder

    SciTech Connect

Malone, K.T.; Fischer, T.R. (Dept. of Electrical and Computer Engineering)

    1990-01-01

In this paper we formulate a speech coding system that incorporates trellis coded vector quantization (TCVQ) and adaptive predictive coding (APC). A method for "optimizing" the TCVQ codebooks is presented and experimental results concerning survivor path mergings are reported. Simulation results are given for encoding rates of 16 and 9.6 kbps for a variety of coder parameters. The quality of the encoded speech is deemed excellent at an encoding rate of 16 kbps and very good at 9.6 kbps. 13 refs., 2 figs., 4 tabs.

  16. SIMULATION OF DISPERSION OF A POWER PLANT PLUME USING AN ADAPTIVE GRID ALGORITHM

    EPA Science Inventory

A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher-order numerical scheme, the piecewise parabolic method (PPM), for computing advective solution fields; a weight function capable of promoting grid node clustering ...

  17. Adaptive Prediction Error Coding in the Human Midbrain and Striatum Facilitates Behavioral Adaptation and Learning Efficiency.

    PubMed

    Diederen, Kelly M J; Spencer, Tom; Vestergaard, Martin D; Fletcher, Paul C; Schultz, Wolfram

    2016-06-01

    Effective error-driven learning benefits from scaling of prediction errors to reward variability. Such behavioral adaptation may be facilitated by neurons coding prediction errors relative to the standard deviation (SD) of reward distributions. To investigate this hypothesis, we required participants to predict the magnitude of upcoming reward drawn from distributions with different SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. In line with the notion of adaptive coding, BOLD response slopes in the Substantia Nigra/Ventral Tegmental Area (SN/VTA) and ventral striatum were steeper for prediction errors occurring in distributions with smaller SDs. SN/VTA adaptation was not instantaneous but developed across trials. Adaptive prediction error coding was paralleled by behavioral adaptation, as reflected by SD-dependent changes in learning rate. Crucially, increased SN/VTA and ventral striatal adaptation was related to improved task performance. These results suggest that adaptive coding facilitates behavioral adaptation and supports efficient learning. PMID:27181060

  18. An efficient second-order accurate and continuous interpolation for block-adaptive grids

    NASA Astrophysics Data System (ADS)

    Borovikov, Dmitry; Sokolov, Igor V.; Tóth, Gábor

    2015-09-01

    In this paper we present a second-order and continuous interpolation algorithm for cell-centered adaptive-mesh-refinement (AMR) grids. Continuity requirement poses a non-trivial problem at resolution changes. We develop a classification of the resolution changes, which allows us to employ efficient and simple linear interpolation in the majority of the computational domain. The algorithm is well suited for massively parallel computations. Our interpolation algorithm allows extracting jump-free interpolated data distribution along lines and surfaces within the computational domain. This capability is important for various applications, including kinetic particles tracking in three dimensional vector fields, visualization (i.e. surface extraction) and extracting variables along one-dimensional curves such as field lines, streamlines and satellite trajectories, etc. Particular examples are models for acceleration of solar energetic particles (SEPs) along magnetic field-lines. As such models are sensitive to sharp gradients and discontinuities the capability to interpolate the data from the AMR grid to be passed to the SEP model without producing false gradients numerically becomes crucial. We provide a complete description of the algorithm and make the code publicly available as a Fortran 90 library.

  19. Simulations of implosions with a 3D, parallel, unstructured-grid, radiation-hydrodynamics code

    SciTech Connect

    Kaiser, T B; Milovich, J L; Prasad, M K; Rathkopf, J; Shestakov, A I

    1998-12-28

    An unstructured-grid, radiation-hydrodynamics code is used to simulate implosions. Although most of the problems are spherically symmetric, they are run on 3D, unstructured grids in order to test the code's ability to maintain spherical symmetry of the converging waves. Three problems, of increasing complexity, are presented. In the first, a cold, spherical, ideal gas bubble is imploded by an enclosing high pressure source. For the second, we add non-linear heat conduction and drive the implosion with twelve laser beams centered on the vertices of an icosahedron. In the third problem, a NIF capsule is driven with a Planckian radiation source.

  20. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder codes any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
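
    A sketch of the threshold-driven coder selection is given below, with per-block DCT coefficient truncation standing in for the vector-quantized DCT coders of the paper; the progressive-transmission extension is omitted, and the block size and distortion threshold are illustrative.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    # Threshold-driven coder selection sketch: each 8x8 block is first coded with
    # a cheap DCT coder (few retained coefficients); if the reconstruction
    # distortion exceeds a threshold, a richer coder is tried instead.

    def dct_coder(block, keep):
        coeffs = dctn(block, norm="ortho")
        mask = np.add.outer(np.arange(8), np.arange(8)) < keep    # keep a low-frequency wedge
        return idctn(coeffs * mask, norm="ortho")

    def mbc_encode_block(block, threshold=25.0):
        for keep, name in ((2, "low-rate"), (4, "mid-rate"), (8, "high-rate")):
            rec = dct_coder(block, keep)
            if np.mean((rec - block) ** 2) <= threshold:
                return name, rec
        return name, rec                                          # fall through: keep the richest coder

    rng = np.random.default_rng(1)
    yy, xx = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
    image = 128.0 + 40.0 * np.sin(xx / 10.0) + rng.normal(0.0, 3.0, (64, 64))
    choices = [mbc_encode_block(image[i:i + 8, j:j + 8])[0]
               for i in range(0, 64, 8) for j in range(0, 64, 8)]
    print({name: choices.count(name) for name in sorted(set(choices))})
    ```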

  1. STELLA: A domain-specific embedded language for stencil codes on structured grids

    NASA Astrophysics Data System (ADS)

    Gysi, Tobias; Fuhrer, Oliver; Osuna, Carlos; Cumming, Benjamin; Schulthess, Thomas

    2014-05-01

Adapting regional weather and climate models (RCMs) for hybrid many-core computing architectures is a formidable challenge. Achieving high performance on different supercomputing architectures while retaining a single source code are often perceived as contradictory goals. Typically, the numerical algorithms employed are tightly intertwined with hardware-dependent implementation choices and optimizations such as, for example, data structures and loop order. While Fortran is currently the de-facto standard for programming RCMs, no single standard for porting such models to graphics processing units (GPUs) has yet emerged. The approaches used can be grouped into three main categories: compiler directives (OpenACC, PGI compiler directives), custom programming languages (CUDA, OpenCL), and domain-specific libraries or languages. STELLA (STencil Loop LAnguage) is a domain-specific embedded language (DSEL) built using generic programming in C++ which is targeted at stencil codes on structured grids. It allows a high-level specification of the algorithm while separating hardware-dependent implementation details into back-ends. Currently, a back-end for multi-core CPUs using the OpenMP programming model and a back-end for NVIDIA GPUs using the CUDA programming model have been developed. We will present the domain-specific language and its features such as software-managed caching. Using the example of an implementation of the dynamical core of an RCM (COSMO), we will compare performance with respect to the original Fortran implementation on both CPUs and GPUs. Finally, we will discuss advantages and disadvantages of our approach as compared to other approaches such as source-to-source translators.
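
    STELLA itself is a C++ DSEL and its API is not reproduced here; the toy sketch below only illustrates the underlying idea of writing a stencil once and letting interchangeable back-ends execute it, using a vectorized and a loop-based backend as stand-ins for the OpenMP and CUDA back-ends.

    ```python
    import numpy as np

    # Toy illustration of separating a stencil's specification from its backend.
    # The update is written once (a 2-D 5-point Laplacian smoother) and the
    # backend decides how to execute it.

    def laplacian_stencil(center, north, south, east, west):
        return center + 0.25 * (north + south + east + west - 4.0 * center)

    def apply_numpy(stencil, field):
        """'Backend' 1: vectorized numpy execution over the interior points."""
        out = field.copy()
        out[1:-1, 1:-1] = stencil(field[1:-1, 1:-1],
                                  field[:-2, 1:-1], field[2:, 1:-1],
                                  field[1:-1, 2:],  field[1:-1, :-2])
        return out

    def apply_loops(stencil, field):
        """'Backend' 2: explicit loops (stand-in for a different architecture)."""
        out = field.copy()
        for i in range(1, field.shape[0] - 1):
            for j in range(1, field.shape[1] - 1):
                out[i, j] = stencil(field[i, j], field[i - 1, j], field[i + 1, j],
                                    field[i, j + 1], field[i, j - 1])
        return out

    field = np.random.default_rng(2).random((32, 32))
    assert np.allclose(apply_numpy(laplacian_stencil, field), apply_loops(laplacian_stencil, field))
    ```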

  2. Adaptive gridding strategies for Free-Lagrangian calculations of low speed flows

    NASA Astrophysics Data System (ADS)

    Fritts, Martin J.

    1988-01-01

    Free-Lagrangian methods have been employed in two-dimensional simulations of the long-term evolution of fluid instabilities for low speed flows. For example, calculations of the Rayleigh-Taylor instability have proceeded through the inversion and mixing of two fluid layers and simulations of droplet deformations have continued well beyond droplet shattering. The freedom to choose grid connections permits several important benefits for these calculations. 1. Mass conservation is enforced for all individual fluid elements. 2. Vertex movement is always Lagrangian. 3. Grid adjustments can be made automatically, with no user intervention. 4. Grid connections may be selected to ensure accuracy in the difference equations. 5. Adaptive gridding schemes are local, adding and deleting vertices as dictated by local accuracy estimators. 6. Any geometric configuration may be easily gridded, for any vertex distribution on the boundaries or in the interior of the fluids. This paper will review some two-dimensional results, with the emphasis on the adaptive gridding algorithms and the accuracy of the resultant difference templates for the mathematical operators. The relation of the triangular mesh to the Voronoi mesh will be explored, particularly for the case when they are dual meshes. Three-dimensional algorithms for adaptive gridding will be presented which are exact analogues to the two-dimensional case. Gridding efficiencies will be discussed for several schemes.

  3. An Efficient Means of Adaptive Refinement Within Systems of Overset Grids

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1996-01-01

An efficient means of adaptive refinement within systems of overset grids is presented. Problem domains are segregated into near-body and off-body fields. Near-body fields are discretized via overlapping body-fitted grids that extend only a short distance from body surfaces. Off-body fields are discretized via systems of overlapping uniform Cartesian grids of varying levels of refinement. A novel off-body grid generation and management scheme provides the mechanism for carrying out adaptive refinement of off-body flow dynamics and solid body motion. The scheme allows for very efficient use of memory resources, and for flow solvers and domain connectivity routines that can exploit the structure inherent in uniform Cartesian grids.

  4. ALEGRA -- A massively parallel h-adaptive code for solid dynamics

    SciTech Connect

    Summers, R.M.; Wong, M.K.; Boucheron, E.A.; Weatherby, J.R.

    1997-12-31

    ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.

  5. Preprocessor that Enables the Use of GridProTM Grids for Unsteady Reynolds-Averaged Navier-Stokes Code TURBO

    NASA Technical Reports Server (NTRS)

    Shyam, Vikram

    2010-01-01

    A preprocessor for the Computational Fluid Dynamics (CFD) code TURBO has been developed and tested. The preprocessor converts grids produced by GridPro (Program Development Company (PDC)) into a format readable by TURBO and generates the necessary input files associated with the grid. The preprocessor also generates information that enables the user to decide how to allocate the computational load in a multiple block per processor scenario.

  6. Adaptation improves neural coding efficiency despite increasing correlations in variability.

    PubMed

    Adibi, Mehdi; McDonald, James S; Clifford, Colin W G; Arabzadeh, Ehsan

    2013-01-30

    Exposure of cortical cells to sustained sensory stimuli results in changes in the neuronal response function. This phenomenon, known as adaptation, is a common feature across sensory modalities. Here, we quantified the functional effect of adaptation on the ensemble activity of cortical neurons in the rat whisker-barrel system. A multishank array of electrodes was used to allow simultaneous sampling of neuronal activity. We characterized the response of neurons to sinusoidal whisker vibrations of varying amplitude in three states of adaptation. The adaptors produced a systematic rightward shift in the neuronal response function. Consistently, mutual information revealed that peak discrimination performance was not aligned to the adaptor but to test amplitudes 3-9 μm higher. Stimulus presentation reduced single neuron trial-to-trial response variability (captured by Fano factor) and correlations in the population response variability (noise correlation). We found that these two types of variability were inversely proportional to the average firing rate regardless of the adaptation state. Adaptation transferred the neuronal operating regime to lower rates with higher Fano factor and noise correlations. Noise correlations were positive and in the direction of signal, and thus detrimental to coding efficiency. Interestingly, across all population sizes, the net effect of adaptation was to increase the total information despite increasing the noise correlation between neurons. PMID:23365247

  7. Adaptive norm-based coding of facial identity.

    PubMed

    Rhodes, Gillian; Jeffery, Linda

    2006-09-01

    Identification of a face is facilitated by adapting to its computationally opposite identity, suggesting that the average face functions as a norm for coding identity [Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89-94; Leopold, D. A., Rhodes, G., Müller, K. -M., & Jeffery, L. (2005). The dynamics of visual adaptation to faces. Proceedings of the Royal Society of London, Series B, 272, 897-904]. Crucially, this interpretation requires that the aftereffect is selective for the opposite identity, but this has not been convincingly demonstrated. We demonstrate such selectivity, observing a larger aftereffect for opposite than non-opposite adapt-test pairs that are matched on perceptual contrast (dissimilarity). Component identities were also harder to detect in morphs of opposite than non-opposite face pairs. We propose an adaptive norm-based coding model of face identity. PMID:16647736

  8. Adaptive coded aperture imaging: progress and potential future applications

    NASA Astrophysics Data System (ADS)

    Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.

    2011-09-01

Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems, such as good angular resolution (IFOV), wide distortion-free field of view (FOV), excellent image quality, and lightweight construction. In this presentation we first review the accomplishments made over the past five years, then expand on previously published work to show how replacement of conventional imaging optics with coded apertures can lead to a reduction in system size and weight. We also present a trade-space analysis of key design parameters of coded apertures and review potential applications as replacements for traditional imaging optics. Results of our investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures, based on last year's work, will be presented. Finally we discuss the potential application of coded apertures for replacing the objective lenses of night vision goggles (NVGs).

  9. Unstructured Grid Adaptation: Status, Potential Impacts, and Recommended Investments Toward CFD Vision 2030

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Krakos, Joshua A.; Michal, Todd; Loseille, Adrien; Alonso, Juan J.

    2016-01-01

Unstructured grid adaptation is a powerful tool to control discretization error for Computational Fluid Dynamics (CFD). It has enabled key increases in the accuracy, automation, and capacity of some fluid simulation applications. Slotnick et al. provide a number of case studies in the CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences to illustrate the current state of CFD capability and capacity. The authors forecast the potential impact of the High Performance Computing (HPC) environments anticipated in the year 2030 and identify that mesh generation and adaptivity continue to be significant bottlenecks in the CFD workflow. These bottlenecks may persist because very little government investment has been targeted at these areas. To motivate investment, the impacts of improved grid adaptation technologies are identified. The CFD Vision 2030 Study roadmap and anticipated capabilities in complementary disciplines are quoted to provide context for the progress made in grid adaptation in the past fifteen years, the current status, and a forecast for the next fifteen years with recommended investments. These investments are specific to mesh adaptation and impact other aspects of the CFD process. Finally, a strategy is identified to diffuse grid adaptation technology into production CFD workflows.

  10. Higher-order schemes with CIP method and adaptive Soroban grid towards mesh-free scheme

    NASA Astrophysics Data System (ADS)

    Yabe, Takashi; Mizoe, Hiroki; Takizawa, Kenji; Moriki, Hiroshi; Im, Hyo-Nam; Ogata, Youichi

    2004-02-01

A new class of body-fitted grid system that can keep third-order accuracy in time and space is proposed with the help of the CIP (constrained interpolation profile/cubic interpolated propagation) method. The grid system consists of straight lines and grid points moving along these lines like an abacus (soroban in Japanese). The length of each line and the number of grid points on each line can be different. The CIP scheme is well suited to this mesh system, and calculations at large CFL numbers (>10) on a locally refined mesh are easily performed. Mesh generation and the search for the upstream departure point are very simple, and an almost mesh-free treatment is possible. Adaptive grid movement and local mesh refinement are demonstrated.

  11. Grid coupling mechanism in the semi-implicit adaptive Multi-Level Multi-Domain method

    NASA Astrophysics Data System (ADS)

    Innocenti, M. E.; Tronci, C.; Markidis, S.; Lapenta, G.

    2016-05-01

The Multi-Level Multi-Domain (MLMD) method is a semi-implicit adaptive method for Particle-In-Cell plasma simulations. It has been demonstrated in the past in simulations of Maxwellian plasmas, electrostatic and electromagnetic instabilities, plasma expansion in vacuum, and magnetic reconnection [1, 2, 3]. On multiple occasions, the coupling between the coarse and the refined grid solutions has been commented on. The coupling mechanism itself, however, has never been explored in depth. Here, we investigate the theoretical bases of grid coupling in the MLMD system. We obtain an evolution law for the electric field solution in the overlap area of the MLMD system which highlights a dependence on the densities and currents from both the coarse and the refined grid, rather than from the coarse grid alone: grid coupling is obtained via densities and currents.

  12. Error-measure for anisotropic grid-adaptation in turbulence-resolving simulations

    NASA Astrophysics Data System (ADS)

    Toosi, Siavash; Larsson, Johan

    2015-11-01

    Grid-adaptation requires an error-measure that identifies where the grid should be refined. In the case of turbulence-resolving simulations (DES, LES, DNS), a simple error-measure is the small-scale resolved energy, which scales with both the modeled subgrid-stresses and the numerical truncation errors in many situations. Since this is a scalar measure, it does not carry any information on the anisotropy of the optimal grid-refinement. The purpose of this work is to introduce a new error-measure for turbulence-resolving simulations that is capable of predicting nearly-optimal anisotropic grids. Turbulent channel flow at Reτ ~ 300 is used to assess the performance of the proposed error-measure. The formulation is geometrically general, applicable to any type of unstructured grid.
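
    The anisotropic error-measure proposed in the work is not reproduced here; the sketch below computes the simple scalar indicator mentioned in the abstract, the small-scale resolved energy, estimated with an assumed 3-point box test filter.

    ```python
    import numpy as np

    # Scalar error measure of the kind referred to above: the energy content of
    # the smallest resolved scales, estimated as the difference between the
    # field and a box-filtered copy of it.

    def small_scale_energy(u):
        filt = u.copy()
        for axis in range(u.ndim):                 # 3-point box filter per direction (periodic)
            filt = (np.roll(filt, 1, axis) + filt + np.roll(filt, -1, axis)) / 3.0
        return 0.5 * (u - filt) ** 2               # per-cell small-scale kinetic energy

    rng = np.random.default_rng(3)
    x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
    u = np.sin(x)[:, None] + 0.2 * rng.standard_normal((128, 128))   # smooth field plus fine-scale noise
    indicator = small_scale_energy(u)
    refine = indicator > np.percentile(indicator, 90)                # flag the worst 10% of cells
    print("cells flagged for refinement:", int(refine.sum()))
    ```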

  13. Development of a multi-grid FDTD code for three-dimensional simulation of large microwave sintering experiments

    SciTech Connect

    White, M.J.; Iskander, M.F.; Kimrey, H.D.

    1996-12-31

    The Finite-Difference Time-Domain (FDTD) code available at the University of Utah has been used to simulate sintering of ceramics in single and multimode cavities, and many useful results have been reported in literature. More detailed and accurate results, specifically around and including the ceramic sample, are often desired to help evaluate the adequacy of the heating procedure. In electrically large multimode cavities, however, computer memory requirements limit the number of the mathematical cells, and the desired resolution is impractical to achieve due to limited computer resources. Therefore, an FDTD algorithm which incorporates multiple-grid regions with variable-grid sizes is required to adequately perform the desired simulations. In this paper the authors describe the development of a three-dimensional multi-grid FDTD code to help focus a large number of cells around the desired region. Test geometries were solved using a uniform-grid and the developed multi-grid code to help validate the results from the developed code. Results from these comparisons, as well as the results of comparisons between the developed FDTD code and other available variable-grid codes are presented. In addition, results from the simulation of realistic microwave sintering experiments showed improved resolution in critical sites inside the three-dimensional sintering cavity. With the validation of the FDTD code, simulations were performed for electrically large, multimode, microwave sintering cavities to fully demonstrate the advantages of the developed multi-grid FDTD code.

  14. A self-adaptive-grid method with application to airfoil flow

    NASA Technical Reports Server (NTRS)

    Nakahashi, K.; Deiwert, G. S.

    1985-01-01

    A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
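
    A one-dimensional sketch of the gradient-weighted redistribution with user-specified minimum and maximum spacings is given below; the weight function, the clamping strategy, and the constants are assumptions, and the smoothness and orthogonality (torsion-like) controls of the method are omitted.

    ```python
    import numpy as np

    # 1-D sketch of gradient-weighted point redistribution with user-specified
    # minimum and maximum spacings.  Spacing is made roughly inversely
    # proportional to a weight built from the solution gradient.

    def redistribute(x, u, ds_min=0.002, ds_max=0.05, strength=5.0):
        w = 1.0 + strength * np.abs(np.gradient(u, x))     # tension-like weight
        ds = 1.0 / (0.5 * (w[:-1] + w[1:]))                # target spacing per interval
        ds = np.clip(ds * (x[-1] - x[0]) / ds.sum(), ds_min, ds_max)
        ds *= (x[-1] - x[0]) / ds.sum()                    # re-normalize (bounds then hold only approximately)
        return np.concatenate(([x[0]], x[0] + np.cumsum(ds)))

    x = np.linspace(0.0, 1.0, 101)
    u = np.tanh(50.0 * (x - 0.5))                          # mock flow field with a sharp gradient
    x_new = redistribute(x, u)
    print("min/max spacing:", np.diff(x_new).min(), np.diff(x_new).max())
    ```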

  15. Adaptively-refined overlapping grids for the numerical solution of systems of hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.

    1995-01-01

    Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods have been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.

  16. A Solution Adaptive Structured/Unstructured Overset Grid Flow Solver with Applications to Helicopter Rotor Flows

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Biswas, Rupak; Strawn, Roger C.

    1995-01-01

    This paper summarizes a method that solves both the three dimensional thin-layer Navier-Stokes equations and the Euler equations using overset structured and solution adaptive unstructured grids with applications to helicopter rotor flowfields. The overset structured grids use an implicit finite-difference method to solve the thin-layer Navier-Stokes/Euler equations while the unstructured grid uses an explicit finite-volume method to solve the Euler equations. Solutions on a helicopter rotor in hover show the ability to accurately convect the rotor wake. However, isotropic subdivision of the tetrahedral mesh rapidly increases the overall problem size.

  17. Adaptive grid finite element model of the tokamak scrapeoff layer

    SciTech Connect

    Kuprat, A.P.; Glasser, A.H.

    1995-07-01

The authors discuss unstructured grids for application to transport in the tokamak edge SOL. They have developed a new metric with which to judge element elongation and resolution requirements. Using this method, the authors apply a standard moving finite element technique to advance the SOL equations while dynamically inserting/deleting nodes that violate an elongation criterion. In a tokamak plasma, this method achieves a more uniform accuracy and results in highly stretched triangular finite elements, except near the separatrix X-point where transport is more isotropic.

  18. A General Hybrid Radiation Transport Scheme for Star Formation Simulations on an Adaptive Grid

    NASA Astrophysics Data System (ADS)

    Klassen, Mikhail; Kuiper, Rolf; Pudritz, Ralph E.; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars

    2014-12-01

Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.
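
    A one-dimensional cartoon of the hybrid split is sketched below: a raytracing step deposits the first-absorption energy from the source, and an explicit diffusion step evolves the re-emitted field. Units, the flux limiter, and the FLASH/AMR machinery are omitted; all values are illustrative.

    ```python
    import numpy as np

    # 1-D cartoon of the hybrid split: raytrace the direct radiation to record
    # the first absorption, then take one explicit diffusion step for the
    # re-emitted (diffuse) field.  No flux limiter; numbers are illustrative.
    n, dx = 200, 1.0
    kappa = np.full(n, 0.01)                       # absorption coefficient per unit length
    E_diff = np.zeros(n)                           # diffuse radiation energy density
    L_star = 1.0                                   # source luminosity at cell 0 (arbitrary units)

    # step 1: raytrace the direct radiation and record the first absorption per cell
    tau_edge = np.concatenate(([0.0], np.cumsum(kappa * dx)))
    absorbed = L_star * (np.exp(-tau_edge[:-1]) - np.exp(-tau_edge[1:]))

    # step 2: re-emit into the diffuse field and take one explicit diffusion step
    E_diff += absorbed
    D = 1.0 / (3.0 * kappa)                        # diffusion coefficient (constants suppressed)
    dt = 0.4 * dx**2 / D.max()                     # explicit stability limit
    lap = np.zeros_like(E_diff)
    lap[1:-1] = (E_diff[2:] - 2.0 * E_diff[1:-1] + E_diff[:-2]) / dx**2
    E_diff += dt * D * lap
    print("energy deposited by the raytracer:", absorbed.sum())
    ```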

  19. A general hybrid radiation transport scheme for star formation simulations on an adaptive grid

    SciTech Connect

    Klassen, Mikhail; Pudritz, Ralph E.; Kuiper, Rolf; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars

    2014-12-10

Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.

  20. Overview of the NASA Glenn Flux Reconstruction Based High-Order Unstructured Grid Code

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; DeBonis, James R.; Huynh, H. T.

    2016-01-01

A computational fluid dynamics code based on the flux reconstruction (FR) method is currently being developed at NASA Glenn Research Center to ultimately provide a large-eddy simulation capability that is both accurate and efficient for complex aeropropulsion flows. The FR approach offers a simple and efficient method that is easy to implement and accurate to an arbitrary order on common grid cell geometries. The governing compressible Navier-Stokes equations are discretized in time using various explicit Runge-Kutta schemes, with the default being the 3-stage/3rd-order strong stability preserving scheme. The code is written in modern Fortran (i.e., Fortran 2008) and parallelization is attained through MPI for execution on distributed-memory high-performance computing systems. An h-refinement study of the isentropic Euler vortex problem is able to empirically demonstrate the capability of the FR method to achieve super-accuracy for inviscid flows. Additionally, the code is applied to the Taylor-Green vortex problem, performing numerous implicit large-eddy simulations across a range of grid resolutions and solution orders. The solution found by a pseudo-spectral code is commonly used as a reference solution to this problem, and the FR code is able to reproduce this solution using approximately the same grid resolution. Finally, an examination of the code's performance demonstrates good parallel scaling, as well as an implementation of the FR method with a computational cost/degree-of-freedom/time-step that is essentially independent of the solution order of accuracy for structured geometries.
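
    The default time integrator named above, the 3-stage/3rd-order strong-stability-preserving Runge-Kutta scheme (Shu-Osher form), is small enough to sketch directly; the flux-reconstruction spatial operator is not reproduced, so a simple upwind advection right-hand side stands in for it below.

    ```python
    import numpy as np

    # 3-stage / 3rd-order strong-stability-preserving Runge-Kutta (Shu-Osher
    # form), applied to a generic semi-discretization L(u).  The spatial
    # operator below is simple periodic upwind advection, used only for testing.

    def ssp_rk3(u, L, dt):
        u1 = u + dt * L(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
        return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

    n, c = 200, 1.0
    dx = 1.0 / n
    x = np.arange(n) * dx
    L = lambda u: -c * (u - np.roll(u, 1)) / dx      # first-order upwind in space

    u0 = np.exp(-200.0 * (x - 0.3) ** 2)
    u, dt = u0.copy(), 0.5 * dx / c
    for _ in range(200):
        u = ssp_rk3(u, L, dt)
    print("change in total mass:", abs(u.sum() - u0.sum()) * dx)
    ```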

  1. Adaptive shape coding for perceptual decisions in the human brain

    PubMed Central

    Kourtzi, Zoe; Welchman, Andrew E.

    2015-01-01

    In its search for neural codes, the field of visual neuroscience has uncovered neural representations that reflect the structure of stimuli of variable complexity from simple features to object categories. However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience to support flexible and efficient perceptual decisions. Here, we review work showing that experience plays a critical role in molding midlevel visual representations for perceptual decisions. Combining behavioral and brain imaging measurements, we demonstrate that learning optimizes feature binding for object recognition in cluttered scenes, and tunes the neural representations of informative image parts to support efficient categorical judgements. Our findings indicate that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and optimize feature templates for perceptual decisions. PMID:26024511

  2. Adaptive rezoner in a two-dimensional Lagrangian hydrodynamic code

    SciTech Connect

    Pyun, J.J.; Saltzman, J.S.; Scannapieco, A.J.; Carroll, D.

    1985-01-01

In an effort to increase spatial resolution without adding additional meshes, an adaptive mesh was incorporated into a two-dimensional Lagrangian hydrodynamics code along with a two-dimensional flux-corrected transport (FCT) remapper. The adaptive mesh automatically generates a mesh based on smoothness and orthogonality, and at the same time also tracks physical conditions of interest by focusing mesh points in regions that exhibit those conditions; this is done by defining a weighting function associated with the physical conditions to be tracked. The FCT remapper calculates the net transportive fluxes based on a weighted average of two fluxes computed by a low-order scheme and a high-order scheme. This averaging procedure produces solutions which are conservative and nondiffusive, and maintains positivity. 10 refs., 12 figs.

  3. Adaptive Synaptogenesis Constructs Neural Codes That Benefit Discrimination.

    PubMed

    Thomas, Blake T; Blalock, Davis W; Levy, William B

    2015-07-01

    Intelligent organisms face a variety of tasks requiring the acquisition of expertise within a specific domain, including the ability to discriminate between a large number of similar patterns. From an energy-efficiency perspective, effective discrimination requires a prudent allocation of neural resources with more frequent patterns and their variants being represented with greater precision. In this work, we demonstrate a biologically plausible means of constructing a single-layer neural network that adaptively (i.e., without supervision) meets this criterion. Specifically, the adaptive algorithm includes synaptogenesis, synaptic shedding, and bi-directional synaptic weight modification to produce a network with outputs (i.e. neural codes) that represent input patterns proportional to the frequency of related patterns. In addition to pattern frequency, the correlational structure of the input environment also affects allocation of neural resources. The combined synaptic modification mechanisms provide an explanation of neuron allocation in the case of self-taught experts. PMID:26176744

  4. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  5. Self-adaptive Fault-Tolerance of HLA-Based Simulations in the Grid Environment

    NASA Astrophysics Data System (ADS)

    Huang, Jijie; Chai, Xudong; Zhang, Lin; Li, Bo Hu

The objects of an HLA-based simulation can access model services to update their attributes. However, the grid server may become overloaded and refuse to let the model service handle object accesses. Because these objects accessed this model service during the last simulation loop and their intermediate state is stored on this server, such a refusal may terminate the simulation. A fault-tolerance mechanism must therefore be introduced into the simulation. Traditional fault-tolerance methods, however, cannot meet this need because the transmission latency between a federate and the RTI in a grid environment varies from several hundred milliseconds to several seconds. By adding model service URLs to the OMT and extending the HLA services and model services with additional interfaces, this paper proposes a self-adaptive fault-tolerance mechanism for simulations based on the characteristics of federates accessing model services. Benchmark experiments indicate that the extended HLA/RTI allows simulations to run self-adaptively in the grid environment.

  6. AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT

    EPA Science Inventory

A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. Application of this approach to two benchmark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...

  7. White Light Schlieren Optics Using Bacteriorhodopsin as an Adaptive Image Grid

    NASA Technical Reports Server (NTRS)

    Peale, Robert; Ruffin, Boh; Donahue, Jeff; Barrett, Carolyn

    1996-01-01

    A Schlieren apparatus using a bacteriorhodopsin film as an adaptive image grid with white light illumination is demonstrated for the first time. The time dependent spectral properties of the film are characterized. Potential applications include a single-ended Schlieren system for leak detection.

  8. Algebraic grid adaptation method using non-uniform rational B-spline surface modeling

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, B. K.

    1992-01-01

An algebraic adaptive grid system based on the equidistribution law and utilizing Non-Uniform Rational B-Spline (NURBS) surfaces for redistribution is presented. A weight function utilizing a properly weighted Boolean sum of various flow-field characteristics is developed. Computational examples are presented to demonstrate the success of this technique.

  9. Generalized Monge-Kantorovich optimization for grid generation and adaptation in LP

    SciTech Connect

    Delzanno, G L; Finn, J M

    2009-01-01

The Monge-Kantorovich grid generation and adaptation scheme is generalized from a variational principle based on L2 to a variational principle based on Lp. A generalized Monge-Ampere (MA) equation is derived and its properties are discussed. Results for p > 1 are obtained and compared in terms of the quality of the resulting grid. We conclude that for the grid generation application, the formulation based on Lp for p close to unity leads to serious problems associated with the boundary. Results for 1.5 ≲ p ≲ 2.5 are quite good, but there is a fairly narrow range around p = 2 where the results are close to optimal with respect to grid distortion. Furthermore, the Newton-Krylov methods used to solve the generalized MA equation perform best for p = 2.

  10. Grid noise in moving mesh codes: fixing the volume inconsistency problem

    NASA Astrophysics Data System (ADS)

    Steinberg, Elad; Yalinewich, Almog; Sari, Re'em

    2016-06-01

    Current Voronoi-based moving mesh hydro codes suffer from `grid noise'. We identify the cause of this noise as the volume inconsistency error, where the volume that is transferred between cells is inconsistent with the hydrodynamical calculations. As a result, the codes do not achieve second-order convergence. In this paper we describe how a simple fix allows Voronoi-based moving mesh codes to attain second-order convergence. The fix is based on the understanding that the volume exchanged between cells should be consistent with the hydrodynamical calculations. We benchmark our fix with three test problems and show that it can significantly improve the computational accuracy. We also examine the effect of mesh initialization and present an improved model for the Green-Gauss-based gradient estimator.
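
    A hedged sketch of the consistency requirement in generic moving-mesh finite-volume form (not the authors' notation): with cell volumes V_i and face fluxes F_f over faces of area A_f,

        U_i^{n+1} V_i^{n+1} = U_i^{n} V_i^{n} - \Delta t \sum_{f \in \partial i} F_f A_f,
        \qquad
        V_i^{n+1} = V_i^{n} + \sum_{f \in \partial i} \delta V_f,

    and the volume exchanged through each face in the flux computation must be the same swept volume δV_f that appears in the geometric update of V_i; face velocities that imply a different exchanged volume are the source of the grid noise described above.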

  11. Multigrid-based simulation code for mantle convection in spherical shell using Yin Yang grid

    NASA Astrophysics Data System (ADS)

    Kameyama, Masanori; Kageyama, Akira; Sato, Tetsuya

    2008-12-01

    A new simulation code of mantle convection in a three-dimensional spherical shell is presented. The major innovation of the code comes from a combination of two numerical techniques, namely the Yin-Yang grid and the ACuTE algorithm, which we had developed for large-scale simulations in the solid earth sciences. Benchmark comparisons with previous calculations for steady convection at low Rayleigh numbers (Ra) revealed that accurate results are successfully reproduced not only for isoviscous cases but also for cases where a mild temperature dependence of viscosity is included. We also demonstrated that our code can reproduce the change in convective flow patterns into the "sluggish-lid" regime as the viscosity variation r_η is increased up to 10^4.

  12. Emergent Adaptive Noise Reduction from Communal Cooperation of Sensor Grid

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Jones, Michael G.; Nark, Douglas M.; Lodding, Kenneth N.

    2010-01-01

    In the last decade, the realization of small, inexpensive, and powerful devices with sensors, computers, and wireless communication has promised the development of massive sensor networks with dense deployments over large areas capable of high-fidelity situational assessments. However, most management models have been based on centralized control, and research has concentrated on methods for passing data from sensor devices to the central controller. Most implementations have been small, and because this methodology is not scalable it is insufficient for massive deployments. Here, a specific application of a large sensor network for adaptive noise reduction demonstrates a new paradigm where communities of sensor/computer devices assess local conditions and make local decisions from which a global behaviour emerges. This approach obviates many of the problems of centralized control, as it is not prone to a single point of failure and is more scalable, efficient, robust, and fault tolerant.

  13. Carving and adaptive drainage enforcement of grid digital elevation models

    NASA Astrophysics Data System (ADS)

    Soille, Pierre; Vogt, Jürgen; Colombo, Roberto

    2003-12-01

    An effective and widely used method for removing spurious pits in digital elevation models consists of filling them until they overflow. However, this method sometimes creates large flat regions which in turn pose a problem for the determination of accurate flow directions. In this study, we propose to suppress each pit by creating a descending path from it to the nearest point having a lower elevation value. This is achieved by carving, i.e., lowering, the terrain elevations along the detected path. Carving paths are identified through a flooding simulation starting from the river outlets. The proposed approach allows for adaptive drainage enforcement whereby river networks coming from other data sources are imposed on the digital elevation model only in places where the automatic river network extraction deviates substantially from the known networks. An improvement to methods for routing flow over flat regions is also introduced. Detailed results are presented over test areas of the Danube basin.
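
    For illustration, a minimal Python sketch of the flooding simulation that underlies both pit filling and carving (a priority-queue flood seeded from the outlets, here simply the DEM border); the carving variant described above would keep parent pointers and lower the cells along the detected path back to the pit instead of raising them as done below.

        import heapq
        import numpy as np

        def priority_flood_fill(dem):
            """Priority-flood depression filling of a 2D elevation grid,
            seeded from the border cells (treated as outlets here)."""
            nrow, ncol = dem.shape
            filled = dem.astype(float).copy()
            closed = np.zeros(dem.shape, dtype=bool)
            heap = []
            for r in range(nrow):
                for c in range(ncol):
                    if r in (0, nrow - 1) or c in (0, ncol - 1):
                        heapq.heappush(heap, (filled[r, c], r, c))
                        closed[r, c] = True
            while heap:
                z, r, c = heapq.heappop(heap)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < nrow and 0 <= cc < ncol and not closed[rr, cc]:
                        closed[rr, cc] = True
                        # Filling raises a cell to the flood level; carving would
                        # instead lower the cells on the path back to the outlet.
                        filled[rr, cc] = max(filled[rr, cc], z)
                        heapq.heappush(heap, (filled[rr, cc], rr, cc))
            return filled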

  14. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    The propagation simulation method and the choice of mesh grid are both very important for obtaining correct propagation results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling spacing after propagation is no longer limited by the propagation method but is freely alterable. However, the choice of mesh grid on the target board directly influences the validity of the simulation results, so an adaptive mesh-choosing method based on wave characteristics is proposed for use with the introduced propagation method. Appropriate mesh grids on the target board can then be calculated to obtain satisfactory results, and for a complex initial wave field or propagation through inhomogeneous media, the mesh grid can likewise be calculated and set rationally according to the above method. Finally, comparison with theoretical results shows that the simulation results obtained with the proposed method coincide with theory. Comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method can adapt to a wider range of Fresnel-number conditions; that is, it can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity. It can therefore provide better support for wave propagation applications such as atmospheric optics and laser propagation.
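
    As a reference point for the propagation step discussed above, here is a minimal NumPy sketch of the conventional fixed-grid angular spectrum method; the alterable output sampling and the adaptive mesh-choosing rule proposed in the paper are not reproduced, and the grid sizes below are illustrative assumptions.

        import numpy as np

        def angular_spectrum_propagate(u0, wavelength, dx, z):
            """Propagate a sampled complex field u0 (N x N, spacing dx) over a
            distance z with the conventional angular spectrum method."""
            n = u0.shape[0]
            k = 2.0 * np.pi / wavelength
            fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies
            fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
            kz_sq = k**2 - (2.0 * np.pi * fxx)**2 - (2.0 * np.pi * fyy)**2
            kz = np.sqrt(np.maximum(kz_sq, 0.0))
            transfer = np.exp(1j * kz * z) * (kz_sq > 0)      # drop evanescent waves
            return np.fft.ifft2(np.fft.fft2(u0) * transfer)

        # Example: propagate a Gaussian beam 0.1 m at 633 nm on a 512-point grid.
        n, dx = 512, 10e-6
        x = (np.arange(n) - n / 2) * dx
        xx, yy = np.meshgrid(x, x, indexing="ij")
        u0 = np.exp(-(xx**2 + yy**2) / (0.5e-3)**2)
        u1 = angular_spectrum_propagate(u0, 633e-9, dx, 0.1)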

  15. Vortical Flow Prediction using an Adaptive Unstructured Grid Method. Chapter 11

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2009-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  16. 3D Finite Element Trajectory Code with Adaptive Meshing

    NASA Astrophysics Data System (ADS)

    Ives, Lawrence; Bui, Thuc; Vogler, William; Bauer, Andy; Shephard, Mark; Beal, Mark; Tran, Hien

    2004-11-01

    Beam Optics Analysis, a new, 3D charged particle program, is available and in use for the design of complex, 3D electron guns and charged particle devices. The code reads files directly from most CAD and solid modeling programs, includes an intuitive Graphical User Interface (GUI), and a robust mesh generator that is fully automatic. Complex problems can be set up, and analysis initiated, in minutes. The program includes a user-friendly post processor for displaying field and trajectory data using 3D plots and images. The electrostatic solver is based on the standard nodal finite element method. The magnetostatic field solver is based on the vector finite element method and is also called during the trajectory simulation process to solve for self magnetic fields. The user imports the geometry from essentially any commercial CAD program and uses the GUI to assign parameters (voltages, currents, dielectric constant) and designate emitters (including work function, emitter temperature, and number of trajectories). The mesh is then generated automatically and the analysis is performed, including mesh adaptation to improve accuracy and optimize computational resources. This presentation will provide information on the basic structure of the code, its operation, and its capabilities.

  17. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparison with other schemes for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.

  18. Parallel Computation of Three-Dimensional Flows using Overlapping Grids with Adaptive Mesh Refinement

    SciTech Connect

    Henshaw, W; Schwendeman, D

    2007-11-15

    This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.
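
    As a rough illustration of the partitioning step described above, the following Python sketch uses a greedy longest-processing-time heuristic to spread per-grid workloads over the processors; the paper's modified bin-packing algorithm and its handling of base grids versus refinement grids are more elaborate and are not reproduced here.

        import heapq

        def partition_grids(work, nproc):
            """Assign per-grid workloads to nproc processors, always placing the
            largest remaining grid on the currently least-loaded processor."""
            loads = [(0.0, rank, []) for rank in range(nproc)]   # (load, rank, grid ids)
            heapq.heapify(loads)
            for gid, w in sorted(enumerate(work), key=lambda t: -t[1]):
                load, rank, grids = heapq.heappop(loads)
                grids.append(gid)
                heapq.heappush(loads, (load + w, rank, grids))
            return sorted(loads, key=lambda t: t[1])

        # Example: ten grids with varying cell counts distributed over four processors.
        for load, rank, grids in partition_grids(
                [120, 35, 80, 60, 240, 15, 90, 50, 70, 110], 4):
            print(f"processor {rank}: grids {grids}, load {load}")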

  19. Grid-based Parallel Data Streaming Implemented for the Gyrokinetic Toroidal Code

    SciTech Connect

    S. Klasky; S. Ethier; Z. Lin; K. Martins; D. McCune; R. Samtaney

    2003-09-15

    We have developed a threaded parallel data streaming approach using Globus to transfer multi-terabyte simulation data from a remote supercomputer to the scientist's home analysis/visualization cluster, as the simulation executes, with negligible overhead. Data transfer experiments show that this concurrent data transfer approach is more favorable compared with writing to local disk and then transferring this data to be post-processed. The present approach is conducive to using the grid to pipeline the simulation with post-processing and visualization. We have applied this method to the Gyrokinetic Toroidal Code (GTC), a 3-dimensional particle-in-cell code used to study microturbulence in magnetic confinement fusion from first principles plasma theory.

  20. Adaptive Harmonic Detection Control of Grid Interfaced Solar Photovoltaic Energy System with Power Quality Improvement

    NASA Astrophysics Data System (ADS)

    Singh, B.; Goel, S.

    2015-03-01

    This paper presents a grid interfaced solar photovoltaic (SPV) energy system with a novel adaptive harmonic detection control for power quality improvement at ac mains under balanced as well as unbalanced and distorted supply conditions. The SPV energy system is capable of compensation of linear and nonlinear loads with the objectives of load balancing, harmonics elimination, power factor correction and terminal voltage regulation. The proposed control increases the utilization of PV infrastructure and brings down its effective cost due to its other benefits. The adaptive harmonic detection control algorithm is used to detect the fundamental active power component of load currents which are subsequently used for reference source currents estimation. An instantaneous symmetrical component theory is used to obtain instantaneous positive sequence point of common coupling (PCC) voltages which are used to derive inphase and quadrature phase voltage templates. The proposed grid interfaced PV energy system is modelled and simulated in MATLAB Simulink and its performance is verified under various operating conditions.

  1. A new procedure for dynamic adaption of three-dimensional unstructured grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger

    1993-01-01

    A new procedure is presented for the simultaneous coarsening and refinement of three-dimensional unstructured tetrahedral meshes. This algorithm allows for localized grid adaption that is used to capture aerodynamic flow features such as vortices and shock waves in helicopter flowfield simulations. The mesh-adaption algorithm is implemented in the C programming language and uses a data structure consisting of a series of dynamically-allocated linked lists. These lists allow the mesh connectivity to be rapidly reconstructed when individual mesh points are added and/or deleted. The algorithm allows the mesh to change in an anisotropic manner in order to efficiently resolve directional flow features. The procedure has been successfully implemented on a single processor of a Cray Y-MP computer. Two sample cases are presented involving three-dimensional transonic flow. Computed results show good agreement with conventional structured-grid solutions for the Euler equations.

  2. Radiation Coupling with the FUN3D Unstructured-Grid CFD Code

    NASA Technical Reports Server (NTRS)

    Wood, William A.

    2012-01-01

    The HARA radiation code is fully coupled to the FUN3D unstructured-grid CFD code for the purpose of simulating high-energy hypersonic flows. The radiation energy source terms and surface heat transfer, under the tangent slab approximation, are included within the fluid dynamic flow solver. The Fire II flight test, at the Mach-31 1643-second trajectory point, is used as a demonstration case. Comparisons are made with an existing structured-grid capability, the LAURA/HARA coupling. The radiative surface heat transfer rates from the present approach match the benchmark values within 6%. Although radiation coupling is the focus of the present work, convective surface heat transfer rates are also reported, and are seen to vary depending upon the choice of mesh connectivity and FUN3D flux reconstruction algorithm. On a tetrahedral-element mesh the convective heating matches the benchmark at the stagnation point, but under-predicts by 15% on the Fire II shoulder. Conversely, on a mixed-element mesh the convective heating over-predicts at the stagnation point by 20%, but matches the benchmark away from the stagnation region.

  3. Iso-deviant 2D gridding with efficient adaptive gridder for littoral environments (EAGLE)

    NASA Astrophysics Data System (ADS)

    Rike, Erik R.; Delbalzo, Donald R.

    2005-09-01

    Transmission loss (TL) computations in littoral areas require a dense spatial and azimuthal grid to achieve acceptable accuracy and detail. The computational cost of accurate predictions led to a new concept, OGRES (Objective Grid/Radials using Environmentally-sensitive Selection), which produces sparse, irregular acoustic grids with controlled accuracy. Recent work to further increase accuracy and efficiency with better metrics and interpolation led to EAGLE (Efficient Adaptive Gridder for Littoral Environments). On each iteration, EAGLE produces grids with approximately constant spatial uncertainty (hence, iso-deviance), yielding predictions with ever-increasing resolution and accuracy. The EAGLE point-selection mechanism is tested using the predictive error metric and 2D synthetic data sets created from combinations of simple signal functions (e.g., polynomials, sines, cosines, exponentials), along with white and chromatic noise. The speed, efficiency, fidelity, and iso-deviance of EAGLE are determined for each combination of signal, noise, and interpolator. The results show significant efficiency enhancements compared to uniform grids of the same accuracy. [Work sponsored by NAVAIR.]

  4. Adaptive distributed video coding with correlation estimation using expectation propagation

    NASA Astrophysics Data System (ADS)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly increasing in popularity because it shifts complexity from the encoder to the decoder while, at least in theory, sacrificing no compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where estimation can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods.

  5. EMMA: an adaptive mesh refinement cosmological simulation code with radiative transfer

    NASA Astrophysics Data System (ADS)

    Aubert, Dominique; Deparis, Nicolas; Ocvirk, Pierre

    2015-11-01

    EMMA is a cosmological simulation code aimed at investigating the reionization epoch. It handles simultaneously collisionless and gas dynamics, as well as radiative transfer physics using a moment-based description with the M1 approximation. Field quantities are stored and computed on an adaptive three-dimensional mesh and the spatial resolution can be dynamically modified based on physically motivated criteria. Physical processes can be coupled at all spatial and temporal scales. We also introduce a new and optional approximation to handle radiation: the light is transported at the resolution of the non-refined grid and only once the dynamics has been fully updated, whereas thermo-chemical processes are still tracked on the refined elements. Such an approximation reduces the overheads induced by the treatment of radiation physics. A suite of standard tests is presented and passed by EMMA, providing a validation for its future use in studies of the reionization epoch. The code is parallel and is able to use graphics processing units (GPUs) to accelerate hydrodynamics and radiative transfer calculations. Depending on the optimizations and the compilers used to generate the CPU reference, global GPU acceleration factors between ×3.9 and ×16.9 can be obtained. Vectorization and transfer operations currently prevent better GPU performance and we expect that future optimizations and hardware evolution will lead to greater accelerations.

  6. Adaptive lifting scheme with sparse criteria for image coding

    NASA Astrophysics Data System (ADS)

    Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe

    2012-12-01

    Lifting schemes (LS) were found to be efficient tools for image coding purposes. Since LS-based decompositions depend on the choice of the prediction/update operators, many research efforts have been devoted to the design of adaptive structures. The most commonly used approaches optimize the prediction filters by minimizing the variance of the detail coefficients. In this article, we investigate techniques for optimizing sparsity criteria by focusing on the use of an ℓ1 criterion instead of an ℓ2 one. Since the output of a prediction filter may be used as an input for the other prediction filters, we then propose to optimize such a filter by minimizing a weighted ℓ1 criterion related to the global rate-distortion performance. More specifically, it will be shown that the optimization of the diagonal prediction filter depends on the optimization of the other prediction filters and vice-versa. Related to this fact, we propose to jointly optimize the prediction filters by using an algorithm that alternates between the optimization of the filters and the computation of the weights. Experimental results show the benefits which can be drawn from the proposed optimization of the lifting operators.
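
    For concreteness, a minimal (non-adaptive) lifting step of the 5/3 predict/update type is sketched below in Python with periodic boundary handling; the adaptive schemes discussed above would instead treat the prediction weights as free parameters chosen to minimize an ℓ1 (or weighted ℓ1) criterion on the detail coefficients.

        import numpy as np

        def lifting_53_forward(x):
            """One forward lifting step (5/3-style predict/update) on a 1D signal
            of even length, with periodic boundary extension via np.roll."""
            even, odd = x[0::2].astype(float), x[1::2].astype(float)
            # Predict: detail = odd sample minus the average of its even neighbours.
            d = odd - 0.5 * (even + np.roll(even, -1))
            # Update: coarse approximation that preserves the running average.
            s = even + 0.25 * (d + np.roll(d, 1))
            return s, d

        s, d = lifting_53_forward(np.arange(16) % 5)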

  7. Adaptive phase-coded reconstruction for cardiac CT

    NASA Astrophysics Data System (ADS)

    Hsieh, Jiang; Mayo, John; Acharya, Kishor; Pan, Tin-Su

    2000-04-01

    Cardiac imaging with conventional computed tomography (CT) has gained significant attention in recent years. New hardware development enables a CT scanner to rotate at a faster speed so that less cardiac motion is present in acquired projection data. Many new tomographic reconstruction techniques have also been developed to reduce the artifacts induced by the cardiac motion. Most of the algorithms make use of the projection data collected over several cardiac cycles to formulate a single projection data set. Because the data set is formed with samples collected roughly in the same phase of a cardiac cycle, the temporal resolution of the newly formed data set is significantly improved compared with projections collected continuously. In this paper, we present an adaptive phase-coded reconstruction scheme (APR) for cardiac CT. Unlike the previously proposed schemes where the projection sector size is identical, APR determines each sector size based on the tomographic reconstruction algorithm. The newly proposed scheme ensures that the temporal resolution of each sector is substantially equal. In addition, the scan speed is selected based on the measured EKG signal of the patient.

  8. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown, and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user-specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
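
    For orientation, a hedged statement of the adjoint-weighted-residual estimate that underlies this class of methods (generic notation, not taken from the report): with the coarse solution u_H prolonged to the fine space as u_H^h, the fine-space residual R_h, and the discrete adjoint ψ_h for the output J,

        J(u_h) - J(u_H^h) \approx -\,\psi_h^{\top} R_h\!\left(u_H^h\right)

    up to sign conventions; the mesh is then adapted where the estimated remaining error in this correction is largest.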

  9. A Fast and Robust Poisson-Boltzmann Solver Based on Adaptive Cartesian Grids.

    PubMed

    Boschitsch, Alexander H; Fenley, Marcia O

    2011-05-10

    An adaptive Cartesian grid (ACG) concept is presented for the fast and robust numerical solution of the 3D Poisson-Boltzmann Equation (PBE) governing the electrostatic interactions of large-scale biomolecules and highly charged multi-biomolecular assemblies such as ribosomes and viruses. The ACG offers numerous advantages over competing grid topologies such as regular 3D lattices and unstructured grids. For very large biological molecules and multi-biomolecule assemblies, the total number of grid-points is several orders of magnitude less than that required in a conventional lattice grid used in the current PBE solvers thus allowing the end user to obtain accurate and stable nonlinear PBE solutions on a desktop computer. Compared to tetrahedral-based unstructured grids, ACG offers a simpler hierarchical grid structure, which is naturally suited to multigrid, relieves indirect addressing requirements and uses fewer neighboring nodes in the finite difference stencils. Construction of the ACG and determination of the dielectric/ionic maps are straightforward, fast and require minimal user intervention. Charge singularities are eliminated by reformulating the problem to produce the reaction field potential in the molecular interior and the total electrostatic potential in the exterior ionic solvent region. This approach minimizes grid-dependency and alleviates the need for fine grid spacing near atomic charge sites. The technical portion of this paper contains three parts. First, the ACG and its construction for general biomolecular geometries are described. Next, a discrete approximation to the PBE upon this mesh is derived. Finally, the overall solution procedure and multigrid implementation are summarized. Results obtained with the ACG-based PBE solver are presented for: (i) a low dielectric spherical cavity, containing interior point charges, embedded in a high dielectric ionic solvent - analytical solutions are available for this case, thus allowing rigorous

  10. An a-posteriori finite element error estimator for adaptive grid computation of viscous incompressible flows

    NASA Astrophysics Data System (ADS)

    Wu, Heng

    2000-10-01

    In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator e_θ, which is based on the spatial derivative of velocity direction fields, is designed and constructed. The a-posteriori error estimator corresponds to the antisymmetric part of the deformation-rate-tensor, and it is sensitive to the second derivative of the velocity angle field. Rationality discussions reveal that the velocity angle error estimator is a curvature error estimator, and its value reflects the accuracy of streamline curves. It is also found that the velocity angle error estimator contains the nonlinear convective term of the Navier-Stokes equations, and it identifies and computes the direction difference when the convective acceleration direction and the flow velocity direction have a disparity. Through benchmarking computed variables with the analytic solution of Kovasznay flow or the finest grid of cavity flow, it is demonstrated that the velocity angle error estimator has a better performance than the strain error estimator. The benchmarking work also shows that the computed profile obtained by using e_θ can achieve the best matching outcome with the true θ field, and that it is asymptotic to the true θ variation field, with a promise of fewer unknowns. Unstructured grids are adapted by employing local cell division as well as unrefinement of transition cells. Using element class and node class can efficiently construct a hierarchical data structure which provides cell and node inter-reference at each adaptive level. Employing element pointers and node pointers can dynamically maintain the connection of adjacent elements and adjacent nodes, and thus avoids time-consuming search processes. The adaptive scheme is applied to viscous incompressible flow at different

  11. Adaptive grid artifact reduction in the frequency domain with spatial properties for x-ray images

    NASA Astrophysics Data System (ADS)

    Kim, Dong Sik; Lee, Sanggyun

    2012-03-01

    By applying band-rejection filters (BRFs) in the frequency domain, we can efficiently reduce the grid artifacts which are caused by using the antiscatter grid when obtaining x-ray digital images. However, if the frequency component of the grid artifact is relatively close to that of the object, then simply applying a BRF may seriously distort the object and cause ringing artifacts. Since the ringing artifacts depend strongly on the shape of the object to be recovered in the spatial domain, the spatial property of the x-ray image should be considered when applying BRFs. In this paper, we propose an adaptive filtering scheme which can incorporate such different properties in the spatial domain. In the spatial domain, we compare several approaches, such as the magnitude, edge, and frequency-modulation (FM) model-based algorithms, to detect the ringing artifact or the grid artifact component. In order to robustly detect whether the ringing artifact is strong or not, we employ the FM model for the extracted signal, which corresponds to a specific grid artifact. Detection of the position of the ringing artifact is then conducted based on the slope detection algorithm, which is commonly used as an FM discriminator in the communication area. However, the detected position of the ringing artifact is not accurate. Hence, in order to obtain an accurate detection result, we combine the edge-based approach with the FM model approach. Numerical results for real x-ray images show that applying BRFs in the frequency domain in conjunction with the spatial property of the ringing artifact can successfully remove the grid artifact while distorting the object less.
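
    A minimal sketch of the frequency-domain band-rejection step that the adaptive scheme builds on is given below (a Gaussian notch applied along image rows for a grid of known frequency; the paper's spatially adaptive, edge- and FM-model-based ringing detection is not reproduced, and the frequencies and sizes are illustrative assumptions).

        import numpy as np

        def notch_filter_rows(img, grid_freq, bandwidth):
            """Suppress a one-dimensional grid artifact of known spatial frequency
            (cycles/pixel along rows) with a Gaussian band-rejection filter."""
            freqs = np.fft.fftfreq(img.shape[1])
            notch = 1.0 - np.exp(-((np.abs(freqs) - grid_freq) ** 2)
                                 / (2.0 * bandwidth ** 2))
            spectrum = np.fft.fft(img, axis=1)
            return np.real(np.fft.ifft(spectrum * notch, axis=1))

        # Example: synthetic image with a stationary grid pattern at 0.2 cycles/pixel.
        rng = np.random.default_rng(0)
        base = rng.normal(100.0, 5.0, size=(256, 256))
        pattern = 10.0 * np.sin(2 * np.pi * 0.2 * np.arange(256))
        cleaned = notch_filter_rows(base + pattern[None, :], grid_freq=0.2, bandwidth=0.02)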

  12. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    DOE PAGES

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  13. A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D; Burkardt, John V

    2014-03-01

    This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  14. Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets

    NASA Technical Reports Server (NTRS)

    Cheung, K-M.; Smyth, P.

    1993-01-01

    The Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets is revisited, and it is shown that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
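
    As a small illustration of that connection, the Python sketch below encodes non-negative integers with a Rice code (a Golomb code with parameter m = 2^k); for a geometric source, choosing k by the Gallager and van Voorhis rule makes the resulting code optimal, though the selection rule itself is not reproduced here.

        def rice_encode(n, k):
            """Rice code of a non-negative integer n with parameter k >= 0:
            unary-coded quotient (n >> k), a terminating 0, then the k
            low-order bits of n."""
            quotient = n >> k
            remainder_bits = format(n & ((1 << k) - 1), "b").zfill(k) if k > 0 else ""
            return "1" * quotient + "0" + remainder_bits

        # Example with k = 2 (Golomb parameter m = 4).
        codewords = [rice_encode(n, 2) for n in (0, 1, 5, 9)]
        # -> ['000', '001', '1001', '11001']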

  15. The Volume Grid Manipulator (VGM): A Grid Reusability Tool

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    This document is a manual describing how to use the Volume Grid Manipulation (VGM) software. The code is specifically designed to alter or manipulate existing surface and volume structured grids to improve grid quality through the reduction of grid line skewness, removal of negative volumes, and adaption of surface and volume grids to flow-field gradients. The software uses a command language to perform all manipulations, thereby offering the capability of executing multiple manipulations on a single grid during an execution of the code. The command language can be input to the VGM code via a UNIX-style redirected file, or interactively while the code is executing. The manual consists of 14 sections. The first is an introduction to grid manipulation: where it is most applicable and where the strengths of such software can be utilized. The next two sections describe the memory management and the manipulation command language. The following 8 sections describe simple and complex manipulations that can be used in conjunction with one another to smooth, adapt, and reuse existing grids for various computations. These are accompanied by a tutorial section that describes how to use the commands and manipulations to solve actual grid generation problems. The last two sections are a command reference guide and a troubleshooting section to aid in the use of the code and to describe problems associated with generated scripts for manipulation control.

  16. A parallel dynamic load balancing algorithm for 3-D adaptive unstructured grids

    NASA Technical Reports Server (NTRS)

    Vidwans, A.; Kallinderis, Y.; Venkatakrishnan, V.

    1993-01-01

    Adaptive local grid refinement and coarsening results in unequal distribution of workload among the processors of a parallel system. A novel method for balancing the load in cases of dynamically changing tetrahedral grids is developed. The approach employs local exchange of cells among processors in order to redistribute the load equally. An important part of the load balancing algorithm is the method employed by a processor to determine which cells within its subdomain are to be exchanged. Two such methods are presented and compared. The strategy for load balancing is based on the Divide-and-Conquer approach which leads to an efficient parallel algorithm. This method is implemented on a distributed-memory MIMD system.
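
    As a toy illustration of the cell-exchange idea at the level of a single processor pair, the Python sketch below greedily transfers cells (represented only by their workloads) from the heavier to the lighter side until the imbalance can no longer be reduced; the divide-and-conquer organization over many processors and the subdomain-boundary selection strategies compared in the paper are not reproduced.

        def balance_pair(cells_a, cells_b):
            """Greedily move cell workloads between two processors until no
            single move can further reduce the load imbalance."""
            a, b = list(cells_a), list(cells_b)
            while True:
                heavy, light = (a, b) if sum(a) >= sum(b) else (b, a)
                gap = sum(heavy) - sum(light)
                movable = [w for w in heavy if w < gap]   # moves that reduce imbalance
                if not movable:
                    return a, b
                w = max(movable)
                heavy.remove(w)
                light.append(w)

        # Example: two subdomains with unequal per-cell work.
        left, right = balance_pair([5, 3, 8, 2, 7], [1, 2])   # -> loads 14 and 14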

  17. Experiences with the application of the ADIC automatic differentiation tool to the CSCMDO 3-D volume grid generation code

    SciTech Connect

    Bischof, C.H.; Mauer, A.; Jones, W.T.

    1995-12-31

    Automatic differentiation (AD) is a methodology for developing reliable sensitivity-enhanced versions of arbitrary computer programs with little human effort. It can vastly accelerate the use of advanced simulation codes in multidisciplinary design optimization, since the time for generating and verifying derivative codes is greatly reduced. In this paper, we report on the application of the recently developed ADIC automatic differentiation tool for ANSI C programs to the CSCMDO multiblock three-dimensional volume grid generator. The ADIC-generated code can easily be interfaced with Fortran derivative codes generated with the ADIFOR AD tool for FORTRAN 77 programs, thus providing efficient sensitivity-enhancement techniques for multilanguage, multidiscipline problems.

  18. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  19. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  20. The implementation of the graphics of program EAGLE: A numerical grid generation code on NASA Langley SNS computer system

    NASA Technical Reports Server (NTRS)

    Houston, Johnny L.

    1989-01-01

    Program EAGLE (Eglin Arbitrary Geometry Implicit Euler) Numerical Grid Generation System is a composite (multi-block) algebraic or elliptic grid generation system designed to discretize the domain in and/or around any arbitrarily shaped three dimensional regions. This system combines a boundary conforming surface generation scheme and includes plotting routines designed to take full advantage of the DISSPLA Graphics Package (Version 9.0). Program EAGLE is written to compile and execute efficiently on any Cray machine with or without solid state disk (SSD) devices. Also, the code uses namelist inputs which are supported by all Cray machines using the FORTRAN compiler CFT77. The namelist inputs make it easier for the user to understand the inputs and operation of Program EAGLE. EAGLE's numerical grid generator is constructed in the following form: main program, EGG (executive routine); subroutine SURFAC (surface generation routine); subroutine GRID (grid generation routine); and subroutine GRDPLOT (grid plotting routines). The EAGLE code was modified for use on the NASA-LaRC SNS computer (Cray 2S) system. During the modification a conversion program was developed for the output data of EAGLE's subroutine GRID to permit the data to be graphically displayed by IRIS workstations using Plot3D. The code of program EAGLE was modified to make subroutine GRDPLOT (using the DI-3000 Graphics Software Packages) operational on the NASA-LaRC SNS Computer System. How to graphically display the output data of subroutine GRID was determined for any NASA-LaRC graphics terminal that has access to the SNS Computer System DI-3000 Graphics Software Packages. A Quick Reference User Guide was developed for the use of program EAGLE on the NASA-LaRC SNS Computer System. One or more application programs were illustrated using program EAGLE on the NASA-LaRC SNS Computer System, with emphasis on graphics illustrations.

  1. Adaptive finite-volume WENO schemes on dynamically redistributed grids for compressible Euler equations

    NASA Astrophysics Data System (ADS)

    Pathak, Harshavardhana S.; Shukla, Ratnesh K.

    2016-08-01

    A high-order adaptive finite-volume method is presented for simulating inviscid compressible flows on time-dependent redistributed grids. The method achieves dynamic adaptation through a combination of time-dependent mesh node clustering in regions characterized by strong solution gradients and an optimal selection of the order of accuracy and the associated reconstruction stencil in a conservative finite-volume framework. This combined approach maximizes spatial resolution in discontinuous regions that require low-order approximations for oscillation-free shock capturing. Over smooth regions, high-order discretization through finite-volume WENO schemes minimizes numerical dissipation and provides excellent resolution of intricate flow features. The method including the moving mesh equations and the compressible flow solver is formulated entirely on a transformed time-independent computational domain discretized using a simple uniform Cartesian mesh. Approximations for the metric terms that enforce discrete geometric conservation law while preserving the fourth-order accuracy of the two-point Gaussian quadrature rule are developed. Spurious Cartesian grid induced shock instabilities such as carbuncles that feature in a local one-dimensional contact capturing treatment along the cell face normals are effectively eliminated through upwind flux calculation using a rotated Harten-Lax-van Leer contact resolving (HLLC) approximate Riemann solver for the Euler equations in generalized coordinates. Numerical experiments with the fifth and ninth-order WENO reconstructions at the two-point Gaussian quadrature nodes, over a range of challenging test cases, indicate that the redistributed mesh effectively adapts to the dynamic flow gradients thereby improving the solution accuracy substantially even when the initial starting mesh is non-adaptive. The high adaptivity combined with the fifth and especially the ninth-order WENO reconstruction allows remarkably sharp capture of

  2. SUMIT: a computer code to interpolate and sum single release atmospheric model results onto a master grid

    SciTech Connect

    Begovich, C.L.; DeBliek, N.J.; Holdeman, J.T. Jr.; Sjoreen, A.L.; Miller, C.W.

    1984-10-01

    This report describes a computer code for the Systematic Unification of Multiple Input Tables of data (SUMIT). This code is designed to be an integral part of the Computerized Radiological Risk Investigation System (CRRIS) for assessing the health impacts of airborne releases of radioactive pollutants. SUMIT reads radionuclide air concentrations and ground deposition rates for different release points and combines them over a specified master grid. The resulting SUMIT grid may be circular, rectangular, or consist of irregularly spaced points. SUMIT can apply a different scaling factor to all data from each source. This program is designed to sum data written by the CRRIS code ANEMOS. Of course, SUMIT could read any data organized in the same manner as ANEMOS output. Descriptions of the necessary user input and data files are provided along with a complete listing of the SUMIT code. 10 references, 4 figures, 2 tables.

  3. Evaluating two sparse grid surrogates and two adaptation criteria for groundwater Bayesian uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Zeng, Xiankui; Ye, Ming; Burkardt, John; Wu, Jichun; Wang, Dong; Zhu, Xiaobin

    2016-04-01

    Sparse grid (SG) stochastic collocation methods have been recently used to build accurate but cheap-to-run surrogates for groundwater models to reduce the computational burden of Bayesian uncertainty analysis. The surrogates can be built for either a log-likelihood function or state variables such as hydraulic head and solute concentration. Using a synthetic groundwater flow model, this study evaluates the log-likelihood and head surrogates in terms of the computational cost of building them, the accuracy of the surrogates, and the accuracy of the distributions of model parameters and predictions obtained using the surrogates. The head surrogates outperform the log-likelihood surrogates for the following four reasons: (1) the shape of the head response surface is smoother than that of the log-likelihood response surface in parameter space, (2) the head variation is smaller than the log-likelihood variation in parameter space, (3) the interpolation error of the head surrogates does not accumulate to be larger than the interpolation error of the log-likelihood surrogates, and (4) the model simulations needed for building one head surrogate can be recycled for building others. For both log-likelihood and head surrogates, adaptive sparse grids are built using two indicators: absolute error and relative error. The adaptive head surrogates are insensitive to the error indicators, because the ratio between the two indicators is hydraulic head, which has small variation in the parameter space. The adaptive log-likelihood surrogates based on the relative error indicators outperform those based on the absolute error indicators, because adaptation based on the relative error indicator puts more sparse-grid nodes in the areas in the parameter space where the log-likelihood is high. While our numerical study suggests building state-variable surrogates and using the relative error indicator for building log-likelihood surrogates, selecting appropriate type of surrogates and

  4. Time-dependent grid adaptation for meshes of triangles and tetrahedra

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.

    1993-01-01

    This paper presents in viewgraph form a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich in regions of relatively large errors and to locally coarsen in regions of relatively small errors. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high aspect ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown, but in general requires user supervision for a more efficient solution.

  5. Euler technology assessment for preliminary aircraft design employing OVERFLOW code with multiblock structured-grid method

    NASA Technical Reports Server (NTRS)

    Treiber, David A.; Muilenburg, Dennis A.

    1995-01-01

    The viability of applying a state-of-the-art Euler code to calculate the aerodynamic forces and moments through maximum lift coefficient for a generic sharp-edge configuration is assessed. The OVERFLOW code, a method employing overset (Chimera) grids, was used to conduct mesh refinement studies, a wind-tunnel wall sensitivity study, and a 22-run computational matrix of flow conditions, including sideslip runs and geometry variations. The subject configuration was a generic wing-body-tail geometry with chined forebody, swept wing leading-edge, and deflected part-span leading-edge flap. The analysis showed that the Euler method is adequate for capturing some of the non-linear aerodynamic effects resulting from leading-edge and forebody vortices produced at high angle-of-attack through C(sub Lmax). Computed forces and moments, as well as surface pressures, match well enough that useful preliminary design information can be extracted. Vortex burst effects and vortex interactions with the configuration are also investigated.

  6. Fair Energy Scheduling for Vehicle-to-Grid Networks Using Adaptive Dynamic Programming.

    PubMed

    Xie, Shengli; Zhong, Weifeng; Xie, Kan; Yu, Rong; Zhang, Yan

    2016-08-01

    Research on the smart grid is being given enormous support worldwide due to its great significance in solving environmental and energy crises. Electric vehicles (EVs), which are powered by clean energy, are adopted increasingly year by year. It is predictable that the huge charge load caused by high EV penetration will have a considerable impact on the reliability of the smart grid. Therefore, fair energy scheduling for EV charge and discharge is proposed in this paper. By using the vehicle-to-grid technology, the scheduler controls the electricity loads of EVs considering fairness in the residential distribution network. We propose contribution-based fairness, in which EVs with high contributions have high priorities to obtain charge energy. The contribution value is defined by both the charge/discharge energy and the timing of the action. EVs can achieve higher contribution values when discharging during the load peak hours. However, charging during this time will decrease the contribution values seriously. We formulate the fair energy scheduling problem as an infinite-horizon Markov decision process. The methodology of adaptive dynamic programming is employed to maximize the long-term fairness by processing online network training. The numerical results illustrate that the proposed EV energy scheduling is able to mitigate and flatten the peak load in the distribution network. Furthermore, contribution-based fairness achieves a fast recovery of EV batteries that have deeply discharged and guarantees fairness in the full charge time of all EVs. PMID:26930694

  7. Capacity achieving nonbinary LDPC coded non-uniform shaping modulation for adaptive optical communications.

    PubMed

    Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B

    2016-08-01

    A mutual information inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are jointly considered in the design, which results in a better-performing scheme for the same SNR values. The matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB LDPC-coded 5-QAM and 7-QAM have even better performance than LDPC-coded QPSK. PMID:27505775

  8. Adaptations in a Community-Based Family Intervention: Replication of Two Coding Schemes.

    PubMed

    Cooper, Brittany Rhoades; Shrestha, Gitanjali; Hyman, Leah; Hill, Laura

    2016-02-01

    Although program adaptation is a reality in community-based implementations of evidence-based programs, much of the discussion about adaptation remains theoretical. The primary aim of this study was to replicate two coding systems to examine adaptations in large-scale, community-based disseminations of the Strengthening Families Program for Parents and Youth 10-14, a family-based substance use prevention program. Our second aim was to explore intersections between various dimensions of facilitator-reported adaptations from these two coding systems. Our results indicate that only a few types of adaptations and a few reasons accounted for a majority (over 70 %) of all reported adaptations. We also found that most adaptations were logistical, reactive, and not aligned with program's goals. In many ways, our findings replicate those of the original studies, suggesting the two coding systems are robust even when applied to self-reported data collected from community-based implementations. Our findings on the associations between adaptation dimensions can inform future studies assessing the relationship between adaptations and program outcomes. Studies of local adaptations, like the present one, should help researchers, program developers, and policymakers better understand the issues faced by implementers and guide efforts related to program development, transferability, and sustainability. PMID:26661413

  9. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating near-shore tsunami waves from the 2011 Tohoku event, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we will show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of each of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We will also show a novel simulation of the 2011 Tohoku tsunami waves inundating the Sendai airport and the Fukushima Nuclear Power Plants, where the finest grid spacing of 20 meters is achieved through a 4-level AMR. This simulation yields quite good predictions of the wave heights and travel time of the tsunami waves. © 2011 IEEE.
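
    As a rough back-of-the-envelope reading of the reported figure (over 75% of the ideal speed-up on eight cores), Amdahl's law indicates how small the serial fraction of the code must be; the candidate fractions scanned below are illustrative and are not taken from the paper.

```python
# Amdahl's-law sketch: which serial fractions are consistent with ~75% of the ideal 8x speed-up?

def amdahl_speedup(parallel_fraction, cores):
    """Ideal Amdahl speed-up for a given parallelizable fraction of the runtime."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

cores = 8
target = 0.75 * cores          # 75% of the ideal 8x speed-up, i.e. 6x
for p in (0.90, 0.93, 0.95, 0.97, 0.99):
    s = amdahl_speedup(p, cores)
    print(f"parallel fraction {p:.2f}: speed-up {s:.2f} ({'meets' if s >= target else 'below'} 6x)")
```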

  10. Grid Generation Techniques Utilizing the Volume Grid Manipulator

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1998-01-01

    This paper presents grid generation techniques available in the Volume Grid Manipulation (VGM) code. The VGM code is designed to manipulate existing line, surface and volume grids to improve the quality of the data. It embodies an easy-to-read, rich language of commands that enables such alterations as topology changes, grid adaption and smoothing. Additionally, the VGM code can be used to construct simplified straight lines, splines, and conic sections, which are common curves used in the generation and manipulation of points, lines, surfaces and volumes (i.e., grid data). These simple geometric curves are essential in the construction of domain discretizations for computational fluid dynamic simulations. By comparison to previously established methods of generating these curves interactively, the VGM code provides control of slope continuity and grid point-to-point stretchings as well as quick changes in the controlling parameters. The VGM code offers the capability to couple the generation of these geometries with an extensive manipulation methodology in a scripting language. The scripting language allows parametric studies of a vehicle geometry to be performed efficiently to evaluate favorable trends in the design process. As examples of the powerful capabilities of the VGM code, a wake flow-field domain will be appended to an existing X33 Venturestar volume grid; negative volumes, resulting from grid expansions to enable flow-field capture on a simple geometry, will be corrected; and geometrical changes to a vehicle component of the X33 Venturestar will be shown.

  11. A Freestream-Preserving High-Order Finite-Volume Method for Mapped Grids with Adaptive-Mesh Refinement

    SciTech Connect

    Guzik, S; McCorquodale, P; Colella, P

    2011-12-16

    A fourth-order accurate finite-volume method is presented for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Novel considerations for formulating the semi-discrete system of equations in computational space combined with detailed mechanisms for accommodating the adapting grids ensure that conservation is maintained and that the divergence of a constant vector field is always zero (freestream-preservation property). Advancement in time is achieved with a fourth-order Runge-Kutta method.
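
    The time integrator mentioned above is fourth-order Runge-Kutta; the snippet below shows only the textbook classical RK4 step applied to a generic semi-discrete system du/dt = L(u), with an illustrative periodic linear-advection right-hand side, and is not the paper's mapped-grid, freestream-preserving scheme.

```python
import numpy as np

def rk4_step(L, u, dt):
    """One classical fourth-order Runge-Kutta step for du/dt = L(u)."""
    k1 = L(u)
    k2 = L(u + 0.5 * dt * k1)
    k3 = L(u + 0.5 * dt * k2)
    k4 = L(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# example: linear advection semi-discretization with periodic central differences (illustrative only)
n, dx, a = 64, 1.0 / 64, 1.0
x = np.arange(n) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)
L = lambda v: -a * (np.roll(v, -1) - np.roll(v, 1)) / (2.0 * dx)
for _ in range(100):
    u = rk4_step(L, u, dt=0.5 * dx)
```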

  12. COLLABORATIVE RESEARCH: CONTINUOUS DYNAMIC GRID ADAPTATION IN A GLOBAL ATMOSPHERIC MODEL: APPLICATION AND REFINEMENT

    SciTech Connect

    Gutowski, William J.; Prusa, Joseph M.; Smolarkiewicz, Piotr K.

    2012-05-08

    This project had goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the "physics" of the NCAR Community Atmospheric Model (CAM). Effort discussed below shows how we improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate the grid stretching and ability to simulate very well a wide range of scales, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work summarized here. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited. 3a. EULAG Advances EULAG is a non-hydrostatic, parallel computational model for all-scale geophysical flows. EULAG's name derives from its two computational options: EULerian (flux form) or semi-LAGrangian (advective form). The model combines nonoscillatory forward-in-time (NFT) numerical algorithms with a robust elliptic Krylov solver. A signature feature of EULAG is that it is formulated in generalized time-dependent curvilinear coordinates. In particular, this enables grid adaptivity. In total, these features give EULAG novel advantages over many existing dynamical cores. For EULAG itself, numerical advances included refining boundary conditions and filters for optimizing model performance in polar regions. We also added flexibility to the model's underlying formulation, allowing it to work with the pseudo-compressible equation set of Durran in addition to EULAG's standard anelastic formulation. Work in collaboration with others also extended the demonstrated range of

  13. An adaptive discretization of incompressible flow using a multitude of moving Cartesian grids

    NASA Astrophysics Data System (ADS)

    English, R. Elliot; Qiu, Linhai; Yu, Yue; Fedkiw, Ronald

    2013-12-01

    We present a novel method for discretizing the incompressible Navier-Stokes equations on a multitude of moving and overlapping Cartesian grids each with an independently chosen cell size to address adaptivity. Advection is handled with first and second order accurate semi-Lagrangian schemes in order to alleviate any time step restriction associated with small grid cell sizes. Likewise, an implicit temporal discretization is used for the parabolic terms including Navier-Stokes viscosity which we address separately through the development of a method for solving the heat diffusion equations. The most intricate aspect of any such discretization is the method used in order to solve the elliptic equation for the Navier-Stokes pressure or that resulting from the temporal discretization of parabolic terms. We address this by first removing any degrees of freedom which duplicately cover spatial regions due to overlapping grids, and then providing a discretization for the remaining degrees of freedom adjacent to these regions. We observe that a robust second order accurate symmetric positive definite readily preconditioned discretization can be obtained by constructing a local Voronoi region on the fly for each degree of freedom in question in order to obtain both its stencil (logically connected neighbors) and stencil weights. Internal curved boundaries such as at solid interfaces are handled using a simple immersed boundary approach which is directly applied to the Voronoi mesh in both the viscosity and pressure solves. We independently demonstrate each aspect of our approach on test problems in order to show efficacy and convergence before finally addressing a number of common test cases for incompressible flow with stationary and moving solid bodies.

  14. Adaptive-Grid Methods for Phase Field Models of Microstructure Development

    NASA Technical Reports Server (NTRS)

    Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.

    1999-01-01

    In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.

  15. CHARACTERIZATION OF DISCONTINUITIES IN HIGH-DIMENSIONAL STOCHASTIC PROBLEMS ON ADAPTIVE SPARSE GRIDS

    SciTech Connect

    Jakeman, John D; Archibald, Richard K; Xiu, Dongbin

    2011-01-01

    In this paper we present a set of efficient algorithms for detection and identification of discontinuities in high-dimensional space. The method is based on an extension of polynomial annihilation for edge detection in low dimensions. Compared to the earlier work, the present method provides significant improvements for high-dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes optimal, in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms are presented and various numerical examples are utilized to demonstrate the efficacy of the method.

  16. Practical improvements of multi-grid iteration for adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Miyashita, Hisashi; Yamada, Yoshiyuki

    2005-03-01

    Adaptive mesh refinement (AMR) is a powerful tool to efficiently solve multi-scaled problems. However, the vanilla AMR method has a well-known critical limitation: it cannot be applied to non-local problems. Although multi-grid iteration (MGI) can be regarded as a good remedy for a non-local problem such as the Poisson equation, we observed fundamental difficulties in applying the MGI technique in AMR to realistic problems under complicated mesh layouts, because it does not converge or it requires too many iterations even if it does converge. To cope with the problem, when updating the next approximation in the MGI process, we calculate total corrections that are accurate relative to the current residual by introducing a new iteration for this total correction. This procedure greatly accelerates the MGI convergence speed, especially under complicated mesh layouts.

  17. Adaptive multi-grid method for a periodic heterogeneous medium in 1-D

    SciTech Connect

    Fish, J.; Belsky, V.

    1995-12-31

    A multi-grid method for a periodic heterogeneous medium in 1-D is presented. Based on homogenization theory, special intergrid connection operators have been developed to imitate the low-frequency response of the differential equations with oscillatory coefficients. The proposed multi-grid method has been proved to have a fast rate of convergence governed by the ratio q/(4-q). Building on this, an adaptive multiscale computational scheme is developed. By this technique, a computational model entirely constructed on the scale of the material heterogeneity is used only where it is necessary to do so, or as indicated by so-called Microscale Reduction Error (MRE) indicators, while in the remaining portion of the problem domain the medium is treated as homogeneous with effective properties. Such a posteriori MRE indicators and estimators are developed on the basis of assessing the validity of the two-scale asymptotic expansion.

  18. An Adaptive Reputation-Based Algorithm for Grid Virtual Organization Formation

    NASA Astrophysics Data System (ADS)

    Cui, Yongrui; Li, Mingchu; Ren, Yizhi; Sakurai, Kouichi

    A novel adaptive reputation-based virtual organization formation scheme is proposed. It restrains bad performers effectively based on consideration of the global experience of the evaluator, and it evaluates the direct trust relation between two grid nodes accurately by rationally consulting the previous trust value. It also draws on and improves the reputation evaluation process of the PathTrust model by taking account of the inter-organizational trust relationship, and combines it with direct and recommended trust in a weighted way, which makes the algorithm more robust against collusion attacks. Additionally, the proposed algorithm considers the perspective of the VO creator and takes required VO services as one of the most important fine-grained evaluation criteria, which makes the algorithm more suitable for constructing VOs in grid environments that include autonomous organizations. Simulation results show that our algorithm restrains bad performers and resists fake transaction attacks and bad-mouthing attacks effectively. It provides a clear advantage in the design of a VO infrastructure.

  19. Application of Open Loop H-Adaptation to an Unstructured Grid Tidal Flat Model

    NASA Astrophysics Data System (ADS)

    Cowles, G. W.

    2008-12-01

    The complex topology of tidal flats presents a challenge to coastal ocean models. Recently, several models have been developed employing unstructured grids, which can provide the flexibility in mesh resolution required to resolve the complex bathymetry and coastline. However, the distribution of element size in the initial mesh can be somewhat arbitrary, and is in general the product of the operator tailoring the resolution to the underlying bathymetry and regions of interest. In this work, the flow solution from an idealized tidal flat application is used to drive an open loop h-adaptation of the mesh. The model used for this work is the Finite Volume Coastal Ocean Model (FVCOM), an open source, terrain following model. A background length scale distribution derived from model output is used to generate a new initial mesh for the model run, thus defining an iteration of the procedure. Several metrics for computing the background length scale will be examined. These include direct estimation of spatial discretization error using Richardson's extrapolation from a sequence of meshes as well as heuristics derived from gradients in the primitive variables. Examination of grid independence, computational efficiency, and performance of the scheme for idealized tidal flats with inclusion of morphodynamics will be discussed.
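
    One of the metrics mentioned above is direct estimation of spatial discretization error by Richardson extrapolation from a sequence of meshes. The sketch below applies the standard three-level Richardson formula with a constant refinement ratio; the scalar values are made up for illustration and are not FVCOM output.

```python
import math

def richardson_error(f_fine, f_medium, f_coarse, r):
    """Observed order and fine-grid error estimate from three grid levels (refinement ratio r)."""
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    err_fine = (f_medium - f_fine) / (r ** p - 1.0)
    return p, err_fine

# illustrative values of some scalar output (e.g., a peak water level) on coarse/medium/fine meshes
p, err = richardson_error(f_fine=1.000, f_medium=1.004, f_coarse=1.020, r=2.0)
print(f"observed order ~{p:.2f}, estimated fine-grid error ~{err:.4f}")
```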

  20. The use of the spectral method within the fast adaptive composite grid method

    SciTech Connect

    McKay, S.M.

    1994-12-31

    The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on different grids with varying discretizations and using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the resulting accuracy from this hybrid method outside of the subdomain will be investigated.

  1. ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES

    SciTech Connect

    D. T. Clark; M. J. Russell; R. E. Spears; S. R. Jensen

    2009-07-01

    With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component’s flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depend on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of Allowable stresses. This paper details the application of component-level finite

  2. Wavelet based ECG compression with adaptive thresholding and efficient coding.

    PubMed

    Alshamali, A

    2010-01-01

    This paper proposes a new wavelet-based ECG compression technique. It is based on optimized thresholds to determine significant wavelet coefficients and an efficient coding for their positions. Huffman encoding is used to enhance the compression ratio. The proposed technique is tested using several records taken from the MIT-BIH arrhythmia database. Simulation results show that the proposed technique outperforms previously published schemes. PMID:20608811
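
    A minimal sketch of the thresholding stage (wavelet decomposition, zeroing of small coefficients, reconstruction) using the PyWavelets package; the wavelet, decomposition level, and keep-fraction threshold rule are placeholders, and the position coding and Huffman stages described in the abstract are omitted.

```python
import numpy as np
import pywt  # PyWavelets

def threshold_compress(signal, wavelet="db4", level=4, keep=0.10):
    """Keep only the largest `keep` fraction of wavelet coefficients (illustrative threshold rule)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate([c.ravel() for c in coeffs])
    thresh = np.quantile(np.abs(flat), 1.0 - keep)               # magnitude cutoff
    coeffs = [pywt.threshold(c, thresh, mode="hard") for c in coeffs]
    return pywt.waverec(coeffs, wavelet), thresh

# synthetic "ECG-like" test signal (baseline wander plus spikes), purely illustrative
t = np.linspace(0.0, 1.0, 1024)
ecg = np.sin(2 * np.pi * 1.2 * t) + (np.abs(np.sin(2 * np.pi * 8 * t)) > 0.99)
rec, thr = threshold_compress(ecg)
print("reconstruction RMSE:", float(np.sqrt(np.mean((rec[:ecg.size] - ecg) ** 2))))
```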

  3. Adaptive face space coding in congenital prosopagnosia: typical figural aftereffects but abnormal identity aftereffects.

    PubMed

    Palermo, Romina; Rivolta, Davide; Wilson, C Ellie; Jeffery, Linda

    2011-12-01

    People with congenital prosopagnosia (CP) report difficulty recognising faces in everyday life and perform poorly on face recognition tests. Here, we investigate whether impaired adaptive face space coding might contribute to poor face recognition in CP. To pinpoint how adaptation may affect face processing, a group of CPs and matched controls completed two complementary face adaptation tasks: the figural aftereffect, which reflects adaptation to general distortions of shape, and the identity aftereffect, which directly taps the mechanisms involved in the discrimination of different face identities. CPs displayed a typical figural aftereffect, consistent with evidence that they are able to process some shape-based information from faces, e.g., cues to discriminate sex. CPs also demonstrated a significant identity aftereffect. However, unlike controls, CPs' impression of the identity of the neutral average face was not significantly shifted by adaptation, suggesting that adaptive coding of identity is abnormal in CP. In sum, CPs show reduced aftereffects but only when the task directly taps the use of face norms used to code individual identity. This finding of a reduced face identity aftereffect in individuals with severe face recognition problems is consistent with suggestions that adaptive coding may have a functional role in face recognition. PMID:21986295

  4. Deficits in context-dependent adaptive coding of reward in schizophrenia

    PubMed Central

    Kirschner, Matthias; Hager, Oliver M; Bischof, Martin; Hartmann-Riemer, Matthias N; Kluge, Agne; Seifritz, Erich; Tobler, Philippe N; Kaiser, Stefan

    2016-01-01

    Theoretical principles of information processing and empirical findings suggest that to efficiently represent all possible rewards in the natural environment, reward-sensitive neurons have to adapt their coding range dynamically to the current reward context. Adaptation ensures that the reward system is most sensitive for the most likely rewards, enabling the system to efficiently represent a potentially infinite range of reward information. A deficit in neural adaptation would prevent precise representation of rewards and could have detrimental effects for an organism’s ability to optimally engage with its environment. In schizophrenia, reward processing is known to be impaired and has been linked to different symptom dimensions. However, despite the fundamental significance of coding reward adaptively, no study has elucidated whether adaptive reward processing is impaired in schizophrenia. We therefore studied patients with schizophrenia (n=27) and healthy controls (n=25), using functional magnetic resonance imaging in combination with a variant of the monetary incentive delay task. Compared with healthy controls, patients with schizophrenia showed less efficient neural adaptation to the current reward context, which leads to imprecise neural representation of reward. Importantly, the deficit correlated with total symptom severity. Our results suggest that some of the deficits in reward processing in schizophrenia might be due to inefficient neural adaptation to the current reward context. Furthermore, because adaptive coding is a ubiquitous feature of the brain, we believe that our findings provide an avenue in defining a general impairment in neural information processing underlying this debilitating disorder. PMID:27430009

  5. Deficits in context-dependent adaptive coding of reward in schizophrenia.

    PubMed

    Kirschner, Matthias; Hager, Oliver M; Bischof, Martin; Hartmann-Riemer, Matthias N; Kluge, Agne; Seifritz, Erich; Tobler, Philippe N; Kaiser, Stefan

    2016-01-01

    Theoretical principles of information processing and empirical findings suggest that to efficiently represent all possible rewards in the natural environment, reward-sensitive neurons have to adapt their coding range dynamically to the current reward context. Adaptation ensures that the reward system is most sensitive for the most likely rewards, enabling the system to efficiently represent a potentially infinite range of reward information. A deficit in neural adaptation would prevent precise representation of rewards and could have detrimental effects for an organism's ability to optimally engage with its environment. In schizophrenia, reward processing is known to be impaired and has been linked to different symptom dimensions. However, despite the fundamental significance of coding reward adaptively, no study has elucidated whether adaptive reward processing is impaired in schizophrenia. We therefore studied patients with schizophrenia (n=27) and healthy controls (n=25), using functional magnetic resonance imaging in combination with a variant of the monetary incentive delay task. Compared with healthy controls, patients with schizophrenia showed less efficient neural adaptation to the current reward context, which leads to imprecise neural representation of reward. Importantly, the deficit correlated with total symptom severity. Our results suggest that some of the deficits in reward processing in schizophrenia might be due to inefficient neural adaptation to the current reward context. Furthermore, because adaptive coding is a ubiquitous feature of the brain, we believe that our findings provide an avenue in defining a general impairment in neural information processing underlying this debilitating disorder. PMID:27430009

  6. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
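
    The domain decomposition above is based on space-filling curves; the sketch below assumes a Morton (Z-order) key, one common choice used here purely for illustration, to order cells and then split the ordered list into contiguous, roughly equal chunks per processor.

```python
def morton_key_2d(i, j, bits=16):
    """Interleave the bits of integer cell indices (i, j) to form a Z-order key."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (2 * b + 1)
        key |= ((j >> b) & 1) << (2 * b)
    return key

def partition_cells(cells, nproc):
    """Sort cells along the space-filling curve, then split into contiguous chunks."""
    ordered = sorted(cells, key=lambda c: morton_key_2d(*c))
    chunk = -(-len(ordered) // nproc)                  # ceiling division
    return [ordered[k * chunk:(k + 1) * chunk] for k in range(nproc)]

cells = [(i, j) for i in range(8) for j in range(8)]
parts = partition_cells(cells, nproc=4)
print([len(p) for p in parts])   # roughly equal-sized, spatially compact partitions
```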

  7. Axisymmetric modeling of cometary mass loading on an adaptively refined grid: MHD results

    NASA Technical Reports Server (NTRS)

    Gombosi, Tamas I.; Powell, Kenneth G.; De Zeeuw, Darren L.

    1994-01-01

    The first results of an axisymmetric magnetohydrodynamic (MHD) model of the interaction of an expanding cometary atmosphere with the solar wind are presented. The model assumes that far upstream the plasma flow lines are parallel to the magnetic field vector. The effects of mass loading and ion-neutral friction are taken into account by the governing equations, which are solved on an adaptively refined unstructured grid using a Monotone Upstream-centered Schemes for Conservation Laws (MUSCL)-type numerical technique. The combination of the adaptive refinement with the MUSCL scheme allows the entire cometary atmosphere to be modeled, while still resolving both the shock and the region near the nucleus of the comet. The main findings are the following: (1) A shock is formed approximately 0.45 Mkm upstream of the comet (its location is controlled by the sonic and Alfvenic Mach numbers of the ambient solar wind flow and by the cometary mass addition rate). (2) A contact surface is formed approximately 5,600 km upstream of the nucleus, separating an outward expanding cometary ionosphere from the nearly stagnating solar wind flow. The location of the contact surface is controlled by the upstream flow conditions, the mass loading rate and the ion-neutral drag. The contact surface is also the boundary of the diamagnetic cavity. (3) A closed inner shock terminates the supersonic expansion of the cometary ionosphere. This inner shock is closer to the nucleus on the dayside than on the nightside.

  8. Moving Overlapping Grids with Adaptive Mesh Refinement for High-Speed Reactive and Non-reactive Flow

    SciTech Connect

    Henshaw, W D; Schwendeman, D W

    2005-08-30

    We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows in order to demonstrate the use and accuracy of the numerical approach.

  9. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM

    SciTech Connect

    Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.
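
    As a much-simplified illustration of the moving-least-squares step described above (fit a local polynomial to neighboring particle values and read off the value and derivatives at the target point), the sketch below performs a weighted linear fit in 2D; the Gaussian weight and the linear basis are assumptions, whereas Phurbas uses third-order polynomial interpolants.

```python
import numpy as np

def mls_fit_linear(x0, pts, vals, h):
    """Weighted least-squares linear fit f(x) ~ a0 + a1*(x - x0) + a2*(y - y0) around x0.

    Returns the fitted value and gradient at x0.
    """
    d = pts - x0                                             # neighbor offsets from the target point
    w = np.sqrt(np.exp(-np.sum(d ** 2, axis=1) / h ** 2))    # Gaussian weights (assumed kernel)
    A = np.column_stack([np.ones(len(pts)), d[:, 0], d[:, 1]])
    coef, *_ = np.linalg.lstsq(A * w[:, None], w * vals, rcond=None)
    return coef[0], coef[1:]                                 # value and (df/dx, df/dy) at x0

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(30, 2))
vals = 2.0 + 3.0 * pts[:, 0] - 1.0 * pts[:, 1]               # a linear field is recovered exactly
value, grad = mls_fit_linear(np.zeros(2), pts, vals, h=0.5)
print(round(float(value), 3), np.round(grad, 3))             # ~2.0 and ~[3.0, -1.0]
```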

  10. Adaptation of a neutron diffraction detector to coded aperture imaging

    SciTech Connect

    Vanier, P.E.; Forman, L.

    1997-02-01

    A coded aperture neutron imaging system developed at Brookhaven National Laboratory (BNL) has demonstrated that it is possible to record not only a flux of thermal neutrons at some position, but also the directions from whence they came. This realization of an idea which defied the conventional wisdom has provided a device which has never before been available to the nuclear physics community. A number of potential applications have been explored, including (1) counting warheads on a bus or in a storage area, (2) investigating inhomogeneities in drums of Pu-containing waste to facilitate non-destructive assays, (3) monitoring of vaults containing accountable materials, (4) detection of buried land mines, and (5) locating solid deposits of nuclear material held up in gaseous diffusion plants.

  11. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms to yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to good quality in the reconstruction. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must consider saturation. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded or bounded with high probability. In this paper, the design of uniform adaptive grayscale coded apertures (UAGCA) is proposed to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in the image reconstruction of the proposed method compared with grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA) of up to 10 dB.

  12. DEMOCRITUS: An adaptive particle in cell (PIC) code for object-plasma interactions

    NASA Astrophysics Data System (ADS)

    Lapenta, Giovanni

    2011-06-01

    A new method for the simulation of plasma materials interactions is presented. The method is based on the particle in cell technique for the description of the plasma and on the immersed boundary method for the description of the interactions between materials and plasma particles. A technique to adapt the local number of particles and grid adaptation are used to reduce the truncation error and the noise of the simulations, to increase the accuracy per unit cost. In the present work, the computational method is verified against known results. Finally, the simulation method is applied to a number of specific examples of practical scientific and engineering interest.

  13. Correctable noise of quantum-error-correcting codes under adaptive concatenation

    NASA Astrophysics Data System (ADS)

    Fern, Jesse

    2008-01-01

    We examine the transformation of noise under a quantum-error-correcting code (QECC) concatenated repeatedly with itself, by analyzing the effects of a quantum channel after each level of concatenation using recovery operators that are optimally adapted to use error syndrome information from the previous levels of the code. We use the Shannon entropy of these channels to estimate the thresholds of correctable noise for QECCs and find considerable improvements under this adaptive concatenation. Similar methods could be used to increase quantum-fault-tolerant thresholds.

  14. Nanoparticle-dispersed metamaterial sensors for adaptive coded aperture imaging applications

    NASA Astrophysics Data System (ADS)

    Nehmetallah, Georges; Banerjee, Partha; Aylo, Rola; Rogers, Stanley

    2011-09-01

    We propose tunable single-layer and multi-layer (periodic and with defect) structures comprising nanoparticle dispersed metamaterials in suitable hosts, including adaptive coded aperture constructs, for possible Adaptive Coded Aperture Imaging (ACAI) applications such as in microbolometry, pressure/temperature sensors, and directed energy transfer, over a wide frequency range, from visible to terahertz. These structures are easy to fabricate, are low-cost and tunable, and offer enhanced functionality, such as perfect absorption (in the case of bolometry) and low cross-talk (for sensors). Properties of the nanoparticle dispersed metamaterial are determined using effective medium theory.

  15. Application of adaptive subband coding for noisy bandlimited ECG signal processing

    NASA Astrophysics Data System (ADS)

    Aditya, Krishna; Chu, Chee-Hung H.; Szu, Harold H.

    1996-03-01

    An approach to impulsive noise suppression and background normalization of digitized bandlimited electrocardiogram signals is presented. This approach uses adaptive wavelet filters that incorporate the band-limited a priori information and the shape information of a signal to decompose the data. Empirical results show that the new algorithm has good performance in wideband impulsive noise suppression and background normalization for subsequent wave detection, when compared with subband coding using Daubechies' D4 wavelet without the bandlimited adaptive wavelet transform.

  16. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
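
    The compressor described above performs sample-to-sample prediction and then selects one of a few codes per block of 21 pixels. The sketch below is a generic block-adaptive Golomb-Rice length calculation in that spirit, not the actual Rice/Plaunt compressor; the candidate parameters and the 2-bit code-selection overhead are assumptions.

```python
def zigzag(d):
    """Map signed prediction residuals to non-negative integers."""
    return 2 * d if d >= 0 else -2 * d - 1

def rice_length(value, k):
    """Bit length of value under a Rice code with parameter k (unary quotient + k remainder bits)."""
    return (value >> k) + 1 + k

def encode_block_lengths(samples, ks=(0, 1, 2, 3), block=21):
    """For each block, pick the Rice parameter giving the fewest bits (plus bits to signal the choice)."""
    residuals, prev = [], 0
    for s in samples:                      # sample-to-sample prediction
        residuals.append(zigzag(s - prev))
        prev = s
    total = 0
    for start in range(0, len(residuals), block):
        blk = residuals[start:start + block]
        best = min(sum(rice_length(v, k) for v in blk) for k in ks)
        total += best + 2                  # 2 bits to identify the selected code (assumed)
    return total

data = [10, 12, 11, 13, 40, 41, 42, 40, 39, 12, 11, 10, 10, 11, 12, 13, 12, 11, 10, 9, 10, 11]
print("coded size (bits):", encode_block_lengths(data), "vs raw 8-bit:", 8 * len(data))
```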

  17. Computation of shock waves in media with an interphase boundary by the CIP-CUP method on an adaptive grid

    NASA Astrophysics Data System (ADS)

    Guseva, T. S.

    2016-01-01

    A numerical technique for computing shock waves in compressible media with movable, deforming interphase boundaries, including those of the gas-liquid type, has been realized. An approach without explicit separation of the interphase boundary is applied. The CIP-CUP method is used for integrating the equations of gas dynamics. An adaptive grid of a special kind (the soroban grid) is utilized. Some results of testing the technique on one- and two-dimensional problems are given. Results of computing the impact of a jet on a thin liquid layer on a wall are presented.

  18. Code division controlled-MAC in wireless sensor network by adaptive binary signature design

    NASA Astrophysics Data System (ADS)

    Wei, Lili; Batalama, Stella N.; Pados, Dimitris A.; Suter, Bruce

    2007-04-01

    We consider the problem of signature waveform design for code division medium-access-control (MAC) of wireless sensor networks (WSN). In contrast to conventional randomly chosen orthogonal codes, an adaptive signature design strategy is developed under the maximum pre-detection SINR (signal to interference plus noise ratio) criterion. The proposed algorithm utilizes slowest-descent chords of the optimization surface to move toward the optimum solution and exhibits, upon eigenvector decomposition, linear computational complexity with respect to signature length. Numerical and simulation studies demonstrate the performance of the proposed method and offer comparisons with conventional signature code sets.

  19. A Domain-Decomposed Multi-Level Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.; Nixon, David (Technical Monitor)

    1998-01-01

    The work presents a new on-the-fly domain decomposition technique for mapping grids and solution algorithms to parallel machines; it is applicable to both shared-memory and message-passing architectures. It will be demonstrated on the Cray T3E, HP Exemplar, and SGI Origin 2000. Computing time has been secured on all these platforms. The decomposition technique is an outgrowth of techniques used in computational physics for simulations of N-body problems and the event horizons of black holes, and has not been previously used by the CFD community. Since the technique offers on-the-fly partitioning, it offers a substantial increase in flexibility for computing in heterogeneous environments, where the number of available processors may not be known at the time of job submission. In addition, since it is dynamic, it permits the job to be repartitioned without global communication in cases where additional processors become available after the simulation has begun, or in cases where dynamic mesh adaptation changes the mesh size during the course of a simulation. The platform for this partitioning strategy is a completely new Cartesian Euler solver targeted at parallel machines, which may be used in conjunction with Ames' "Cart3D" arbitrary geometry simulation package.

  20. Features of CPB: A Poisson-Boltzmann Solver that Uses an Adaptive Cartesian Grid

    PubMed Central

    Harris, Robert C.; Mackoy, Travis

    2014-01-01

    The capabilities of an adaptive Cartesian grid (ACG)-based Poisson-Boltzmann (PB) solver (CPB) are demonstrated. CPB solves various PB equations with an ACG, built from a hierarchical octree decomposition of the computational domain. This procedure decreases the number of points required, thereby reducing computational demands. Inside the molecule, CPB solves for the reaction-field component (ϕrf) of the electrostatic potential (ϕ), eliminating the charge-induced singularities in ϕ. CPB can also use a least-squares reconstruction method to improve estimates of ϕ at the molecular surface. All surfaces, which include solvent excluded, Gaussians and others, are created analytically, eliminating errors associated with triangulated surfaces. These features allow CPB to produce detailed surface maps of ϕ and compute polar solvation and binding free energies for large biomolecular assemblies, such as ribosomes and viruses, with reduced computational demands compared to other PBE solvers. The reader is referred to http://www.continuum-dynamics.com/solution-mm.html for how to obtain the CPB software. PMID:25430617
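
    A toy sketch of the hierarchical octree decomposition underlying an adaptive Cartesian grid: cells are subdivided recursively only where a refinement criterion fires, which is what concentrates points near the molecular surface. The proximity-based criterion below is invented for illustration and is not CPB's actual refinement rule.

```python
def refine(cell, level, max_level, needs_refinement, leaves):
    """Recursively split a cubic cell (center, half-width) into 8 children where needed."""
    center, half = cell
    if level == max_level or not needs_refinement(center, half):
        leaves.append((center, half))
        return
    h = half / 2.0
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                child = ((center[0] + dx, center[1] + dy, center[2] + dz), h)
                refine(child, level + 1, max_level, needs_refinement, leaves)

# illustrative criterion: refine near the "surface" of a single atom of radius 1.5 at the origin
def near_surface(center, half):
    r = sum(c * c for c in center) ** 0.5
    return abs(r - 1.5) < 2.0 * half          # cell straddles the surface region

leaves = []
refine(((0.0, 0.0, 0.0), 8.0), level=0, max_level=5, needs_refinement=near_surface, leaves=leaves)
print(len(leaves), "leaf cells")               # far fewer than a uniform grid at the finest level
```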

  1. An efficient Bayesian inference approach to inverse problems based on an adaptive sparse grid collocation method

    NASA Astrophysics Data System (ADS)

    Ma, Xiang; Zabaras, Nicholas

    2009-03-01

    A new approach to modeling inverse problems using a Bayesian inference method is introduced. The Bayesian approach considers the unknown parameters as random variables and seeks the probabilistic distribution of the unknowns. By introducing the concept of the stochastic prior state space to the Bayesian formulation, we reformulate the deterministic forward problem as a stochastic one. The adaptive hierarchical sparse grid collocation (ASGC) method is used for constructing an interpolant to the solution of the forward model in this prior space which is large enough to capture all the variability/uncertainty in the posterior distribution of the unknown parameters. This solution can be considered as a function of the random unknowns and serves as a stochastic surrogate model for the likelihood calculation. Hierarchical Bayesian formulation is used to derive the posterior probability density function (PPDF). The spatial model is represented as a convolution of a smooth kernel and a Markov random field. The state space of the PPDF is explored using Markov chain Monte Carlo algorithms to obtain statistics of the unknowns. The likelihood calculation is performed by directly sampling the approximate stochastic solution obtained through the ASGC method. The technique is assessed on two nonlinear inverse problems: source inversion and permeability estimation in flow through porous media.
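
    A heavily simplified sketch of the workflow described above: replace the expensive forward model by a cheap surrogate and run Markov chain Monte Carlo against it. Here the "surrogate" is just a fixed polynomial standing in for the adaptive sparse grid interpolant, and the prior, noise level, and proposal width are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in "surrogate" forward model: in the paper this is a sparse-grid interpolant of the solver
surrogate = lambda theta: theta + 0.2 * theta ** 3

data = surrogate(0.8) + 0.05 * rng.standard_normal(20)       # synthetic observations
sigma = 0.05

def log_post(theta):
    """Gaussian likelihood evaluated through the surrogate plus a standard normal prior."""
    resid = data - surrogate(theta)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2 - 0.5 * theta ** 2

# random-walk Metropolis on the surrogate posterior
theta, lp = 0.0, log_post(0.0)
samples = []
for _ in range(5000):
    prop = theta + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
print("posterior mean ~", float(np.mean(samples[1000:])))     # close to the true value 0.8
```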

  2. Features of CPB: a Poisson-Boltzmann solver that uses an adaptive Cartesian grid.

    PubMed

    Fenley, Marcia O; Harris, Robert C; Mackoy, Travis; Boschitsch, Alexander H

    2015-02-01

    The capabilities of an adaptive Cartesian grid (ACG)-based Poisson-Boltzmann (PB) solver (CPB) are demonstrated. CPB solves various PB equations with an ACG, built from a hierarchical octree decomposition of the computational domain. This procedure decreases the number of points required, thereby reducing computational demands. Inside the molecule, CPB solves for the reaction-field component (ϕrf ) of the electrostatic potential (ϕ), eliminating the charge-induced singularities in ϕ. CPB can also use a least-squares reconstruction method to improve estimates of ϕ at the molecular surface. All surfaces, which include solvent excluded, Gaussians, and others, are created analytically, eliminating errors associated with triangulated surfaces. These features allow CPB to produce detailed surface maps of ϕ and compute polar solvation and binding free energies for large biomolecular assemblies, such as ribosomes and viruses, with reduced computational demands compared to other Poisson-Boltzmann equation solvers. The reader is referred to http://www.continuum-dynamics.com/solution-mm.html for how to obtain the CPB software. PMID:25430617

  3. Real-space grids and the Octopus code as tools for the development of new simulation approaches for electronic systems.

    PubMed

    Andrade, Xavier; Strubbe, David; De Giovannini, Umberto; Larsen, Ask Hjorth; Oliveira, Micael J T; Alberdi-Rodriguez, Joseba; Varas, Alejandro; Theophilou, Iris; Helbig, Nicole; Verstraete, Matthieu J; Stella, Lorenzo; Nogueira, Fernando; Aspuru-Guzik, Alán; Castro, Alberto; Marques, Miguel A L; Rubio, Angel

    2015-12-21

    Real-space grids are a powerful alternative for the simulation of electronic systems. One of the main advantages of the approach is the flexibility and simplicity of working directly in real space where the different fields are discretized on a grid, combined with competitive numerical performance and great potential for parallelization. These properties constitute a great advantage at the time of implementing and testing new physical models. Based on our experience with the Octopus code, in this article we discuss how the real-space approach has allowed for the recent development of new ideas for the simulation of electronic systems. Among these applications are approaches to calculate response properties, modeling of photoemission, optimal control of quantum systems, simulation of plasmonic systems, and the exact solution of the Schrödinger equation for low-dimensionality systems. PMID:25721500

  4. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.

  5. Incorporating spike-rate adaptation into a rate code in mathematical and biological neurons.

    PubMed

    Ralston, Bridget N; Flagg, Lucas Q; Faggin, Eric; Birmingham, John T

    2016-06-01

    For a slowly varying stimulus, the simplest relationship between a neuron's input and output is a rate code, in which the spike rate is a unique function of the stimulus at that instant. In the case of spike-rate adaptation, there is no unique relationship between input and output, because the spike rate at any time depends both on the instantaneous stimulus and on prior spiking (the "history"). To improve the decoding of spike trains produced by neurons that show spike-rate adaptation, we developed a simple scheme that incorporates "history" into a rate code. We utilized this rate-history code successfully to decode spike trains produced by 1) mathematical models of a neuron in which the mechanism for adaptation (IAHP) is specified, and 2) the gastropyloric receptor (GPR2), a stretch-sensitive neuron in the stomatogastric nervous system of the crab Cancer borealis, that exhibits long-lasting adaptation of unknown origin. Moreover, when we modified the spike rate either mathematically in a model system or by applying neuromodulatory agents to the experimental system, we found that changes in the rate-history code could be related to the biophysical mechanisms responsible for altering the spiking. PMID:26888106

  6. QOS-aware error recovery in wireless body sensor networks using adaptive network coding.

    PubMed

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485

  7. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    PubMed Central

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485

  8. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
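
    A minimal sketch of forward gain adaptation as described above: estimate a gain, normalize the input vector, quantize it against a gain-normalized codebook, and rescale at the decoder. The RMS gain estimator and the random codebook below are toy stand-ins, not the optimized estimators or codebook design of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.standard_normal((16, 4))                  # toy gain-normalized codebook (16 codevectors, dim 4)

def encode(x, eps=1e-8):
    """Forward gain adaptation: normalize by an RMS gain estimate, then nearest-codevector search."""
    gain = np.sqrt(np.mean(x ** 2)) + eps                # simple gain estimator
    idx = int(np.argmin(np.sum((codebook - x / gain) ** 2, axis=1)))
    return idx, gain

def decode(idx, gain):
    """The decoder multiplies the selected codevector by the estimated gain."""
    return gain * codebook[idx]

x = 5.0 * rng.standard_normal(4)                          # high-level input vector
idx, gain = encode(x)
x_hat = decode(idx, gain)
print("index:", idx, "gain:", round(float(gain), 3), "error:", float(np.linalg.norm(x - x_hat)))
```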

  9. A low order flow/acoustics interaction method for the prediction of sound propagation using 3D adaptive hybrid grids

    SciTech Connect

    Kallinderis, Yannis; Vitsas, Panagiotis A.; Menounou, Penelope

    2012-07-15

    A low-order flow/acoustics interaction method for the prediction of sound propagation and diffraction in unsteady subsonic compressible flow using adaptive 3-D hybrid grids is investigated. The total field is decomposed into the flow field described by the Euler equations, and the acoustics part described by the Nonlinear Perturbation Equations. The method is shown to be capable of predicting monopole sound propagation, while employment of acoustics-guided adapted grid refinement improves the accuracy of capturing the acoustic field. Interaction of sound with solid boundaries is also examined in terms of reflection and diffraction. Sound propagation through an unsteady flow field is examined using static and dynamic flow/acoustics coupling, demonstrating the importance of the latter.

  10. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    NASA Astrophysics Data System (ADS)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

    Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis, such as an auto-stereoscopic functionality, but compression of the huge input data remains a problem. Therefore, efficient 3D data compression is extremely important in the system, and the problems of low temporal consistency and viewpoint correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between a current block to be coded and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required for signaling the decoder to conduct the same process. To evaluate the coding performance, we have implemented the proposed method into the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results have demonstrated that our proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software) discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit-saving, and it increased further when evaluated on synthesized views of virtual viewpoints.
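
    A rough sketch of the mean-depth compensation step stated above: shift the reference block by the difference between block means before using it as the inter predictor. The object-adaptive segmentation and the signaling of side information are omitted; the block sizes and values are illustrative.

```python
import numpy as np

def depth_compensated_predictor(cur_block, ref_block):
    """Compensate the mean-depth difference between the current and reference blocks."""
    offset = cur_block.mean() - ref_block.mean()         # per-block depth offset (side information)
    return ref_block + offset, offset

cur = np.full((8, 8), 120.0) + np.arange(8)              # current depth block
ref = np.full((8, 8), 100.0) + np.arange(8)              # co-located reference block, offset by 20 levels
pred, offset = depth_compensated_predictor(cur, ref)
residual = cur - pred
print("offset:", offset, "residual energy:", float((residual ** 2).sum()))   # residual drops to ~0
```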

  11. Grid generation and adaptation for the Direct Simulation Monte Carlo Method [for complex flows past wedges and cones]

    NASA Technical Reports Server (NTRS)

    Olynick, David P.; Hassan, H. A.; Moss, James N.

    1988-01-01

    A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.
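
    For readers unfamiliar with transfinite interpolation, the following Python sketch builds a 2-D interior grid from four boundary curves using the standard Coons formula; it is a generic illustration and makes no attempt to reproduce the adaptation or time-advancement logic coupled to the DSMC solver.

      import numpy as np

      def transfinite_grid(bottom, top, left, right):
          """Coons/transfinite interpolation of an interior grid from four boundary
          curves. bottom/top are (n,2) arrays, left/right are (m,2) arrays, and the
          corner points of adjoining curves must coincide."""
          n, m = bottom.shape[0], left.shape[0]
          u = np.linspace(0.0, 1.0, n)[:, None, None]   # parameter along bottom/top
          v = np.linspace(0.0, 1.0, m)[None, :, None]   # parameter along left/right
          B, T = bottom[:, None, :], top[:, None, :]
          L, R = left[None, :, :], right[None, :, :]
          P00, P10 = bottom[0], bottom[-1]              # corner points
          P01, P11 = top[0], top[-1]
          grid = ((1 - v) * B + v * T + (1 - u) * L + u * R
                  - ((1 - u) * (1 - v) * P00 + u * (1 - v) * P10
                     + (1 - u) * v * P01 + u * v * P11))
          return grid   # shape (n, m, 2): interior points blended from the boundaries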

  12. Performance of Adaptive Trellis Coded Modulation Applied to MC-CDMA with Bi-orthogonal Keying

    NASA Astrophysics Data System (ADS)

    Tanaka, Hirokazu; Yamasaki, Shoichiro; Haseyama, Miki

    The performance of Generalized Symbol-Rate-Increased (GSRI) pragmatic Adaptive Trellis Coded Modulation (ATCM) applied to a multi-carrier CDMA (MC-CDMA) system with bi-orthogonal keying is analyzed. In the MC-CDMA system considered in this paper, the input sequence of the bi-orthogonal modulator consists of a code-selection bit sequence and a sign bit sequence. An efficient error correction scheme using a Reed-Solomon (RS) code for the code-selection bit sequence has been proposed in [9]. However, since BPSK is employed for the sign bit modulation, no error correction code is applied to it. In order to realize a high-speed wireless system, a multi-level modulation scheme (e.g., MPSK, MQAM) is desired. In this paper, we investigate the performance of MC-CDMA with bi-orthogonal keying employing GSRI ATCM. GSRI TC-MPSK can set the bandwidth expansion ratio arbitrarily while keeping a higher coding gain than the conventional pragmatic TCM scheme. By changing the modulation scheme and the bandwidth expansion ratio (coding rate), this scheme can optimize the performance according to the channel conditions. Performance evaluations by simulation on an AWGN channel and multi-path fading channels are presented. It is shown that the proposed scheme achieves considerably better throughput performance than the conventional scheme.

  13. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  14. A neural mechanism for time-window separation resolves ambiguity of adaptive coding.

    PubMed

    Hildebrandt, K Jannis; Ronacher, Bernhard; Hennig, R Matthias; Benda, Jan

    2015-03-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task--namely, the reliable encoding of the pattern of an acoustic signal--but detrimental for another--the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  15. QoS Differential Scheduling in Cognitive-Radio-Based Smart Grid Networks: An Adaptive Dynamic Programming Approach.

    PubMed

    Yu, Rong; Zhong, Weifeng; Xie, Shengli; Zhang, Yan; Zhang, Yun

    2016-02-01

    As the next-generation power grid, smart grid will be integrated with a variety of novel communication technologies to support the explosive data traffic and the diverse requirements of quality of service (QoS). Cognitive radio (CR), which has the favorable ability to improve the spectrum utilization, provides an efficient and reliable solution for smart grid communications networks. In this paper, we study the QoS differential scheduling problem in the CR-based smart grid communications networks. The scheduler is responsible for managing the spectrum resources and arranging the data transmissions of smart grid users (SGUs). To guarantee the differential QoS, the SGUs are assigned to have different priorities according to their roles and their current situations in the smart grid. Based on the QoS-aware priority policy, the scheduler adjusts the channels allocation to minimize the transmission delay of SGUs. The entire transmission scheduling problem is formulated as a semi-Markov decision process and solved by the methodology of adaptive dynamic programming. A heuristic dynamic programming (HDP) architecture is established for the scheduling problem. By the online network training, the HDP can learn from the activities of primary users and SGUs, and adjust the scheduling decision to achieve the purpose of transmission delay minimization. Simulation results illustrate that the proposed priority policy ensures the low transmission delay of high priority SGUs. In addition, the emergency data transmission delay is also reduced to a significantly low level, guaranteeing the differential QoS in smart grid. PMID:25910254

  16. Grid Work

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Pointwise Inc.'s Gridgen software is a system for the generation of 3D (three dimensional) multiple block, structured grids. Gridgen is a visually-oriented, graphics-based interactive code used to decompose a 3D domain into blocks, distribute grid points on curves, initialize and refine grid points on surfaces, and initialize volume grid points. Gridgen is available to U.S. citizens and American-owned companies by license.

  17. Wind Farm Stabilization by using DFIG with Current Controlled Voltage Source Converters Taking Grid Codes into Consideration

    NASA Astrophysics Data System (ADS)

    Okedu, Kenneth Eloghene; Muyeen, S. M.; Takahashi, Rion; Tamura, Junji

    Recent wind farm grid codes require wind generators to ride through voltage sags, which means that normal power production should be re-initiated once the nominal grid voltage is recovered. However, a fixed-speed wind turbine generator system using an induction generator (IG) has a stability problem similar to the step-out phenomenon of a synchronous generator. On the other hand, a doubly fed induction generator (DFIG) can control its real and reactive powers independently while being operated in variable-speed mode. This paper proposes a new control strategy using DFIGs for stabilizing a wind farm composed of DFIGs and IGs, without incorporating additional FACTS devices. A new current-controlled voltage source converter (CC-VSC) scheme is proposed to control the converters of the DFIG, and the performance is verified by comparing the results with those of a voltage-controlled voltage source converter (VC-VSC) scheme. Another salient feature of this study is the reduction of the number of proportional-integral (PI) controllers used in the rotor-side converter without degrading dynamic and transient performance. Moreover, the DC-link protection scheme during grid faults can be omitted in the proposed scheme, which reduces the overall cost of the system. Extensive simulation analyses using PSCAD/EMTDC are carried out to clarify the effectiveness of the proposed CC-VSC-based control scheme for DFIGs.

  18. Asynchrony adaptation reveals neural population code for audio-visual timing

    PubMed Central

    Roach, Neil W.; Heron, James; Whitaker, David; McGraw, Paul V.

    2011-01-01

    The relative timing of auditory and visual stimuli is a critical cue for determining whether sensory signals relate to a common source and for making inferences about causality. However, the way in which the brain represents temporal relationships remains poorly understood. Recent studies indicate that our perception of multisensory timing is flexible—adaptation to a regular inter-modal delay alters the point at which subsequent stimuli are judged to be simultaneous. Here, we measure the effect of audio-visual asynchrony adaptation on the perception of a wide range of sub-second temporal relationships. We find distinctive patterns of induced biases that are inconsistent with the previous explanations based on changes in perceptual latency. Instead, our results can be well accounted for by a neural population coding model in which: (i) relative audio-visual timing is represented by the distributed activity across a relatively small number of neurons tuned to different delays; (ii) the algorithm for reading out this population code is efficient, but subject to biases owing to under-sampling; and (iii) the effect of adaptation is to modify neuronal response gain. These results suggest that multisensory timing information is represented by a dedicated population code and that shifts in perceived simultaneity following asynchrony adaptation arise from analogous neural processes to well-known perceptual after-effects. PMID:20961905
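
    A hedged, toy Python sketch of the kind of population-coding model invoked here: a handful of channels tuned to different audio-visual delays, a simple response-weighted readout, and adaptation modeled as a change in response gain. The Gaussian tuning width, channel count, and gain values are arbitrary placeholders, not fitted model parameters from the study.

      import numpy as np

      def population_response(delay, preferred, sigma=80.0, gain=None):
          """Gaussian tuning over audio-visual delay (ms); `gain` models
          adaptation-induced response-gain changes per neuron."""
          r = np.exp(-0.5 * ((delay - preferred) / sigma) ** 2)
          return r if gain is None else gain * r

      def decode_delay(r, preferred):
          """Simple population-vector readout: response-weighted mean of preferred
          delays (biased when the population is small and under-sampled)."""
          return np.sum(r * preferred) / np.sum(r)

      preferred = np.linspace(-300, 300, 7)      # a small set of delay-tuned channels (assumption)
      gain = np.ones_like(preferred)
      gain[preferred > 0] *= 0.7                 # hypothetical adaptation to one sign of delay
      r = population_response(50.0, preferred, gain=gain)
      print(decode_delay(r, preferred))          # perceived timing shifts with the gain change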

  19. Adapting a commercial power system simulator for smart grid based system study and vulnerability assessment

    NASA Astrophysics Data System (ADS)

    Navaratne, Uditha Sudheera

    The smart grid is the future of the power grid. Smart meters and the associated network play a major role in the distributed system of the smart grid. Advanced Metering Infrastructure (AMI) can enhance the reliability of the grid, generate efficient energy management opportunities, and enable many innovations around the future smart grid. These innovations involve intense research not only on the AMI network itself but also on the influence an AMI network can have upon the rest of the power grid. This research describes a smart meter testbed with hardware in the loop that can facilitate future research on AMI networks. The smart meters in the testbed were developed such that their functionality can be customized to simulate any given scenario, such as integrating new hardware components into a smart meter or developing new encryption algorithms in firmware. These smart meters were integrated into the power system simulator to simulate the power flow variation in the power grid under different AMI activities. Each smart meter in the network also provides a communication interface to the home area network. This research delivers a testbed for emulating AMI activities and monitoring their effect on the smart grid.

  20. The role of overset grids in the development of the general purpose CFD code

    NASA Technical Reports Server (NTRS)

    Belk, Davy M.

    1995-01-01

    A discussion of the strengths and weaknesses of overset composite grid and solution technology is given, along with a sampling of current work in the area. Major trends are identified, and the observation is made that generalized and hybridized overset methods provide a natural framework for combining disparate mesh types and physics models. Because of this, the author concludes that overset methods will be the foundation for the general purpose computational fluid dynamics programs of the future.

  1. Edge equilibrium code for tokamaks

    SciTech Connect

    Li, Xujing; Drozdov, Vladimir V.

    2014-01-15

    The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.

  2. An adaptive discretization of compressible flow using a multitude of moving Cartesian grids

    NASA Astrophysics Data System (ADS)

    Qiu, Linhai; Lu, Wenlong; Fedkiw, Ronald

    2016-01-01

    We present a novel method for simulating compressible flow on a multitude of Cartesian grids that can rotate and translate. Following previous work, we split the time integration into an explicit step for advection followed by an implicit solve for the pressure. A second order accurate flux based scheme is devised to handle advection on each moving Cartesian grid using an effective characteristic velocity that accounts for the grid motion. In order to avoid the stringent time step restriction imposed by very fine grids, we propose strategies that allow for a fluid velocity CFL number larger than 1. The stringent time step restriction related to the sound speed is alleviated by formulating an implicit linear system in order to find a pressure consistent with the equation of state. This implicit linear system crosses overlapping Cartesian grid boundaries by utilizing local Voronoi meshes to connect the various degrees of freedom obtaining a symmetric positive-definite system. Since a straightforward application of this technique contains an inherent central differencing which can result in spurious oscillations, we introduce a new high order diffusion term similar in spirit to ENO-LLF but solved for implicitly in order to avoid any associated time step restrictions. The method is conservative on each grid, as well as globally conservative on the background grid that contains all other grids. Moreover, a conservative interpolation operator is devised for conservatively remapping values in order to keep them consistent across different overlapping grids. Additionally, the method is extended to handle two-way solid fluid coupling in a monolithic fashion including cases (in the appendix) where solids in close proximity do not properly allow for grid based degrees of freedom in between them.

  3. On the Numerical Dispersion of Electromagnetic Particle-In-Cell Code : Finite Grid Instability

    SciTech Connect

    Meyers, Michael David; Huang, Chengkun; Zeng, Yong; Yi, Sunghwan; Albright, Brian James

    2014-07-15

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the electromagnetic PIC algorithm to analyze the origin of these instabilities. We rigorously derive the faithful 3D numerical dispersion of the PIC algorithm, and then specialize to the Yee FDTD scheme. In particular, we account for the manner in which the PIC algorithm updates and samples the fields and distribution function. Temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme are also explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical 1D modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction.

  4. Rate-adaptive modulation and coding for optical fiber transmission systems

    NASA Astrophysics Data System (ADS)

    Gho, Gwang-Hyun; Kahn, Joseph M.

    2011-01-01

    Rate-adaptive optical transmission techniques adjust the information bit rate based on transmission distance and other factors affecting signal quality. These techniques enable increased bit rates over shorter links, while enabling transmission over longer links when regeneration is not available. They are likely to become more important with increasing network traffic and a continuing evolution toward optically switched mesh networks, which make signal quality more variable. We propose a rate-adaptive scheme using variable-rate forward error correction (FEC) codes and variable constellations with a fixed symbol rate, quantifying how achievable bit rates vary with distance. The scheme uses serially concatenated Reed-Solomon codes and an inner repetition code to vary the code rate, combined with single-carrier polarization-multiplexed M-ary quadrature amplitude modulation (PM-M-QAM) with variable M and digital coherent detection. A rate adaptation algorithm uses the signal-to-noise ratio (SNR) or the FEC decoder input bit-error ratio (BER) estimated by the receiver to determine the FEC code rate and constellation size that maximize the information bit rate while satisfying a target FEC decoder output BER and an SNR margin, yielding a peak rate of 200 Gbit/s in a nominal 50-GHz channel bandwidth. We simulate single-channel transmission through a long-haul fiber system incorporating numerous optical switches, evaluating the impact of fiber nonlinearity and bandwidth narrowing. With zero SNR margin, we achieve bit rates of 200/100/50 Gbit/s over distances of 650/2000/3000 km. Compared to an ideal coding scheme, the proposed scheme exhibits a performance gap ranging from about 6.4 dB at 650 km to 7.5 dB at 5000 km.
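
    The rate adaptation logic can be summarized by a table lookup of the kind sketched below in Python: among the constellation/code-rate pairs whose required SNR (plus margin) is met, pick the one with the highest information bit rate. The table entries, required-SNR thresholds, and symbol rate are hypothetical placeholders, not the values used in the paper.

      import numpy as np

      # Hypothetical table of (constellation size M, FEC code rate, required SNR in dB).
      MCS_TABLE = [
          (4,  0.50,  6.0),
          (4,  0.80,  8.5),
          (16, 0.66, 13.0),
          (16, 0.85, 15.5),
          (64, 0.83, 20.0),
      ]

      def select_rate(snr_db, symbol_rate=28e9, margin_db=0.0, n_pol=2):
          """Choose the constellation/code-rate pair that maximizes the information
          bit rate while leaving `margin_db` of SNR headroom (illustrative only)."""
          best = None
          for M, rate, req_snr in MCS_TABLE:
              if snr_db - margin_db >= req_snr:
                  bit_rate = n_pol * symbol_rate * np.log2(M) * rate
                  if best is None or bit_rate > best[0]:
                      best = (bit_rate, M, rate)
          return best   # None means even the most robust mode cannot close the link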

  5. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  6. On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability

    NASA Astrophysics Data System (ADS)

    Meyers, M. D.; Huang, C.-K.; Zeng, Y.; Yi, S. A.; Albright, B. J.

    2015-09-01

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models.
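
    As a qualitative aid, the Python sketch below evaluates the standard leapfrog/Yee phase-factor representations of omega and k and lists the alias wavenumbers and aliased beam resonances that enter such an analysis; it does not reproduce the full numerical dispersion relation derived in the paper, and the beam-resonance bookkeeping is an illustrative simplification.

      import numpy as np

      def discrete_operators(omega, k, dt, dx):
          """Leapfrog/Yee phase-factor representations entering the numerical
          dispersion relation: [w] = (2/dt) sin(w dt/2), [k] = (2/dx) sin(k dx/2)."""
          return 2.0 / dt * np.sin(0.5 * omega * dt), 2.0 / dx * np.sin(0.5 * k * dx)

      def alias_wavenumbers(k, dx, n_alias=3):
          """Spatial aliases k_nu = k - nu * k_g introduced by grid sampling, k_g = 2*pi/dx."""
          kg = 2.0 * np.pi / dx
          return np.array([k - nu * kg for nu in range(-n_alias, n_alias + 1)])

      def beam_alias_resonances(k, v0, dx, n_alias=3):
          """Frequencies omega = k_nu * v0 at which a cold drifting beam couples to grid
          aliases; their proximity to the principal branch flags candidate finite-grid
          interactions (qualitative sketch only, not the paper's dispersion solver)."""
          return alias_wavenumbers(k, dx, n_alias) * v0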

  7. On the numerical dispersion of electromagnetic particle-in-cell code: Finite grid instability

    SciTech Connect

    Meyers, M.D.; Huang, C.-K.; Zeng, Y.; Yi, S.A.; Albright, B.J.

    2015-09-15

    The Particle-In-Cell (PIC) method is widely used in relativistic particle beam and laser plasma modeling. However, the PIC method exhibits numerical instabilities that can render unphysical simulation results or even destroy the simulation. For electromagnetic relativistic beam and plasma modeling, the most relevant numerical instabilities are the finite grid instability and the numerical Cherenkov instability. We review the numerical dispersion relation of the Electromagnetic PIC model. We rigorously derive the faithful 3-D numerical dispersion relation of the PIC model, for a simple, direct current deposition scheme, which does not conserve electric charge exactly. We then specialize to the Yee FDTD scheme. In particular, we clarify the presence of alias modes in an eigenmode analysis of the PIC model, which combines both discrete and continuous variables. The manner in which the PIC model updates and samples the fields and distribution function, together with the temporal and spatial phase factors from solving Maxwell's equations on the Yee grid with the leapfrog scheme, is explicitly accounted for. Numerical solutions to the electrostatic-like modes in the 1-D dispersion relation for a cold drifting plasma are obtained for parameters of interest. In the succeeding analysis, we investigate how the finite grid instability arises from the interaction of the numerical modes admitted in the system and their aliases. The most significant interaction is due critically to the correct representation of the operators in the dispersion relation. We obtain a simple analytic expression for the peak growth rate due to this interaction, which is then verified by simulation. We demonstrate that our analysis is readily extendable to charge conserving models.

  8. Adapting a Navier-Stokes code to the ICL-DAP

    NASA Technical Reports Server (NTRS)

    Grosch, C. E.

    1985-01-01

    The results of an experiment are reported, i.e., adapting a Navier-Stokes code, originally developed on a serial computer, to concurrent processing on the ICL Distributed Array Processor (DAP). The algorithm used in solving the Navier-Stokes equations is briefly described. The architecture of the DAP and DAP FORTRAN are also described. The modifications of the algorithm required to fit the DAP are given and discussed. Finally, performance results are given and conclusions are drawn.

  9. REMAP: A computer code that transfers node information between dissimilar grids

    SciTech Connect

    Shapiro, A.B.

    1990-04-01

    REMAP is a computer code that transfers the axisymmetric, two-dimensional planar, or three-dimensional temperature field from one finite element mesh to another. The meshes may be arbitrary as far as the number of elements and their geometry. REMAP interpolates or extrapolates the node temperatures from the old mesh to the new mesh using linear, bilinear, or trilinear isoparametric finite element shape functions. REMAP is used to transfer the temperature field from a thermal analysis mesh to a more finely discretized structural analysis mesh when performing a thermal stress analysis. REMAP was designed to be used with the finite element heat transfer codes TOPAZ2D and TOPAZ3D, and the solid mechanics codes NIKE2D and NIKE3D. The I/O formats in REMAP can be easily modified to accept input from other codes (e.g., finite difference) and generate output files for other structural codes. REMAP can be used to transfer any scalar field variable between dissimilar finite element meshes. The idea of a coarse filter followed by a fine filter is used to determine which element from the old mesh contains a node point from the new mesh. The coarse filter determines a subset of elements from the old mesh that may contain the new node point. The fine filter determines the element that contains the new node point. REMAP uses the ray-surface intersection algorithm developed for the FACET code for the fine filter. This algorithm has the added capability to determine which element the node is closest to if the node point lies outside the perimeter of the old mesh. Once an element from the old mesh has been identified as containing or closest to the new node point, the natural coordinates for the node point are calculated. The isoparametric finite element shape functions are calculated next. These shape functions are then used to interpolate or extrapolate the temperatures from the nodes comprising the old element to the new node point.
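
    For a bilinear quadrilateral, the interpolation step described above reduces to evaluating the standard 4-node isoparametric shape functions at the new node's natural coordinates, as in the Python sketch below; the search filters and the natural-coordinate solve are omitted, and the function names are illustrative rather than REMAP routines.

      import numpy as np

      def bilinear_shape_functions(xi, eta):
          """Standard 4-node isoparametric shape functions on [-1,1]x[-1,1]."""
          return 0.25 * np.array([(1 - xi) * (1 - eta),
                                  (1 + xi) * (1 - eta),
                                  (1 + xi) * (1 + eta),
                                  (1 - xi) * (1 + eta)])

      def interpolate_temperature(node_temps, xi, eta):
          """Interpolate (or, for |xi|,|eta| > 1, extrapolate) the temperature at a
          new-mesh node from the 4 nodal temperatures of the containing old-mesh
          element, given the node's natural coordinates (illustrative sketch)."""
          return float(bilinear_shape_functions(xi, eta) @ np.asarray(node_temps))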

  10. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
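
    The three-substep operator splitting can be illustrated on a toy 1-D problem, as in the Python sketch below, where an explicit advection substep is followed by an implicit (backward-Euler) solve of a stiff diffusion substep. This is a generic illustration of the explicit/implicit split, not CRASH code, and the dense linear solve stands in for the iterative solvers a production code would use.

      import numpy as np

      def operator_split_step(u, dt, dx, v=1.0, kappa=50.0):
          """Toy operator-split update: explicit upwind advection, then an implicit
          backward-Euler solve of stiff diffusion on a periodic 1-D grid."""
          # substep A: explicit upwind advection (stable for v*dt/dx <= 1)
          u = u - v * dt / dx * (u - np.roll(u, 1))
          # substep B: implicit diffusion, solve (I - dt*kappa*L) u_new = u
          n = u.size
          L = (np.roll(np.eye(n), 1, axis=1) - 2 * np.eye(n)
               + np.roll(np.eye(n), -1, axis=1)) / dx**2   # periodic Laplacian
          return np.linalg.solve(np.eye(n) - dt * kappa * L, u)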

  11. CRASH: A Block-adaptive-mesh Code for Radiative Shock Hydrodynamics—Implementation and Verification

    NASA Astrophysics Data System (ADS)

    van der Holst, B.; Tóth, G.; Sokolov, I. V.; Powell, K. G.; Holloway, J. P.; Myra, E. S.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.; Fryxell, B.; Drake, R. P.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  12. CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics

    NASA Astrophysics Data System (ADS)

    van der Holst, B.; Toth, G.; Sokolov, I. V.; Powell, K. G.; Holloway, J. P.; Myra, E. S.; Stout, Q.; Adams, M. L.; Morel, J. E.; Drake, R. P.

    2011-01-01

    We describe the CRASH (Center for Radiative Shock Hydrodynamics) code, a block adaptive mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include a flux limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite volume discretization in either one, two, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator split method is used to solve these equations in three substeps: (1) solve the hydrodynamic equations with shock-capturing schemes, (2) a linear advection of the radiation in frequency-logarithm space, and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).

  13. An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images

    PubMed Central

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding scheme with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user-friendly, since the selection of the RoI, its size, the overall code rate, and a number of test features such as the noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770
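
    One plausible shape of the adaptation rule is sketched below in Python: parity length grows as the subblock gets closer to the RoI and as the receiver's fed-back noise estimate rises. The specific proximity weighting, parity bounds, and normalization are hypothetical and are not the formula used by the authors.

      def parity_length(block_dist_to_roi, channel_noise_est,
                        base_parity=16, max_parity=64):
          """Illustrative rule: spend more parity on blocks near the region of
          interest and when the estimated channel noise, fed back from the
          receiver, is high (hypothetical weighting, not the paper's formula)."""
          proximity = 1.0 / (1.0 + block_dist_to_roi)      # 1 at the RoI, decaying outwards
          noise = min(max(channel_noise_est, 0.0), 1.0)    # normalized noise estimate in [0, 1]
          p = base_parity + (max_parity - base_parity) * 0.5 * (proximity + noise)
          return int(round(p))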

  14. An adaptive source-channel coding with feedback for progressive transmission of medical images.

    PubMed

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding scheme with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user-friendly, since the selection of the RoI, its size, the overall code rate, and a number of test features such as the noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770

  15. Adaptive coded aperture imaging in the infrared: towards a practical implementation

    NASA Astrophysics Data System (ADS)

    Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley

    2008-08-01

    An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.

  16. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
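
    The threshold-driven coder selection can be pictured as a quadtree split of the kind sketched below in Python, where blocks whose local activity exceeds a distortion threshold are recursively subdivided and coded with smaller transforms. Using block variance as the difficulty measure, and the particular block sizes, are illustrative assumptions rather than the dissertation's exact criterion.

      import numpy as np

      def select_blocks(img, x, y, size, max_dist, min_size=8, blocks=None):
          """Threshold-driven block splitting: keep large transform blocks in easy
          (low-activity) regions and recursively split difficult regions. Block
          variance stands in for local coding difficulty (illustrative only)."""
          if blocks is None:
              blocks = []
          block = img[y:y + size, x:x + size]
          if size <= min_size or np.var(block) <= max_dist:
              blocks.append((x, y, size))                  # code this block as one unit
          else:
              h = size // 2
              for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
                  select_blocks(img, x + dx, y + dy, h, max_dist, min_size, blocks)
          return blocks

      # usage on a hypothetical 64x64 image tile:
      # blocks = select_blocks(image, 0, 0, 64, max_dist=200.0)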

  17. Less can be more: RNA-adapters may enhance coding capacity of replicators.

    PubMed

    de Boer, Folkert K; Hogeweg, Paulien

    2012-01-01

    It is still not clear how prebiotic replicators evolved towards the complexity found in present day organisms. Within the most realistic scenario for prebiotic evolution, known as the RNA world hypothesis, such complexity has arisen from replicators consisting solely of RNA. Within contemporary life, remarkably many RNAs are involved in modifying other RNAs. In hindsight, such RNA-RNA modification might have helped in alleviating the limits of complexity posed by the information threshold for RNA-only replicators. Here we study the possible role of such self-modification in early evolution, by modeling the evolution of protocells as evolving replicators, which have the opportunity to incorporate these mechanisms as a molecular tool. Evolution is studied towards a set of 25 arbitrary 'functional' structures, while avoiding all other (misfolded) structures, which are considered to be toxic and increase the death-rate of a protocell. The modeled protocells contain a genotype of different RNA-sequences while their phenotype is the ensemble of secondary structures they can potentially produce from these RNA-sequences. One of the secondary structures explicitly codes for a simple sequence-modification tool. This 'RNA-adapter' can block certain positions on other RNA-sequences through antisense base-pairing. The altered sequence can produce an alternative secondary structure, which may or may not be functional. We show that the modifying potential of interacting RNA-sequences enables these protocells to evolve high fitness under high mutation rates. Moreover, our model shows that because of toxicity of misfolded molecules, redundant coding impedes the evolution of self-modification machinery, in effect restraining the evolvability of coding structures. Hence, high mutation rates can actually promote the evolution of complex coding structures by reducing redundant coding. Protocells can successfully use RNA-adapters to modify their genotype-phenotype mapping in order to

  18. Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Modiano, David; Colella, Phillip

    1994-01-01

    A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.
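
    A heavily simplified Python sketch of the refinement bookkeeping follows: cells are flagged on an undivided-gradient criterion and a covering rectangular block is formed. Real Berger-Colella clustering splits and grades such blocks for efficiency, and the threshold criterion here is an assumption for illustration only.

      import numpy as np

      def flag_cells(q, threshold):
          """Flag cells whose undivided gradient magnitude exceeds `threshold`
          (a common refinement criterion; details vary by solver)."""
          gx = np.abs(np.diff(q, axis=0, prepend=q[:1, :]))
          gy = np.abs(np.diff(q, axis=1, prepend=q[:, :1]))
          return (gx + gy) > threshold

      def bounding_block(flags):
          """Smallest logically rectangular block covering all flagged cells; a real
          clustering algorithm would subdivide this to keep blocks efficient."""
          idx = np.argwhere(flags)
          if idx.size == 0:
              return None
          (i0, j0), (i1, j1) = idx.min(axis=0), idx.max(axis=0)
          return (i0, j0, i1 + 1, j1 + 1)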

  19. MAGNETIC GRID

    DOEpatents

    Post, R.F.

    1960-08-01

    An electronic grid is designed employing magnetic forces for controlling the passage of charged particles. The grid is particularly applicable to use in gas-filled tubes such as ignitrons, thyratrons, etc., since the magnetic grid action is impartial to the polarity of the charged particles and, accordingly, the sheath effects encountered with electrostatic grids are not present. The grid comprises a conductor having sections spaced apart and extending in substantially opposite directions in the same plane, the ends of the conductor being adapted for connection to a current source.

  20. TURBOGRID - Turbomachinery applications of grid generation

    NASA Astrophysics Data System (ADS)

    Soni, Bharat K.; Shih, Ming-Hsin

    1990-07-01

    A numerical grid generation algorithm for the field region about turbomachinery systems is presented. The algorithm is incorporated as a module, TIGER (Turbomachinery Interactive Grid genERation), of the modular general-purpose computer code GENIE. Interactive definition of the mathematical description of blades, hub, and shroud with minimal user interaction, adaption of the weighted transfinite interpolation technique for efficient generation of grid blocks/zones, automatic construction of Bezier curves to accomplish slope continuity, and efficient utilization of IRIS graphics capabilities are the salient features of this algorithm, which results in significant time savings for a given turbomachinery geometry-grid application.

  1. Adaptive analog-SSOR iterative method for solving grid equations with nonselfadjoint operators

    NASA Astrophysics Data System (ADS)

    Alekseenko, Elena; Sukhinov, Alexander; Chistyakov, Alexander; Shishenya, Alexander; Roux, Bernard

    2013-04-01

    Motion models of wave processes in the coastal zone are in high demand for the design and construction of coastal surface structures and breakwaters, and also as components of other models. The most common grid-based approach is currently the VOF method. A significant drawback of this method is the need to solve a convection equation to find the fullness of the cells. The numerical solution of this equation leads to strong grid viscosity and "smearing" of the interface. In this paper, we propose a method based on the idea of using cell fullness, as in the VOF method, but which does not require solving a convection equation to update it. Thus, a mathematical model for the wave hydrodynamics problem is developed, describing waves washing ashore and taking into account physical effects such as turbulent exchange, complex domain and coastline geometry, and bottom friction. For the given mathematical model a discrete model is constructed, taking into account dynamic changes of the calculation domain. Discretization is performed on a structured rectangular grid with a newly developed finite-volume technique that takes into account the fullness of the grid cells, which allows the geometry to be described more accurately. The proposed technique improves the real accuracy of the solution for complex domain geometries by improving the approximation of the boundary. A software implementation is developed and numerical experiments on the posed wave hydrodynamics problem are performed. The results show the feasibility of using discrete mathematical models that take into account the fullness of grid cells for the simulation of systems with complex boundary geometry. Numerical experiments show that with this technique sufficiently smooth solutions are obtained even on coarse grids.

  2. Grid generation strategies for turbomachinery configurations

    NASA Astrophysics Data System (ADS)

    Lee, K. D.; Henderson, T. L.

    1991-01-01

    Turbomachinery flow fields involve unique grid generation issues due to their geometrical and physical characteristics. Several strategic approaches are discussed to generate quality grids. The grid quality is further enhanced through blending and adapting. Grid blending smooths the grids locally through averaging and diffusion operators. Grid adaptation redistributes the grid points based on a grid quality assessment. These methods are demonstrated with several examples.
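
    The blending step described above amounts to a few sweeps of a local averaging (diffusion-like) operator over the interior grid points, as in the Python sketch below; the neighbour weighting and iteration count are illustrative choices, not the paper's specific operators.

      import numpy as np

      def blend_grid(xy, n_iter=10, w=0.5):
          """Local grid blending by weighted averaging of the four neighbours
          (a simple diffusion/averaging operator); boundary points are held fixed.
          `xy` has shape (ni, nj, 2) holding the grid point coordinates."""
          g = xy.copy()
          for _ in range(n_iter):
              avg = 0.25 * (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:])
              g[1:-1, 1:-1] = (1.0 - w) * g[1:-1, 1:-1] + w * avg
          return g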

  3. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

    SciTech Connect

    Jablonowski, Christiane

    2015-07-14

    The research investigates and advances strategies for bridging the scale discrepancies between local, regional, and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway for modeling these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing, and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically adaptive grids that can capture flow fields of interest, like tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies, and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project

  4. Less Can Be More: RNA-Adapters May Enhance Coding Capacity of Replicators

    PubMed Central

    de Boer, Folkert K.; Hogeweg, Paulien

    2012-01-01

    It is still not clear how prebiotic replicators evolved towards the complexity found in present day organisms. Within the most realistic scenario for prebiotic evolution, known as the RNA world hypothesis, such complexity has arisen from replicators consisting solely of RNA. Within contemporary life, remarkably many RNAs are involved in modifying other RNAs. In hindsight, such RNA-RNA modification might have helped in alleviating the limits of complexity posed by the information threshold for RNA-only replicators. Here we study the possible role of such self-modification in early evolution, by modeling the evolution of protocells as evolving replicators, which have the opportunity to incorporate these mechanisms as a molecular tool. Evolution is studied towards a set of 25 arbitrary ‘functional’ structures, while avoiding all other (misfolded) structures, which are considered to be toxic and increase the death-rate of a protocell. The modeled protocells contain a genotype of different RNA-sequences while their phenotype is the ensemble of secondary structures they can potentially produce from these RNA-sequences. One of the secondary structures explicitly codes for a simple sequence-modification tool. This ‘RNA-adapter’ can block certain positions on other RNA-sequences through antisense base-pairing. The altered sequence can produce an alternative secondary structure, which may or may not be functional. We show that the modifying potential of interacting RNA-sequences enables these protocells to evolve high fitness under high mutation rates. Moreover, our model shows that because of toxicity of misfolded molecules, redundant coding impedes the evolution of self-modification machinery, in effect restraining the evolvability of coding structures. Hence, high mutation rates can actually promote the evolution of complex coding structures by reducing redundant coding. Protocells can successfully use RNA-adapters to modify their genotype-phenotype mapping in

  5. Adaptive inter color residual prediction for efficient red-green-blue intra coding

    NASA Astrophysics Data System (ADS)

    Jeong, Jinwoo; Choe, Yoonsik; Kim, Yong-Goo

    2011-07-01

    Intra coding of an RGB video is important to many high fidelity multimedia applications because video acquisition is mostly done in RGB space, and the coding of decorrelated color video loses its virtue in high quality ranges. In order to improve the compression performance of an RGB video, this paper proposes an inter color prediction using adaptive weights. For making full use of spatial, as well as inter color correlation of an RGB video, the proposed scheme is based on a residual prediction approach, and thus the incorporated prediction is performed on the transformed frequency components of spatially predicted residual data of each color plane. With the aid of efficient prediction employing frequency domain inter color residual correlation, the proposed scheme achieves up to 24.3% of bitrate reduction, compared to the common mode of H.264/AVC high 4:4:4 intra profile.
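
    The flavour of the adaptive-weight prediction can be conveyed by the Python sketch below, which fits a single least-squares scaling weight per block and predicts the current plane's spatially predicted residual from the reference plane's. The per-block scalar weight and the spatial-domain formulation are simplifications; the proposed scheme operates on transformed frequency components with its own weight adaptation.

      import numpy as np

      def predict_residual(ref_resid, cur_resid):
          """Illustrative adaptive-weight inter-colour residual prediction: fit a
          scaling weight by least squares and predict the current plane's residual
          from the reference plane's residual (not the paper's exact scheme)."""
          r = ref_resid.ravel()
          c = cur_resid.ravel()
          w = float(r @ c) / float(r @ r) if r.any() else 0.0   # adaptive weight
          prediction = w * ref_resid
          second_residual = cur_resid - prediction               # what would actually be coded
          return w, prediction, second_residual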

  6. AstroBEAR: Adaptive Mesh Refinement Code for Ideal Hydrodynamics & Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2011-04-01

    AstroBEAR is a modular hydrodynamic and magnetohydrodynamic code environment designed for a variety of astrophysical applications. It uses the BEARCLAW package, a multidimensional, Eulerian computational code used to solve hyperbolic systems of equations. AstroBEAR allows adaptive-mesh-refinement (AMR) simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either Cartesian or curvilinear coordinates. Parallel applications are supported through the MPI architecture. AstroBEAR is written in Fortran 90/95 using standard libraries. AstroBEAR supports hydrodynamic (HD) and magnetohydrodynamic (MHD) applications using a variety of spatial and temporal methods. MHD simulations are kept divergence-free via the constrained transport (CT) methods of Balsara & Spicer. Three different equation-of-state environments are available: ideal gas, gas with differing isentropic γ, and the analytic Thomas-Fermi formulation of A.R. Bell [2]. Current work is being done to develop a more advanced real gas equation of state.

  7. Pilot-Assisted Adaptive Channel Estimation for Coded MC-CDMA with ICI Cancellation

    NASA Astrophysics Data System (ADS)

    Yui, Tatsunori; Tomeba, Hiromichi; Adachi, Fumiyuki

    One of the promising wireless access techniques for the next generation mobile communications systems is multi-carrier code division multiple access (MC-CDMA). MC-CDMA can provide good transmission performance owing to the frequency diversity effect in a severe frequency-selective fading channel. However, the bit error rate (BER) performance of coded MC-CDMA is inferior to that of orthogonal frequency division multiplexing (OFDM) due to the residual inter-code interference (ICI) after frequency-domain equalization (FDE). Recently, we proposed a frequency-domain soft interference cancellation (FDSIC) to reduce the residual ICI and confirmed by computer simulation that the MC-CDMA with FDSIC provides better BER performance than OFDM. However, ideal channel estimation was assumed. In this paper, we propose adaptive decision-feedback channel estimation (ADFCE) and evaluate by computer simulation the average BER and throughput performances of turbo-coded MC-CDMA with FDSIC. We show that even if a practical channel estimation is used, MC-CDMA with FDSIC can still provide better performance than OFDM.
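
    A minimal sketch of decision-feedback channel tracking on one subcarrier is given below in Python: the running estimate is blended with an instantaneous estimate formed from the received symbol and the hard decision. The first-order forgetting factor and the scalar-per-subcarrier model are assumptions for illustration and do not reproduce the proposed ADFCE algorithm.

      def update_channel_estimate(h_est, rx_symbol, decided_symbol, alpha=0.1):
          """Decision-feedback update of a per-subcarrier channel estimate: blend the
          previous estimate with the instantaneous estimate rx/decided using a
          forgetting factor alpha (illustrative first-order tracker only)."""
          inst = rx_symbol / decided_symbol
          return (1.0 - alpha) * h_est + alpha * inst

      # usage: start from a pilot-based estimate, then track using hard decisions
      h = 1.0 + 0.0j                       # pilot-assisted initial estimate (hypothetical)
      rx, dec = 0.9 + 0.1j, 1.0 + 0.0j     # received and decided symbols (hypothetical)
      h = update_channel_estimate(h, rx, dec)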

  8. An experimental infrared sensor using adaptive coded apertures for enhanced resolution

    NASA Astrophysics Data System (ADS)

    Gordon, Neil T.; de Villiers, Geoffrey D.; Ridley, Kevin D.; Bennett, Charlotte R.; McNie, Mark E.; Proudler, Ian K.; Russell, Lee; Slinger, Christopher W.; Gilholm, Kevin

    2010-08-01

    Adaptive coded aperture imaging (ACAI) has the potential to greatly enhance the performance of sensing systems by allowing sub-detector-pixel imaging and tracking resolution. A small experimental system has been set up to allow the practical demonstration of these benefits in the mid-infrared, as well as to investigate the calibration and stability of the system. The system can also be used to test modeling of similar ACAI systems in the infrared. The demonstrator can use either a set of fixed masks or a novel MOEMS adaptive transmissive spatial light modulator. This paper discusses the design and testing of the system, including the development of novel decoding algorithms, and some initial imaging results are presented.

  9. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. II. IMPLEMENTATION AND TESTS

    SciTech Connect

    McNally, Colin P.; Mac Low, Mordecai-Mark; Maron, Jason L. E-mail: jmaron@amnh.org

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is required to ensure the particles fill the computational volume and gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. We have parallelized the code by adapting the framework provided by GADGET-2. A set of standard test problems, including 10^-6 amplitude linear magnetohydrodynamics waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities is presented. Finally, we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. This paper documents the Phurbas algorithm as implemented in Phurbas version 1.1.

  10. Effects of Selective Adaptation on Coding Sugar and Salt Tastes in Mixtures

    PubMed Central

    Goyert, Holly F.; Formaker, Bradley K.; Hettinger, Thomas P.

    2012-01-01

    Little is known about coding of taste mixtures in complex dynamic stimulus environments. A protocol developed for odor stimuli was used to test whether rapid selective adaptation extracted sugar and salt component tastes from mixtures as it did component odors. Seventeen human subjects identified taste components of “salt + sugar” mixtures. In 4 sessions, 16 adapt–test stimulus pairs were presented as atomized, 150-μL “taste puffs” to the tongue tip to simulate odor sniffs. Stimuli were NaCl, sucrose, “NaCl + sucrose,” and water. The sugar was 98% identified but the suppressed salt 65% identified in unadapted mixtures of 2 concentrations of NaCl, 0.1 or 0.05 M, and sucrose at 3 times those concentrations, 0.3 or 0.15 M. Rapid selective adaptation decreased identification of sugar and salt preadapted ambient components to 35%, well below the 74% self-adapted level, despite variation in stimulus concentration and adapting time (<5 or >10 s). The 96% identification of sugar and salt extra mixture components was as certain as identification of single compounds. The results revealed that salt–sugar mixture suppression, dependent on relative mixture-component concentration, was mutual. Furthermore, like odors, stronger and recent tastes are emphasized in dynamic experimental conditions replicating natural situations. PMID:22562765

  11. MARE2DEM: an open-source code for anisotropic inversion of controlled-source electromagnetic and magnetotelluric data using parallel adaptive 2D finite elements (Invited)

    NASA Astrophysics Data System (ADS)

    Key, K.

    2013-12-01

    This work announces the public release of an open-source inversion code named MARE2DEM (Modeling with Adaptively Refined Elements for 2D Electromagnetics). Although initially designed for the rapid inversion of marine electromagnetic data, MARE2DEM now supports a wide variety of acquisition configurations for both offshore and onshore surveys that utilize electric and magnetic dipole transmitters or magnetotelluric plane waves. The model domain is flexibly parameterized using a grid of arbitrarily shaped polygonal regions, allowing for complicated structures such as topography or seismically imaged horizons to be easily assimilated. MARE2DEM efficiently solves the forward problem in parallel by dividing the input data parameters into smaller subsets using a parallel data decomposition algorithm. The data subsets are then solved in parallel using an automatic adaptive finite element method that iteratively solves the forward problem on successively refined finite element meshes until a specified accuracy tolerance is met, thus freeing the end user from the burden of designing an accurate numerical modeling grid. Regularized non-linear inversion for isotropic or anisotropic conductivity is accomplished with a new implementation of Occam's method referred to as fast-Occam, which is able to minimize the objective function in far fewer forward evaluations than required by the original method. This presentation will review the theoretical considerations behind MARE2DEM and use a few recent offshore EM data sets to demonstrate its capabilities and to showcase the software interface tools that streamline model building and data inversion.
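
    The refine-until-tolerance loop at the heart of the automatic adaptive method can be illustrated on a much simpler 1D interpolation problem. The sketch below assumes a midpoint-based error indicator and bisection refinement; MARE2DEM's actual 2.5D finite-element solver, error estimation, and parallel data decomposition are not reproduced.

    ```python
    import numpy as np

    def adaptively_refine(f, a, b, tol=1e-4, max_iter=30):
        """Toy 1D analogue of adaptive refinement: approximate f with piecewise-linear
        interpolation, estimate the error on each element by comparing with the
        midpoint value, and bisect the offending elements until every local error
        estimate falls below tol."""
        nodes = np.array([a, b], dtype=float)
        for _ in range(max_iter):
            mids = 0.5 * (nodes[:-1] + nodes[1:])
            linear = 0.5 * (f(nodes[:-1]) + f(nodes[1:]))   # PL value at midpoints
            err = np.abs(f(mids) - linear)                  # local error indicator
            if err.max() < tol:
                break
            # refine every element whose indicator exceeds the tolerance
            nodes = np.sort(np.concatenate([nodes, mids[err >= tol]]))
        return nodes

    mesh = adaptively_refine(lambda x: np.exp(-50 * (x - 0.3) ** 2), 0.0, 1.0)
    print(len(mesh), "nodes, clustered near x = 0.3")
    ```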

  12. An adaptive quadrature-free implementation of the high-order spectral volume method on unstructured grids

    NASA Astrophysics Data System (ADS)

    Harris, Robert Evan

    2008-10-01

    An efficient implementation of the high-order spectral volume (SV) method is presented for multi-dimensional conservation laws on unstructured grids. In the SV method, each simplex cell is called a spectral volume (SV), and the SV is further subdivided into polygonal (2D), or polyhedral (3D) control volumes (CVs) to support high-order data reconstructions. In the traditional implementation, Gauss quadrature formulas are used to approximate the flux integrals on all faces. In the new approach, a nodal set is selected and used to reconstruct a high-order polynomial approximation for the flux vector, and then the flux integrals on the internal faces are computed analytically, without the need for Gauss quadrature formulas. This gives a significant advantage over the traditional SV method in efficiency and ease of implementation. Fundamental properties of the new SV implementation are studied and high-order accuracy is demonstrated for linear and nonlinear advection equations, and the Euler equations. The new quadrature-free approach is then extended to handle local adaptive hp-refinement (grid and order refinement). Efficient edge-based adaptation utilizing a binary tree search algorithm is employed. Several different adaptation criteria which focus computational effort near high gradient regions are presented. Both h- and p-refinements are presented in a general framework where it is possible to perform either or both on any grid cell at any time. Several well-known inviscid flow test cases, subjected to various levels of adaptation, are utilized to demonstrate the effectiveness of the method. An analysis of the accuracy and stability properties of the spectral volume (SV) method is then presented. The current work seeks to address the issue of stability, as well as polynomial quality, in the design of SV partitions. A new approach is presented, which efficiently locates stable partitions by means of constrained minimization. Once stable partitions are located, a
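
    The h-versus-p decision described above can be sketched with a simple marking routine: cells whose gradient indicator exceeds a threshold are flagged, and a smoothness estimate decides whether they receive order enrichment (p) or subdivision (h). The thresholds, the smoothness measure, and the function names below are illustrative assumptions, not the criteria actually used in this work.

    ```python
    import numpy as np

    def mark_hp_refinement(grad, smoothness, grad_tol=1.0, smooth_tol=0.8):
        """Illustrative hp-refinement marking: cells with a large gradient indicator
        are flagged; among those, smooth cells get order enrichment (p) and
        non-smooth cells (likely discontinuities) get subdivision (h)."""
        flagged = grad > grad_tol
        p_refine = flagged & (smoothness > smooth_tol)   # smooth, high-gradient cell
        h_refine = flagged & ~p_refine                   # likely a shock/discontinuity
        return h_refine, p_refine

    grad = np.array([0.2, 3.1, 5.0, 0.4])
    smooth = np.array([0.9, 0.95, 0.1, 0.5])
    h_mark, p_mark = mark_hp_refinement(grad, smooth)
    print("h-refine cells:", np.where(h_mark)[0], "p-refine cells:", np.where(p_mark)[0])
    ```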

  13. COLLABORATIVE RESEARCH: CONTINUOUS DYNAMIC GRID ADAPTATION IN A GLOBAL ATMOSPHERIC MODEL: APPLICATION AND REFINEMENT

    SciTech Connect

    Prusa, Joseph

    2012-05-08

    This project had goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the physics of the NCAR Community Atmospheric Model (CAM). Effort discussed below shows how we improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate the grid stretching and ability to simulate very well a wide range of scales, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work summarized here. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited.

  14. GridMan: A grid manipulation system

    NASA Technical Reports Server (NTRS)

    Eiseman, Peter R.; Wang, Zhu

    1992-01-01

    GridMan is an interactive grid manipulation system. It operates on grids to produce new grids which conform to user demands. The input grids are not constrained to come from any particular source. They may be generated by algebraic methods, elliptic methods, hyperbolic methods, parabolic methods, or some combination of methods. The methods are included in the various available structured grid generation codes. These codes perform the basic assembly function for the various elements of the initial grid. For block structured grids, the assembly can be quite complex due to a large number of block corners, edges, and faces for which various connections and orientations must be properly identified. The grid generation codes are distinguished among themselves by their balance between interactive and automatic actions and by their modest variations in control. The basic form of GridMan provides a much more substantial level of grid control and will take its input from any of the structured grid generation codes. The communication link to the outside codes is a data file which contains the grid or section of grid.

  15. Dynamic optical aberration correction with adaptive coded apertures techniques in conformal imaging

    NASA Astrophysics Data System (ADS)

    Li, Yan; Hu, Bin; Zhang, Pengbin; Zhang, Binglong

    2015-02-01

    Conformal imaging systems are confronted with dynamic aberrations during optical design. In classical optical design, meeting combined requirements on field of view, optical speed, environmental adaptation, and imaging quality can be achieved only by introducing increasingly complex aberration correctors. In recent years of computational imaging, adaptive coded aperture techniques, which have several potential advantages over more traditional optical systems, have proven particularly suitable for military infrared imaging systems. The merits of this new concept include low mass, volume and moments of inertia, potentially lower costs, graceful failure modes, and steerable fields of regard with no macroscopic moving parts. An example conformal imaging system design in which the elements of a set of binary coded aperture masks are optimized is presented in this paper; simulation results show that the optical performance is closely related to the mask design and to the optimization of the reconstruction algorithm. As a dynamic aberration corrector, a binary-amplitude mask located at the aperture stop is optimized to mitigate dynamic optical aberrations when the field of regard changes, while allowing sufficient information to be recorded by the detector for the recovery of a sharp image using digital image restoration in the conformal optical system.

  16. An adaptive coded aperture imager: building, testing and trialing a super-resolving terrestrial demonstrator

    NASA Astrophysics Data System (ADS)

    Slinger, Christopher W.; Bennett, Charlotte R.; Dyer, Gavin; Gilholm, Kevin; Gordon, Neil; Huckridge, David; McNie, Mark; Penney, Richard W.; Proudler, Ian K.; Rice, Kevin; Ridley, Kevin D.; Russell, Lee; de Villiers, Geoffrey D.; Watson, Philip J.

    2011-09-01

    There is an increasingly important requirement for day and night, wide field of view imaging and tracking for both imaging and sensing applications. Applications include military, security and remote sensing. We describe the development of a proof of concept demonstrator of an adaptive coded-aperture imager operating in the mid-wave infrared to address these requirements. This consists of a coded-aperture mask, a set of optics and a 4k x 4k focal plane array (FPA). This system can produce images with a resolution better than that achieved by the detector pixel itself (i.e. superresolution) by combining multiple frames of data recorded with different coded-aperture mask patterns. This superresolution capability has been demonstrated both in the laboratory and in imaging of real-world scenes, the highest resolution achieved being ½ the FPA pixel pitch. The resolution for this configuration is currently limited by vibration and theoretically ¼ pixel pitch should be possible. Comparisons have been made between conventional and ACAI solutions to these requirements and show significant advantages in size, weight and cost for the ACAI approach.

  17. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex

    PubMed Central

    Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo

    2015-01-01

    The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70–200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys’ behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators. PMID:26266537

  18. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
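
    The per-region predictor choice is a standard Lagrangian rate-distortion decision, which can be sketched as follows. The cost arrays and the value of lambda are hypothetical; the paper's actual distortion measure, rate model, and candidate predictor set are not reproduced.

    ```python
    import numpy as np

    def select_predictors(distortion, rate, lam):
        """Per-region rate-distortion optimal predictor choice: distortion[r, p] and
        rate[r, p] are the (hypothetical) costs of coding region r with candidate
        predictor p, and the chosen predictor minimizes J = D + lambda * R."""
        cost = distortion + lam * rate
        return np.argmin(cost, axis=1)

    # 3 regions, 2 candidate predictors (e.g., butterfly vs. a spatially adapted one)
    D = np.array([[4.0, 2.5], [1.0, 1.2], [3.0, 3.1]])
    R = np.array([[10.0, 14.0], [8.0, 9.0], [12.0, 11.0]])
    print(select_predictors(D, R, lam=0.2))   # predictor index chosen for each region
    ```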

  19. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    PubMed

    Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo

    2015-08-01

    The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators. PMID:26266537

  20. Impact of Load Balancing on Unstructured Adaptive Grid Computations for Distributed-Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak; Simon, Horst D.

    1996-01-01

    The computational requirements for an adaptive solution of unsteady problems change as the simulation progresses. This causes workload imbalance among processors on a parallel machine which, in turn, requires significant data movement at runtime. We present a new dynamic load-balancing framework, called JOVE, that balances the workload across all processors with a global view. Whenever the computational mesh is adapted, JOVE is activated to eliminate the load imbalance. JOVE has been implemented on an IBM SP2 distributed-memory machine in MPI for portability. Experimental results for two model meshes demonstrate that mesh adaption with load balancing gives more than a sixfold improvement over one without load balancing. We also show that JOVE gives a 24-fold speedup on 64 processors compared to sequential execution.
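
    A global-view rebalancing step can be approximated by a greedy longest-processing-time heuristic: after each adaptation, (sub)grid weights are assigned, largest first, to the currently least-loaded processor. This is a generic stand-in for illustration; JOVE's actual framework, cost model, and data-migration machinery are not shown.

    ```python
    def balance_load(cell_weights, num_procs):
        """Greedy rebalance: assign work items to the least-loaded processor,
        largest first, so that per-processor loads end up roughly equal."""
        loads = [0.0] * num_procs
        assignment = {}
        for cell, w in sorted(cell_weights.items(), key=lambda kv: -kv[1]):
            p = min(range(num_procs), key=lambda i: loads[i])
            assignment[cell] = p
            loads[p] += w
        return assignment, loads

    # usage: 20 adapted sub-grids with uneven work estimates, 4 processors
    parts, loads = balance_load({"blk%d" % i: (i % 5) + 1 for i in range(20)}, num_procs=4)
    print(loads)   # roughly equal per-processor work after adaptation
    ```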

  1. mGrid: A load-balanced distributed computing environment for the remote execution of the user-defined Matlab code

    PubMed Central

    Karpievitch, Yuliya V; Almeida, Jonas S

    2006-01-01

    Background Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. Moreover, the web-based infrastructure of mGrid allows for it

  2. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-01

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening with an overhead from 25% to 42.9% provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, which covers a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC-coded modulation with the same code rate. PMID:27607718
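
    Rate adaptation by shortening can be illustrated with simple arithmetic: fixing s information bits of an (n, k) mother code to zero (and not transmitting them) yields an (n - s, k - s) code with a lower rate and a higher overhead. The code parameters below are made up for illustration and are not the codes emulated in the paper.

    ```python
    def shortened_rate(n, k, s):
        """Code rate and FEC overhead of an (n, k) LDPC code shortened by s info bits:
        shortening removes s message positions, giving an (n - s, k - s) code."""
        n_s, k_s = n - s, k - s
        rate = k_s / n_s
        overhead = (n_s - k_s) / k_s      # parity bits per information bit
        return rate, overhead

    # usage with a hypothetical rate-0.8 mother code
    for s in (0, 2000, 4000):
        r, oh = shortened_rate(n=20000, k=16000, s=s)
        print(f"shorten {s:5d}: rate = {r:.3f}, overhead = {oh * 100:.1f}%")
    ```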

  3. Edge Equilibrium Code (EEC) For Tokamaks

    SciTech Connect

    Li, Xujling

    2014-02-24

    The edge equilibrium code (EEC) described in this paper is developed for simulations of the near edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids

  4. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes

    PubMed Central

    2016-01-01

    Background The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Objective Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. Methods After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program based on the Rasch partial credit model to simulate 1000 patients’ true scores following a standard normal distribution. The CAT was compared to two other scenarios of answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. Results We found that the CAT can be more efficient for patients answering questions (i.e., fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. Conclusions With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access. PMID:26935793
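
    The core CAT step, picking the next item that is most informative at the current ability estimate, can be sketched for the dichotomous Rasch model, where the item information is p(1 - p). The study used the Rasch partial credit model and a Web/QR-code front end; the difficulties and function names below are illustrative assumptions.

    ```python
    import numpy as np

    def rasch_prob(theta, b):
        """Probability of a positive response under the dichotomous Rasch model."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def next_item(theta_hat, difficulties, administered):
        """Pick the unadministered item with maximum Fisher information p(1 - p)
        at the current ability estimate -- the basic CAT selection step."""
        p = rasch_prob(theta_hat, difficulties)
        info = p * (1.0 - p)
        info[list(administered)] = -np.inf     # never re-administer an item
        return int(np.argmax(info))

    b = np.linspace(-2, 2, 9)                  # hypothetical item difficulties
    print(next_item(theta_hat=0.3, difficulties=b, administered={4}))
    ```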

  5. ADAPTIVE-GRID SIMULATION OF GROUNDWATER FLOW IN HETEROGENEOUS AQUIFERS. (R825689C068)

    EPA Science Inventory

    Abstract

    The prediction of contaminant transport in porous media requires the computation of the flow velocity. This work presents a methodology for high-accuracy computation of flow in a heterogeneous isotropic formation, employing a dual-flow formulation and adaptive...

  6. Amino acids and our genetic code: a highly adaptive and interacting defense system.

    PubMed

    Verheesen, R H; Schweitzer, C M

    2012-04-01

    Since the discovery of the genetic code, Mendel's heredity theory and Darwin's evolution theory, science believes that adaptations to the environment are processes in which the adaptation of the genes is a matter of probability, in which finally the species that evolved by chance will survive. We hypothesize that evolution and the adaptation of the genes is a well-organized fully adaptive system in which there is no rigidity of the genes. The dividing of the genes will take place in line with the environment to be expected, sensed through the mother. The encoding triplets can encode for more than one amino acid depending on the availability of the amino acids and the needed micronutrients. Those nutrients can cause disease but also prevent diseases, even cancer and autoimmunity. In fact, we hypothesize that autoimmunity is an effective process of the organism to clear suboptimal proteins, formed due to amino acid and micronutrient deficiencies. Only when deficiencies sustain, disease will develop; otherwise the autoantibodies will function as all antibodies function, in a protective way. Furthermore, we hypothesize that essential amino acids are less important than nonessential amino acids (NEA). Species developed the ability to produce the nonessential amino acids themselves because they were not provided by food sufficiently. In contrast, essential amino acids are widely available, without any evolutionary pressure. Since we can only produce small amounts of NEA and the availability in food can be reasoned to be too low, they are still our main concern in amino acid availability. In conclusion, we hypothesize that increasing health will only be possible by improving our natural environment and living circumstances, not by changing the genes, since they are our last line of defense in surviving our environmental changes. PMID:22289341

  7. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D

    2012-09-01

    Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered due to the computational cost associated with numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for the PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly with very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation. The results show that the aSG-hSC is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional
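
    The hierarchical-surplus error indicator can be illustrated in 1D with a piecewise-linear hierarchical basis: the surplus at a newly added midpoint is the difference between the function value there and the value interpolated from its parents, and only points with a large surplus spawn children. This toy sketch ignores the multi-dimensional, higher-order, and anisotropic aspects of the actual aSG-hSC method.

    ```python
    import numpy as np

    def adaptive_surplus_grid(f, tol=1e-3, max_level=12):
        """1D sketch of surplus-driven adaptivity on [0, 1]: the surplus of a new
        midpoint is f(m) minus the value interpolated from its two parents, and
        only points whose |surplus| exceeds tol are refined further."""
        grid = {0.0: f(0.0), 1.0: f(1.0)}
        active = [(0.0, 1.0)]
        for _ in range(max_level):
            next_active = []
            for left, right in active:
                mid = 0.5 * (left + right)
                surplus = f(mid) - 0.5 * (grid[left] + grid[right])
                grid[mid] = f(mid)
                if abs(surplus) > tol:                 # refine only where needed
                    next_active += [(left, mid), (mid, right)]
            if not next_active:
                break
            active = next_active
        return grid

    g = adaptive_surplus_grid(lambda x: np.tanh(30 * (x - 0.6)))
    print(len(g), "grid points, concentrated near the steep region at x = 0.6")
    ```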

  8. Vertical Scan (V-SCAN) for 3-D Grid Adaptive Mesh Refinement for an atmospheric Model Dynamical Core

    NASA Astrophysics Data System (ADS)

    Andronova, N. G.; Vandenberg, D.; Oehmke, R.; Stout, Q. F.; Penner, J. E.

    2009-12-01

    One of the major building blocks of a rigorous representation of cloud evolution in global atmospheric models is a parallel adaptive grid MPI-based communication library (an Adaptive Blocks for Locally Cartesian Topologies library -- ABLCarT), which manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits a block as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells - the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. A major performance concern with adaptive mesh refinement is how to represent calculations that need to be sequenced in a particular order along a direction, such as calculating integrals along a specific path (e.g. atmospheric pressure or geopotential in the vertical dimension). This concern is compounded if the blocks have varying levels of refinement, or are scattered across different processors, as can be the case in parallel computing. In this paper we describe an implementation in ABLCarT of a vertical scan operation, which allows computing along vertical paths in the correct order across blocks transparent to their resolution and processor location. We test this functionality on a 2D and a 3D advection problem, which tests the performance of the model’s dynamics (transport) and physics (sources and sinks) for different model resolutions needed for inclusion of cloud formation.

  9. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  10. Adaptive coded spreading OFDM signal for dynamic-λ optical access network

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2015-12-01

    This paper proposes and experimentally demonstrates a novel adaptive coded spreading (ACS) orthogonal frequency division multiplexing (OFDM) signal for dynamic distributed optical ring-based access network. The wavelength can be assigned to different remote nodes (RNs) according to the traffic demand of optical network unit (ONU). The ACS can provide dynamic spreading gain to different signals according to the split ratio or transmission length, which offers flexible power budget for the network. A 10×13.12 Gb/s OFDM access with ACS is successfully demonstrated over two RNs and 120 km transmission in the experiment. The demonstrated method may be viewed as a promising approach for future optical metro access networks.

  11. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
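
    The block/tree organization described above can be sketched as a small data structure: each tree node owns a fixed-size logically Cartesian mesh, and refining a node creates children covering its quadrants (octants in 3D). This is an illustrative Python analogue, not PARAMESH's Fortran 90 implementation.

    ```python
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Block:
        """Minimal quad-tree block: each node carries its own small logically
        Cartesian mesh; refining a block replaces it with four half-size children."""
        x0: float; y0: float; size: float
        nx: int = 8; ny: int = 8                     # cells per block (fixed)
        children: Optional[List["Block"]] = None

        def refine(self):
            h = self.size / 2.0
            self.children = [Block(self.x0 + i * h, self.y0 + j * h, h, self.nx, self.ny)
                             for j in (0, 1) for i in (0, 1)]

        def leaves(self):
            if self.children is None:
                return [self]
            return [leaf for c in self.children for leaf in c.leaves()]

    root = Block(0.0, 0.0, 1.0)
    root.refine()
    root.children[0].refine()                        # refine one quadrant further
    print(len(root.leaves()), "leaf blocks")         # 7 = 3 coarse + 4 fine
    ```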

  12. The NASPE/BPEG generic pacemaker code for antibradyarrhythmia and adaptive-rate pacing and antitachyarrhythmia devices.

    PubMed

    Bernstein, A D; Camm, A J; Fletcher, R D; Gold, R D; Rickards, A F; Smyth, N P; Spielman, S R; Sutton, R

    1987-07-01

    A new generic pacemaker code, derived from and compatible with the Revised ICHD Code, was proposed jointly by the North American Society of Pacing and Electrophysiology (NASPE) Mode Code Committee and the British Pacing and Electrophysiology Group (BPEG), and has been adopted by the NASPE Board of Trustees. It is abbreviated as the NBG (for "NASPE/BPEG Generic") Code, and was developed to permit extension of the generic-code concept to pacemakers whose escape rate is continuously controlled by monitoring some physiologic variable, rather than determined by fixed escape intervals measured from stimuli or sensed depolarizations, and to antitachyarrhythmia devices including cardioverters and defibrillators. The NASPE/BPEG Code incorporates an "R" in the fourth position to signify rate modulation (adaptive-rate pacing), and one of four letters in the fifth position to indicate the presence of antitachyarrhythmia-pacing capability or of cardioversion or defibrillation functions. PMID:2441363

  13. White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification

    NASA Astrophysics Data System (ADS)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-01

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  14. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    PubMed

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and permits secret sharing for an arbitrary but no less than threshold-value number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908

  15. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    PubMed Central

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and permits secret sharing for an arbitrary but no less than threshold-value number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908

  16. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  17. A New Real-coded Genetic Algorithm with an Adaptive Mating Selection for UV-landscapes

    NASA Astrophysics Data System (ADS)

    Oshima, Dan; Miyamae, Atsushi; Nagata, Yuichi; Kobayashi, Shigenobu; Ono, Isao; Sakuma, Jun

    The purpose of this paper is to propose a new real-coded genetic algorithm (RCGA) named Networked Genetic Algorithm (NGA) that intends to find multiple optima simultaneously in deceptive globally multimodal landscapes. Most current techniques such as niching for finding multiple optima take into account big valley landscapes or non-deceptive globally multimodal landscapes but not deceptive ones called UV-landscapes. Adaptive Neighboring Search (ANS) is a promising approach for finding multiple optima in UV-landscapes. ANS utilizes a restricted mating scheme with a crossover-like mutation in order to find optima in deceptive globally multimodal landscapes. However, ANS has a fundamental problem that it does not find all the optima simultaneously in many cases. NGA overcomes the problem by an adaptive parent-selection scheme and an improved crossover-like mutation. We show the effectiveness of NGA over ANS in terms of the number of detected optima in a single run on Fletcher and Powell functions as benchmark problems that are known to have multiple optima, ill-scaledness, and UV-landscapes.

  18. Low Complex Forward Adaptive Loss Compression Algorithm and Its Application in Speech Coding

    NASA Astrophysics Data System (ADS)

    Nikolić, Jelena; Perić, Zoran; Antić, Dragan; Jovanović, Aleksandra; Denić, Dragan

    2011-01-01

    This paper proposes a low complex forward adaptive loss compression algorithm that works on a frame-by-frame basis. In particular, the proposed algorithm performs frame-by-frame analysis of the input speech signal, and estimates and quantizes the gain within the frames in order to enable quantization by the forward adaptive piecewise linear optimal compandor. In comparison to the solution designed according to the G.711 standard, our algorithm not only provides a higher average signal-to-quantization-noise ratio, but also achieves a reduction of the PCM bit rate of about 1 bit/sample. Moreover, the algorithm completely satisfies the G.712 standard, since it exceeds the curve defined by the G.712 standard over the whole variance range. Accordingly, we can reasonably believe that our algorithm will find practical implementation in the high quality coding of signals, represented with less than 8 bits/sample, which, like speech signals, follow a Laplacian distribution and have time-varying variances.

  19. The fluid dynamic approach to equidistribution methods for grid generation and adaptation

    SciTech Connect

    Delzanno, Gian Luca; Finn, John M

    2009-01-01

    The equidistribution methods based on L_p Monge-Kantorovich optimization [Finn and Delzanno, submitted to SISC, 2009] and on the deformation [Moser, 1965; Dacorogna and Moser, 1990, Liao and Anderson, 1992] method are analyzed primarily in the context of grid generation. It is shown that the first class of methods can be obtained from a fluid dynamic formulation based on time-dependent equations for the mass density and the momentum density, arising from a variational principle. In this context, deformation methods arise from a fluid formulation by making a specific assumption on the time evolution of the density (but with some degree of freedom for the momentum density). In general, deformation methods do not arise from a variational principle. However, it is possible to prescribe an optimal deformation method, related to L_1 Monge-Kantorovich optimization, by making a further assumption on the momentum density. Some applications of the L_p fluid dynamic formulation to imaging are also explored.

  20. Adaptive Generation of Multimaterial Grids from imaging data for Biomedical Lagrangian Fluid-Structure Simulations

    SciTech Connect

    Carson, James P.; Kuprat, Andrew P.; Jiao, Xiangmin; Dyedov, Volodymyr; del Pin, Facundo; Guccione, Julius M.; Ratcliffe, Mark B.; Einstein, Daniel R.

    2010-04-01

    Spatial discretization of complex imaging-derived fluid-solid geometries, such as the cardiac environment, is a critical but often overlooked challenge in biomechanical computations. This is particularly true in problems with Lagrangian interfaces, where the fluid and solid phases must match geometrically. For simplicity and better accuracy, it is also highly desirable for the two phases to share the same surface mesh at the interface between them. We outline a method for solving this problem, and illustrate the approach with a 3D fluid-solid mesh of the mouse heart. An MRI perfusion-fixed dataset of a mouse heart with 50μm isotropic resolution was semi-automatically segmented using a customized multimaterial connected-threshold approach that divided the volume into non-overlapping regions of blood, tissue and background. Subsequently, a multimaterial marching cubes algorithm was applied to the segmented data to produce two detailed, compatible isosurfaces, one for blood and one for tissue. Both isosurfaces were simultaneously smoothed with a multimaterial smoothing algorithm that exactly conserves the volume for each phase. Using these two isosurfaces, we developed and applied novel automated meshing algorithms to generate anisotropic hybrid meshes on arbitrary biological geometries with the number of layers and the desired element anisotropy for each phase as the only input parameters. Since our meshes adapt to the local feature sizes and include boundary layer prisms, they are more efficient and accurate than non-adaptive, isotropic meshes, and the fluid-structure interaction computations will tend to have relative error equilibrated over the whole mesh.

  1. Dynamic Power Grid Simulation

    2015-09-14

    GridDyn is part of a power grid simulation toolkit. The code is designed using modern object-oriented C++ methods, utilizing C++11 and recent Boost libraries to ensure compatibility with multiple operating systems and environments.

  2. Advanced Unstructured Grid Generation for Complex Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new approach for distribution of grid points on the surface and in the volume has been developed and implemented in the NASA unstructured grid generation code VGRID. In addition to the point and line sources of prior work, the new approach utilizes surface and volume sources for automatic curvature-based grid sizing and convenient point distribution in the volume. A new exponential growth function produces smoother and more efficient grids and provides superior control over distribution of grid points in the field. All types of sources support anisotropic grid stretching which not only improves the grid economy but also provides more accurate solutions for certain aerodynamic applications. The new approach does not require a three-dimensional background grid as in the previous methods. Instead, it makes use of an efficient bounding-box auxiliary medium for storing grid parameters defined by surface sources. The new approach is less memory-intensive and more efficient computationally. The grids generated with the new method either eliminate the need for adaptive grid refinement for certain class of problems or provide high quality initial grids that would enhance the performance of many adaptation methods.
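
    A source-based spacing field of the kind described above can be sketched as follows: each source prescribes a small spacing at its location that grows exponentially with distance, and the field takes the minimum over all sources. The exact growth function and source types used in VGRID are not reproduced; the law and parameters below are assumptions for illustration.

    ```python
    import numpy as np

    def spacing_from_sources(p, sources, h_max=0.5):
        """Illustrative point-source spacing function: each source (location, h0,
        growth) prescribes spacing h0 at its location growing exponentially with
        distance; the field is the minimum over all sources, capped by h_max."""
        h = h_max
        for loc, h0, growth in sources:
            d = np.linalg.norm(np.asarray(p) - np.asarray(loc))
            h = min(h, h0 * np.exp(growth * d))
        return h

    srcs = [((0.0, 0.0), 0.01, 3.0), ((1.0, 0.0), 0.05, 1.5)]   # hypothetical sources
    for x in (0.0, 0.25, 0.5, 1.0):
        print(x, round(spacing_from_sources((x, 0.0), srcs), 4))
    ```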

  3. Computational Aerothermodynamic Simulation Issues on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; White, Jeffery A.

    2004-01-01

    The synthesis of physical models for gas chemistry and turbulence from the structured grid codes LAURA and VULCAN into the unstructured grid code FUN3D is described. A directionally Symmetric, Total Variation Diminishing (STVD) algorithm and an entropy fix (eigenvalue limiter) keyed to local cell Reynolds number are introduced to improve solution quality for hypersonic aeroheating applications. A simple grid-adaptation procedure is incorporated within the flow solver. Simulations of flow over an ellipsoid (perfect gas, inviscid), Shuttle Orbiter (viscous, chemical nonequilibrium) and comparisons to the structured grid solvers LAURA (cylinder, Shuttle Orbiter) and VULCAN (flat plate) are presented to show current capabilities. The quality of heating in 3D stagnation regions is very sensitive to algorithm options. In general, high-aspect-ratio tetrahedral elements complicate the simulation of high-Reynolds-number viscous flow compared to locally structured meshes aligned with the flow.
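
    An eigenvalue limiter of the Harten type gives a flavor of the entropy fix mentioned above: wave speeds with magnitude below a threshold are replaced by a smooth, strictly positive value. FUN3D keys the threshold to a local cell Reynolds number; in the sketch below the threshold is simply a user-supplied parameter.

    ```python
    import numpy as np

    def entropy_fixed_eigenvalue(lam, delta):
        """Harten-type eigenvalue limiter: wave speeds smaller in magnitude than
        delta are replaced by the smooth positive value (lam^2 + delta^2)/(2 delta)
        to avoid vanishing dissipation near sonic points and stagnation regions."""
        lam = np.abs(lam)
        return np.where(lam >= delta, lam, (lam ** 2 + delta ** 2) / (2.0 * delta))

    print(entropy_fixed_eigenvalue(np.array([-2.0, -0.05, 0.0, 0.05, 2.0]), delta=0.1))
    ```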

  4. Development of an Atmospheric Climate Model with Self-Adapting Grid and Physics

    SciTech Connect

    Penner, Joyce E.

    2013-08-10

    This project was targeting the development of a computational approach that would allow resolving cloud processes on small-scales within the framework of the most recent version of the NASA/NCAR Finite-Volume Community Atmospheric Model (FVCAM). The FVCAM is based on the multidimensional Flux-Form Semi-Lagrangian (FFSL) dynamical core and uses a "vertically Lagrangian" finite-volume (FV) representation of the model equations with a mass-conserving re-mapping algorithm. The Lagrangian coordinate requires a remapping of the Lagrangian volume back to Eulerian coordinates to restore the original resolution and keep the mesh from developing distortions such as layers with overlapping interfaces. The main objectives of the project were, first, to develop the 3D library which allows refinement and coarsening of the model domain in spherical coordinates, and second, to develop a non-hydrostatic code for calculation of the model variables within the refined areas that could be seamlessly incorporated with the hydrostatic finite volume dynamical core when higher resolution is wanted. We also updated the aerosol simulation model in CAM in order to ready the model for the treatment of aerosol/cloud interactions.

  5. Two general methods for population pharmacokinetic modeling: non-parametric adaptive grid and non-parametric Bayesian

    PubMed Central

    Neely, Michael; Bartroff, Jay; van Guilder, Michael; Yamada, Walter; Bayard, David; Jelliffe, Roger; Leary, Robert; Chubatiuk, Alyona; Schumitzky, Alan

    2013-01-01

    Population pharmacokinetic (PK) modeling methods can be statistically classified as either parametric or nonparametric (NP). Each classification can be divided into maximum likelihood (ML) or Bayesian (B) approaches. In this paper we discuss the nonparametric case using both maximum likelihood and Bayesian approaches. We present two nonparametric methods for estimating the unknown joint population distribution of model parameter values in a pharmacokinetic/pharmacodynamic (PK/PD) dataset. The first method is the NP Adaptive Grid (NPAG). The second is the NP Bayesian (NPB) algorithm with a stick-breaking process to construct a Dirichlet prior. Our objective is to compare the performance of these two methods using a simulated PK/PD dataset. Our results showed excellent performance of NPAG and NPB in a realistically simulated PK study. This simulation allowed us to have benchmarks in the form of the true population parameters to compare with the estimates produced by the two methods, while incorporating challenges like unbalanced sample times and sample numbers as well as the ability to include the covariate of patient weight. We conclude that both NPML and NPB can be used in realistic PK/PD population analysis problems. The advantages of one versus the other are discussed in the paper. NPAG and NPB are implemented in R and freely available for download within the Pmetrics package from www.lapk.org. PMID:23404393

  6. Utilizing micro-electro-mechanical systems (MEMS) micro-shutter designs for adaptive coded aperture imaging (ACAI) technologies

    NASA Astrophysics Data System (ADS)

    Ledet, Mary M.; Starman, LaVern A.; Coutu, Ronald A., Jr.; Rogers, Stanley

    2009-08-01

    Coded aperture imaging (CAI) has been used in both the astronomical and medical communities for years due to its ability to image light at short wavelengths and thus replacing conventional lenses. Where CAI is limited, adaptive coded aperture imaging (ACAI) can recover what is lost. The use of photonic micro-electro-mechanical-systems (MEMS) for creating adaptive coded apertures has been gaining momentum since 2007. Successful implementation of micro-shutter technologies would potentially enable the use of adaptive coded aperture imaging and non-imaging systems in current and future military surveillance and intelligence programs. In this effort, a prototype of MEMS microshutters has been designed and fabricated onto a 3 mm x 3 mm square of silicon substrate using the PolyMUMPS™ process. This prototype is a line-drivable array using thin flaps of polysilicon to cover and uncover an 8 x 8 array of 20 μm apertures. A characterization of the micro-shutters to include mechanical, electrical and optical properties is provided. This prototype, its actuation scheme, and other designs for individual microshutters have been modeled and studied for feasibility purposes. In addition, microshutters fabricated from an Al-Au alloy on a quartz wafer were optically tested and characterized with a 632 nm HeNe laser.

  7. Bandwidth reduction of high-frequency sonar imagery in shallow water using content-adaptive hybrid image coding

    NASA Astrophysics Data System (ADS)

    Shin, Frances B.; Kil, David H.

    1998-09-01

    One of the biggest challenges in distributed underwater mine warfare for area sanitization and safe power projection during regional conflicts is transmission of compressed raw imagery data to a central processing station via a limited bandwidth channel while preserving crucial target information for further detection and automatic target recognition processing. Moreover, operating in an extremely shallow water with fluctuating channels and numerous interfering sources makes it imperative that image compression algorithms effectively deal with background nonstationarity within an image as well as content variation between images. In this paper, we present a novel approach to lossy image compression that combines image- content classification, content-adaptive bit allocation, and hybrid wavelet tree-based coding for over 100:1 bandwidth reduction with little sacrifice in signal-to-noise ratio (SNR). Our algorithm comprises (1) content-adaptive coding that takes advantage of a classify-before-coding strategy to reduce data mismatch, (2) subimage transformation for energy compaction, and (3) a wavelet tree-based coding for efficient encoding of significant wavelet coefficients. Furthermore, instead of using the embedded zerotree coding with scalar quantization (SQ), we investigate the use of a hybrid coding strategy that combines SQ for high-magnitude outlier transform coefficients and classified vector quantization (CVQ) for compactly clustered coefficients. This approach helps us achieve reduced distortion error and robustness while achieving high compression ratio. Our analysis based on the high-frequency sonar real data that exhibit severe content variability and contain both mines and mine-like clutter indicates that we can achieve over 100:1 compression ratio without losing crucial signal attributes. In comparison, benchmarking of the same data set with the best still-picture compression algorithm called the set partitioning in hierarchical trees (SPIHT) reveals

  8. A new algorithm for high-dimensional uncertainty quantification based on dimension-adaptive sparse grid approximation and reduced basis methods

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Quarteroni, Alfio

    2015-10-01

    In this work we develop an adaptive and reduced computational algorithm based on dimension-adaptive sparse grid approximation and reduced basis methods for solving high-dimensional uncertainty quantification (UQ) problems. In order to tackle the computational challenge of "curse of dimensionality" commonly faced by these problems, we employ a dimension-adaptive tensor-product algorithm [16] and propose a verified version to enable effective removal of the stagnation phenomenon besides automatically detecting the importance and interaction of different dimensions. To reduce the heavy computational cost of UQ problems modelled by partial differential equations (PDE), we adopt a weighted reduced basis method [7] and develop an adaptive greedy algorithm in combination with the previous verified algorithm for efficient construction of an accurate reduced basis approximation. The efficiency and accuracy of the proposed algorithm are demonstrated by several numerical experiments.

  9. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    NASA Technical Reports Server (NTRS)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of a master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bezier curves/surfaces and splines along with a weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D-3D grids are provided to illustrate the success of these methods.
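
    The transfinite interpolation building block mentioned above can be sketched as a bilinearly blended Coons patch that fills a structured grid from its four boundary curves. GENIE additionally blends adaptive weightings and Bezier/spline boundary representations into this step; the example below is only the plain TFI formula.

    ```python
    import numpy as np

    def tfi_grid(bottom, top, left, right, nu, nv):
        """Bilinearly blended transfinite interpolation (Coons patch): boundary
        curves are callables returning (x, y) for a parameter in [0, 1]; interior
        points are the sum of the two linear blends minus the bilinear corner term."""
        u = np.linspace(0.0, 1.0, nu)
        v = np.linspace(0.0, 1.0, nv)
        grid = np.zeros((nu, nv, 2))
        c00, c10 = np.array(bottom(0.0)), np.array(bottom(1.0))
        c01, c11 = np.array(top(0.0)), np.array(top(1.0))
        for i, ui in enumerate(u):
            for j, vj in enumerate(v):
                grid[i, j] = ((1 - vj) * np.array(bottom(ui)) + vj * np.array(top(ui))
                              + (1 - ui) * np.array(left(vj)) + ui * np.array(right(vj))
                              - ((1 - ui) * (1 - vj) * c00 + ui * (1 - vj) * c10
                                 + (1 - ui) * vj * c01 + ui * vj * c11))
        return grid

    # usage: a unit square with a bulged top boundary
    g = tfi_grid(bottom=lambda u: (u, 0.0),
                 top=lambda u: (u, 1.0 + 0.2 * np.sin(np.pi * u)),
                 left=lambda v: (0.0, v),
                 right=lambda v: (1.0, v),
                 nu=11, nv=6)
    print(g.shape)
    ```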

  10. Towards Hybrid Overset Grid Simulations of the Launch Environment

    NASA Astrophysics Data System (ADS)

    Moini-Yekta, Shayan

    A hybrid overset grid approach has been developed for the design and analysis of launch vehicles and facilities in the launch environment. The motivation for the hybrid grid methodology is to reduce the turn-around time of computational fluid dynamic simulations and improve the ability to handle complex geometry and flow physics. The LAVA (Launch Ascent and Vehicle Aerodynamics) hybrid overset grid scheme consists of two components: an off-body immersed-boundary Cartesian solver with block-structured adaptive mesh refinement and a near-body unstructured body-fitted solver. Two-way coupling is achieved through overset connectivity between the off-body and near-body grids. This work highlights verification using code-to-code comparisons and validation using experimental data for the individual and hybrid solver. The hybrid overset grid methodology is applied to representative unsteady 2D trench and 3D generic rocket test cases.

  11. A Peak Power Reduction Method with Adaptive Inversion of Clustered Parity-Carriers in BCH-Coded OFDM Systems

    NASA Astrophysics Data System (ADS)

    Muta, Osamu; Akaiwa, Yoshihiko

    In this paper, we propose a simple peak power reduction (PPR) method based on adaptive inversion of the parity-check block of the codeword in a BCH-coded OFDM system. In the proposed method, the entire parity-check block of the codeword is adaptively inverted by multiplying weighting factors (WFs) so as to minimize the PAPR of the OFDM signal, symbol-by-symbol. At the receiver, these WFs are estimated based on the property of BCH decoding. When a primitive BCH code with single error correction such as the (31,26) code is used, the proposed method estimates the WFs with a significant-bit protection method which assigns a significant bit to the best subcarrier selected among all possible subcarriers. Computer simulations show that, when (31,26), (31,21) and (32,21) BCH codes are employed, the PAPR of the OFDM signal at a CCDF (Complementary Cumulative Distribution Function) of 10^-4 is reduced by about 1.9, 2.5 and 2.5 dB by applying the PPR method, while achieving BER performance comparable to the case with perfect WF estimation in an exponentially decaying 12-path Rayleigh fading condition.
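
    The selection logic can be pictured with a toy sketch (not the paper's system model): for a codeword split into message and parity bits, the transmitter evaluates the OFDM symbol with the parity block either unchanged or wholly inverted and keeps the lower-PAPR version. The BPSK mapping, naive subcarrier assignment, FFT size, and placeholder parity bits are assumptions made for illustration; a real implementation would use the actual BCH encoder and the paper's receiver-side WF estimation.

      import numpy as np

      def papr_db(x):
          """Peak-to-average power ratio of a complex baseband signal, in dB."""
          p = np.abs(x) ** 2
          return 10 * np.log10(p.max() / p.mean())

      def ppr_parity_inversion(msg_bits, parity_bits, n_fft=64):
          """Keep whichever of {parity as-is, parity inverted} gives the
          OFDM symbol with the lower PAPR."""
          best = None
          for invert in (0, 1):
              parity = parity_bits ^ invert          # adaptive inversion (the WF)
              bits = np.concatenate([msg_bits, parity])
              sym = 1.0 - 2.0 * bits                 # BPSK mapping: 0 -> +1, 1 -> -1
              carriers = np.zeros(n_fft, dtype=complex)
              carriers[:len(sym)] = sym              # naive subcarrier mapping
              ofdm = np.fft.ifft(carriers) * np.sqrt(n_fft)
              cand = (papr_db(ofdm), invert)
              if best is None or cand[0] < best[0]:
                  best = cand
          return best                                # (PAPR in dB, chosen flag)

      rng = np.random.default_rng(1)
      msg = rng.integers(0, 2, 26)     # e.g. 26 message bits of a (31,26) code
      par = rng.integers(0, 2, 5)      # 5 parity bits (placeholder, not true BCH parity)
      papr, flag = ppr_parity_inversion(msg, par)
      print(f"chosen inversion flag = {flag}, PAPR = {papr:.2f} dB")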

  12. Finite-difference modeling with variable grid-size and adaptive time-step in porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Wu, Guochen

    2014-04-01

    Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influences of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid-size and time-step; it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper develops a staggered-grid finite-difference scheme with variable grid-size and adaptive time-step for elastic wave modeling in porous media. Variable finite-difference coefficients and wavefield interpolation were used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm were shown by numerical examples. The proposed method offers low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.

  13. A multiblock/multizone code (PAB 3D-v2) for the three-dimensional Navier-Stokes equations: Preliminary applications

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.

    1990-01-01

    The development and applications of multiblock/multizone and adaptive grid methodologies for solving the three-dimensional simplified Navier-Stokes equations are described. Adaptive grid and multiblock/multizone approaches are introduced and applied to external and internal flow problems. These new implementations increase the capabilities and flexibility of the PAB3D code in solving flow problems associated with complex geometry.

  14. Anti-Voice Adaptation Suggests Prototype-Based Coding of Voice Identity

    PubMed Central

    Latinus, Marianne; Belin, Pascal

    2011-01-01

    We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices then were tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice, but not for non-anti-voice adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype. PMID:21847384

  15. Adaptation of the Advanced Spray Combustion Code to Cavitating Flow Problems

    NASA Technical Reports Server (NTRS)

    Liang, Pak-Yan

    1993-01-01

    A very important consideration in turbopump design is the prediction and prevention of cavitation. Thus far conventional CFD codes have not been generally applicable to the treatment of cavitating flows. Taking advantage of its two-phase capability, the Advanced Spray Combustion Code is being modified to handle flows with transient as well as steady-state cavitation bubbles. The volume-of-fluid approach incorporated into the code is extended and augmented with a liquid phase energy equation and a simple evaporation model. The strategy adopted also successfully deals with the cavity closure issue. Simple test cases will be presented and remaining technical challenges will be discussed.

  16. Reading the second code: mapping epigenomes to understand plant growth, development, and adaptation to the environment.

    PubMed

    2012-06-01

    We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual's set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of "epigenetic" layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature's second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210

  17. Bipartite geminivirus host adaptation determined cooperatively by coding and noncoding sequences of the genome.

    PubMed

    Petty, I T; Carter, S C; Morra, M R; Jeffrey, J L; Olivey, H E

    2000-11-25

    Bipartite geminiviruses are small, plant-infecting viruses with genomes composed of circular, single-stranded DNA molecules, designated A and B. Although they are closely related genetically, individual bipartite geminiviruses frequently exhibit host-specific adaptation. Two such viruses are bean golden mosaic virus (BGMV) and tomato golden mosaic virus (TGMV), which are well adapted to common bean (Phaseolus vulgaris) and Nicotiana benthamiana, respectively. In previous studies, partial host adaptation was conferred on BGMV-based or TGMV-based hybrid viruses by separately exchanging open reading frames (ORFs) on DNA A or DNA B. Here we analyzed hybrid viruses in which all of the ORFs on both DNAs were exchanged except for AL1, which encodes a protein with strictly virus-specific activity. These hybrid viruses exhibited partial transfer of host-adapted phenotypes. In contrast, exchange of noncoding regions (NCRs) upstream from the AR1 and BR1 ORFs did not confer any host-specific gain of function on hybrid viruses. However, when the exchangeable ORFs and NCRs from TGMV were combined in a single BGMV-based hybrid virus, complete transfer of TGMV-like adaptation to N. benthamiana was achieved. Interestingly, the reciprocal TGMV-based hybrid virus displayed only partial gain of function in bean. This may be, in part, the result of defective virus-specific interactions between TGMV and BGMV sequences present in the hybrid, although a potential role in adaptation to bean for additional regions of the BGMV genome cannot be ruled out. PMID:11080490

  18. Using specific and adaptive arrangement of grid-type pilot in channel estimation for white-light LED-based OFDM visible light communication system

    NASA Astrophysics Data System (ADS)

    Lin, Wan-Feng; Chow, Chi-Wai; Yeh, Chien-Hung

    2015-03-01

    Orthogonal frequency division multiplexing (OFDM) is a promising candidate for light emitting diode (LED)-based optical wireless communication (OWC); however, precise channel estimation is required for synchronization and equalization. In this work, we study and discover that the channel response of the white-light LED-based OWC is smooth and stable. Hence, we propose and demonstrate a specific and adaptive arrangement of a grid-type pilot scheme to estimate the LED OWC channel response. Experimental results show that our scheme achieves better transmission performance and some transmission capacity enhancement when compared with the method using a training-symbol scheme (also called a block-type pilot scheme).

  19. Fine-Granularity Loading Schemes Using Adaptive Reed-Solomon Coding for xDSL-DMT Systems

    NASA Astrophysics Data System (ADS)

    Panigrahi, Saswat; Le-Ngoc, Tho

    2006-12-01

    While most existing loading algorithms for xDSL-DMT systems strive for the optimal energy distribution to maximize their rate, the amounts of bits loaded onto subcarriers are constrained to be integers and the associated granularity losses can represent a significant percentage of the achievable data rate, especially in the presence of the peak-power constraint. To recover these losses, we propose a fine-granularity loading scheme using joint optimization of adaptive modulation and flexible coding parameters based on programmable Reed-Solomon (RS) codes and a bit-error probability criterion. Illustrative examples of applications to VDSL-DMT systems indicate that the proposed scheme can offer a rate increase in most cases as compared to various existing integer-bit-loading algorithms. This improvement is in good agreement with the theoretical estimates developed to quantify the granularity loss.
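
    The granularity loss and the role of a programmable RS rate can be illustrated with a toy calculation (not the paper's optimization): each subcarrier's fractional capacity b_i = log2(1 + SNR_i/Gamma) is rounded down by integer loading, whereas loading ceil(b_i) bits and choosing an RS(n, k) rate k/n that scales the raw rate back toward the fractional capacity recovers much of the difference. The SNR gap Gamma, the candidate SNRs, and the single-codeword rate selection are all illustrative assumptions.

      import math
      import numpy as np

      def fractional_bits(snr_db, gamma_db=9.8):
          """Gap-approximation capacity per subcarrier: b_i = log2(1 + SNR_i/Gamma)."""
          snr = 10 ** (np.asarray(snr_db) / 10.0)
          gamma = 10 ** (gamma_db / 10.0)
          return np.log2(1.0 + snr / gamma)

      def rs_assisted_loading(snr_db, n=255):
          """Load ceil(b_i) bits per carrier and pick a programmable RS(n, k)
          rate k/n so the net information rate approaches the fractional
          capacity without exceeding it."""
          b = fractional_bits(snr_db)
          integer_rate = int(np.floor(b).sum())     # classic integer bit-loading
          raw_rate = int(np.ceil(b).sum())          # coded (rounded-up) loading
          target = b.sum()
          k = min(n, math.floor(n * target / raw_rate))
          return integer_rate, raw_rate * k / n, target, (n, k)

      snr = [18.0, 21.5, 14.2, 25.3, 10.7]          # illustrative per-carrier SNRs (dB)
      i_rate, coded_rate, target, code = rs_assisted_loading(snr)
      print(f"integer loading: {i_rate} bits, RS{code}-assisted: {coded_rate:.2f} bits, "
            f"fractional capacity: {target:.2f} bits")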

  20. Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes

    NASA Astrophysics Data System (ADS)

    Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science; Technology Team

    We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECC). We consider the case of a qubit dephasing channel where the dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15,1,3]] shortened Reed-Muller code which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.

  1. Poisson-Boltzmann model for protein-surface electrostatic interactions and grid-convergence study using the PyGBe code

    NASA Astrophysics Data System (ADS)

    Cooper, Christopher D.; Barba, Lorena A.

    2016-05-01

    Interactions between surfaces and proteins occur in many vital processes and are crucial in biotechnology: the ability to control specific interactions is essential in fields like biomaterials, biomedical implants and biosensors. In the latter case, biosensor sensitivity hinges on ligand proteins adsorbing on bioactive surfaces with a favorable orientation, exposing reaction sites to target molecules. Protein adsorption, being a free-energy-driven process, is difficult to study experimentally. This paper develops and evaluates a computational model to study electrostatic interactions of proteins and charged nanosurfaces, via the Poisson-Boltzmann equation. We extended the implicit-solvent model used in the open-source code PyGBe to include surfaces of imposed charge or potential. This code solves the boundary integral formulation of the Poisson-Boltzmann equation, discretized with surface elements. PyGBe has at its core a treecode-accelerated Krylov iterative solver, resulting in O(N log N) scaling, with further acceleration on hardware via multi-threaded execution on GPUs. It computes solvation and surface free energies, providing a framework for studying the effect of electrostatics on adsorption. We derived an analytical solution for a spherical charged surface interacting with a spherical dielectric cavity, and used it in a grid-convergence study to build evidence on the correctness of our approach. The study showed the error decaying with the average area of the boundary elements, i.e., the method is O(1 / N) , which is consistent with our previous verification studies using PyGBe. We also studied grid-convergence using a real molecular geometry (protein G B1 D4‧), in this case using Richardson extrapolation (in the absence of an analytical solution) and confirmed the O(1 / N) scaling. With this work, we can now access a completely new family of problems, which no other major bioelectrostatics solver, e.g. APBS, is capable of dealing with. PyGBe is open
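
    The grid-convergence procedure mentioned above (Richardson extrapolation with a constant refinement ratio) can be written in a few lines. The sketch below is a generic illustration of that technique, not PyGBe output: it builds self-verifying synthetic data with a known exact value and an O(h) error model, where h stands in for the average boundary-element area, and recovers both the observed order and the extrapolated value.

      import numpy as np

      def observed_order(f_coarse, f_medium, f_fine, r):
          """Observed order of convergence and Richardson-extrapolated value
          for three solutions on grids refined by a constant ratio r."""
          p = np.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / np.log(r)
          f_extrap = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
          return p, f_extrap

      # Self-verifying synthetic data: an O(h) method with a known exact value,
      # mimicking the 1/N decay of the error with average element area.
      f_exact, C = -10.0, 3.0
      areas = np.array([4.0, 2.0, 1.0])       # each refinement halves the area
      f = f_exact + C * areas                 # f(h) = f_exact + C * h
      p, f_rich = observed_order(f[0], f[1], f[2], r=2.0)
      print(f"observed order ~ {p:.2f}, extrapolated value ~ {f_rich:.3f}")
      # prints an order near 1 and a value near the exact -10.0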

  2. Adaptive coding of orofacial and speech actions in motor and somatosensory spaces with and without overt motor behavior.

    PubMed

    Sato, Marc; Vilain, Coriandre; Lamalle, Laurent; Grabski, Krystyna

    2015-02-01

    Studies of speech motor control suggest that articulatory and phonemic goals are defined in multidimensional motor, somatosensory, and auditory spaces. To test whether motor simulation might rely on sensory-motor coding common with that for motor execution, we used a repetition suppression (RS) paradigm while measuring neural activity with sparse sampling fMRI during repeated overt and covert orofacial and speech actions. RS refers to the phenomenon that repeated stimuli or motor acts lead to decreased activity in specific neural populations and are associated with enhanced adaptive learning related to the repeated stimulus attributes. Common suppressed neural responses were observed in motor and posterior parietal regions in the achievement of both repeated overt and covert orofacial and speech actions, including the left premotor cortex and inferior frontal gyrus, the superior parietal cortex and adjacent intraparietal sulcus, and the left IC and the SMA. Interestingly, reduced activity of the auditory cortex was observed during overt but not covert speech production, a finding likely reflecting a motor rather than an auditory imagery strategy by the participants. By providing evidence for adaptive changes in premotor and associative somatosensory brain areas, the observed RS suggests online state coding of both orofacial and speech actions in somatosensory and motor spaces with and without motor behavior and sensory feedback. PMID:25203272

  3. Parallelization of TWOPORFLOW, a Cartesian Grid based Two-phase Porous Media Code for Transient Thermo-hydraulic Simulations

    NASA Astrophysics Data System (ADS)

    Trost, Nico; Jiménez, Javier; Imke, Uwe; Sanchez, Victor

    2014-06-01

    TWOPORFLOW is a thermo-hydraulic code based on a porous media approach to simulate single- and two-phase flow including boiling. It is under development at the Institute for Neutron Physics and Reactor Technology (INR) at KIT. The code features a 3D transient solution of the mass, momentum and energy conservation equations for two inter-penetrating fluids with a semi-implicit continuous Eulerian type solver. The application domain of TWOPORFLOW includes the flow in standard porous media and in structured porous media such as micro-channels and cores of nuclear power plants. In the latter case, the fluid domain is coupled to a fuel rod model, describing the heat flow inside the solid structure. In this work, detailed profiling tools have been utilized to determine the optimization potential of TWOPORFLOW. As a result, bottlenecks were identified and reduced in the most feasible way, leading for instance to an optimization of the water-steam property computation. Furthermore, an OpenMP implementation addressing the routines in charge of inter-phase momentum-, energy- and mass-coupling delivered good performance together with a high scalability on shared memory architectures. In contrast, the approach for distributed memory systems was to solve sub-problems resulting from the decomposition of the initial Cartesian geometry. Inter-process communication for the sub-problem boundary updates was accomplished via the Message Passing Interface (MPI) standard.

  4. Coupling a local adaptive grid refinement technique with an interface sharpening scheme for the simulation of two-phase flow and free-surface flows using VOF methodology

    NASA Astrophysics Data System (ADS)

    Malgarinos, Ilias; Nikolopoulos, Nikolaos; Gavaises, Manolis

    2015-11-01

    This study presents the implementation of an interface sharpening scheme on the basis of the Volume of Fluid (VOF) method, as well as its application in a number of theoretical and real cases usually modelled in the literature. More specifically, the solution of an additional sharpening equation along with the standard VOF model equations is proposed, offering the advantage of "restraining" interface numerical diffusion, while also keeping a quite smooth induced velocity field around the interface. This sharpening equation is solved right after volume fraction advection; however, a novel method for its coupling with the momentum equation has been applied in order to save computational time. The advantages of the proposed sharpening scheme lie in the facts that (a) it is mass conservative, thus its application does not compromise one of the most important benefits of the VOF method, and (b) it can be used on coarser grids, as the suppression of the numerical diffusion is now grid independent. The coupling of the solved equation with an adaptive local grid refinement technique is used for a further decrease of computational time, while keeping high levels of accuracy in the area of maximum interest (the interface). The numerical algorithm is initially tested against two theoretical benchmark cases for interface tracking methodologies, followed by its validation for the case of a free-falling water droplet accelerated by gravity, as well as normal liquid droplet impingement onto a flat substrate. Results indicate that the coupling of the interface sharpening equation with the HRIC discretization scheme used for the volume fraction flux term not only decreases the interface numerical diffusion, but also leaves the induced velocity field less perturbed by spurious velocities across the liquid-gas interface. With the use of the proposed algorithmic flow path, coarser grids can replace finer ones at a slight expense of accuracy.

  5. Simulation of Supersonic Jet Noise with the Adaptation of Overflow CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)

    2001-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.

  6. Hybrid Grid Generation Using NW Grid

    SciTech Connect

    Jones-Oliveira, Janet B.; Oliveira, Joseph S.; Trease, Lynn L.; Trease, Harold E.; B.K. Soni, J. Hauser, J.F. Thompson, P.R. Eiseman

    2000-09-01

    We describe the development and use of a hybrid n-dimensional grid generation system called NWGRID. The Applied Mathematics Group at Pacific Northwest National Laboratory (PNNL) is developing this tool to support the Laboratory's computational science efforts in chemistry, biology, engineering and environmental (subsurface and atmospheric) modeling. NWGRID is a grid generation system designed for multi-scale, multi-material, multi-physics, time-dependent, 3-D, hybrid grids that are either statically adapted or evolved in time. NWGRID's capabilities include static and dynamic grids, hybrid grids, managing colliding surfaces, and grid optimization [using reconnections, smoothing, and adaptive mesh refinement (AMR) algorithms]. NWGRID's data structure can manage an arbitrary number of grid objects, each with an arbitrary number of grid attributes. NWGRID uses surface geometry to build volumes by using combinations of Boolean operators and order relations. Point distributions can be input directly, generated using ray-shooting techniques, or defined point-by-point. Connectivity matrices are then generated automatically for all variations of hybrid grids.

  7. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under an 802.16m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862

  8. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance under an 802.16m environment. The result shows that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862

  9. Simplified APC for Space Shuttle applications. [Adaptive Predictive Coding for speech transmission

    NASA Technical Reports Server (NTRS)

    Hutchins, S. E.; Batson, B. H.

    1975-01-01

    This paper describes an 8 kbps adaptive predictive digital speech transmission system which was designed for potential use in the Space Shuttle Program. The system was designed to provide good voice quality in the presence of both cabin noise on board the Shuttle and the anticipated bursty channel. Minimal increase in size, weight, and power over the current high data rate system was also a design objective.

  10. Scientific Final Report: COLLABORATIVE RESEARCH: CONTINUOUS DYNAMIC GRID ADAPTATION IN A GLOBAL ATMOSPHERIC MODEL: APPLICATION AND REFINEMENT

    SciTech Connect

    William J. Gutowski; Joseph M. Prusa, Piotr K. Smolarkiewicz

    2012-04-09

    This project had goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the 'physics' of the NCAR Community Atmospheric Model (CAM). Effort discussed below shows how we improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate the grid stretching and ability to simulate very well a wide range of scales, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work summarized here. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited.

  11. Adaptive Colour Contrast Coding in the Salamander Retina Efficiently Matches Natural Scene Statistics

    PubMed Central

    Vasserman, Genadiy; Schneidman, Elad; Segev, Ronen

    2013-01-01

    The visual system continually adjusts its sensitivity to the statistical properties of the environment through an adaptation process that starts in the retina. Colour perception and processing is commonly thought to occur mainly in high visual areas, and indeed most evidence for chromatic colour contrast adaptation comes from cortical studies. We show that colour contrast adaptation starts in the retina where ganglion cells adjust their responses to the spectral properties of the environment. We demonstrate that the ganglion cells match their responses to red-blue stimulus combinations according to the relative contrast of each of the input channels by rotating their functional response properties in colour space. Using measurements of the chromatic statistics of natural environments, we show that the retina balances inputs from the two (red and blue) stimulated colour channels, as would be expected from theoretical optimal behaviour. Our results suggest that colour is encoded in the retina based on the efficient processing of spectral information that matches spectral combinations in natural scenes on the colour processing level. PMID:24205373

  12. Application of Parallel Adjoint-Based Error Estimation and Anisotropic Grid Adaptation for Three-Dimensional Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.

    2005-01-01

    This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution in the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge

  13. A video coding scheme based on joint spatiotemporal and adaptive prediction.

    PubMed

    Jiang, Wenfei; Latecki, Longin Jan; Liu, Wenyu; Liang, Hui; Gorman, Ken

    2009-05-01

    We propose a video coding scheme that departs from traditional Motion Estimation/DCT frameworks and instead uses Karhunen-Loeve Transform (KLT)/Joint Spatiotemporal Prediction framework. In particular, a novel approach that performs joint spatial and temporal prediction simultaneously is introduced. It bypasses the complex H.26x interframe techniques and it is less computationally intensive. Because of the advantage of the effective joint prediction and the image-dependent color space transformation (KLT), the proposed approach is demonstrated experimentally to consistently lead to improved video quality, and in many cases to better compression rates and improved computational speed. PMID:19342337

  14. A 2-D orientation-adaptive prediction filter in lifting structures for image coding.

    PubMed

    Gerek, Omer N; Cetin, A Enis

    2006-01-01

    Lifting-style implementations of wavelets are widely used in image coders. A two-dimensional (2-D) edge-adaptive lifting structure, which is similar to the Daubechies 5/3 wavelet, is presented. The 2-D prediction filter predicts the value of the next polyphase component according to an edge orientation estimator of the image. Consequently, the prediction domain is allowed to rotate +/-45 degrees in regions with a diagonal gradient. The gradient estimator is computationally inexpensive, with additional costs of only six subtractions per lifting instruction, and no multiplications are required. PMID:16435541
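
    A minimal sketch of an orientation-adaptive predict step, assuming a one-level horizontal lifting split and a crude direction test: each odd-column sample is predicted from the average of two even-column neighbours taken along whichever of the three candidate directions (horizontal, +45, -45 degrees) has the smallest local difference. This illustrates the idea only, not the filter of the paper; the direction test and boundary handling are assumptions.

      import numpy as np

      def adaptive_predict_step(img):
          """Horizontal lifting 'predict' step with a cheap orientation test:
          predict each odd-column sample from the even-column neighbour pair
          that differs the least (0 or +/-45 degrees)."""
          img = np.asarray(img, dtype=float)
          even, odd = img[:, 0::2], img[:, 1::2]
          h, w = odd.shape
          detail = np.zeros_like(odd)
          for r in range(h):
              for c in range(w):
                  rc = min(c + 1, even.shape[1] - 1)      # right even neighbour (clamped)
                  cands = [(even[r, c], even[r, rc]),                             # 0 deg
                           (even[max(r - 1, 0), c], even[min(r + 1, h - 1), rc]), # +45 deg
                           (even[min(r + 1, h - 1), c], even[max(r - 1, 0), rc])] # -45 deg
                  a, b = min(cands, key=lambda p: abs(p[0] - p[1]))
                  detail[r, c] = odd[r, c] - 0.5 * (a + b)
          return even, detail

      # A diagonal step edge: the adaptive predictor leaves less detail energy
      # than a purely horizontal prediction would.
      x = np.fromfunction(lambda i, j: (j > i).astype(float) * 100.0, (8, 8))
      _, d = adaptive_predict_step(x)
      print(f"detail energy: {np.sum(d ** 2):.1f}")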

  15. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  16. Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.

    PubMed

    Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng

    2016-10-01

    Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degenerates the discriminative power when using Hamming distance ranking. Besides, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complement for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single and multiple table search over the state-of-the-art methods. PMID:27448359

  17. A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms

    PubMed Central

    Kanwal, Maxinder S; Ramesh, Avinash S; Huang, Lauren A

    2013-01-01

    Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g. neural networks) and optimization techniques (e.g. genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and successfully continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima, but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the global optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates. PMID:24627784
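
    The adaptive-mutation idea (derive the mutation rate from how the best fitness changes between successive generations) can be sketched in a few lines. This is a minimal illustration under assumed parameters, not the operator defined in the paper: when the best fitness stagnates, the mutation rate is increased to help escape a local optimum; when it improves, the rate is decayed. The test surface, rates, and GA operators are all illustrative.

      import numpy as np

      rng = np.random.default_rng(2)

      def fitness(x):
          # Multimodal 2-D test surface (to be maximized); global optimum 5 at (0, 0).
          return -(x[:, 0] ** 2 + x[:, 1] ** 2) + 5 * np.cos(3 * x[:, 0]) * np.cos(3 * x[:, 1])

      def adaptive_ga(pop_size=40, generations=200, bounds=4.0):
          pop = rng.uniform(-bounds, bounds, (pop_size, 2))
          mut_rate, prev_best, best_ever = 0.1, -np.inf, -np.inf
          for _ in range(generations):
              fit = fitness(pop)
              best = fit.max()
              best_ever = max(best_ever, best)
              slope = best - prev_best            # "pseudoderivative" of best fitness
              # Stagnation -> raise mutation to escape; progress -> lower it.
              mut_rate = min(0.5, mut_rate * 1.5) if slope < 1e-6 else max(0.01, mut_rate * 0.7)
              prev_best = best
              # Tournament selection, blend crossover, Gaussian mutation, elitism.
              idx = rng.integers(0, pop_size, (pop_size, 2))
              parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
              children = 0.5 * (parents + parents[rng.permutation(pop_size)])
              mutate = rng.random(pop_size) < mut_rate
              children[mutate] += rng.normal(0, 0.5, (int(mutate.sum()), 2))
              children[0] = pop[np.argmax(fit)]
              pop = np.clip(children, -bounds, bounds)
          return best_ever

      print(f"best fitness found: {adaptive_ga():.3f}")   # should approach 5.0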

  18. Poisson-Boltzmann model for protein-surface electrostatic interactions and grid-convergence study using the PyGBe code

    NASA Astrophysics Data System (ADS)

    Cooper, Christopher D.; Barba, Lorena A.

    2016-05-01

    Interactions between surfaces and proteins occur in many vital processes and are crucial in biotechnology: the ability to control specific interactions is essential in fields like biomaterials, biomedical implants and biosensors. In the latter case, biosensor sensitivity hinges on ligand proteins adsorbing on bioactive surfaces with a favorable orientation, exposing reaction sites to target molecules. Protein adsorption, being a free-energy-driven process, is difficult to study experimentally. This paper develops and evaluates a computational model to study electrostatic interactions of proteins and charged nanosurfaces, via the Poisson-Boltzmann equation. We extended the implicit-solvent model used in the open-source code PyGBe to include surfaces of imposed charge or potential. This code solves the boundary integral formulation of the Poisson-Boltzmann equation, discretized with surface elements. PyGBe has at its core a treecode-accelerated Krylov iterative solver, resulting in O(N log N) scaling, with further acceleration on hardware via multi-threaded execution on GPUs. It computes solvation and surface free energies, providing a framework for studying the effect of electrostatics on adsorption. We derived an analytical solution for a spherical charged surface interacting with a spherical dielectric cavity, and used it in a grid-convergence study to build evidence on the correctness of our approach. The study showed the error decaying with the average area of the boundary elements, i.e., the method is O(1 / N) , which is consistent with our previous verification studies using PyGBe. We also studied grid-convergence using a real molecular geometry (protein G B1 D4‧), in this case using Richardson extrapolation (in the absence of an analytical solution) and confirmed the O(1 / N) scaling. With this work, we can now access a completely new family of problems, which no other major bioelectrostatics solver, e.g. APBS, is capable of dealing with. PyGBe is open

  19. An adaptive algorithm for removing the blocking artifacts in block-transform coded images

    NASA Astrophysics Data System (ADS)

    Yang, Jingzhong; Ma, Zheng

    2005-11-01

    JPEG and MPEG compression standards adopt the macroblock encoding approach, but this method can lead to annoying blocking effects: artificial rectangular discontinuities in the decoded images. Many powerful postprocessing algorithms have been developed to remove the blocking effects. However, all but the simplest algorithms can be too complex for real-time applications, such as video decoding. We propose an adaptive and easy-to-implement algorithm that can remove the artificial discontinuities. This algorithm contains two steps: first, a fast linear smoothing of the block-edge pixels using an average-value replacement strategy; second, a comparison of the variance derived from the difference between the processed and original images against a reasonable threshold, to determine whether the first step should stop. Experiments have proved that this algorithm can quickly remove the artificial discontinuities without destroying the key information of the decoded images, and that it is robust to different images and transform strategies.
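
    A minimal sketch of the two-step procedure, under assumed parameters (8x8 blocks, a fixed variance threshold, simple averaging of the two pixels that straddle each block boundary); it illustrates the stop criterion described above rather than the authors' exact filter.

      import numpy as np

      def smooth_block_edges(img, block=8):
          """Step 1: average-value replacement across vertical and horizontal
          block boundaries (fast linear smoothing)."""
          out = img.astype(float).copy()
          for c in range(block, out.shape[1], block):      # vertical boundaries
              avg = 0.5 * (out[:, c - 1] + out[:, c])
              out[:, c - 1] = avg
              out[:, c] = avg
          for r in range(block, out.shape[0], block):      # horizontal boundaries
              avg = 0.5 * (out[r - 1, :] + out[r, :])
              out[r - 1, :] = avg
              out[r, :] = avg
          return out

      def deblock(img, var_thresh=20.0, max_passes=5):
          """Step 2: repeat the smoothing until the variance of the difference
          between the processed and original images exceeds the threshold."""
          original = img.astype(float)
          current = original.copy()
          for _ in range(max_passes):
              candidate = smooth_block_edges(current)
              if np.var(candidate - original) > var_thresh:
                  break                          # further smoothing would over-blur
              current = candidate
          return current

      # Synthetic blocky image: constant 8x8 tiles with step discontinuities.
      tiles = np.kron(np.arange(16).reshape(4, 4) * 10.0, np.ones((8, 8)))
      out = deblock(tiles)
      print("boundary step before:", abs(tiles[0, 7] - tiles[0, 8]),
            "after:", abs(out[0, 7] - out[0, 8]))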

  20. SLGRID: spectral synthesis software in the grid

    NASA Astrophysics Data System (ADS)

    Sabater, J.; Sánchez, S.; Verdes-Montenegro, L.

    2011-11-01

    SLGRID (http://www.e-ciencia.es/wiki/index.php/Slgrid) is a pilot project proposed by the e-Science Initiative of Andalusia (eCA) and supported by the Spanish e-Science Network in the frame of the European Grid Initiative (EGI). The aim of the project was to adapt the spectral synthesis software Starlight (Cid-Fernandes et al. 2005) to the Grid infrastructure. Starlight is used to estimate the underlying stellar populations (their ages and metallicities) using an optical spectrum; hence, it is possible to obtain a clean nebular spectrum that can be used to diagnose the presence of an Active Galactic Nucleus (Sabater et al. 2008, 2009). The typical serial execution of the code for big samples of galaxies made it ideal for integration into the Grid. We obtain an improvement in computational time of order N, where N is the number of nodes available in the Grid. In a real case we obtained our results in 3 hours with SLGRID instead of the 60 days spent using Starlight on a PC. The code has already been ported to the Grid. The first tests were made within the e-CA infrastructure and, later, it was tested and improved with the collaboration of CETA-CIEMAT. The SLGRID project has recently been renewed. In the future it is planned to adapt the code for the reduction of data from Integral Field Units, where each dataset is composed of hundreds of spectra. Electronic version of the poster at http://www.iaa.es/~jsm/SEA2010

  1. Prediction and Control of Network Cascade: Example of Power Grid or Networking Adaptability from WMD Disruption and Cascading Failures

    SciTech Connect

    Chertkov, Michael

    2012-07-24

    The goal of the DTRA project is to develop a mathematical framework that will provide the fundamental understanding of network survivability, algorithms for detecting/inferring pre-cursors of abnormal network behaviors, and methods for network adaptability and self-healing from cascading failures.

  2. Multiple grid problems on concurrent-processing computers

    NASA Technical Reports Server (NTRS)

    Eberhardt, D. S.; Baganoff, D.

    1986-01-01

    Three computer codes were studied which make use of concurrent processing computer architectures in computational fluid dynamics (CFD). The three parallel codes were tested on a two processor multiple-instruction/multiple-data (MIMD) facility at NASA Ames Research Center, and are suggested for efficient parallel computations. The first code is a well-known program which makes use of the Beam and Warming, implicit, approximate factored algorithm. This study demonstrates the parallelism found in a well-known scheme and it achieved speedups exceeding 1.9 on the two processor MIMD test facility. The second code studied made use of an embedded grid scheme which is used to solve problems having complex geometries. The particular application for this study considered an airfoil/flap geometry in an incompressible flow. The scheme eliminates some of the inherent difficulties found in adapting approximate factorization techniques onto MIMD machines and allows the use of chaotic relaxation and asynchronous iteration techniques. The third code studied is an application of overset grids to a supersonic blunt body problem. The code addresses the difficulties encountered when using embedded grids on a compressible, and therefore nonlinear, problem. The complex numerical boundary system associated with overset grids is discussed and several boundary schemes are suggested. A boundary scheme based on the method of characteristics achieved the best results.

  3. A Load Frequency Control in an Off-Grid Sustainable Power System Based on a Parameter Adaptive PID-Type Fuzzy Controller

    NASA Astrophysics Data System (ADS)

    Ronilaya, Ferdian; Miyauchi, Hajime

    2014-10-01

    This paper presents a new implementation of a parameter adaptive PID-type fuzzy controller (PAPIDfc) for a grid-supporting inverter of battery to alleviate frequency fluctuations in a wind-diesel power system. A variable speed wind turbine that drives a permanent magnet synchronous generator is assumed for demonstrations. The PAPIDfc controller is built from a set of control rules that adopts the droop method and uses only locally measurable frequency signal. The output control signal is determined from the knowledge base and the fuzzy inference. The input-derivative gain and the output-integral gain of the PAPIDfc are tuned online. To ensure safe battery operating limits, we also propose a protection scheme called intelligent battery protection (IBP). Several simulation experiments are performed by using MATLAB®/SimPowersystems™. Next, to verify the scheme's effectiveness, the simulation results are compared with the results of conventional controllers. The results demonstrate the effectiveness of the PAPIDfc scheme to control a grid-supporting inverter of the battery in the reduction of frequency fluctuations.

  4. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    PubMed

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering-CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that the value of the sparsity is known before starting each data gathering epoch; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme where the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes-MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on datasets from both ocean temperature and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574
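
    The adaptive measurement loop can be pictured with a generic compressed-sensing sketch (an illustration of the idea, not the paper's protocol or its MLMS routing): the sink keeps requesting a few more random-projection measurements of the node readings and stops once two consecutive reconstructions agree, which is one simple form of termination rule. Orthogonal matching pursuit is used here only as a stand-in recovery algorithm; the measurement matrix, step size, and tolerances are assumptions.

      import numpy as np

      def omp(Phi, y, n_nonzero, tol=1e-8):
          """Orthogonal matching pursuit: recover a sparse x from y = Phi @ x."""
          r, support = y.copy(), []
          x_hat = np.zeros(Phi.shape[1])
          for _ in range(n_nonzero):
              support.append(int(np.argmax(np.abs(Phi.T @ r))))
              coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
              r = y - Phi[:, support] @ coef
              if np.linalg.norm(r) < tol:
                  break
          x_hat[support] = coef
          return x_hat

      def adaptive_gathering(x, sparsity, step=4, max_m=60, stop_delta=1e-6):
          """Sink-side loop: request 'step' extra measurements per round until
          two consecutive reconstructions agree (a simple termination rule)."""
          rng = np.random.default_rng(5)
          n = x.size
          Phi = np.empty((0, n))
          prev = np.zeros(n)
          while Phi.shape[0] < max_m:
              Phi = np.vstack([Phi, rng.normal(0, 1, (step, n)) / np.sqrt(n)])
              est = omp(Phi, Phi @ x, sparsity)
              if Phi.shape[0] > step and np.linalg.norm(est - prev) < stop_delta:
                  break
              prev = est
          return Phi.shape[0], est

      # Network readings: 64 nodes, only 5 of them carry significant values.
      x = np.zeros(64)
      x[[3, 17, 30, 41, 58]] = [5.0, -2.0, 3.5, 1.5, -4.0]
      m, est = adaptive_gathering(x, sparsity=5)
      print(f"measurements used: {m}, max reconstruction error: {np.abs(est - x).max():.2e}")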

  5. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding

    PubMed Central

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering—CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that the value of the sparsity is known before starting each data gathering epoch; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme where the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes—MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on datasets from both ocean temperature and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574

  6. Motion-vector-based adaptive quantization in MPEG-4 fine granular scalable coding

    NASA Astrophysics Data System (ADS)

    Yang, Shuping; Lin, Xinggang; Wang, Guijin

    2003-05-01

    The selective enhancement mechanism of Fine-Granular-Scalability (FGS) in MPEG-4 is able to enhance specific objects under bandwidth variation. A novel technique for self-adaptive enhancement of regions of interest based on the Motion Vectors (MVs) of the base layer is proposed, which is suitable for video sequences that have a still background and in which only the moving objects in the scene are of interest, such as news broadcasting, video surveillance, Internet education, etc. Motion vectors generated during base layer encoding are obtained and analyzed. A Gaussian model is introduced to describe non-moving macroblocks, which may have non-zero MVs caused by random noise or luminance variation. The MVs of these macroblocks are set to zero to prevent them from being enhanced. A segmentation algorithm based on MV values, region growing, is exploited to separate foreground from background. Post-processing is needed to reduce the influence of burst noise so that only the moving regions of interest are left. Applying the result to selective enhancement during enhancement-layer encoding significantly improves the visual quality of the regions of interest in such videos transmitted at different bit-rates in our experiments.
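
    A minimal sketch of the MV-based foreground selection, under assumed parameters: motion-vector magnitudes consistent with zero-mean noise are discarded using a robust noise estimate, and the remaining macroblocks are grown into a foreground mask with a two-threshold (hysteresis) region-growing pass. The thresholds and the noise model are illustrative; the paper's Gaussian model and post-processing are not reproduced.

      import numpy as np
      from collections import deque

      def foreground_macroblocks(mv, k_seed=3.0, k_grow=1.5):
          """Mark macroblocks to favour during selective enhancement: strong
          MVs seed the foreground, which then grows through moderately
          moving neighbours (hysteresis region growing)."""
          mag = np.linalg.norm(mv, axis=-1)
          sigma = np.median(mag) / 0.6745 + 1e-9       # robust noise-level estimate
          seeds = mag > k_seed * sigma                 # clearly moving blocks
          candidates = mag > k_grow * sigma            # blocks the region may grow into
          mask = np.zeros_like(seeds)
          h, w = mag.shape
          queue = deque(zip(*np.nonzero(seeds)))
          while queue:
              r, c = queue.popleft()
              if mask[r, c]:
                  continue
              mask[r, c] = True
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  rr, cc = r + dr, c + dc
                  if 0 <= rr < h and 0 <= cc < w and candidates[rr, cc] and not mask[rr, cc]:
                      queue.append((rr, cc))
          return mask

      # Still background with noisy MVs plus one genuinely moving region.
      rng = np.random.default_rng(3)
      mv = rng.normal(0, 0.2, (9, 11, 2))
      mv[2:5, 3:6] += 4.0
      print(foreground_macroblocks(mv).astype(int))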

  7. Reconfigurable mask for adaptive coded aperture imaging (ACAI) based on an addressable MOEMS microshutter array

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; Combes, David J.; Smith, Gilbert W.; Price, Nicola; Ridley, Kevin D.; Brunson, Kevin M.; Lewis, Keith L.; Slinger, Chris W.; Rogers, Stanley

    2007-09-01

    Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-Ray or gamma ray bands. More recent applications have emerged in the visible and infra red bands for low cost lens-less imaging systems. System studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene - requiring a reconfigurable mask. We report on work to develop a novel, reconfigurable mask based on micro-opto-electro-mechanical systems (MOEMS) technology employing interference effects to modulate incident light in the mid-IR band (3-5μm). This is achieved by tuning a large array of asymmetric Fabry-Perot cavities by applying an electrostatic force to adjust the gap between a moveable upper polysilicon mirror plate supported on suspensions and underlying fixed (electrode) layers on a silicon substrate. A key advantage of the modulator technology developed is that it is transmissive and high speed (e.g. 100kHz) - allowing simpler imaging system configurations. It is also realised using a modified standard polysilicon surface micromachining process (i.e. MUMPS-like) that is widely available and hence should have a low production cost in volume. We have developed designs capable of operating across the entire mid-IR band with peak transmissions approaching 100% and high contrast. By using a pixelated array of small mirrors, a large area device comprising individually addressable elements may be realised that allows reconfiguring of the whole mask at speeds in excess of video frame rates.

  8. Discontinuous finite element solution of the radiation diffusion equation on arbitrary polygonal meshes and locally adapted quadrilateral grids

    SciTech Connect

    Ragusa, Jean C.

    2015-01-01

    In this paper, we propose a piece-wise linear discontinuous (PWLD) finite element discretization of the diffusion equation for arbitrary polygonal meshes. It is based on the standard diffusion form and uses the symmetric interior penalty technique, which yields a symmetric positive definite linear system matrix. A preconditioned conjugate gradient algorithm is employed to solve the linear system. Piece-wise linear approximations also allow a straightforward implementation of local mesh adaptation by allowing unrefined cells to be interpreted as polygons with an increased number of vertices. Several test cases, taken from the literature on the discretization of the radiation diffusion equation, are presented: random, sinusoidal, Shestakov, and Z meshes are used. The last numerical example demonstrates the application of the PWLD discretization to adaptive mesh refinement.
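
    Since the symmetric interior penalty discretization yields a symmetric positive definite matrix, the linear solve is a standard preconditioned conjugate gradient iteration. The sketch below shows a generic Jacobi-preconditioned CG on a stand-in SPD matrix (a 1-D Laplacian); it illustrates the solver choice mentioned above, not the paper's PWLD assembly, and the preconditioner and test matrix are assumptions.

      import numpy as np

      def pcg(A, b, tol=1e-10, max_iter=500):
          """Jacobi-preconditioned conjugate gradient for a symmetric
          positive definite system A x = b."""
          M_inv = 1.0 / np.diag(A)           # Jacobi (diagonal) preconditioner
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol * np.linalg.norm(b):
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # Stand-in SPD matrix: tridiagonal 1-D Laplacian (diffusion-like).
      n = 50
      A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      b = np.ones(n)
      x = pcg(A, b)
      print("residual norm:", np.linalg.norm(A @ x - b))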

  9. Grid generation for turbomachinery problems

    NASA Technical Reports Server (NTRS)

    Steinhoff, J.; Reddy, K. C.

    1986-01-01

    The development of a computer code to generate numerical grids for complex internal flow systems such as the fluid flow inside the space shuttle main engine is outlined. The blending technique for generating a grid for a stator-rotor combination at a particular radial section is examined. The computer programs which generate these grids are listed in the Appendices. These codes are capable of generating grids at different cross sections, thus providing three-dimensional stator-rotor grids for the turbomachinery of the space shuttle main engine.

  10. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    NASA Astrophysics Data System (ADS)

    Vijayakumaran, Vineeth

    Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections were designed. With higher and higher numbers of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy required to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic and multi-band RF interconnects. Although they provide better connectivity, higher speed and higher bandwidth compared to wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative which does not need a physical interconnection layout, as data travels over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, lower area overhead and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
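
    The essence of a CDMA MAC over a single shared wireless channel is that each transmitter-receiver pair gets an orthogonal spreading code, so their chips can be superposed and still separated at the receivers. The sketch below is a generic Walsh-Hadamard spreading/despreading illustration, not the thesis's protocol; the number of pairs, code length, and BPSK mapping are assumptions.

      import numpy as np

      def walsh_codes(n):
          """n x n Walsh-Hadamard spreading codes (n must be a power of two)."""
          h = np.array([[1.0]])
          while h.shape[0] < n:
              h = np.block([[h, h], [h, -h]])
          return h

      def cdma_channel(bit_streams):
          """Each pair spreads its BPSK symbols with its own orthogonal code;
          the chips add up on the shared channel; each receiver recovers its
          stream by correlating with the matching code."""
          n_tx, _ = bit_streams.shape
          n_chips = max(2, 1 << (n_tx - 1).bit_length())
          codes = walsh_codes(n_chips)[:n_tx]
          symbols = 1.0 - 2.0 * bit_streams                   # 0 -> +1, 1 -> -1
          channel = np.einsum('tb,tc->bc', symbols, codes)    # superposed chips
          recovered = np.einsum('bc,tc->tb', channel, codes) / n_chips
          return (recovered < 0).astype(int)                  # back to bits

      rng = np.random.default_rng(4)
      tx_bits = rng.integers(0, 2, (4, 8))        # 4 transmitter-receiver pairs, 8 bits each
      rx_bits = cdma_channel(tx_bits)
      print("all streams recovered:", np.array_equal(tx_bits, rx_bits))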

  11. CAGI: Computer Aided Grid Interface. A work in progress

    NASA Technical Reports Server (NTRS)

    Soni, Bharat K.; Yu, Tzu-Yi; Vaughn, David

    1992-01-01

    Progress realized in the development of a Computer Aided Grid Interface (CAGI) software system is presented, integrating CAD/CAM geometric system output and/or Initial Graphics Exchange Specification (IGES) files, the geometry manipulations associated with grid generation, and robust grid generation methodologies. CAGI is being developed in a modular fashion and will offer a fast, efficient and economical response to geometry/grid preparation, allowing basic geometry to be upgraded step by step, interactively and under permanent visual control, while minimizing the differences between the actual hardware surface descriptions and the corresponding numerical analog. The computer code GENIE is used as a basis. The Non-Uniform Rational B-Splines (NURBS) representation of sculptured surfaces is utilized for surface grid redistribution. The computer aided analysis system PATRAN is adapted as a CAD/CAM system. The progress realized in NURBS surface grid generation, the development of the IGES transformer, and geometry adaption using PATRAN is presented along with their applicability to grid generation associated with rocket propulsion applications.

  12. Automated grid generation from models of complex geologic structure and stratigraphy

    SciTech Connect

    Gable, C.; Trease, H.; Cherry, T.

    1996-04-01

    The construction of computational grids which accurately reflect complex geologic structure and stratigraphy for flow and transport models poses a formidable task. Even with an understanding of stratigraphy, material properties and boundary and initial conditions, incorporating these data into a numerical model can be difficult and time consuming. Most GIS tools for representing complex geologic volumes and surfaces are not designed to produce optimal grids for flow and transport computation. We have developed a tool, GEOMESH, for generating finite element grids that maintain the geometric integrity of input volumes, surfaces, and geologic data and produce an optimal (Delaunay) tetrahedral grid that can be used for flow and transport computations. GEOMESH also satisfies the constraint that the geometric coupling coefficients of the grid are positive for all elements. GEOMESH generates grids for two-dimensional cross sections and three-dimensional regional models, represents faults and fractures, and can embed finer grids representing tunnels and well bores into larger grids. GEOMESH also permits adaptive grid refinement in three dimensions. The tools to glue, merge and insert grids together demonstrate how complex grids can be built from simpler pieces. The resulting grid can be utilized by unstructured finite element or integrated finite difference computational physics codes.

  13. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored in the Common Data File (CDF) format and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWEB and SPDF data repositories were then queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
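    A minimal sketch of the kind of Granule description the abstract mentions: one data file is associated with a parent resource, given a resource identifier, and linked to an access URL. The element names, identifiers, and URL below are simplified placeholders and do not follow the full SPASE schema or the ADAPT routines themselves.

```python
import xml.etree.ElementTree as ET

def granule_xml(parent_id, resource_id, file_name, access_url):
    """Build a simplified, SPASE-like Granule description for one data file.

    Element names are illustrative placeholders, not the full SPASE schema."""
    granule = ET.Element("Granule")
    ET.SubElement(granule, "ResourceID").text = resource_id
    ET.SubElement(granule, "ParentID").text = parent_id
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "Name").text = file_name
    ET.SubElement(source, "URL").text = access_url
    return ET.tostring(granule, encoding="unicode")

print(granule_xml(
    parent_id="spase://Example/NumericalData/MISSION/instrument",
    resource_id="spase://Example/Granule/MISSION/instrument/20150101",
    file_name="mission_instrument_20150101_v01.cdf",
    access_url="https://example.org/data/mission_instrument_20150101_v01.cdf"))
```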

  14. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    PubMed

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive, polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency, due to symbol-level instead of bit-level processing, but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper presents, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs than its prior-art binary counterpart, namely achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN. PMID:20174010

  15. ON 3D, AUTOMATED, SELF-CONTAINED GRID GENERATION WITHIN THE RAGE CAMR HYDROCODE

    SciTech Connect

    Oakes, W.R.; Henning, P.J.; Gittings, M.L.; Weaver, R.P.

    2000-06-01

    We discuss using the inherent grid manipulation capability within a Continuously Adaptive Mesh Refinement hydrodynamics code, RAGE, to implement parallel, automated, self-contained grid generation. We show how arbitrarily complex 3D geometries specified in any unambiguous form can be used. The RAGE computational environment is any of several massively parallel computers being developed under the Department Of Energy's Accelerated Strategic Computing Initiative. A typical 3D RAGE analysis may contain 100 million cells and occupy 2000 processors for several weeks. RAGE grid generation is embarrassingly parallel. The RAGE computational grid is an octree decomposition of the model space. The problem domain is subdivided into as many subdomains as the number of processors assigned to the problem. The grid for each subdomain is then generated independently, except for occasional adjustments. Geometry used for initial grid generation includes CSG combinations of NURBS-based boundary representation models, stereo lithography (STL) files, implicit surfaces, and functionally perturbed surfaces.

  16. Dynamic fisheye grids for binary black hole simulations

    NASA Astrophysics Data System (ADS)

    Zilhão, Miguel; Noble, Scott C.

    2014-03-01

    We present a new warped gridding scheme adapted to simulating gas dynamics in binary black hole spacetimes. The grid concentrates grid points in the vicinity of each black hole to resolve the smaller scale structures there, and rarefies grid points away from each black hole to keep the overall problem size at a practical level. In this respect, our system can be thought of as a ‘double’ version of the fisheye coordinate system, used before in numerical relativity codes for evolving binary black holes. The gridding scheme is constructed as a mapping between a uniform coordinate system—in which the equations of motion are solved—to the distorted system representing the spatial locations of our grid points. Since we are motivated to eventually use this system for circumbinary disc calculations, we demonstrate how the distorted system can be constructed to asymptote to the typical spherical polar coordinate system, amenable to efficiently simulating orbiting gas flows about central objects with little numerical diffusion. We discuss its implementation in the Harm3d code, tailored to evolve the magnetohydrodynamics equations in curved spacetimes. We evaluate the performance of the system’s implementation in Harm3d with a series of tests, such as the advected magnetic field loop test, magnetized Bondi accretion, and evolutions of hydrodynamic discs about a single black hole and about a binary black hole. Like we have done with Harm3d, this gridding scheme can be implemented in other unigrid codes as a (possibly) simpler alternative to adaptive mesh refinement.
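    To illustrate the point-concentration idea described above, the sketch below builds a one-dimensional "double fisheye" mapping: a point-density function with bumps at two centers is integrated and inverted, so a uniform computational coordinate maps to a physical grid that is fine near each center and coarse elsewhere. This is only a 1D analogue under assumed Gaussian stretching parameters, not the actual mapping implemented in Harm3d.

```python
import numpy as np

def double_fisheye_grid(n, centers, width=0.05, boost=8.0, xmin=0.0, xmax=1.0):
    """Place n grid points on [xmin, xmax], concentrated near each center.

    A density with Gaussian bumps at the centers is integrated and inverted,
    so equal steps in the uniform (computational) coordinate map to small
    physical steps near the centers and large steps far away."""
    x = np.linspace(xmin, xmax, 20 * n)           # fine sampling of the physical axis
    density = 1.0 + sum(boost * np.exp(-((x - c) / width) ** 2) for c in centers)
    cumulative = np.concatenate([[0.0], np.cumsum(0.5 * (density[1:] + density[:-1]) * np.diff(x))])
    cumulative /= cumulative[-1]                  # uniform computational coordinate in [0, 1]
    uniform = np.linspace(0.0, 1.0, n)
    return np.interp(uniform, cumulative, x)      # invert the map

grid = double_fisheye_grid(64, centers=(0.35, 0.65))
spacings = np.diff(grid)
print(spacings.min(), spacings.max())             # much finer spacing near the centers than far away
```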

  17. Speech coding

    NASA Astrophysics Data System (ADS)

    Gersho, Allen

    1990-05-01

    Recent advances in algorithms and techniques for speech coding now permit high quality voice reproduction at remarkably low bit rates. The advent of powerful single-chip signal processors has made it cost effective to implement these new and sophisticated speech coding algorithms for many important applications in voice communication and storage. Some of the main ideas underlying the algorithms of major interest today are reviewed. The concept of removing redundancy by linear prediction is reviewed, first in the context of predictive quantization or DPCM. Then linear predictive coding, adaptive predictive coding, and vector quantization are discussed. The concepts of excitation coding via analysis-by-synthesis, vector sum excitation codebooks, and adaptive postfiltering are explained. The main ideas of vector excitation coding (VXC), or code-excited linear prediction (CELP), are presented. Finally, low-delay VXC coding and phonetic segmentation for VXC are described.
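    As a concrete instance of the predictive quantization (DPCM) concept mentioned above, here is a minimal first-order DPCM encoder/decoder: the encoder quantizes the prediction residual and tracks the decoder's reconstruction so that errors do not accumulate. The predictor coefficient and step size are illustrative assumptions; real speech coders use adaptive prediction and quantization.

```python
import numpy as np

def dpcm_encode(signal, step, a=0.95):
    """First-order DPCM: quantize the prediction error, track the decoded value."""
    codes, prev = [], 0.0
    for s in signal:
        error = s - a * prev                      # prediction residual
        q = int(np.round(error / step))           # uniform quantizer index
        codes.append(q)
        prev = a * prev + q * step                # decoder-side reconstruction
    return codes

def dpcm_decode(codes, step, a=0.95):
    out, prev = [], 0.0
    for q in codes:
        prev = a * prev + q * step
        out.append(prev)
    return np.array(out)

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 5 * t)
codes = dpcm_encode(x, step=0.05)
print(np.max(np.abs(x - dpcm_decode(codes, step=0.05))))  # error stays within half a quantizer step
```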

  18. Complex Volume Grid Generation Through the Use of Grid Reusability

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    This paper presents a set of surface and volume grid generation techniques which reuse existing surface and volume grids. These methods use combinations of data manipulations to reduce grid generation time, improve grid characteristics, and increase the capabilities of existing domain discretization software. The manipulation techniques utilize the physical and computational domains to produce basis functions on which to operate, modifying grid character and smoothing grids using Trans-Finite Interpolation, a vector interpolation method, and a parametric re-mapping technique. With these new techniques, inviscid grids can be converted to viscous grids, multiple-zone grid adaption can be performed to improve CFD solver efficiency, and topological changes to improve modeling of flow fields can be made simply and quickly. Examples of these capabilities are illustrated as applied to various configurations.
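    The standard bilinear transfinite interpolation (TFI) formula referenced above builds an interior grid from four boundary curves by blending the edge distributions and subtracting the corner terms. The sketch below is that generic 2D construction on an assumed square domain with a bulged top edge; the paper applies TFI within a larger reuse and re-mapping framework not reproduced here.

```python
import numpy as np

def tfi_grid(bottom, top, left, right):
    """2D transfinite interpolation from four boundary curves.

    bottom/top have shape (ni, 2); left/right have shape (nj, 2).
    The corner points of the curves are assumed to match."""
    ni, nj = bottom.shape[0], left.shape[0]
    xi = np.linspace(0.0, 1.0, ni)[:, None, None]
    eta = np.linspace(0.0, 1.0, nj)[None, :, None]
    grid = ((1 - eta) * bottom[:, None, :] + eta * top[:, None, :]
            + (1 - xi) * left[None, :, :] + xi * right[None, :, :]
            - (1 - xi) * (1 - eta) * bottom[0] - xi * (1 - eta) * bottom[-1]
            - (1 - xi) * eta * top[0] - xi * eta * top[-1])
    return grid   # shape (ni, nj, 2)

# Illustrative boundaries: a unit square with a bulged top edge.
s = np.linspace(0.0, 1.0, 21)
bottom = np.stack([s, np.zeros_like(s)], axis=1)
top = np.stack([s, 1.0 + 0.2 * np.sin(np.pi * s)], axis=1)
left = np.stack([np.zeros_like(s), s], axis=1)
right = np.stack([np.ones_like(s), s], axis=1)
print(tfi_grid(bottom, top, left, right).shape)
```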

  19. Inferring the Frequency Spectrum of Derived Variants to Quantify Adaptive Molecular Evolution in Protein-Coding Genes of Drosophila melanogaster.

    PubMed

    Keightley, Peter D; Campos, José L; Booker, Tom R; Charlesworth, Brian

    2016-06-01

    Many approaches for inferring adaptive molecular evolution analyze the unfolded site frequency spectrum (SFS), a vector of counts of sites with different numbers of copies of derived alleles in a sample of alleles from a population. Accurate inference of the high-copy-number elements of the SFS is difficult, however, because of misassignment of alleles as derived vs. ancestral. This is a known problem with parsimony using outgroup species. Here we show that the problem is particularly serious if there is variation in the substitution rate among sites brought about by variation in selective constraint levels. We present a new method for inferring the SFS using one or two outgroups that attempts to overcome the problem of misassignment. We show that two outgroups are required for accurate estimation of the SFS if there is substantial variation in selective constraints, which is expected to be the case for nonsynonymous sites in protein-coding genes. We apply the method to estimate unfolded SFSs for synonymous and nonsynonymous sites in a population of Drosophila melanogaster from phase 2 of the Drosophila Population Genomics Project. We use the unfolded spectra to estimate the frequency and strength of advantageous and deleterious mutations and estimate that ∼50% of amino acid substitutions are positively selected but that <0.5% of new amino acid mutations are beneficial, with a scaled selection strength of Nes ≈ 12. PMID:27098912
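    For readers unfamiliar with the object being inferred, the sketch below simply tallies an unfolded site frequency spectrum from a list of derived-allele counts. It assumes the derived/ancestral assignment is already known and correct; the paper's actual contribution is inferring the SFS when that assignment is uncertain, which this sketch does not attempt.

```python
import numpy as np

def unfolded_sfs(derived_counts, n_alleles):
    """Unfolded site frequency spectrum: entry i counts sites where the
    derived allele appears in exactly i of the n sampled alleles."""
    sfs = np.zeros(n_alleles + 1, dtype=int)
    for c in derived_counts:
        sfs[c] += 1
    return sfs[1:-1]   # polymorphic classes 1 .. n-1

# Illustrative data: derived-allele counts at 10 sites in a sample of 8 alleles.
counts = [1, 1, 2, 1, 3, 7, 1, 2, 5, 1]
print(unfolded_sfs(counts, n_alleles=8))
```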

  20. PARAMESH V4.1: Parallel Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; de Fainchtein, Rosalinda; Packer, Charles

    2011-06-01

    PARAMESH is a package of Fortran 90 subroutines designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain, with spatial resolution varying to satisfy the demands of the application. These sub-grid blocks form the nodes of a tree data-structure (quad-tree in 2D or oct-tree in 3D). Each grid block has a logically cartesian mesh. The package supports 1, 2 and 3D models. PARAMESH is released under the NASA-wide Open-Source software license.
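    A minimal Python sketch of the block-tree idea described above: square sub-grid blocks form a quadtree, and a block is split into four children wherever a refinement criterion flags it. This only illustrates the data structure; PARAMESH's Fortran 90 implementation, guard cells, and parallel distribution are not represented.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """A square sub-grid block covering [x, x+size] x [y, y+size]."""
    x: float
    y: float
    size: float
    level: int
    children: list = field(default_factory=list)

    def refine(self, needs_refinement, max_level):
        """Split into four child blocks wherever the criterion flags the block."""
        if self.level < max_level and needs_refinement(self):
            half = self.size / 2.0
            self.children = [Block(self.x + dx * half, self.y + dy * half,
                                   half, self.level + 1)
                             for dx in (0, 1) for dy in (0, 1)]
            for child in self.children:
                child.refine(needs_refinement, max_level)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

# Refine blocks containing the point (0.3, 0.7), standing in for a steep-gradient region.
def near_feature(block):
    return block.x <= 0.3 <= block.x + block.size and block.y <= 0.7 <= block.y + block.size

root = Block(0.0, 0.0, 1.0, level=0)
root.refine(near_feature, max_level=4)
print(len(root.leaves()), max(b.level for b in root.leaves()))
```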

  1. An Approach for Dynamic Grids

    NASA Technical Reports Server (NTRS)

    Slater, John W.; Liou, Meng-Sing; Hindman, Richard G.

    1994-01-01

    An approach is presented for the generation of two-dimensional, structured, dynamic grids. The grid motion may be due to the motion of the boundaries of the computational domain or to the adaptation of the grid to the transient, physical solution. A time-dependent grid is computed through the time integration of the grid speeds which are computed from a system of grid speed equations. The grid speed equations are derived from the time-differentiation of the grid equations so as to ensure that the dynamic grid maintains the desired qualities of the static grid. The grid equations are the Euler-Lagrange equations derived from a variational statement for the grid. The dynamic grid method is demonstrated for a model problem involving boundary motion, an inviscid flow in a converging-diverging nozzle during startup, and a viscous flow over a flat plate with an impinging shock wave. It is shown that the approach is more accurate for transient flows than an approach in which the grid speeds are computed using a finite difference with respect to time of the grid. However, the approach requires significantly more computational effort.

  2. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    SciTech Connect

    Kirk, B.L.; Sartori, E.

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  3. TIGER: Turbomachinery interactive grid generation

    NASA Technical Reports Server (NTRS)

    Soni, Bharat K.; Shih, Ming-Hsin; Janus, J. Mark

    1992-01-01

    A three dimensional, interactive grid generation code, TIGER, is being developed for analysis of flows around ducted or unducted propellers. TIGER is a customized grid generator that combines new technology with methods from general grid generation codes. The code generates multiple-block, structured grids around multiple blade rows with a hub and shroud for either C-grid or H-grid topologies. The code is intended for use with an Euler/Navier-Stokes solver also being developed, but is general enough for use with other flow solvers. TIGER features a Silicon Graphics interactive graphics environment that displays a pop-up window, graphics window, and text window. The geometry is read as a discrete set of points with options for several industrial standard formats and NASA standard formats. Various splines are available for defining the surface geometries. Grid generation is done either interactively or through a batch mode operation using history files from a previously generated grid. The batch mode operation can be done either with a graphical display of the interactive session or with no graphics so that the code can be run on another computer system. Run time can be significantly reduced by running on a Cray-YMP.

  4. TIGER: A user-friendly interactive grid generation system for complicated turbomachinery and axis-symmetric configurations

    NASA Technical Reports Server (NTRS)

    Shih, Ming H.; Soni, Bharat K.

    1993-01-01

    The issue of time efficiency in grid generation is addressed by developing a user-friendly graphical interface for interactive/automatic construction of structured grids around complex turbomachinery/axis-symmetric configurations. The accuracy and fidelity of the geometry modeling are accomplished by adapting the non-uniform rational B-spline (NURBS) representation. A customized interactive grid generation code, TIGER, has been developed to facilitate the grid generation process for complicated internal, external, and internal-external turbomachinery flow-field simulations. The FORMS Library is utilized to build the user-friendly graphical interface. The algorithm allows a user to redistribute grid points interactively on curves/surfaces using the NURBS formulation with accurate geometric definition. TIGER's features include multiblock, multiduct/shroud, multiple blade rows, uneven blade counts, and patched/overlapping block interfaces. It has been applied to generate grids for various complicated turbomachinery geometries, as well as rocket and missile configurations.

  5. Evaluation of damage-induced permeability using a three-dimensional Adaptive Continuum/Discontinuum Code (AC/DC)

    NASA Astrophysics Data System (ADS)

    Fabian, Dedecker; Peter, Cundall; Daniel, Billaux; Torsten, Groeger

    Digging a shaft or drift inside a rock mass is a common practice in civil engineering when a transportation way, such as a motorway, railway tunnel or storage shaft is to be built. In most cases, the consequences of the disturbance on the medium must be known in order to estimate the behaviour of the disturbed rock mass. Indeed, excavating part of the rock causes a new distribution of the stress field around the excavation that can lead to micro-cracking and even to the failure of some rock volume in the vicinity of the shaft. Consequently, the formed micro-cracks modify the mechanical and hydraulic properties of the rock. In this paper, we present an original method for the evaluation of damage-induced permeability. ITASCA has developed and used discontinuum models to study rock damage by building particle assemblies and checking the breakage of bonds under stress. However, such models are limited in size by the very large number of particles needed to model even a comparatively small volume of rock. In fact, a large part of most models never experiences large strains and does not require the accurate description of large-strain/damage/post-peak behaviour afforded by a discontinuum model. Thus, a large model frequently can be separated into a strongly strained “core” area to be represented by a Discontinuum and a peripheral area for which continuum zones would be adequate. Based on this observation, Itasca has developed a coupled, three-dimensional, continuum/discontinuum modelling approach. The new approach, termed Adaptive Continuum/Discontinuum Code (AC/DC), is based on the use of a periodic discontinuum “base brick” for which more or less simplified continuum equivalents are derived. Depending on the level of deformation in each part of the model, the AC/DC code can dynamically select the appropriate brick type to be used. In this paper, we apply the new approach to an excavation performed in the Bure site, at which the French nuclear waste agency

  6. Application of multi-objective controller to optimal tuning of PID gains for a hydraulic turbine regulating system using adaptive grid particle swarm optimization.

    PubMed

    Chen, Zhihuan; Yuan, Yanbin; Yuan, Xiaohui; Huang, Yuehua; Li, Xianshan; Li, Wenwu

    2015-05-01

    A hydraulic turbine regulating system (HTRS) is one of the most important components of a hydropower plant, playing a key role in maintaining the safety, stability and economical operation of hydro-electrical installations. At present, the conventional PID controller is widely applied in HTRS systems for its practicability and robustness, and the primary problem with this control law is how to optimally tune the parameters, i.e. the determination of PID controller gains for satisfactory performance. In this paper, a multi-objective evolutionary algorithm named adaptive grid particle swarm optimization (AGPSO) is applied to solve the PID gain tuning problem of the HTRS system. This AGPSO-based method, unlike a traditional single-objective optimization method, is designed to take care of settling time and overshoot level simultaneously, generating a set of non-inferior alternative solutions (i.e. a Pareto set). Furthermore, a fuzzy-based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto set. An illustrative example of parameter tuning for the nonlinear HTRS system is introduced to verify the feasibility and effectiveness of the proposed AGPSO-based optimization approach, as compared with two other prominent multi-objective algorithms, i.e. Non-dominated Sorting Genetic Algorithm II (NSGAII) and Strength Pareto Evolutionary Algorithm II (SPEAII), in terms of the quality and diversity of the obtained Pareto solution sets. Simulation results show that this AGPSO-based approach outperforms the compared methods, with higher efficiency and better quality, whether the HTRS system works under unload or load conditions. PMID:25481821
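    The sketch below shows only the Pareto-dominance filtering that underlies any multi-objective tuner like the one described above: candidate PID tunings scored by settling time and overshoot are reduced to their non-dominated set. The candidate values are made up for illustration, and neither the AGPSO search nor the fuzzy compromise selection is implemented here.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one
    (both objectives are minimized here: settling time and overshoot)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated (Pareto-optimal) solutions."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Illustrative candidate PID tunings scored by (settling time [s], overshoot [%]).
candidates = [(2.1, 12.0), (1.8, 18.0), (2.5, 8.0), (2.2, 11.0), (3.0, 9.0)]
print(pareto_front(candidates))
```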

  7. Grid generation research at OSU

    NASA Technical Reports Server (NTRS)

    Nakamura, S.

    1992-01-01

    In the last two years, effort was concentrated on: (1) surface modeling; (2) surface grid generation; and (3) 3-D flow space grid generation. Surface modeling shares the same objectives as surface modeling in computer aided design (CAD), so software available in CAD can in principle be used for solid modeling. Unfortunately, however, the CAD software cannot easily be used in practice for grid generation purposes, because it is not designed to provide an appropriate data base for grid generation. Therefore, we started developing generalized surface modeling software from scratch that provides the data base for surface grid generation. Generating a surface grid is an important step in generating a 3-D grid for the flow space. To generate a surface grid on a given surface representation, we developed a unique algorithm that works on any non-smooth surface. Once the surface grid is generated, a 3-D space grid can be generated. For this purpose, we also developed a new algorithm, which is a hybrid of the hyperbolic and elliptic grid generation methods. With this hybrid method, orthogonality of the grid near the solid boundary can easily be achieved without introducing empirical fudge factors. Work to develop 2-D and 3-D grids for turbomachinery blade geometries was performed, and as an extension of this research we are planning to develop an adaptive grid procedure with an interactive grid environment.
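    To make the elliptic ingredient of the hybrid method above concrete, here is a minimal Laplace (Jacobi) smoothing of the interior nodes of a structured grid with fixed boundaries. This shows only generic elliptic smoothing on an assumed perturbed square grid; it is not the hybrid hyperbolic-elliptic algorithm developed at OSU.

```python
import numpy as np

def laplace_smooth(grid, iterations=2000):
    """Elliptic (Laplace) smoothing of the interior nodes of a structured grid.

    grid has shape (ni, nj, 2); boundary nodes are held fixed, and every interior
    node is repeatedly replaced by the average of its four neighbours."""
    g = grid.copy()
    for _ in range(iterations):
        g[1:-1, 1:-1] = 0.25 * (g[2:, 1:-1] + g[:-2, 1:-1] + g[1:-1, 2:] + g[1:-1, :-2])
    return g

# Distort the interior of an initially uniform grid, then smooth it back.
ni, nj = 21, 21
x, y = np.meshgrid(np.linspace(0, 1, ni), np.linspace(0, 1, nj), indexing="ij")
grid = np.stack([x, y], axis=2)
grid[1:-1, 1:-1] += 0.02 * np.random.default_rng(1).standard_normal((ni - 2, nj - 2, 2))
print(np.abs(laplace_smooth(grid) - np.stack([x, y], axis=2)).max())  # close to the uniform grid again
```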

  8. AZEuS: AN ADAPTIVE ZONE EULERIAN SCHEME FOR COMPUTATIONAL MAGNETOHYDRODYNAMICS

    SciTech Connect

    Ramsey, Jon P.; Clarke, David A.; Men'shchikov, Alexander B.

    2012-03-01

    A new adaptive mesh refinement (AMR) version of the ZEUS-3D astrophysical magnetohydrodynamical fluid code, AZEuS, is described. The AMR module in AZEuS has been completely adapted to the staggered mesh that characterizes the ZEUS family of codes on which scalar quantities are zone-centered and vector components are face-centered. In addition, for applications using static grids, it is necessary to use higher-order interpolations for prolongation to minimize the errors caused by waves crossing from a grid of one resolution to another. Finally, solutions to test problems in one, two, and three dimensions in both Cartesian and spherical coordinates are presented.

  9. Breach, Leach, and Transport-Multiple Species GRID

    2006-04-01

    BLTMS-GRID is a FORTRAN code developed to facilitate specifications of a finite-element grid for the Nuclear Regulatory Commission code called Breach, Leach, and Transport - Multiple Species (BLT-MS). BLTMS-GRID is an open-source code. It functions under a DOS window.

  10. AN ADAPTIVE PARTICLE-MESH GRAVITY SOLVER FOR ENZO

    SciTech Connect

    Passy, Jean-Claude; Bryan, Greg L.

    2014-11-01

    We describe and implement an adaptive particle-mesh algorithm to solve the Poisson equation for grid-based hydrodynamics codes with nested grids. The algorithm is implemented and extensively tested within the astrophysical code Enzo against the multigrid solver available by default. We find that while both algorithms show similar accuracy for smooth mass distributions, the adaptive particle-mesh algorithm is more accurate for the case of point masses, and is generally less noisy. We also demonstrate that the two-body problem can be solved accurately in a configuration with nested grids. In addition, we discuss the effect of subcycling, and demonstrate that evolving all the levels with the same timestep yields even greater precision.
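    A minimal one-dimensional sketch of the particle-mesh idea described above: particle masses are deposited onto a periodic grid with cloud-in-cell weights and the Poisson equation is solved spectrally. This is a single-level, 1D toy in units with 4*pi*G = 1; the adaptive, nested-grid solver in Enzo is far more elaborate and is not represented here.

```python
import numpy as np

def particle_mesh_potential(positions, masses, n_cells, box=1.0):
    """Cloud-in-cell mass deposit on a periodic grid plus an FFT Poisson solve.

    Solves d2(phi)/dx2 = rho; the k = 0 mode is removed, which amounts to
    subtracting the mean density on the periodic domain."""
    dx = box / n_cells
    rho = np.zeros(n_cells)
    for x, m in zip(positions, masses):
        u = (x % box) / dx
        i = int(np.floor(u))
        frac = u - i
        rho[i % n_cells] += m * (1.0 - frac) / dx    # CIC: split mass between the
        rho[(i + 1) % n_cells] += m * frac / dx      # two nearest cells
    k = 2.0 * np.pi * np.fft.fftfreq(n_cells, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = -rho_k[1:] / k[1:] ** 2
    return np.real(np.fft.ifft(phi_k))

phi = particle_mesh_potential(positions=[0.25, 0.75], masses=[1.0, 1.0], n_cells=64)
print(phi.argmin() / 64)   # the deepest potential wells sit at the particle positions
```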

  11. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up operations play a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in many table memory accesses and hence high table power consumption. To address the large table memory access of current methods, and thereby reduce power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is that index search technology is introduced to reduce the memory access required for table look-up, and thus to reduce table power consumption. Specifically, in our scheme, we use index search technology to reduce memory access by reducing the searching and matching operations for code_word, taking advantage of the internal relationship among the number of zeros in code_prefix, the value of code_suffix and code_length, thus saving the power consumption of table look-up. The experimental results show that our proposed table look-up algorithm based on index search can lower memory access consumption by about 60% compared with table look-up by sequential search, and thus save considerable power consumption for CAVLD in H.264/AVC.
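    The sketch below illustrates the prefix/suffix relationship the abstract exploits, using unsigned Exp-Golomb codes as a stand-in for the actual CAVLC tables: counting the leading zeros gives the prefix length directly, so the value is computed from the suffix without sequentially matching the bitstream against a codeword table. The bitstream shown is a made-up example.

```python
def decode_exp_golomb(bits, pos=0):
    """Index-based decode of an unsigned Exp-Golomb codeword starting at pos.

    Counting the leading zeros gives the prefix length directly, so the value
    is computed from the suffix without scanning a codeword table."""
    zeros = 0
    while bits[pos + zeros] == 0:
        zeros += 1
    suffix = 0
    for b in bits[pos + zeros + 1 : pos + 2 * zeros + 1]:
        suffix = (suffix << 1) | b
    value = (1 << zeros) - 1 + suffix
    return value, pos + 2 * zeros + 1      # decoded value and new bit position

# Bitstream holding the values 0, 3, 1 as Exp-Golomb codewords: 1, 00100, 010
stream = [1, 0, 0, 1, 0, 0, 0, 1, 0]
pos, decoded = 0, []
while pos < len(stream):
    value, pos = decode_exp_golomb(stream, pos)
    decoded.append(value)
print(decoded)   # [0, 3, 1]
```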

  12. Parallel Power Grid Simulation Toolkit

    SciTech Connect

    Smith, Steve; Kelley, Brian; Banks, Lawrence; Top, Philip; Woodward, Carol

    2015-09-14

    ParGrid is a 'wrapper' that integrates a coupled Power Grid Simulation toolkit consisting of a library to manage the synchronization and communication of independent simulations. The included library code in ParGrid, named FSKIT, is intended to support the coupling of multiple continuous and discrete event parallel simulations. The code is designed using modern object-oriented C++ methods, utilizing C++11 and current Boost libraries to ensure compatibility with multiple operating systems and environments.

  13. Reading the Second Code: Mapping Epigenomes to Understand Plant Growth, Development, and Adaptation to the Environment[OA

    PubMed Central

    2012-01-01

    We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual’s set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of “epigenetic” layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature’s second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210

  14. Comparison of DAC and MONACO DSMC Codes with Flat Plate Simulation

    NASA Technical Reports Server (NTRS)

    Padilla, Jose F.

    2010-01-01

    Various implementations of the direct simulation Monte Carlo (DSMC) method exist in academia, government and industry. By comparing implementations, deficiencies and merits of each can be discovered. This document reports comparisons between DSMC Analysis Code (DAC) and MONACO. DAC is NASA's standard DSMC production code and MONACO is a research DSMC code developed in academia. These codes have various differences; in particular, they employ distinct computational grid definitions. In this study, DAC and MONACO are compared by having each simulate a blunted flat plate wind tunnel test, using an identical volume mesh. Simulation expense and DSMC metrics are compared. In addition, flow results are compared with available laboratory data. Overall, this study revealed that both codes, excluding grid adaptation, performed similarly. For parallel processing, DAC was generally more efficient. As expected, code accuracy was mainly dependent on physical models employed.

  15. Performance of a Block Structured, Hierarchical Adaptive MeshRefinement Code on the 64k Node IBM BlueGene/L Computer

    SciTech Connect

    Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.; Rendleman, Charles A.; Skinner, David; Beckner, Vince; Lijewski, Mike; Bell, John; Sexton, James C.

    2005-04-25

    We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k node IBM BlueGene/L computer. This machine represents a significant step forward towards petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.

  16. Current Grid operation and future role of the Grid

    NASA Astrophysics Data System (ADS)

    Smirnova, O.

    2012-12-01

    Grid-like technologies and approaches have become an integral part of HEP experiments. Some other scientific communities also use similar technologies for data-intensive computations. The distinct feature of Grid computing is the ability to federate heterogeneous resources of different ownership into a seamless infrastructure, accessible via a single log-on. Like other infrastructures of similar nature, Grid functioning requires not only a technologically sound basis, but also reliable operation procedures, monitoring and accounting. The two aspects, technological and operational, are closely related: the weaker the technology, the more the burden on operations, and vice versa. As of today, Grid technologies are still evolving: at CERN alone, every LHC experiment uses its own Grid-like system. This inevitably creates a heavy load on operations. Infrastructure maintenance, monitoring and incident response are done on several levels, from local system administrators to large international organisations, involving massive human effort worldwide. The necessity to commit substantial resources is one of the obstacles faced by smaller research communities when moving computing to the Grid. Moreover, most current Grid solutions were developed under significant influence of HEP use cases, and thus need additional effort to adapt them to other applications. The reluctance of many non-HEP researchers to use the Grid negatively affects the outlook for national Grid organisations, which strive to provide multi-science services. We started from a situation where Grid organisations were fused with HEP laboratories and national HEP research programmes; we hope to move towards a world where the Grid will ultimately reach the status of a generic public computing and storage service provider and permanent national and international Grid infrastructures will be established. How far we will be able to advance along this path depends on us. If no standardisation and convergence efforts will take place

  17. Do you really represent my task? Sequential adaptation effects to unexpected events support referential coding for the joint Simon effect.

    PubMed

    Klempova, Bibiana; Liepelt, Roman

    2016-07-01

    Recent findings suggest that a Simon effect (SE) can be induced in Individual go/nogo tasks when responding next to an event-producing object salient enough to provide a reference for the spatial coding of one's own action. However, there is skepticism against referential coding for the joint Simon effect (JSE) by proponents of task co-representation. In the present study, we tested assumptions of task co-representation and referential coding by introducing unexpected double response events in a joint go/nogo and a joint independent go/nogo task. In Experiment 1b, we tested if task representations are functionally similar in joint and standard Simon tasks. In Experiment 2, we tested sequential updating of task co-representation after unexpected single response events in the joint independent go/nogo task. Results showed increased JSEs following unexpected events in the joint go/nogo and joint independent go/nogo task (Experiment 1a). While the former finding is in line with the assumptions made by both accounts (task co-representation and referential coding), the latter finding supports referential coding. In contrast to Experiment 1a, we found a decreased SE after unexpected events in the standard Simon task (Experiment 1b), providing evidence against the functional equivalence assumption between joint and two-choice Simon tasks of the task co-representation account. Finally, we found an increased JSE also following unexpected single response events (Experiment 2), ruling out that the findings of the joint independent go/nogo task in Experiment 1a were due to a re-conceptualization of the task situation. In conclusion, our findings support referential coding also for the joint Simon effect. PMID:25833374

  18. The Grid

    SciTech Connect

    White, Vicky

    2003-05-21

    By now almost everyone has heard of 'The Grid', or 'Grid Computing' as it should more properly be described. There are frequent articles in both the popular and scientific press talking about 'The Grid' or about some specific Grid project. Run II Experiments, US-CMS, BTeV, the Sloan Digital Sky Survey and the Lattice QCD folks are all incorporating aspects of Grid Computing in their plans, and the Fermilab Computing Division is supporting and encouraging these efforts. Why are we doing this and what does it have to do with running a physics experiment or getting scientific results? I will explore some of these questions and try to give an overview, not so much of the technical aspects of Grid Computing, rather of what the phenomenon means for our field.

  19. Progress in Grid Generation: From Chimera to DRAGON Grids

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Kao, Kai-Hsiung

    1994-01-01

    Hybrid grids, composed of structured and unstructured grids, combine the best features of both. The chimera method is a major stepping stone toward a hybrid grid, from which the present approach is evolved. A chimera grid comprises a set of overlapped structured grids which are independently generated and body-fitted, yielding a high quality grid readily accessible for efficient solution schemes. The chimera method has been shown to be efficient for generating grids about complex geometries and has been demonstrated to deliver accurate aerodynamic predictions of complex flows. While its geometrical flexibility is attractive, the interpolation of data in the overlapped regions, which in today's 3D practice is done in a nonconservative fashion, is not. In the present paper we propose a hybrid grid scheme that maximizes the advantages of the chimera scheme and adapts the strengths of the unstructured grid while keeping its weaknesses minimal. Like the chimera method, we first divide up the physical domain by a set of structured body-fitted grids which are separately generated and overlaid throughout a complex configuration. To eliminate any pure data manipulation which does not necessarily follow the governing equations, we use unstructured grids only to directly replace the region of the arbitrarily overlapped grids. This new adaptation of the chimera thinking is coined the DRAGON grid. The unstructured grid region sandwiched between the structured grids is limited in size, resulting in only a small increase in memory and computational effort. The DRAGON method has three important advantages: (1) it preserves the strengths of the chimera grid; (2) it eliminates difficulties sometimes encountered in the chimera scheme, such as orphan points and poor-quality interpolation stencils; and (3) it makes grid communication fully conservative and consistent insofar as the governing equations are concerned. To demonstrate its use, the governing equations are

  20. Adaptive Finite Element Methods in Geodynamics

    NASA Astrophysics Data System (ADS)

    Davies, R.; Davies, H.; Hassan, O.; Morgan, K.; Nithiarasu, P.

    2006-12-01

    Adaptive finite element methods are presented for improving the quality of solutions to two-dimensional (2D) and three-dimensional (3D) convection dominated problems in geodynamics. The methods demonstrate the application of existing technology in the engineering community to problems within the `solid' Earth sciences. Two-Dimensional `Adaptive Remeshing': The `remeshing' strategy introduced in 2D adapts the mesh automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. The approach requires the coupling of an automatic mesh generator, a finite element flow solver and an error estimator. In this study, the procedure is implemented in conjunction with the well-known geodynamical finite element code `ConMan'. An unstructured quadrilateral mesh generator is utilised, with mesh adaptation accomplished through regeneration. This regeneration employs information provided by an interpolation based local error estimator, obtained from the computed solution on an existing mesh. The technique is validated by solving thermal and thermo-chemical problems with known benchmark solutions. In a purely thermal context, results illustrate that the method is highly successful, improving solution accuracy whilst increasing computational efficiency. For thermo-chemical simulations the same conclusions can be drawn. However, results also demonstrate that the grid based methods employed for simulating the compositional field are not competitive with the other methods (tracer particle and marker chain) currently employed in this field, even at the higher spatial resolutions allowed by the adaptive grid strategies. Three-Dimensional Adaptive Multigrid: We extend the ideas from our 2D work into the 3D realm in the context of a pre-existing 3D-spherical mantle dynamics code, `TERRA'. In its original format, `TERRA' is computationally highly efficient since it employs a multigrid solver that depends upon a grid utilizing a clever

  1. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Farmer, R. C.

    1992-01-01

    A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
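    The sketch below is a NumPy analogue of the reordering idea mentioned above: the same particle update is written once as a per-particle loop and once over whole arrays, which is the form that maps onto vector hardware. The drag-style update and array sizes are illustrative assumptions and have no connection to the actual FDNS physics or Fortran code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
pos = rng.random((n, 3))           # particle positions, one long array per quantity
vel = rng.standard_normal((n, 3))
drag = 0.02
dt = 1.0e-3

def advance_scalar(pos, vel):
    """Particle-by-particle update: short inner loops defeat vectorization."""
    for i in range(len(pos)):
        vel[i] = vel[i] * (1.0 - drag)
        pos[i] = pos[i] + dt * vel[i]

def advance_vectorized(pos, vel):
    """Same update expressed over whole arrays, so it maps to vector hardware."""
    vel *= (1.0 - drag)
    pos += dt * vel

advance_vectorized(pos, vel)
print(pos.shape)
```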

  2. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    NASA Astrophysics Data System (ADS)

    Chen, Y. S.; Farmer, R. C.

    1992-04-01

    A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.

  3. LaMEM: a massively parallel 3D staggered-grid finite-difference code for coupled nonlinear thermo-mechanical modeling of lithospheric deformation with visco-elasto-plastic rheology

    NASA Astrophysics Data System (ADS)

    Popov, Anton; Kaus, Boris

    2015-04-01

    This software project aims at bringing 3D lithospheric deformation modeling to a qualitatively different level. Our code LaMEM (Lithosphere and Mantle Evolution Model) is based on the following building blocks: * Massively parallel, data-distributed implementation model based on the PETSc library * Light, stable and accurate staggered-grid finite difference spatial discretization * Marker-in-Cell predictor-corrector time discretization with 4th-order Runge-Kutta * Elastic stress rotation algorithm based on the time integration of the vorticity pseudo-vector * Staircase-type internal free surface boundary condition without artificial viscosity contrast * Geodynamically relevant visco-elasto-plastic rheology * Global velocity-pressure-temperature Newton-Raphson nonlinear solver * Local nonlinear solver based on the FZERO algorithm * Coupled velocity-pressure geometric multigrid preconditioner with Galerkin coarsening. The staggered-grid finite difference method, being an inherently Eulerian and rather complicated discretization, provides no natural treatment of the free surface boundary condition. The solution based on a quasi-viscous sticky-air phase introduces significant viscosity contrasts and spoils the convergence of the iterative solvers. In LaMEM we are currently implementing an approximate staircase type of free surface boundary condition which excludes the empty cells and restores solver convergence. Because of the mutual dependence of the stress and strain-rate tensor components, and their different spatial locations in the grid, there is no straightforward way of implementing the nonlinear rheology. In LaMEM we have developed and implemented an efficient interpolation scheme for the second invariant of the strain-rate tensor that solves this problem. Scalable, efficient linear solvers are the key components of a successful nonlinear problem solution. In LaMEM we have a range of PETSc-based preconditioning techniques that either employ a block factorization of

  4. Fibonacci Grids

    NASA Technical Reports Server (NTRS)

    Swinbank, Richard; Purser, James

    2006-01-01

    Recent years have seen a resurgence of interest in a variety of non-standard computational grids for global numerical prediction. The motivation has been to reduce problems associated with the converging meridians and the polar singularities of conventional regular latitude-longitude grids. A further impetus has come from the adoption of massively parallel computers, for which it is necessary to distribute work equitably across the processors; this is more practicable for some non-standard grids. Desirable attributes of a grid for high-order spatial finite differencing are: (i) geometrical regularity; (ii) a homogeneous and approximately isotropic spatial resolution; (iii) a low proportion of the grid points where the numerical procedures require special customization (such as near coordinate singularities or grid edges). One family of grid arrangements which, to our knowledge, has never before been applied to numerical weather prediction, but which appears to offer several technical advantages, are what we shall refer to as "Fibonacci grids". They can be thought of as mathematically ideal generalizations of the patterns occurring naturally in the spiral arrangements of seeds and fruit found in sunflower heads and pineapples (to give two of the many botanical examples). These grids possess virtually uniform and highly isotropic resolution, with an equal area for each grid point. There are only two compact singular regions on a sphere that require customized numerics. We demonstrate the practicality of these grids in shallow water simulations, and discuss the prospects for efficiently using these frameworks in three-dimensional semi-implicit and semi-Lagrangian weather prediction or climate models.
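    The sketch below generates the basic Fibonacci lattice on the unit sphere: longitudes advance by the golden angle while the z coordinate steps through equal-area bands, giving the nearly uniform, isotropic point distribution described above. The meteorological Fibonacci grids in the paper are a more elaborate construction built on this pattern; only the underlying point placement is shown here.

```python
import numpy as np

def fibonacci_sphere(n):
    """n nearly uniform points on the unit sphere via the golden-angle spiral."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))        # about 137.5 degrees
    i = np.arange(n)
    z = 1.0 - (2.0 * i + 1.0) / n                      # equal-area bands in z
    theta = golden_angle * i                           # longitude advances by the golden angle
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

pts = fibonacci_sphere(1000)
print(np.allclose(np.linalg.norm(pts, axis=1), 1.0))   # all points lie on the sphere
```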

  5. Verification of the CENTRM Module for Adaptation of the SCALE Code to NGNP Prismatic and PBR Core Designs

    SciTech Connect

    Ganapol, Barry; Maldonado, Ivan

    2014-01-23

    The generation of multigroup cross sections lies at the heart of the very high temperature reactor (VHTR) core design, whether the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will include a detailed outline of the entire processing procedure for application of CENTRM in a final report complete with demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: Thoroughly test the panel algorithm for neutron slowing down; Develop the panel algorithm for multi-materials; Establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; Verify CENTRM in 1D plane geometry; Create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and, Apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity.

  6. GridTool: A surface modeling and grid generation tool

    NASA Technical Reports Server (NTRS)

    Samareh-Abolhassani, Jamshid

    1995-01-01

    GridTool is designed around the concept that the surface grids are generated on a set of bi-linear patches. This type of grid generation is quite easy to implement, and it avoids the problems associated with complex CAD surface representations and associated surface parameterizations. However, the resulting surface grids are close to but not on the original CAD surfaces. This problem can be alleviated by projecting the resulting surface grids onto the original CAD surfaces. GridTool is designed primarily for unstructured grid generation systems. Currently, GridTool supports the VGRID and FELISA systems, and it can easily be extended to support other unstructured grid generation systems. The data in GridTool are stored parametrically so that once the problem is set up, one can modify the surfaces and the entire set of points, curves and patches will be updated automatically. This is very useful in a multidisciplinary design and optimization process. GridTool is written entirely in ANSI 'C', the interface is based on the FORMS library, and the graphics are based on the GL library. The code has been tested successfully on IRIS workstations running IRIX 4.0 and above. Memory is allocated dynamically; therefore, memory size will depend on the complexity of the geometry/grid. The GridTool data structure is based on a linked-list structure which allows the required memory to expand and contract dynamically according to the user's data size and actions. The data structure contains several types of objects such as points, curves, patches, sources and surfaces. At any given time there is always an active object, which is drawn in magenta or in its highlighted color as defined by the resource file, which is discussed later.

  7. X3D moving grid methods for semiconductor applications

    SciTech Connect

    Kuprat, A.; Cartwright, D.; Gammel, J.T.; George, D.; Kendrick, B.; Kilcrease, D.; Trease, H.; Walker, R.

    1997-11-01

    The Los Alamos 3D grid toolbox handles grid maintenance chores and provides access to a sophisticated set of optimization algorithms for unstructured grids. The application of these tools to semiconductor problems is illustrated in three examples: grain growth, topographic deposition and electrostatics. These examples demonstrate adaptive smoothing, front tracking, and automatic, adaptive refinement/derefinement.

  8. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. User's manual

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    The TranAir computer program calculates transonic flow about arbitrary configurations at subsonic, transonic, and supersonic freestream Mach numbers. TranAir solves the nonlinear full potential equations subject to a variety of boundary conditions modeling wakes, inlets, exhausts, porous walls, and impermeable surfaces. Regions with different total temperature and pressure can be represented. The user's manual describes how to run the TranAir program and its graphical support programs.

  9. CFD code evaluation for internal flow modeling

    NASA Technical Reports Server (NTRS)

    Chung, T. J.

    1990-01-01

    Research on computational fluid dynamics (CFD) code evaluation with emphasis on supercomputing in reacting flows is discussed. Advantages of unstructured grids, multigrids, adaptive methods, improved flow solvers, vector processing, parallel processing, and reduction of memory requirements are discussed. As examples, the researchers include applications of supercomputing to the reacting-flow Navier-Stokes equations, including shock waves and turbulence, and to combustion instability problems associated with solid and liquid propellants. Evaluation of codes developed by other organizations is not included. Instead, the basic criteria for accuracy and efficiency have been established, and some applications to rocket combustion have been made. Research toward the ultimate goal, the most accurate and efficient CFD code, is in progress and will continue for years to come.

  10. caTIES: a grid based system for coding and retrieval of surgical pathology reports and tissue specimens in support of translational research.

    PubMed

    Crowley, Rebecca S; Castine, Melissa; Mitchell, Kevin; Chavan, Girish; McSherry, Tara; Feldman, Michael

    2010-01-01

    The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)--an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs. PMID:20442142

  11. caTIES: a grid based system for coding and retrieval of surgical pathology reports and tissue specimens in support of translational research

    PubMed Central

    Castine, Melissa; Mitchell, Kevin; Chavan, Girish; McSherry, Tara; Feldman, Michael

    2010-01-01

    The authors report on the development of the Cancer Tissue Information Extraction System (caTIES)—an application that supports collaborative tissue banking and text mining by leveraging existing natural language processing methods and algorithms, grid communication and security frameworks, and query visualization methods. The system fills an important need for text-derived clinical data in translational research such as tissue-banking and clinical trials. The design of caTIES addresses three critical issues for informatics support of translational research: (1) federation of research data sources derived from clinical systems; (2) expressive graphical interfaces for concept-based text mining; and (3) regulatory and security model for supporting multi-center collaborative research. Implementation of the system at several Cancer Centers across the country is creating a potential network of caTIES repositories that could provide millions of de-identified clinical reports to users. The system provides an end-to-end application of medical natural language processing to support multi-institutional translational research programs. PMID:20442142

  12. NAS Grid Benchmarks: A Tool for Grid Space Exploration

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; VanderWijngaart, Rob F.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    We present an approach for benchmarking services provided by computational Grids. It is based on the NAS Parallel Benchmarks (NPB) and is called NAS Grid Benchmark (NGB) in this paper. We present NGB as a data flow graph encapsulating an instance of an NPB code in each graph node, which communicates with other nodes by sending/receiving initialization data. These nodes may be mapped to the same or different Grid machines. Like NPB, NGB will specify several different classes (problem sizes). NGB also specifies the generic Grid services sufficient for running the benchmark. The implementor has the freedom to choose any specific Grid environment. However, we describe a reference implementation in Java, and present some scenarios for using NGB.
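
    As a rough illustration of the data-flow-graph idea described above, the sketch below runs a tiny task graph in dependency order and forwards each node's output to its successors as initialization data. The node names, the trivial stand-in "solvers", and the payload format are made up for illustration and are not part of NGB.

```python
from collections import defaultdict

# Toy data-flow graph in the spirit of NGB: each node wraps a "solver" and
# forwards its output to its successor nodes. Names and payloads are made up.
graph = {"SP.A": ["BT.A", "LU.A"], "BT.A": ["MG.A"], "LU.A": ["MG.A"], "MG.A": []}

def solve(node, inputs):
    """Stand-in for running one benchmark instance; just records the path taken."""
    return [f"{node}({'+'.join(inputs) if inputs else 'init'})"]

def run(graph):
    """Execute nodes in dependency order, sending results to successor nodes."""
    indeg = defaultdict(int)
    for succs in graph.values():
        for s in succs:
            indeg[s] += 1
    inbox = defaultdict(list)
    ready = [n for n in graph if indeg[n] == 0]
    outputs = {}
    while ready:
        node = ready.pop()
        outputs[node] = solve(node, inbox[node])
        for s in graph[node]:
            inbox[s].extend(outputs[node])        # send/receive initialization data
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return outputs

print(run(graph)["MG.A"])   # MG.A receives data produced by both BT.A and LU.A
```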

  13. Grid oscillators

    NASA Technical Reports Server (NTRS)

    Popovic, Zorana B.; Kim, Moonil; Rutledge, David B.

    1988-01-01

    Loading a two-dimensional grid with active devices offers a means of combining the power of solid-state oscillators in the microwave and millimeter-wave range. The grid structure allows a large number of negative resistance devices to be combined. This approach is attractive because the active devices do not require an external locking signal, and the combining is done in free space. In addition, the loaded grid is a planar structure amenable to monolithic integration. Measurements on a 25-MESFET grid at 9.7 GHz show power-combining and frequency-locking without an external locking signal, with an ERP of 37 W. Experimental far-field patterns agree with theoretical results obtained using reciprocity.

  14. Grid Computing

    NASA Astrophysics Data System (ADS)

    Foster, Ian

    2001-08-01

    The term "Grid Computing" refers to the use, for computational purposes, of emerging distributed Grid infrastructures: that is, network and middleware services designed to provide on-demand and high-performance access to all important computational resources within an organization or community. Grid computing promises to enable both evolutionary and revolutionary changes in the practice of computational science and engineering based on new application modalities such as high-speed distributed analysis of large datasets, collaborative engineering and visualization, desktop access to computation via "science portals," rapid parameter studies and Monte Carlo simulations that use all available resources within an organization, and online analysis of data from scientific instruments. In this article, I examine the status of Grid computing circa 2000, briefly reviewing some relevant history, outlining major current Grid research and development activities, and pointing out likely directions for future work. I also present a number of case studies, selected to illustrate the potential of Grid computing in various areas of science.

  15. DNS of vibrating grid turbulence

    NASA Astrophysics Data System (ADS)

    Khujadze, G.; Oberlack, M.

    Direct numerical simulation of the turbulence generated at a grid vibrating normally to itself, using a spectral code [1], is presented. Due to zero mean shear there is no production of turbulence apart from the grid. The action of the grid is mimicked by a forcing function implemented in the middle of the simulation box: f_i(x_1, x_2) = (n²S/2) { | (δ_{i3}/4) cos(2πx_1/M) cos(2πx_2/M) | sin(nt) + β_i/4 }, where M is the mesh size, S/2 the amplitude (stroke) of the grid, and n the frequency. The β_i are random numbers with uniform distribution. The simulations were performed for the following parameters: x_1, x_2 ∈ [-π, π], x_3 ∈ [-2π, 2π]; Re = nS²/ν = 1000; S/M = 2; numerical grid: 128 × 128 × 256.
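
    For readers who want to experiment with a comparable forcing term, the sketch below evaluates the grid-mimicking function on a horizontal plane of the box. The parameter values (M, S, n), the range assumed for the random numbers β_i, and the array layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def grid_forcing(x1, x2, t, i, M=np.pi, S=2 * np.pi, n=1.0,
                 rng=np.random.default_rng(0)):
    """Illustrative forcing f_i(x1, x2, t) mimicking a vibrating grid.

    Defaults satisfy S/M = 2 as in the record; only the i = 3 (grid-normal)
    component carries the cosine pattern via the Kronecker delta, and beta_i
    adds a small uniform random offset (range assumed here).
    """
    delta_i3 = 1.0 if i == 3 else 0.0
    beta_i = rng.uniform(-1.0, 1.0)
    pattern = np.abs(delta_i3 / 4.0
                     * np.cos(2 * np.pi / M * x1)
                     * np.cos(2 * np.pi / M * x2))
    return n**2 * S / 2.0 * (pattern * np.sin(n * t) + beta_i / 4.0)

# Evaluate on a small slice of the box, x1, x2 in [-pi, pi]
x1, x2 = np.meshgrid(np.linspace(-np.pi, np.pi, 64),
                     np.linspace(-np.pi, np.pi, 64))
f3 = grid_forcing(x1, x2, t=0.1, i=3)
print(f3.shape, float(f3.max()))
```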

  16. DNS of vibrating grid turbulence

    NASA Astrophysics Data System (ADS)

    Khujadze, G.; Oberlack, M.

    Direct numerical simulation of the turbulence generated at a grid vibrating normally to itself, using a spectral code [1], is presented. Due to zero mean shear there is no production of turbulence apart from the grid. The action of the grid is mimicked by a forcing function implemented in the middle of the simulation box: f_i(x_1, x_2) = (n²S/2) { | (δ_{i3}/4) cos(2πx_1/M) cos(2πx_2/M) | sin(nt) + β_i/4 }, where M is the mesh size, S/2 the amplitude (stroke) of the grid, and n the frequency. The β_i are random numbers with uniform distribution. The simulations were performed for the following parameters: x_1, x_2 ∈ [-π, π], x_3 ∈ [-2π, 2π]; Re = nS²/ν = 1000; S/M = 2; numerical grid: 128 × 128 × 256.

  17. Automation and adaptation: Nurses' problem-solving behavior following the implementation of bar coded medication administration technology.

    PubMed

    Holden, Richard J; Rivera-Rodriguez, A Joy; Faye, Héléne; Scanlon, Matthew C; Karsh, Ben-Tzion

    2013-08-01

    The most common change facing nurses today is new technology, particularly bar coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses' operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals, were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA's impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians' work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign. PMID:24443642

  18. Adaptive coding of the value of social cues with oxytocin, an fMRI study in autism spectrum disorder.

    PubMed

    Andari, Elissar; Richard, Nathalie; Leboyer, Marion; Sirigu, Angela

    2016-03-01

    The neuropeptide oxytocin (OT) is one of the major targets of research in neuroscience, with respect to social functioning. Oxytocin promotes social skills and improves the quality of face processing in individuals with social dysfunctions such as autism spectrum disorder (ASD). Although one of OT's key functions is to promote social behavior during dynamic social interactions, the neural correlates of this function remain unknown. Here, we combined acute intranasal OT (IN-OT) administration (24 IU) and fMRI with an interactive ball game and a face-matching task in individuals with ASD (N = 20). We found that IN-OT selectively enhanced the brain activity of early visual areas in response to faces as compared to non-social stimuli. OT inhalation modulated the BOLD activity of amygdala and hippocampus in a context-dependent manner. Interestingly, IN-OT intake enhanced the activity of mid-orbitofrontal cortex in response to a fair partner, and insula region in response to an unfair partner. These OT-induced neural responses were accompanied by behavioral improvements in terms of allocating appropriate feelings of trust toward different partners' profiles. Our findings suggest that OT impacts the brain activity of key areas implicated in attention and emotion regulation in an adaptive manner, based on the value of social cues. PMID:26872344

  19. CFD Process Automation Using Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; George, Michael W. (Technical Monitor)

    1995-01-01

    This talk summarizes three applications of the overset grid method for CFD using some level of automated grid generation, flow solution and post-processing. These applications are 2D high-lift airfoil analysis (INS2D code), turbomachinery applications (ROTOR2/3 codes), and subsonic transport wing/body configurations (OVERFLOW code). These examples provide a forum for discussing the advantages and disadvantages of overset gridding for use in an automated CFD process. The goals and benefits of the automation incorporated in each application will be described, as well as the shortcomings of the approaches.

  20. Military adaptation of commercial items: laboratory evaluation of the Code E-436 engine. Technical report 17 January-28 July 1983

    SciTech Connect

    Rimpela, R.J.G.

    1984-02-01

    The engine was installed in a dynamometer test cell at US Army Tank-Automotive Command (TACOM) and conventional dynamometer testing procedures were used to determine basic engine characteristics. The characteristics determined were full load performance, fuel economy at full load and part load, engine oil consumption, and engine heat rejection. During pre-endurance testing, the Code E-436 engine produced 378 observed kW (506.4 BHP) at full load, at the rated speed of 2,600 RPM. The maximum torque during full load operation was 1439 Nm (1061 lb-ft) at 2,400 RPM. Minimum brake specific fuel consumption at full load occurred at 2,200 RPM and was 217 g/kWh (0.356 lb/BHP-hr). After the NATO Endurance Test the engine produced 375.1 observed kW (503.0 BHP) at full load and rated speed. The maximum torque was 1423.8 Nm (1050 lb-ft) at 2,400 RPM. The total lube oil consumption during the 400-hour NATO endurance was 19.7 kg (43.4 lb). Following the endurance test, visual and dimensional inspection of the engine revealed all major engine parts to be in excellent condition except for the pistons. Five out of eight pistons developed cracks in the pin bores. Though the engine completed the endurance test (400 hours) and was operated for a total of 582 hours, the engine is considered to have failed the 400-hour NATO test due to piston failure.

  1. Computer code for the calculation of the temperature distribution of cooled turbine blades

    NASA Astrophysics Data System (ADS)

    Tietz, Thomas A.; Koschel, Wolfgang W.

    A generalized computer code for the calculation of the temperature distribution in a cooled turbine blade is presented. Using an iterative procedure, this program in particular allows the coupling of the aerothermodynamic values of the internal flow with the corresponding temperature distribution of the blade material. The temperature distribution of the turbine blade is calculated using a fully three-dimensional finite element computer code, so that the radial heat flux is taken into account. This code was extended to 4-node tetrahedral elements, enabling adaptive grid generation. To facilitate mesh generation for the usually complex blade geometries, a computer program was developed that performs grid generation for blades of essentially arbitrary shape on the basis of two-dimensional cuts. The performance of the code is demonstrated with reference to a typical cooling configuration of a modern turbine blade.

  2. GridMol: a grid application for molecular modeling and visualization.

    PubMed

    Sun, Yanhua; Shen, Bin; Lu, Zhonghua; Jin, Zhong; Chi, Xuebin

    2008-02-01

    In this paper we present GridMol, an extensible tool for building a high performance computational chemistry platform in the grid environment. GridMol provides computational chemists one-stop service for molecular modeling, scientific computing and molecular information visualization. GridMol is not only a visualization and modeling tool but also simplifies control of remote Grid software that can access high performance computing resources. GridMol has been successfully integrated into China National Grid, the most powerful Chinese Grid Computing platform. In Section "Grid computing" of this paper, a computing example is given to show the availability and efficiency of GridMol. GridMol is coded using Java and Java3D for portability and cross-platform compatibility (Windows, Linux, MacOS X and UNIX). GridMol can run not only as a stand-alone application, but also as an applet through web browsers. In this paper, we will present the techniques for molecular visualization, molecular modeling and grid computing. GridMol is available free of charge under the GNU Public License (GPL) from our website: http://www.sccas.cn/~syh/GridMol/index.html. PMID:18231861

  3. User Manual for Beta Version of TURBO-GRD: A Software System for Interactive Two-Dimensional Boundary/ Field Grid Generation, Modification, and Refinement

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Slater, John W.; Henderson, Todd L.; Bidwell, Colin S.; Braun, Donald C.; Chung, Joongkee

    1998-01-01

    TURBO-GRD is a software system for interactive two-dimensional boundary/field grid generation, modification, and refinement. Its features allow users to explicitly control grid quality locally and globally. The grid control can be achieved interactively by using control points that the user picks and moves on the workstation monitor or by direct stretching and refining. The techniques used in the code are the control point form of algebraic grid generation, a damped cubic spline for edge meshing, and parametric mapping between physical and computational domains. It also performs elliptic grid smoothing and free-form boundary control for boundary geometry manipulation. Internal block boundaries are constructed and shaped by using Bezier curves. Because TURBO-GRD is a highly interactive code, users can read in an initial solution, display its solution contour in the background of the grid and control net, and exercise grid modification using the solution contour as a guide. This process can be called interactive solution-adaptive grid generation.

  4. Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics

    SciTech Connect

    Lomov, I; Pember, R; Greenough, J; Liu, B

    2005-10-18

    We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single grid algorithm uses a second-order Godunov scheme with an approximate single fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified because highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion will be solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength will be presented.
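
    The advance-and-synchronize cycle described above can be condensed into a short recursive sketch. The Patch class, the refinement ratio of 2, and the placeholder advance/sync methods are hypothetical stand-ins, not the Geodyn or Raptor implementation.

```python
class Patch:
    """Minimal stand-in for an AMR patch (illustrative only)."""
    def __init__(self, name):
        self.name, self.time = name, 0.0
    def advance(self, dt):
        self.time += dt                       # placeholder for the Godunov update
    def sync_with_coarse(self, coarse_patches):
        pass                                  # placeholder for refluxing/averaging

def advance_level(hierarchy, level, dt, ratio=2):
    """Recursively advance level 'level' and all finer levels by one step dt."""
    for patch in hierarchy[level]:
        patch.advance(dt)                     # coarse patches advance first
    if level + 1 < len(hierarchy):
        for _ in range(ratio):                # fine level takes 'ratio' substeps
            advance_level(hierarchy, level + 1, dt / ratio)
        for patch in hierarchy[level + 1]:    # synchronize to remove conservation errors
            patch.sync_with_coarse(hierarchy[level])

hierarchy = [[Patch("coarse")], [Patch("fine-a"), Patch("fine-b")]]
advance_level(hierarchy, level=0, dt=1.0)
print(hierarchy[1][0].time)                   # fine patches reach the coarse time 1.0
```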

  5. CDF software distribution on the grid using Parrot

    SciTech Connect

    Compostella, G.; Pagan Griso, S.; Lucchesi, D.; Sfiligoi, I.; Thain, D.; /Notre Dame U.

    2010-01-01

    Large international collaborations that use decentralized computing models are becoming the custom rather than the exception in High Energy Physics. A good computing model for such big collaborations has to deal with the distribution of the experiment-specific software around the world. When the CDF experiment developed its software infrastructure, most computing was done on dedicated clusters. As a result, libraries, configuration files and large executables were deployed over a shared file system. In order to adapt its computing model to the Grid, CDF decided to distribute its software to all European Grid sites using Parrot, a user-level application capable of attaching existing programs to remote I/O systems through the filesystem interface. This choice allows CDF to use just one centralized source of code and a scalable set of caches all around Europe to efficiently distribute its code, and requires almost no interaction with the existing Grid middleware or with local system administrators. This system has been in production at CDF in Europe for almost two years. Here we present the CDF implementation of Parrot and some comments on its performance.

  6. CDF software distribution on the Grid using Parrot

    NASA Astrophysics Data System (ADS)

    Compostella, G.; Pagan Griso, S.; Lucchesi, D.; Sfiligoi, I.; Thain, D.

    2010-04-01

    Large international collaborations that use decentralized computing models are becoming the custom rather than the exception in High Energy Physics. A good computing model for such big collaborations has to deal with the distribution of the experiment-specific software around the world. When the CDF experiment developed its software infrastructure, most computing was done on dedicated clusters. As a result, libraries, configuration files and large executables were deployed over a shared file system. In order to adapt its computing model to the Grid, CDF decided to distribute its software to all European Grid sites using Parrot, a user-level application capable of attaching existing programs to remote I/O systems through the filesystem interface. This choice allows CDF to use just one centralized source of code and a scalable set of caches all around Europe to efficiently distribute its code, and requires almost no interaction with the existing Grid middleware or with local system administrators. This system has been in production at CDF in Europe for almost two years. Here we present the CDF implementation of Parrot and some comments on its performance.

  7. Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    1997-01-01

    The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.

  8. Automatic Data Distribution for CFD Applications on Structured Grids

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1999-01-01

    Development of HPF versions of NPB and ARC3D showed that HPF has the potential to be a high-level language for the parallelization of CFD applications. The use of HPF requires an intimate knowledge of the applications and a detailed analysis of data affinity, data movement, and data granularity. Since HPF hides data movement from the user, even with this knowledge it is easy to overlook pieces of the code that cause low performance of the application. In order to simplify and accelerate the task of developing HPF versions of existing CFD applications we have designed and partially implemented ADAPT (Automatic Data Distribution and Placement Tool). ADAPT analyzes a CFD application working on a single structured grid and generates HPF TEMPLATE, (RE)DISTRIBUTION, ALIGNMENT and INDEPENDENT directives. The directives can be generated at the nest level, subroutine level, application level, or inter-application level. ADAPT is designed to annotate existing CFD FORTRAN applications performing computations on single or multiple grids. On each grid the application can be considered as a sequence of operators, each applied to a set of variables defined in a particular grid domain. The operators can be classified as implicit, having data dependences, or explicit, without data dependences. In order to parallelize an explicit operator it is sufficient to create a template for the domain of the operator, align the arrays used in the operator with the template, distribute the template, and declare the loops over the distributed dimensions as INDEPENDENT. In order to parallelize an implicit operator, the distribution of the operator's domain should be consistent with the operator's dependences. Any dependence between sections distributed on different processors would preclude parallelization if the compiler does not have the ability to pipeline computations. If a data distribution is "orthogonal" to the dependences of an implicit operator, then the loop which implements the operator can be declared as INDEPENDENT.
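
    The explicit/implicit classification above amounts to a simple rule: distribute only the dimensions that are orthogonal to an operator's data dependences. The sketch below applies that rule to a toy operator description; the dictionary format and operator names are assumptions for illustration, not ADAPT's actual interface or output.

```python
def plan_distribution(op_name, domain_dims, dependence_dims):
    """Decide which dimensions of an operator's domain can be distributed.

    An explicit operator (no dependences) can be distributed in every
    dimension and its loops marked INDEPENDENT; an implicit operator may
    only be distributed along dimensions orthogonal to its dependences.
    """
    distributable = [d for d in domain_dims if d not in dependence_dims]
    return {
        "operator": op_name,
        "distribute": distributable,            # candidates for a blocked distribution
        "independent_loops": distributable,     # loops safe to declare INDEPENDENT
        "sequential": list(dependence_dims),    # dimensions kept on one processor
    }

# Explicit residual evaluation: no dependences, all dimensions distributable.
print(plan_distribution("residual", ("i", "j", "k"), dependence_dims=()))
# Implicit tridiagonal sweep along i: distribute only j and k.
print(plan_distribution("sweep_i", ("i", "j", "k"), dependence_dims=("i",)))
```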

  9. Low-mass Galaxy Formation in Cosmological Adaptive Mesh Refinement Simulations: The Effects of Varying the Sub-grid Physics Parameters

    NASA Astrophysics Data System (ADS)

    Colín, Pedro; Avila-Reese, Vladimir; Vázquez-Semadeni, Enrique; Valenzuela, Octavio; Ceverino, Daniel

    2010-04-01

    We present numerical simulations aimed at exploring the effects of varying the sub-grid physics parameters on the evolution and the properties of the galaxy formed in a low-mass dark matter halo (~7 × 10^10 h^-1 M_sun at redshift z = 0). The simulations are run within a cosmological setting with a nominal resolution of 218 pc comoving and are stopped at z = 0.43. For simulations that cannot resolve individual molecular clouds, we propose the criterion that the threshold density for star formation, n_SF, should be chosen such that the column density of the star-forming cells equals the threshold value for molecule formation, N ~ 10^21 cm^-2, or ~8 M_sun pc^-2. In all of our simulations, an extended old/intermediate-age stellar halo and a more compact younger stellar disk are formed, and in most cases, the halo's specific angular momentum is slightly larger than that of the galaxy, and sensitive to the SF/feedback parameters. We found that a non-negligible fraction of the halo stars are formed in situ in a spheroidal distribution. Changes in the sub-grid physics parameters affect significantly and in a complex way the evolution and properties of the galaxy: (1) lower threshold densities n_SF produce larger stellar effective radii R_e, less peaked circular velocity curves V_c(R), and greater amounts of low-density and hot gas in the disk mid-plane; (2) when stellar feedback is modeled by temporarily switching off radiative cooling in the star-forming regions, R_e increases (by a factor of ~2 in our particular model), the circular velocity curve becomes flatter, and a complex multi-phase gaseous disk structure develops; (3) a more efficient local conversion of gas mass to stars, measured by a stellar particle mass distribution biased toward larger values, increases the strength of the feedback energy injection—driving outflows and inducing burstier SF histories; (4) if feedback is too strong, gas loss by galactic outflows—which are easier to produce in low
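
    The proposed criterion couples the star-formation threshold density to the cell size through the molecule-formation column density, n_SF × (cell size) ≈ N. The arithmetic below checks the order of magnitude implied for the quoted 218 pc resolution; treating the comoving cell size as the relevant length and the rounded constants used here are simplifying assumptions.

```python
# Star-formation threshold from the column-density criterion n_SF * L = N_mol,
# using N_mol ~ 1e21 cm^-2 and the nominal 218 pc cell size quoted above.
# (Proper vs. comoving cell size and exact constants are simplifications.)
PC_IN_CM = 3.086e18          # parsec in centimetres

N_mol = 1.0e21               # cm^-2, column density threshold for molecule formation
cell_size = 218 * PC_IN_CM   # cm, nominal resolution element

n_SF = N_mol / cell_size     # cm^-3, implied number-density threshold
print(f"n_SF ~ {n_SF:.1f} cm^-3")   # of order one particle per cm^3
```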

  10. Curvilinear grids for sinuous river channels

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; Waldrop, W. R.; Smith, S. R.

    1980-01-01

    In order to effectively analyze the flow in sinuous river channels, a curvilinear grid system was developed for use in the appropriate hydrodynamic code. The CENTERLINE program was designed to generate a two dimensional grid for this purpose. The Cartesian coordinates of a series of points along the boundaries of the sinuous channel represent the primary input to CENTERLINE. The program calculates the location of the river centerline, the distance downstream along the centerline, and both radius of curvature and channel width as a function of such distance downstream. These parameters form the basis for the generation of the curvilinear grid. Based on input values for longitudinal and lateral grid spacing, the corresponding grid system is generated and a file is created containing the appropriate parameters for use in the associated explicit finite difference hydrodynamic programs. Because of the option for a nonuniform grid, grid spacing can be concentrated in areas containing the largest flow gradients.
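
    A minimal sketch of the kind of geometric preprocessing CENTERLINE performs: given ordered bank points, estimate the centerline, the downstream arc length, the local channel width, and the radius of curvature. Pairing the banks index-by-index and the finite-difference curvature formula are simplifications assumed here, not the program's actual algorithm.

```python
import numpy as np

def channel_metrics(left_bank, right_bank):
    """Centerline, arc length, width, and radius of curvature of a channel.

    left_bank, right_bank: (N, 2) arrays of Cartesian bank points, assumed to
    be ordered downstream and paired index-by-index (a simplification).
    """
    left, right = np.asarray(left_bank, float), np.asarray(right_bank, float)
    center = 0.5 * (left + right)                          # midpoint centerline
    width = np.linalg.norm(left - right, axis=1)           # local channel width
    seg = np.diff(center, axis=0)
    s = np.concatenate([[0.0], np.cumsum(np.linalg.norm(seg, axis=1))])  # downstream distance
    # Curvature from finite differences of the parametric curve x(s), y(s)
    dx, dy = np.gradient(center[:, 0], s), np.gradient(center[:, 1], s)
    ddx, ddy = np.gradient(dx, s), np.gradient(dy, s)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    with np.errstate(divide="ignore"):
        radius = np.where(curvature > 0, 1.0 / curvature, np.inf)
    return center, s, width, radius

# Quarter-circle test channel of unit width: radius of curvature should be ~2.
theta = np.linspace(0, np.pi / 2, 50)
left = np.c_[1.5 * np.cos(theta), 1.5 * np.sin(theta)]
right = np.c_[2.5 * np.cos(theta), 2.5 * np.sin(theta)]
_, s, width, radius = channel_metrics(left, right)
print(width[0], radius[len(radius) // 2])                  # approximately 1.0 and 2.0
```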

  11. GRChombo: Numerical relativity with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran

    2015-12-01

    In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block-structured Berger-Rigoutsos grid generation. The code supports non-trivial 'many-boxes-in-many-boxes' mesh hierarchies and massive parallelism through the message passing interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.

  12. Solving Partial Differential Equations on Overlapping Grids

    SciTech Connect

    Henshaw, W D

    2008-09-22

    We discuss the solution of partial differential equations (PDEs) on overlapping grids. This is a powerful technique for efficiently solving problems in complex, possibly moving, geometry. An overlapping grid consists of a set of structured grids that overlap and cover the computational domain. By allowing the grids to overlap, grids for complex geometries can be more easily constructed. The overlapping grid approach can also be used to remove coordinate singularities by, for example, covering a sphere with two or more patches. We describe the application of the overlapping grid approach to a variety of different problems. These include the solution of incompressible fluid flows with moving and deforming geometry, the solution of high-speed compressible reactive flow with rigid bodies using adaptive mesh refinement (AMR), and the solution of the time-domain Maxwell's equations of electromagnetism.

  13. Generating Composite Overlapping Grids on CAD Geometries

    SciTech Connect

    Henshaw, W.D.

    2002-02-07

    We describe some algorithms and tools that have been developed to generate composite overlapping grids on geometries that have been defined with computer aided design (CAD) programs. This process consists of five main steps. Starting from a description of the surfaces defining the computational domain we (1) correct errors in the CAD representation, (2) determine topology of the patched-surface, (3) build a global triangulation of the surface, (4) construct structured surface and volume grids using hyperbolic grid generation, and (5) generate the overlapping grid by determining the holes and the interpolation points. The overlapping grid generator which is used for the final step also supports the rapid generation of grids for block-structured adaptive mesh refinement and for moving grids. These algorithms have been implemented as part of the Overture object-oriented framework.

  14. Mass conservation of the unified continuous and discontinuous element-based Galerkin methods on dynamically adaptive grids with application to atmospheric simulations

    NASA Astrophysics Data System (ADS)

    Kopera, Michal A.; Giraldo, Francis X.

    2015-09-01

    We perform a comparison of mass conservation properties of the continuous (CG) and discontinuous (DG) Galerkin methods on non-conforming, dynamically adaptive meshes for two atmospheric test cases. The two methods are implemented in a unified way which allows for a direct comparison of the non-conforming edge treatment. We outline the implementation details of the non-conforming direct stiffness summation algorithm for the CG method and show that its mass conservation error is similar to that of the DG method. Both methods conserve mass to machine precision, regardless of the presence of the non-conforming edges. For lower order polynomials the CG method requires additional stabilization to run for very long simulation times. We address this issue by using filters and/or additional artificial viscosity. The mathematical proof of mass conservation for CG with non-conforming meshes is presented in Appendix B.
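
    The diagnostic being compared above is simply the drift of the total mass integral over time. As a much simpler stand-in, the sketch below measures that drift for a conservative first-order finite-volume update on a non-uniform periodic 1-D grid; the grid, scheme, and step count are illustrative and unrelated to the CG/DG implementations in the paper.

```python
import numpy as np

# Relative mass-conservation error of a conservative update on a non-uniform
# periodic 1-D grid (first-order upwind advection as a simple stand-in).
rng = np.random.default_rng(0)
dx = rng.uniform(0.5, 1.5, 200)
dx *= 2 * np.pi / dx.sum()                      # non-uniform cells covering [0, 2*pi)
rho = 1.0 + 0.5 * np.sin(np.cumsum(dx))         # cell-averaged density
mass0 = float(np.sum(rho * dx))

a, dt = 1.0, 0.2 * dx.min()                     # advection speed, CFL-safe time step
for _ in range(500):
    flux = a * np.roll(rho, 1)                  # upwind flux at each left face (periodic)
    # Each face flux enters one cell with + and its neighbour with -, so the
    # total mass changes only by round-off.
    rho += dt / dx * (flux - np.roll(flux, -1))

err = abs(np.sum(rho * dx) - mass0) / mass0
print(f"relative mass error after 500 steps: {err:.2e}")   # ~1e-16, machine precision
```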

  15. Multiblock grid generation with automatic zoning

    NASA Technical Reports Server (NTRS)

    Eiseman, Peter R.

    1995-01-01

    An overview will be given for multiblock grid generation with automatic zoning. We shall explore the many advantages and benefits of this exciting technology and will also see how to apply it to a number of interesting cases. The technology is available in the form of a commercial code, GridPro(registered trademark)/az3000. This code takes surface geometry definitions and patterns of points as its primary input and produces high quality grids as its output. Before we embark upon our exploration, we shall first give a brief background of the environment in which this technology fits.

  16. Introduction to grid generation systems in turbomachinery

    NASA Astrophysics Data System (ADS)

    Camarero, R.; Ozell, B.; Reggio, M.; Caron, A.

    Body-fitted curvilinear grid generation for the numerical simulation of three dimensional flow in turbomachines is introduced. The grids yield coordinate curves aligned with the domain boundaries. The numerical scheme for the governing equations is carried out on a rectangular mesh, giving a simpler and more accurate algorithm since boundaries coincide with coordinate grids, and no interpolation is required. The geometric complexity, through the transformation, is imbedded into the coefficients of the governing equations, affording the possibility of writing generalized codes applicable to a variety of geometries. This results in a great saving in the code development effort.

  17. Regional vertical total electron content (VTEC) modeling together with satellite and receiver differential code biases (DCBs) using semi-parametric multivariate adaptive regression B-splines (SP-BMARS)

    NASA Astrophysics Data System (ADS)

    Durmaz, Murat; Karslioglu, Mahmut Onur

    2015-04-01

    There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines which is a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
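
    The semi-parametric structure described above can be illustrated with a toy 1-D version: a spline basis models the smooth VTEC signal while one fixed column per receiver absorbs its DCB, and both sets of coefficients are estimated in a single least-squares fit. The hat-function basis, the simulated geometry, and the made-up DCB values are all illustrative; the common-datum ambiguity between the spline part and the biases (handled here by a zero-mean comparison) is the one feature carried over from the real estimation problem.

```python
import numpy as np

def hat_basis(x, knots):
    """Degree-1 B-spline (hat function) basis evaluated at the points x."""
    B = np.zeros((len(x), len(knots)))
    for j, k in enumerate(knots):
        lo = knots[j - 1] if j > 0 else k
        hi = knots[j + 1] if j < len(knots) - 1 else k
        B[:, j] = np.clip(np.minimum((x - lo) / max(k - lo, 1e-12),
                                     (hi - x) / max(hi - k, 1e-12)), 0.0, 1.0)
    return B

rng = np.random.default_rng(1)
lat = rng.uniform(30.0, 45.0, 300)                  # observation latitudes (toy geometry)
receiver = rng.integers(0, 3, 300)                  # which of 3 receivers observed
true_vtec = 20.0 + 5.0 * np.sin(np.radians(4.0 * lat))
true_dcb = np.array([1.5, -0.7, 0.3])               # receiver biases in TEC units (made up)
obs = true_vtec + true_dcb[receiver] + rng.normal(0.0, 0.3, 300)

B = hat_basis(lat, np.linspace(30.0, 45.0, 12))     # non-parametric part: VTEC vs latitude
D = np.eye(3)[receiver]                             # parametric part: one column per DCB
coef, *_ = np.linalg.lstsq(np.hstack([B, D]), obs, rcond=None)
dcb_hat = coef[-3:]
# Biases are only determined up to a common datum; compare after removing the mean.
print(np.round(dcb_hat - dcb_hat.mean(), 2), np.round(true_dcb - true_dcb.mean(), 2))
```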

  18. Interactive grid generation program for CAP-TSD

    NASA Technical Reports Server (NTRS)

    Bland, Samuel R.

    1990-01-01

    A grid generation program for use with the CAP-TSD transonic small disturbance code is described. The program runs interactively in FORTRAN on the Sun Workstation. A fifth-degree polynomial is used to map the grid index onto the computational coordinate. The grid is plotted to aid in the assessment of its quality and may be saved on file in NAMELIST format.
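
    The mapping idea, a fifth-degree polynomial from normalized grid index to the computational coordinate, can be sketched as follows. The particular end conditions chosen here (prescribed end spacings and zero curvature at both ends) are an assumption for illustration; the record does not state which six conditions the CAP-TSD program actually imposes.

```python
import numpy as np

def quintic_stretch(n_points, x0, x1, end_slope0, end_slope1):
    """Map a normalized grid index u in [0, 1] to x in [x0, x1] with a quintic.

    The six coefficients are fixed by the end values, the end slopes (spacing
    relative to a uniform grid), and zero curvature at both ends -- an assumed
    set of conditions chosen only for illustration.
    """
    u = np.linspace(0.0, 1.0, n_points)
    A = np.array([[uu**k for k in range(6)] for uu in (0.0, 1.0)])                    # values
    dA = np.array([[k * uu**max(k - 1, 0) for k in range(6)] for uu in (0.0, 1.0)])   # slopes
    ddA = np.array([[k * (k - 1) * uu**max(k - 2, 0) for k in range(6)] for uu in (0.0, 1.0)])
    M = np.vstack([A, dA, ddA])
    rhs = np.array([x0, x1, end_slope0 * (x1 - x0), end_slope1 * (x1 - x0), 0.0, 0.0])
    c = np.linalg.solve(M, rhs)                  # coefficients c0..c5 of x(u)
    return np.polyval(c[::-1], u)

# Cluster points near the left boundary (small end slope -> small spacing there).
x = quintic_stretch(11, x0=0.0, x1=1.0, end_slope0=0.2, end_slope1=2.0)
print(np.round(np.diff(x), 3))   # spacing grows from about 0.02 on the left to about 0.2
```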

  19. Criteria for evaluation of grid generation systems

    NASA Technical Reports Server (NTRS)

    Ascoli, Edward P.; Barson, Steven L.; Decroix, Michele E.; Hsu, Wayne W.

    1993-01-01

    Many CFD grid generation systems are in use nationally, but few comparative studies have been performed to quantify their relative merits. A study was undertaken to systematically evaluate and select the best CFD grid generation codes available. Detailed evaluation criteria were established as the basis for the evaluation conducted. Descriptions of thirty-four separate criteria, grouped into eight general categories, are provided. Benchmark test cases, developed to test basic features of selected codes, are described in detail. Scoring guidelines were generated to establish standards for measuring code capabilities, ensuring uniformity of ratings, and minimizing personal bias among the three code evaluators. Ten candidate codes were identified from government, industry, universities, and commercial software companies. A three-phase evaluation was conducted. In Phase 1, the ten codes identified were screened through conversations with code authors and other industry experts. Seven codes were carried forward into a Phase 2 evaluation in which all codes were scored according to the predefined criteria. Two codes emerged as being significantly better than the others: RAGGS and GRIDGEN. Finally, these two codes were carried forward into a Phase 3 evaluation in which complex 3-D multizone grids were generated to verify capability.

  20. A point implicit unstructured grid solver for the Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Thareja, Rajiv R.; Stewart, James R.; Hassan, Obey; Morgan, Ken; Peraire, Jaime

    1988-01-01

    An upwind finite element technique that uses cell centered quantities and implicit and/or explicit time marching has been developed for computing hypersonic laminar viscous flows using adaptive unstructured triangular grids. A structured grid of quadrilaterals is laid out near the body surface. For inviscid flows the method is stable at Courant numbers of over 100,000. A first order basic scheme and a higher order flux corrected transport (FCT) scheme have been implemented. This technique has been applied to the problem of predicting type III and IV shock wave interactions on a cylinder, with a view of simulating the pressure and heating rate augmentation caused by an impinging shock on the leading edge of a cowl lip of an engine inlet. The predictions of wall pressure and heating rates compare very well with experimental data. The flow features are very distinctly captured with a sequence of adaptively generated grids. The adaptive mesh generator and the upwind Navier-Stokes solver are combined in a set of programs called LARCNESS, an acronym for Langley Adaptive Remeshing Code and Navier-Stokes Solver.

  1. Parallel grid library for rapid and flexible simulation development

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2013-04-01

    We present an easy to use and flexible grid library for developing highly scalable parallel simulations. The distributed cartesian cell-refinable grid (dccrg) supports adaptive mesh refinement and allows an arbitrary C++ class to be used as cell data. The amount of data in grid cells can vary both in space and time allowing dccrg to be used in very different types of simulations, for example in fluid and particle codes. Dccrg transfers the data between neighboring cells on different processes transparently and asynchronously allowing one to overlap computation and communication. This enables excellent scalability at least up to 32 k cores in magnetohydrodynamic tests depending on the problem and hardware. In the version of dccrg presented here part of the mesh metadata is replicated between MPI processes reducing the scalability of adaptive mesh refinement (AMR) to between 200 and 600 processes. Dccrg is free software that anyone can use, study and modify and is available at https://gitorious.org/dccrg. Users are also kindly requested to cite this work when publishing results obtained with dccrg. Catalogue identifier: AEOM_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEOM_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public License version 3 No. of lines in distributed program, including test data, etc.: 54975 No. of bytes in distributed program, including test data, etc.: 974015 Distribution format: tar.gz Programming language: C++. Computer: PC, cluster, supercomputer. Operating system: POSIX. The code has been parallelized using MPI and tested with 1-32768 processes RAM: 10 MB-10 GB per process Classification: 4.12, 4.14, 6.5, 19.3, 19.10, 20. External routines: MPI-2 [1], boost [2], Zoltan [3], sfc++ [4] Nature of problem: Grid library supporting arbitrary data in grid cells, parallel adaptive mesh refinement, transparent remote neighbor data updates and

  2. Evaluation of total effective dose due to certain environmentally placed naturally occurring radioactive materials using a procedural adaptation of RESRAD code.

    PubMed

    Beauvais, Z S; Thompson, K H; Kearfott, K J

    2009-07-01

    Due to a recent upward trend in the price of uranium and subsequent increased interest in uranium mining, accurate modeling of baseline dose from environmental sources of radioactivity is of increasing interest. Residual radioactivity model and code (RESRAD) is a program used to model environmental movement and calculate the dose due to the inhalation, ingestion, and exposure to radioactive materials following a placement. This paper presents a novel use of RESRAD for the calculation of dose from non-enhanced, or ancient, naturally occurring radioactive material (NORM). In order to use RESRAD to calculate the total effective dose (TED) due to ancient NORM, a procedural adaptation was developed to negate the effects of time progressive distribution of radioactive materials. A dose due to United States' average concentrations of uranium, actinium, and thorium series radionuclides was then calculated. For adults exposed in a residential setting and assumed to eat significant amounts of food grown in NORM concentrated areas, the annual dose due to national average NORM concentrations was 0.935 mSv y^-1. A set of environmental dose factors were calculated for simple estimation of dose from uranium, thorium, and actinium series radionuclides for various age groups and exposure scenarios as a function of elemental uranium and thorium activity concentrations in groundwater and soil. The values of these factors for uranium were lowest for an adult exposed in an industrial setting: 0.00476 microSv kg Bq^-1 y^-1 for soil and 0.00596 microSv m^3 Bq^-1 y^-1 for water (assuming a 1:1 ^234U:^238U activity ratio in water). The uranium factors were highest for infants exposed in a residential setting and assumed to ingest food grown onsite: 34.8 microSv kg Bq^-1 y^-1 in soil and 13.0 microSv m^3 Bq^-1 y^-1 in water. PMID:19509509
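
    The factors quoted above are applied as dose = factor × activity concentration (soil in Bq/kg, groundwater in Bq/m^3), summed over pathways. The sketch below does that multiplication for two of the published uranium factors; the soil and water activity concentrations are made-up inputs chosen only to show the mechanics.

```python
# Annual uranium-series dose from the factors quoted in the abstract:
# dose = factor * activity concentration (soil in Bq/kg, water in Bq/m^3).
factors = {                                   # microSv per (Bq/kg or Bq/m^3) per year
    "adult_industrial": {"soil": 0.00476, "water": 0.00596},
    "infant_residential_onsite_food": {"soil": 34.8, "water": 13.0},
}
soil_activity = 35.0      # Bq/kg of elemental uranium in soil (illustrative value)
water_activity = 5.0      # Bq/m^3 in groundwater (illustrative value)

for scenario, f in factors.items():
    dose = f["soil"] * soil_activity + f["water"] * water_activity
    print(f"{scenario}: {dose:.3g} microSv/y")
```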

  3. Scientific Computing on the Grid

    SciTech Connect

    Allen, Gabrielle; Seidel, Edward; Shalf, John

    2001-12-12

    Computer simulations are becoming increasingly important as the only means for studying and interpreting the complex processes of nature. Yet the scope and accuracy of these simulations are severely limited by available computational power, even using today's most powerful supercomputers. As we endeavor to simulate the true complexity of nature, we will require much larger scale calculations than are possible at present. Such dynamic and large scale applications will require computational grids and grids require development of new latency tolerant algorithms, and sophisticated code frameworks like Cactus to carry out more complex and high fidelity simulations with a massive degree of parallelism.

  4. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that allows the computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.

  5. Congruent gridding for developable geometries using NURBS

    SciTech Connect

    Fritts, M.; Weems, K.

    1996-12-31

    This paper discusses recent progress in developing an interactive system built upon NURBS geometry modeling to ensure congruence of surface grids and surface geometries for structured and unstructured gridders. The code system is being developed as part of a collaborative effort among NAVSEA/Carderock Division, NASA/Lewis, Boeing Computer Services, and SAIC/Ship Technology Division, and uses the Navy library of NURBS FORTRAN subroutines, DT-NURBS, to allow incorporation into a wide variety of gridding codes and flow solvers. Although this paper will present examples relevant to the design of ship hulls only, the code system is being developed to support the design and manufacture of complex mechanical systems.

  6. The Construction of an Ontology-Based Ubiquitous Learning Grid

    ERIC Educational Resources Information Center

    Liao, Ching-Jung; Chou, Chien-Chih; Yang, Jin-Tan David

    2009-01-01

    The purpose of this study is to incorporate adaptive ontology into ubiquitous learning grid to achieve seamless learning environment. Ubiquitous learning grid uses ubiquitous computing environment to infer and determine the most adaptive learning contents and procedures in anytime, any place and with any device. To achieve the goal, an…

  7. Compressible Astrophysics Simulation Code

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  8. ARPA-E: Advancing the Electric Grid

    ScienceCinema

    Lemmon, John; Ruiz, Pablo; Sommerer, Tim; Aziz, Michael

    2014-03-13

    The electric grid was designed with the assumption that all energy generation sources would be relatively controllable, and grid operators would always be able to predict when and where those sources would be located. With the addition of renewable energy sources like wind and solar, which can be installed faster than traditional generation technologies, this is no longer the case. Furthermore, the fact that renewable energy sources are imperfectly predictable means that the grid has to adapt in real-time to changing patterns of power flow. We need a dynamic grid that is far more flexible. This video highlights three ARPA-E-funded approaches to improving the grid's flexibility: topology control software from Boston University that optimizes power flow, gas tube switches from General Electric that provide efficient power conversion, and flow batteries from Harvard University that offer grid-scale energy storage.

  9. An interactive grid generation technique for turbomachinery

    NASA Technical Reports Server (NTRS)

    Beach, Tim

    1992-01-01

    A combination algebraic/elliptic technique is presented for the generation of 3-D grids about turbomachinery blade rows for both axial and radial flow machinery. The technique is built around the use of an advanced engineering workstation to construct several 2-D grids interactively on predetermined blade-to-blade surfaces. A 3-D grid is generated by interpolating these surface grids onto an axisymmetric grid. On each blade-to-blade surface, a grid is created using algebraic techniques near the blade, to control orthogonality within the boundary layer region, and elliptic techniques in the mid-passage, to achieve smoothness. The interactive definition of Bezier curves as internal boundaries is the key to simple construction. The approach is adapted for use with the average passage solution technique, although this is not a limitation for most other uses. A variety of examples are presented.
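
    A minimal sketch of the elliptic smoothing step used away from the blade: interior nodes of a structured surface grid are relaxed toward the average of their four neighbours (a Laplace iteration) while a band of algebraically generated near-blade lines is held fixed. The frozen-band width, the iteration count, and the Jacobi-style update are arbitrary illustrative choices, not the paper's algorithm.

```python
import numpy as np

def laplace_smooth(x, y, frozen_rows=2, iterations=200):
    """Elliptic smoothing of a structured 2-D grid (x[i, j], y[i, j]).

    The first 'frozen_rows' j-lines next to the blade (j = 0 side) stay fixed,
    mimicking the algebraic near-wall grid; other interior nodes relax toward
    the average of their four neighbours.
    """
    x, y = x.copy(), y.copy()
    j0 = max(frozen_rows, 1)
    for _ in range(iterations):
        for a in (x, y):
            interior = 0.25 * (a[2:, 1:-1] + a[:-2, 1:-1] + a[1:-1, 2:] + a[1:-1, :-2])
            a[1:-1, j0:-1] = interior[:, j0 - 1:]
    return x, y

# Build a deliberately skewed algebraic grid and smooth the mid-passage part.
i, j = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 15), indexing="ij")
x0 = i + 0.15 * np.sin(2 * np.pi * j) * i      # skewed in the streamwise direction
y0 = j
xs, ys = laplace_smooth(x0, y0)
print(float(np.abs(xs - x0).max()))             # how far the smoother moved the nodes
```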

  10. ARPA-E: Advancing the Electric Grid

    SciTech Connect

    Lemmon, John; Ruiz, Pablo; Sommerer, Tim; Aziz, Michael

    2014-02-24

    The electric grid was designed with the assumption that all energy generation sources would be relatively controllable, and grid operators would always be able to predict when and where those sources would be located. With the addition of renewable energy sources like wind and solar, which can be installed faster than traditional generation technologies, this is no longer the case. Furthermore, the fact that renewable energy sources are imperfectly predictable means that the grid has to adapt in real-time to changing patterns of power flow. We need a dynamic grid that is far more flexible. This video highlights three ARPA-E-funded approaches to improving the grid's flexibility: topology control software from Boston University that optimizes power flow, gas tube switches from General Electric that provide efficient power conversion, and flow batteries from Harvard University that offer grid-scale energy storage.

  11. TCGRID: A three dimensional C-grid generator for turbomachinery

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.

    1992-01-01

    A fast 3-D grid code for turbomachinery was developed. The code, TCGRID (Turbomachinery C-GRID), can generate either C or H type grids for fairly arbitrary axial or radial turbomachinery geometries. The code also has limited blocked grid capability and can generate an axisymmetric H type grid upstream of the blade row or an O type grid within the tip clearance region. Hub and tip geometries are input as a simple list of pairs. All geometric data is handled using parametric splines so that geometries that turn 90 degrees can be handled without difficulty. Blade input is in standard MERIDL or Lewis compressor design code format. TCGRID adds leading and trailing edge circles to MERIDL geometries and intersects the blade with the hub and tip if necessary using a novel intersection algorithm. The procedure used to generate the grid is given. Output is in PLOT3D format, which can also be read by the RVC3D (Rotor Viscous Code 3-D) Navier-Stokes code for turbomachinery. Intermediate 2-D or 3-D grids useful for debugging and other purposes can also be output using a convenient output flag. A figure of a generated grid is given.

  12. Development of Three-Dimensional DRAGON Grid Technology

    NASA Technical Reports Server (NTRS)

    Zheng, Yao; Liou, Meng-Sing; Civinskas, Kestutis C.

    1999-01-01

    For a typical three dimensional flow in a practical engineering device, the time spent in grid generation can take 70 percent of the total analysis effort, resulting in a serious bottleneck in the design/analysis cycle. The present research attempts to develop a procedure that can considerably reduce the grid generation effort. The DRAGON grid, as a hybrid grid, is created by means of a Direct Replacement of Arbitrary Grid Overlapping by Nonstructured grid. The DRAGON grid scheme is an adaptation of the Chimera thinking. The Chimera grid is a composite structured grid, composed of a set of overlapping structured grids which are independently generated and body-fitted. The grid is of high quality and amenable to efficient solution schemes. However, the interpolation used in the overlapped region between grids introduces error, especially when a sharp-gradient region is encountered. The DRAGON grid scheme is capable of completely eliminating the interpolation and preserving the conservation property. It maximizes the advantages of the Chimera scheme and adapts the strengths of the unstructured grid while at the same time keeping its weaknesses minimal. In the present paper, we describe the progress towards extending the DRAGON grid technology into three dimensions. Essential programming aspects of the extension and new challenges for the three-dimensional cases are addressed.

  13. DEMOCRITUS code: A kinetic approach to the simulation of complex plasmas

    NASA Astrophysics Data System (ADS)

    Arinaminpat, Nimlan; Fichtl, Chris; Patacchini, Leonardo; Lapenta, Giovanni; Delzanno, Gian Luca

    2006-10-01

    The DEMOCRITUS code is a particle-based code for plasma-material interaction simulation. The code makes use of particle in cell (PIC) methods to simulate each plasma species, the material, and their interaction. In this study, we concentrate on a dust particle immersed in a plasma. We start with the simplest case, in which the dust particle is not allowed to emit. From here, we expand the DEMOCRITUS code to include thermionic and photo emission algorithms and obtain our data. Next we expand the physics processes present to include the presence of magnetic fields and collisional processes with a neutral gas. Finally we describe new improvements of the code including a new mover that allows for particle subcycling and a new grid adaptation approach.

  14. Wavelet-Based Grid Generation

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high pass filter in the frequency domain. The process of dilating and expanding this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.
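
    As a rough sketch of the idea (not the paper's source code), the example below uses the magnitude of one level of Haar wavelet detail coefficients as a local refinement indicator and equidistributes grid points in the resulting weighted measure, so that points cluster where small-scale content is detected. The Haar filter, the density floor, and the equidistribution rule are illustrative assumptions.

```python
import numpy as np

def haar_detail(f):
    """One-level Haar wavelet detail coefficients of a 1-D signal (even length)."""
    return (f[0::2] - f[1::2]) / np.sqrt(2.0)

def wavelet_adapted_grid(x, f, n_new, floor=0.05):
    """Redistribute n_new points on [x[0], x[-1]] so that the local spacing
    shrinks where the Haar detail (high-frequency content) of f is large."""
    detail = np.abs(haar_detail(f))
    xc = 0.5 * (x[0::2] + x[1::2])                    # centers of the coefficient cells
    # Point-density weight: detail magnitude plus a floor so smooth regions keep points.
    w = detail / detail.max() + floor
    cdf = np.concatenate([[0.0], np.cumsum(w)])
    cdf /= cdf[-1]
    edges = np.concatenate([[x[0]], 0.5 * (xc[1:] + xc[:-1]), [x[-1]]])
    # Invert the cumulative weight to equidistribute points in the weighted measure.
    return np.interp(np.linspace(0.0, 1.0, n_new), cdf, edges)

x = np.linspace(-1, 1, 256)
f = np.tanh(40 * x)                                   # sharp front at x = 0
xg = wavelet_adapted_grid(x, f, n_new=41)
print(np.round(np.diff(xg)[:3], 3), np.round(np.diff(xg).min(), 4))  # coarse edges, fine near 0
```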

  15. Using Grid Cells for Navigation

    PubMed Central

    Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil

    2015-01-01

    Summary Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this “vector navigation” relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. PMID:26247860
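
    A toy 1-D version of the readout problem: the displacement between two positions is recovered from the phase differences of several periodic codes whose periods are all much shorter than the displacement. The brute-force search below stands in for the Fourier-shift and network solutions discussed in the paper; the module periods, the noise-free phases, and the search range are illustrative assumptions.

```python
import numpy as np

scales = np.array([0.3, 0.42, 0.59])            # grid module periods in metres (made up)

def phases(pos):
    """Grid-cell-like code: the phase of 'pos' within each module's period."""
    return (pos % scales) / scales * 2 * np.pi

def decode_displacement(phi_start, phi_goal, max_range=5.0, step=0.001):
    """Brute-force readout of the start-to-goal vector from phase differences.

    Tries candidate displacements and keeps the one whose predicted phase
    shifts best match the observed ones.
    """
    candidates = np.arange(-max_range, max_range, step)
    pred = (candidates[:, None] / scales) * 2 * np.pi
    err = np.angle(np.exp(1j * (pred - (phi_goal - phi_start))))   # wrapped phase error
    return candidates[np.argmin((err ** 2).sum(axis=1))]

start, goal = 0.7, 3.95                          # both far beyond the largest period
d = decode_displacement(phases(start), phases(goal))
print(round(float(d), 3))                        # approximately 3.25, the true vector
```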

  16. Grid of Supergiant B[e] Models from HDUST Radiative Transfer

    NASA Astrophysics Data System (ADS)

    Domiciano de Souza, A.; Carciofi, A. C.

    2012-12-01

    By using the Monte Carlo radiative transfer code HDUST (developed by A. C. Carciofi and J. E. Bjorkman) we have built a grid of models for stars presenting the B[e] phenomenon and a bimodal outflowing envelope. The models are particularly adapted to the study of B[e] supergiants and FS CMa type stars. The adopted physical parameters of the calculated models make the grid well suited to interpreting high angular resolution and high spectral resolution observations, in particular spectro-interferometric data from the ESO-VLTI instruments AMBER (near-IR at low and medium spectral resolution) and MIDI (mid-IR at low spectral resolution). The grid models include, for example, a central B star with different effective temperatures, and a gas (hydrogen) and silicate dust circumstellar envelope with a bimodal mass loss presenting dust in the denser equatorial regions. The HDUST grid models were pre-calculated using the high performance parallel computing facility Mésocentre SIGAMM, located at OCA, France.

  17. A fast tree-based method for estimating column densities in adaptive mesh refinement codes. Influence of UV radiation field on the structure of molecular clouds

    NASA Astrophysics Data System (ADS)

    Valdivia, Valeska; Hennebelle, Patrick

    2014-11-01

    Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled, and their interplay influences the physical and chemical properties of the gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on clump formation. Results: We find that the accuracy of the tree-based method for the extinction is better than 10%, while the relative error for the column density can be considerably larger. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds we present the probability distribution function of the gas and the associated temperature per density bin, as well as the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations, since no communication is needed between CPUs when using a fully threaded tree; it is therefore well suited to parallel computing. We show that the screening of far-UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect.
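
    For orientation only, the quantity being approximated is the line-of-sight integral of density. The sketch below computes it by brute-force ray marching through a small periodic uniform cube, which is exactly the per-cell cost a tree-based estimate is designed to avoid; the grid, density field, and step size are made up:

      import numpy as np

      def column_density(rho, origin, direction, ds=0.01, s_max=1.0):
          """Brute-force column density: march along a ray through a periodic,
          uniform unit cube of cells and sum density * path length."""
          n = rho.shape[0]
          direction = np.asarray(direction, float)
          direction = direction / np.linalg.norm(direction)
          s = np.arange(0.0, s_max, ds)
          pts = np.asarray(origin, float) + s[:, None] * direction
          idx = np.floor(np.mod(pts, 1.0) * n).astype(int)
          return rho[idx[:, 0], idx[:, 1], idx[:, 2]].sum() * ds

      rho = 1.0 + np.random.rand(32, 32, 32)         # toy density cube on [0, 1)^3
      N_col = column_density(rho, origin=[0.5, 0.5, 0.5], direction=[1.0, 0.3, 0.1])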

  18. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high-aspect-ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
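
    As background on metric-based adaptation (this is not the authors' code), the length of an edge e in a symmetric positive-definite metric M is sqrt(e^T M e), and a common rule of thumb splits edges that measure too long and collapses those that measure too short; a minimal sketch with invented tolerances:

      import numpy as np

      def metric_edge_length(p0, p1, M):
          """Length of edge p0->p1 measured in the anisotropic metric M (3x3 SPD)."""
          e = np.asarray(p1, float) - np.asarray(p0, float)
          return float(np.sqrt(e @ M @ e))

      def classify_edge(p0, p1, M, l_min=1.0 / np.sqrt(2.0), l_max=np.sqrt(2.0)):
          """Split edges that are too long in the metric, collapse those that are
          too short, otherwise keep (standard adaptation heuristic)."""
          L = metric_edge_length(p0, p1, M)
          return "split" if L > l_max else "collapse" if L < l_min else "keep"

      # Metric requesting spacing 0.1 in x, y and 0.001 in z (1000:1 anisotropy).
      M = np.diag([1.0 / 0.1**2, 1.0 / 0.1**2, 1.0 / 0.001**2])
      print(classify_edge([0, 0, 0], [0, 0, 0.01], M))   # -> "split"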

  19. Adaptive mesh refinement in titanium

    SciTech Connect

    Colella, Phillip; Wen, Tong

    2005-01-21

    In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a production-level software package applying the Adaptive Mesh Refinement methodology to numerical partial differential equations. Chombo takes a library approach to parallel programming (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. Counts of lines of code from both implementations are also provided.

  20. Grid Convergence for Turbulent Flows (Invited)

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Rumsey, Christopher L.; Schwoppe, Axel

    2015-01-01

    A detailed grid convergence study has been conducted to establish accurate reference solutions corresponding to the one-equation linear eddy-viscosity Spalart-Allmaras turbulence model for two-dimensional turbulent flows around the NACA 0012 airfoil and a flat plate. The study involved three widely used codes, CFL3D (NASA), FUN3D (NASA), and TAU (DLR), and families of uniformly refined structured grids that differ in their grid density patterns. Solutions computed by different codes on different grid families appear to converge to the same continuous limit, but exhibit different convergence characteristics. The grid resolution in the vicinity of geometric singularities, such as a sharp trailing edge, is found to be the major factor affecting the accuracy and convergence of discrete solutions, more prominent than differences in discretization schemes and/or grid elements. The results reported for these relatively simple turbulent flows demonstrate that the CFL3D, FUN3D, and TAU solutions are very accurate on the finest grids used in the study, but even those grids are not sufficient to conclusively establish an asymptotic convergence order.
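
    For readers reproducing such studies, the observed order of accuracy is commonly estimated from three solutions on uniformly refined grids with refinement ratio r; a minimal sketch, where the specific values are made up and not taken from the paper:

      import math

      def observed_order(f_coarse, f_medium, f_fine, r=2.0):
          """Observed convergence order from three solutions on grids refined by r."""
          return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

      # Hypothetical drag-coefficient values on coarse/medium/fine grids.
      p = observed_order(0.02860, 0.02835, 0.02829)   # ~2.06, i.e. roughly 2nd order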

  1. Automatic structured grid generation using Gridgen (some restrictions apply)

    NASA Technical Reports Server (NTRS)

    Chawner, John R.; Steinbrenner, John P.

    1995-01-01

    The authors have noticed in the recent grid generation literature an emphasis on the automation of structured grid generation. The motivation behind such work is clear; grid generation is easily the most despised task in the grid-analyze-visualize triad of computational analysis (CA). However, because grid generation is closely coupled to both the design and analysis software and because quantitative measures of grid quality are lacking, 'push button' grid generation usually results in a compromise between speed, control, and quality. Overt emphasis on automation obscures the substantive issues of providing users with flexible tools for generating and modifying high quality grids in a design environment. In support of this paper's tongue-in-cheek title, many features of the Gridgen software are described. Gridgen is by no stretch of the imagination an automatic grid generator. Despite this fact, the code does utilize many automation techniques that permit interesting regenerative features.

  2. Nurbs and grid generation

    SciTech Connect

    Barnhill, R.E.; Farin, G.; Hamann, B.

    1995-12-31

    This paper provides a basic overview of NURBS and their application to numerical grid generation. Curve/surface smoothing, accelerated grid generation, and the use of NURBS in a practical grid generation system are discussed.

  3. AstroGrid-D: Grid technology for astronomical science

    NASA Astrophysics Data System (ADS)

    Enke, Harry; Steinmetz, Matthias; Adorf, Hans-Martin; Beck-Ratzka, Alexander; Breitling, Frank; Brüsemeister, Thomas; Carlson, Arthur; Ensslin, Torsten; Högqvist, Mikael; Nickelt, Iliya; Radke, Thomas; Reinefeld, Alexander; Reiser, Angelika; Scholl, Tobias; Spurzem, Rainer; Steinacker, Jürgen; Voges, Wolfgang; Wambsganß, Joachim; White, Steve

    2011-02-01

    We present the status and results of AstroGrid-D, a joint effort of astrophysicists and computer scientists to employ grid technology for scientific applications. AstroGrid-D provides access to a network of distributed machines through a set of commands as well as software interfaces. It allows simple use of compute and storage facilities and makes it possible to schedule and monitor compute tasks and data management. It is based on the Globus Toolkit middleware (GT4). Chapter 1 describes the context which led to the demand for advanced software solutions in astrophysics, and states the goals of the project. We then present characteristic astrophysical applications that have been implemented on AstroGrid-D in Chapter 2. We describe simulations of different complexity, compute-intensive calculations running on multiple sites (Section 2.1), and advanced applications for specific scientific purposes (Section 2.2), such as a connection to robotic telescopes (Section 2.2.3). These examples show how grid execution improves, for example, the scientific workflow. Chapter 3 explains the software tools and services that we adapted or newly developed. Section 3.1 focuses on the administrative aspects of the infrastructure, to manage users and monitor activity. Section 3.2 characterises the central components of our architecture: the AstroGrid-D information service to collect and store metadata, a file management system, the data management system, and a job manager for automatic submission of compute tasks. We summarise the successfully established infrastructure in Chapter 4, concluding with our future plans to establish AstroGrid-D as a platform of modern e-Astronomy.

  4. TRIM: A finite-volume MHD algorithm for an unstructured adaptive mesh

    SciTech Connect

    Schnack, D.D.; Lottati, I.; Mikic, Z.

    1995-07-01

    The authors describe TRIM, an MHD code which uses a finite-volume discretization of the MHD equations on an unstructured adaptive grid of triangles in the poloidal plane. They apply it to problems related to modeling tokamak toroidal plasmas. The toroidal direction is treated by a pseudospectral method. Care was taken to center variables appropriately on the mesh and to construct a self-adjoint diffusion operator for cell-centered variables.

  5. Data Grid Management Systems

    NASA Technical Reports Server (NTRS)

    Moore, Reagan W.; Jagatheesan, Arun; Rajasekar, Arcot; Wan, Michael; Schroeder, Wayne

    2004-01-01

    The "Grid" is an emerging infrastructure for coordinating access across autonomous organizations to distributed, heterogeneous computation and data resources. Data grids are being built around the world as the next generation data handling systems for sharing, publishing, and preserving data residing on storage systems located in multiple administrative domains. A data grid provides logical namespaces for users, digital entities and storage resources to create persistent identifiers for controlling access, enabling discovery, and managing wide area latencies. This paper introduces data grids and describes data grid use cases. The relevance of data grids to digital libraries and persistent archives is demonstrated, and research issues in data grids and grid dataflow management systems are discussed.

  6. Grid Generation Issues and CFD Simulation Accuracy for the X33 Aerothermal Simulations

    NASA Technical Reports Server (NTRS)

    Polsky, Susan; Papadopoulos, Periklis; Davies, Carol; Loomis, Mark; Prabhu, Dinesh; Langhoff, Stephen R. (Technical Monitor)

    1997-01-01

    Grid generation issues relating to the simulation of the X33 aerothermal environment using the GASP code are explored. Required grid densities and normal grid stretching are discussed with regard to predicting the fluid dynamic and heating environments with the desired accuracy. The generation of volume grids is explored, including discussions of structured grid generation packages such as GRIDGEN, GRIDPRO, and HYPGEN. Volume grid manipulation techniques for obtaining the desired outer boundary and grid clustering using the OUTBOUND code are examined. The generation of the surface grid with the required surface grid topology is also discussed. Utilizing grids without singular axes is explored as a method of avoiding numerical difficulties at the singular line.

  7. Grid Stiffened Structure Analysis Tool

    NASA Technical Reports Server (NTRS)

    1999-01-01

    The Grid Stiffened Structure Analysis Tool contract is a contract performed by Boeing under NASA purchase order H30249D. The contract calls for a "best effort" study comprising two tasks: (1) create documentation for a composite grid-stiffened structure analysis tool, in the form of a Microsoft EXCEL spreadsheet, that was originally developed at Stanford University and later further developed by the Air Force, and (2) write a program that functions as a NASTRAN pre-processor to generate a finite element model for a grid-stiffened structure. In performing this contract, Task 1 was given higher priority because it enables NASA to make efficient use of a unique tool it already has; Task 2 was proposed by Boeing because it would also benefit the analysis of composite grid-stiffened structures, specifically in generating models for preliminary design studies. The contract is now complete; this package includes copies of the user's documentation for Task 1 and a CD-ROM and diskette with an electronic copy of the user's documentation and an updated version of the "GRID 99" spreadsheet.

  8. Dynamic grid refinement for partial differential equations on parallel computers

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.

  9. NURBS evaluation and utilization for grid generation

    SciTech Connect

    Yu, Tzu-Yi; Soni, B.K.

    1996-12-31

    In the last few years, the Non-Uniform Rational B-Spline (NURBS) has evolved into an essential tool for the semi-analytical representation of geometric entities encountered in Computational Field Simulation (CFS). Grid generation techniques based on NURBS have been developed and reported in the literature by various researchers. However, the evaluation of NURBS for surface/volume grid point generation is time consuming, and the representation of widely utilized aerodynamic shapes in NURBS form is not trivial. This paper addresses these issues. An enhanced algorithm for NURBS evaluation based on proper utilization of the basis functions is presented. An accurate representation of the widely utilized transition duct, designed using the superellipse equation, is developed. An example of extending a NURBS surface definition to a 3D volume and its utilization in grid adaptation by combining NURBS with an elliptic generation system is presented. A computational example involving the flow field around a generic missile configuration demonstrates the grid adaptation.
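
    The paper's enhanced evaluation algorithm is not reproduced here; for orientation, a plain Cox-de Boor evaluation of a point on a NURBS curve looks roughly like the sketch below, where the degree, knot vector, weights, and control points are illustrative only:

      import numpy as np

      def bspline_basis(i, p, u, U):
          """Cox-de Boor recursion: value of the B-spline basis N_{i,p} at u for knots U."""
          if p == 0:
              return 1.0 if U[i] <= u < U[i + 1] else 0.0
          left = 0.0 if U[i + p] == U[i] else \
              (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
          right = 0.0 if U[i + p + 1] == U[i + 1] else \
              (U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1]) * bspline_basis(i + 1, p - 1, u, U)
          return left + right

      def nurbs_point(u, P, w, p, U):
          """Point on a NURBS curve: weighted rational combination of control points."""
          N = np.array([bspline_basis(i, p, u, U) for i in range(len(P))])
          return (N * w) @ P / (N * w).sum()

      # Quadratic NURBS arc: 3 control points, weights, clamped knot vector.
      P = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
      w = np.array([1.0, np.sqrt(2.0) / 2.0, 1.0])
      U = [0, 0, 0, 1, 1, 1]
      pt = nurbs_point(0.5, P, w, 2, U)   # lies on the unit-circle arc: (0.707, 0.707)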

  10. RAMSES: A new N-body and hydrodynamical code

    NASA Astrophysics Data System (ADS)

    Teyssier, Romain

    2010-11-01

    A new N-body and hydrodynamical code, called RAMSES, is presented. It has been designed to study structure formation in the universe with high spatial resolution. The code is based on the Adaptive Mesh Refinement (AMR) technique, with a tree-based data structure allowing recursive grid refinements on a cell-by-cell basis. The N-body solver is very similar to the one developed for the ART code (Kravtsov et al. 1997), with minor differences in the exact implementation. The hydrodynamical solver is based on a second-order Godunov method, a modern shock-capturing scheme known to accurately compute the thermal history of the fluid component. The accuracy of the code is carefully estimated using various test cases, from pure gas dynamical tests to cosmological ones. The specific refinement strategy used in cosmological simulations is described, and potential spurious effects associated with shock-wave propagation in the resulting AMR grid are discussed and found to be negligible. Results obtained in a large N-body and hydrodynamical simulation of structure formation in a low-density LCDM universe are finally reported, with 256^3 particles and 4.1 x 10^7 cells in the AMR grid, reaching a formal resolution of 8192^3. A convergence analysis of different quantities, such as the dark matter density power spectrum, the gas pressure power spectrum, and individual halo temperature profiles, shows that the numerical results converge down to the actual resolution limit of the code and are well reproduced by recent analytical predictions in the framework of the halo model.
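
    RAMSES's actual refinement strategy is more elaborate than can be shown here; as a schematic of cell-by-cell refinement flagging, the sketch below marks cells of a 2D density field whose relative jump to any face neighbor exceeds a threshold (the field and threshold are invented):

      import numpy as np

      def refine_mask(rho, threshold=0.2):
          """Flag cells whose relative density jump to a face neighbor exceeds the
          threshold; in an AMR code the flagged cells would be split into children."""
          flag = np.zeros_like(rho, dtype=bool)
          jump_x = np.abs(np.diff(rho, axis=0)) / np.minimum(rho[:-1, :], rho[1:, :])
          jump_y = np.abs(np.diff(rho, axis=1)) / np.minimum(rho[:, :-1], rho[:, 1:])
          flag[:-1, :] |= jump_x > threshold
          flag[1:, :] |= jump_x > threshold
          flag[:, :-1] |= jump_y > threshold
          flag[:, 1:] |= jump_y > threshold
          return flag

      # Toy 2D overdensity: only cells around the dense blob get flagged.
      x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64), indexing="ij")
      rho = 1.0 + 50.0 * np.exp(-(x**2 + y**2) / 0.01)
      cells_to_refine = refine_mask(rho)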

  11. A perspective on unstructured grid flow solvers

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.

    1995-01-01

    This survey paper assesses the status of compressible Euler and Navier-Stokes solvers on unstructured grids. Different spatial and temporal discretization options for steady and unsteady flows are discussed. The integration of these components into an overall framework to solve practical problems is addressed. Issues such as grid adaptation, higher order methods, hybrid discretizations and parallel computing are briefly discussed. Finally, some outstanding issues and future research directions are presented.

  12. Cloud feedback studies with a physics grid

    SciTech Connect

    Dipankar, Anurag; Stevens, Bjorn

    2013-02-07

    During this project the investigators implemented a fully parallel version of the dual-grid approach in the main ICON code, implemented a fully conservative first-order interpolation scheme for horizontal remapping, integrated the UCLA-LES micro-scale model into ICON so that it runs in parallel in selected columns, and performed cloud feedback studies in an aqua-planet setup to evaluate the classical parameterization on a small domain. The micro-scale model may be run in parallel with the classical parameterization, or it may be run on a "physics grid" independent of the dynamics grid.
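
    The report does not spell out the remapping scheme; a minimal 1D analogue of a fully conservative first-order remap, which transfers cell averages between two grids by weighting with the overlap length of source and target cells so that the total integral is preserved, is sketched below (both grids are invented):

      import numpy as np

      def conservative_remap_1d(src_edges, src_means, dst_edges):
          """First-order conservative remap: each destination cell average is the
          overlap-weighted mean of the source cell averages it intersects."""
          dst_means = np.zeros(len(dst_edges) - 1)
          for j in range(len(dst_edges) - 1):
              a, b = dst_edges[j], dst_edges[j + 1]
              overlap = np.clip(np.minimum(b, src_edges[1:]) -
                                np.maximum(a, src_edges[:-1]), 0.0, None)
              dst_means[j] = (overlap * src_means).sum() / (b - a)
          return dst_means

      src_edges = np.linspace(0.0, 1.0, 11)          # 10 coarse source cells
      src_means = np.sin(np.pi * 0.5 * (src_edges[:-1] + src_edges[1:]))
      dst_edges = np.linspace(0.0, 1.0, 26)          # 25 finer target cells
      dst_means = conservative_remap_1d(src_edges, src_means, dst_edges)
      # Conservation check: sum(means * cell widths) is identical on both grids.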

  13. Spatial services grid

    NASA Astrophysics Data System (ADS)

    Cao, Jian; Li, Qi; Cheng, Jicheng

    2005-10-01

    This paper discusses the concept, key technologies, and main applications of the Spatial Services Grid. The technologies of grid computing and Web services are playing a revolutionary role in the study of spatial information services. The concept of the SSG (Spatial Services Grid) is put forward based on the SIG (Spatial Information Grid) and OGSA (Open Grid Services Architecture). Firstly, grid computing is reviewed and the key technologies of the SIG and their main applications are surveyed. Secondly, grid computing and three kinds of SIG in the broad sense (SDG, the spatial data grid; SIG, the spatial information grid; and SSG, the spatial services grid) and their relationships are discussed. Thirdly, the key technologies of the SSG are put forward. Finally, three representative applications of the SSG are discussed. The first is an urban location-based services grid, a typical spatial services grid that can be constructed on OGSA and a digital city platform. The second is a regional sustainable development grid, which is key to urban development. The third is a regional disaster and emergency management services grid.

  14. Unstructured grid methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Morgan, K.; Peraire, J.; Peiro, J.

    1992-01-01

    The implementation of the finite element method on unstructured triangular grids is described, and the development of centered finite element schemes for the solution of the compressible Euler equations on general triangular and tetrahedral grids is discussed. Explicit and implicit Lax-Wendroff type methods and a method based upon the use of explicit multistep timestepping are considered. In the latter case, the convergence behavior of the method is accelerated by the incorporation of a fully unstructured multigrid procedure. The advancing front method for generating unstructured grids of triangles and tetrahedra is described, and the application of adaptive mesh techniques to both steady and transient flow analysis is illustrated.

  15. Three-dimensional elliptic grid generation for an F-16

    NASA Technical Reports Server (NTRS)

    Sorenson, Reese L.

    1988-01-01

    A case history depicting the effort to generate a computational grid for the simulation of transonic flow about an F-16 aircraft at realistic flight conditions is presented. The flow solver for which this grid is designed is a zonal one, using the Reynolds-averaged Navier-Stokes equations near the surface of the aircraft and the Euler equations in regions removed from the aircraft. A body-conforming global grid, suitable for the Euler equations, is first generated using 3-D Poisson equations with inhomogeneous terms modeled after the 2-D GRAPE code. Regions of the global grid are then designated for zonal refinement as appropriate to accurately model the flow physics. Grid spacing suitable for solution of the Navier-Stokes equations is generated in the refinement zones by simple subdivision of the given coarse grid intervals. That grid generation project is described, with particular emphasis on the global coarse grid.
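
    The GRAPE-style Poisson forcing terms are beyond a short example; as a simplified baseline for elliptic grid generation (not the report's method), a plain Laplace smoothing pass in 2D slides interior grid points toward the average of their neighbors while the boundary stays fixed; grid sizes and iteration counts below are arbitrary:

      import numpy as np

      def laplace_smooth(x, y, n_iter=200):
          """Simplified elliptic (Laplace) grid smoothing: repeatedly move each
          interior node to the average of its four neighbors; boundary nodes fixed.
          Real generators add Poisson forcing terms to control spacing and angles."""
          for _ in range(n_iter):
              x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1] +
                                      x[1:-1, 2:] + x[1:-1, :-2])
              y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1] +
                                      y[1:-1, 2:] + y[1:-1, :-2])
          return x, y

      # Toy usage: start from a skewed algebraic grid and relax the interior.
      xi, eta = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21), indexing="ij")
      x = xi + 0.2 * np.sin(np.pi * eta) * xi * (1 - xi)
      y = eta.copy()
      x, y = laplace_smooth(x, y)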

  16. Interactive grid generation for turbomachinery flow field simulations

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Reno, Charles; Eiseman, Peter R.

    1988-01-01

    The control point form of algebraic grid generation presented provides the means needed to generate well-structured grids for turbomachinery flow simulations. It uses a sparse collection of control points distributed over the flow domain. The shape and position of coordinate curves can be adjusted from these control points while the grid conforms precisely to all boundaries. An interactive program called TURBO, which uses the control point form, is being developed. Basic features of the code are discussed and sample grids are presented. A finite volume LU implicit scheme is used to simulate flow in a turbine cascade on the grid generated by the program.

  17. Interactive grid generation for turbomachinery flow field simulations

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Eiseman, Peter R.; Reno, Charles

    1988-01-01

    The control point form of algebraic grid generation presented provides the means that are needed to generate well structured grids for turbomachinery flow simulations. It uses a sparse collection of control points distributed over the flow domain. The shape and position of coordinate curves can be adjusted from these control points while the grid conforms precisely to all boundaries. An interactive program called TURBO, which uses the control point form, is being developed. Basic features of the code are discussed and sample grids are presented. A finite volume LU implicit scheme is used to simulate flow in a turbine cascade on the grid generated by the program.

  18. The Feasibility of Adaptive Unstructured Computations On Petaflops Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Heber, Gerd; Gao, Guang; Saini, Subhash (Technical Monitor)

    1999-01-01

    This viewgraph presentation covers the advantages of mesh adaptation, unstructured grids, and dynamic load balancing. It illustrates parallel adaptive communications, and explains PLUM (Parallel dynamic load balancing for adaptive unstructured meshes), and PSAW (Proper Self Avoiding Walks).

  19. Noiseless Coding Of Magnetometer Signals

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Lee, Jun-Ji

    1989-01-01

    Report discusses the application of noiseless data-compression coding to digitized readings of spaceborne magnetometers for transmission back to Earth. The objective of such coding is to increase efficiency by decreasing the rate of transmission without sacrificing the integrity of the data. Adaptive coding compresses data by factors ranging from 2 to 6.
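
    The report's coder is not reproduced here; since the technique is adaptive noiseless (lossless) coding of sensor residuals, a small Golomb-Rice style encoder of the kind associated with this family of methods is sketched below, with the Rice parameter k chosen adaptively per block from the mean residual magnitude (all choices are illustrative, not the report's algorithm):

      import numpy as np

      def rice_encode_block(residuals):
          """Golomb-Rice encode a block of signed residuals into a bit string.
          k is chosen adaptively from the block's mean magnitude (simple heuristic)."""
          u = np.where(residuals >= 0, 2 * residuals, -2 * residuals - 1)  # map to non-negative
          mean = max(u.mean(), 1.0)
          k = max(int(np.floor(np.log2(mean))), 0)      # adaptive Rice parameter
          bits = []
          for v in u:
              q, r = int(v) >> k, int(v) & ((1 << k) - 1)
              unary = "1" * q + "0"                     # quotient in unary, zero-terminated
              bits.append(unary + format(r, "0{}b".format(k)) if k > 0 else unary)
          return k, "".join(bits)

      # Toy magnetometer-like data: code the differences of consecutive samples.
      samples = (100 * np.sin(np.linspace(0, 3, 64))).astype(int)
      resid = np.diff(samples)
      k, bitstream = rice_encode_block(resid)
      ratio = 16 * len(resid) / len(bitstream)          # compression vs. 16-bit raw words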

  20. Visualization of grids conforming to geological structures: a topological approach

    NASA Astrophysics Data System (ADS)

    Caumon, Guillaume; Lévy, Bruno; Castanié, Laurent; Paul, Jean-Claude

    2005-07-01

    Flexible grids are used in many geoscience applications because they can accurately adapt to the great diversity of shapes encountered in nature. These grids raise a number of difficult challenges, in particular for fast volume visualization. We propose a generic incremental slicing algorithm for versatile visualization of unstructured grids composed of arbitrary convex cells. The tradeoff between the complexity of the grid and the efficiency of the method is addressed by special-purpose data structures and customizations. A general structure based on oriented edges is defined to address the general case. When only a limited number of polyhedron types is present in the grid (zoo grids), memory usage and rendering time are reduced by using a catalog of cell types generated automatically. This data structure is further optimized to deal with stratigraphic grids made of hexahedral cells. The visualization method is applied to several gridded subsurface models conforming to geological structures.
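
    The paper's incremental, topology-driven algorithm is more sophisticated than can be shown here; the basic geometric kernel such slicing builds on is clipping one convex cell against a plane, for which a brute-force edge-crossing version is sketched below (the angular ordering step relies on the cross-section being convex; the tetrahedron is just test data):

      import numpy as np

      def slice_convex_cell(vertices, edges, plane_point, plane_normal):
          """Intersect a convex cell with a plane: find the crossing point of every
          edge whose endpoints lie on opposite sides, then order the crossings into
          a polygon around their centroid (valid because the section is convex)."""
          V = np.asarray(vertices, float)
          n = np.asarray(plane_normal, float)
          n = n / np.linalg.norm(n)
          d = (V - np.asarray(plane_point, float)) @ n   # signed distances to the plane
          pts = [V[i] + d[i] / (d[i] - d[j]) * (V[j] - V[i])
                 for i, j in edges if d[i] * d[j] < 0.0]
          if len(pts) < 3:
              return np.empty((0, 3))
          pts = np.array(pts)
          c = pts.mean(axis=0)
          u = (pts[0] - c) / np.linalg.norm(pts[0] - c)  # in-plane reference axis
          v = np.cross(n, u)
          angles = np.arctan2((pts - c) @ v, (pts - c) @ u)
          return pts[np.argsort(angles)]

      # Test cell: a unit tetrahedron sliced by the plane z = 0.25.
      verts = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
      edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
      section = slice_convex_cell(verts, edges, [0, 0, 0.25], [0, 0, 1])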