Science.gov

Sample records for adaptive grid algorithm

  1. An adaptive grid algorithm for one-dimensional nonlinear equations

    NASA Technical Reports Server (NTRS)

    Gutierrez, William E.; Hills, Richard G.

    1990-01-01

    Richards' equation, which models the flow of liquid through unsaturated porous media, is highly nonlinear and difficult to solve. Steep gradients in the field variables require the use of fine grids and small time step sizes. The numerical instabilities caused by the nonlinearities often require the use of iterative methods such as Picard or Newton iteration. These difficulties result in large CPU requirements in solving Richards' equation. With this in mind, adaptive and multigrid methods are investigated for use with nonlinear equations such as Richards' equation. Attention is focused on one-dimensional transient problems. To investigate the use of multigrid and adaptive grid methods, a series of problems are studied. First, a multigrid program is developed and used to solve an ordinary differential equation, demonstrating the efficiency with which low and high frequency errors are smoothed out. The multigrid algorithm and an adaptive grid algorithm are then used to solve one-dimensional transient partial differential equations, such as the diffusion and convection-diffusion equations. The performance of these programs is compared to that of the Gauss-Seidel and tridiagonal methods. The adaptive and multigrid schemes outperformed the Gauss-Seidel algorithm, but were not as fast as the tridiagonal method. The adaptive grid scheme solved the problems slightly faster than the multigrid method. To solve nonlinear problems, Picard iterations are introduced into the adaptive grid and tridiagonal methods. Burgers' equation is used as a test problem for the two algorithms. Both methods obtain solutions of comparable accuracy for similar time increments. For Burgers' equation, the adaptive grid method finds the solution approximately three times faster than the tridiagonal method. Finally, both schemes are used to solve the water content formulation of Richards' equation. For this problem, the adaptive grid method obtains a more accurate solution in fewer work units and

  2. A geometry-based adaptive unstructured grid generation algorithm for complex geological media

    NASA Astrophysics Data System (ADS)

    Bahrainian, Seyed Saied; Dezfuli, Alireza Daneh

    2014-07-01

    In this paper a novel unstructured grid generation algorithm is presented that considers the effect of geological features and well locations on grid resolution. The proposed grid generation algorithm presents a strategy for definition and construction of an initial grid based on the geological model, geometry adaptation of geological features, and grid resolution control. The algorithm is applied to the seismotectonic map of the Masjed-i-Soleiman reservoir. Comparison of grid results with the “Triangle” program shows a more suitable permeability contrast. Immiscible two-phase flow solutions are presented for a fractured porous media test case using different grid resolutions. A grid adapted to the fracture geometry gave results identical to those of a fine grid, while requiring 88.2% less CPU time than the fine-grid solution.

  3. SIMULATION OF DISPERSION OF A POWER PLANT PLUME USING AN ADAPTIVE GRID ALGORITHM

    EPA Science Inventory

    A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher order numerical scheme, the piecewise parabolic method (PPM), for computing advective solution fields; a weight function capable of promoting grid node clustering ...

  4. A parallel dynamic load balancing algorithm for 3-D adaptive unstructured grids

    NASA Technical Reports Server (NTRS)

    Vidwans, A.; Kallinderis, Y.; Venkatakrishnan, V.

    1993-01-01

    Adaptive local grid refinement and coarsening result in an unequal distribution of workload among the processors of a parallel system. A novel method for balancing the load in cases of dynamically changing tetrahedral grids is developed. The approach employs local exchange of cells among processors in order to redistribute the load equally. An important part of the load balancing algorithm is the method employed by a processor to determine which cells within its subdomain are to be exchanged. Two such methods are presented and compared. The strategy for load balancing is based on the Divide-and-Conquer approach, which leads to an efficient parallel algorithm. This method is implemented on a distributed-memory MIMD system.
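
    A serial toy sketch of the divide-and-conquer idea described above: processors are split recursively into two halves, and cells are shifted across the divide from the heavier half to the lighter one until the totals roughly match. The donor/receiver rule and all names here are illustrative assumptions, not either of the two cell-selection methods compared in the paper.

      def balance(procs, loads):
          """Divide-and-conquer balancing sketch; loads maps processor id -> list of
          (positive) cell workloads. The donor/receiver choice is a placeholder."""
          if len(procs) < 2:
              return
          left, right = procs[:len(procs) // 2], procs[len(procs) // 2:]
          total = lambda group: sum(sum(loads[p]) for p in group)
          biggest = max((c for p in procs for c in loads[p]), default=0)
          # shift cells across the divide until the two halves carry similar work
          while abs(total(left) - total(right)) > max(biggest, 1):
              heavy, light = (left, right) if total(left) > total(right) else (right, left)
              src = max(heavy, key=lambda p: sum(loads[p]))    # most loaded donor
              dst = min(light, key=lambda p: sum(loads[p]))    # least loaded receiver
              loads[dst].append(loads[src].pop())              # local exchange of one cell
          balance(left, loads)
          balance(right, loads)

      loads = {0: [3, 2, 4, 5], 1: [1], 2: [6, 6, 2], 3: [1, 1]}
      balance(list(loads), loads)
      print({p: sum(cells) for p, cells in loads.items()})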

  5. An Adaptive Reputation-Based Algorithm for Grid Virtual Organization Formation

    NASA Astrophysics Data System (ADS)

    Cui, Yongrui; Li, Mingchu; Ren, Yizhi; Sakurai, Kouichi

    A novel adaptive reputation-based virtual organization (VO) formation algorithm is proposed. It restrains bad performers effectively by taking into account the global experience of the evaluator, and it evaluates the direct trust relation between two grid nodes accurately by consulting previous trust values rationally. It also draws on and improves the reputation evaluation process of the PathTrust model by taking account of the inter-organizational trust relationship and combining it with direct and recommended trust in a weighted way, which makes the algorithm more robust against collusion attacks. Additionally, the proposed algorithm considers the perspective of the VO creator and takes required VO services as one of the most important fine-grained evaluation criteria, which makes the algorithm more suitable for constructing VOs in grid environments that include autonomous organizations. Simulation results show that our algorithm restrains bad performers and resists fake-transaction and badmouthing attacks effectively. It provides a clear advantage in the design of a VO infrastructure.

  6. 3D Structured Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Banks, D. W.; Hafez, M. M.

    1996-01-01

    Grid adaptation for structured meshes is the art of using information from an existing, but poorly resolved, solution to automatically redistribute the grid points in such a way as to improve the resolution in regions of high error, and thus the quality of the solution. This involves: (1) generating a grid via some standard algorithm, (2) calculating a solution on this grid, (3) adapting the grid to this solution, (4) recalculating the solution on this adapted grid, and (5) repeating steps 3 and 4 to satisfaction. Steps 3 and 4 can be repeated until some 'optimal' grid is converged to, but typically this is not worth the effort and just two or three repeat calculations are necessary. They may also be repeated every 5-10 time steps for unsteady calculations.
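
    The five numbered steps map directly onto a short driver loop. A minimal sketch, assuming a hypothetical one-dimensional flow solver and a gradient-weighted redistribution standing in for a real adaptation module:

      import numpy as np

      def generate_grid(n):                                  # step 1: standard (uniform) grid
          return np.linspace(0.0, 1.0, n)

      def solve(grid):                                       # steps 2 and 4: stand-in flow solver
          return np.tanh(50.0 * (grid - 0.5))                # steep feature near x = 0.5

      def adapt_grid(grid, solution):                        # step 3: redistribute toward high error
          w = 1.0 + np.abs(np.gradient(solution, grid))      # assumed error indicator
          W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(grid))))
          return np.interp(np.linspace(0.0, W[-1], grid.size), W, grid)

      grid = generate_grid(41)
      solution = solve(grid)
      for _ in range(3):                                     # step 5: two or three cycles suffice
          grid = adapt_grid(grid, solution)
          solution = solve(grid)
      print("smallest spacing after adaptation:", np.round(np.diff(grid).min(), 4))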

  7. Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2006-01-01

    Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configurations using the Navier-Stokes equations. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.

  8. A new algorithm for high-dimensional uncertainty quantification based on dimension-adaptive sparse grid approximation and reduced basis methods

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Quarteroni, Alfio

    2015-10-01

    In this work we develop an adaptive and reduced computational algorithm based on dimension-adaptive sparse grid approximation and reduced basis methods for solving high-dimensional uncertainty quantification (UQ) problems. In order to tackle the computational challenge of the "curse of dimensionality" commonly faced by these problems, we employ a dimension-adaptive tensor-product algorithm [16] and propose a verified version that removes the stagnation phenomenon effectively, in addition to automatically detecting the importance and interaction of different dimensions. To reduce the heavy computational cost of UQ problems modelled by partial differential equations (PDE), we adopt a weighted reduced basis method [7] and develop an adaptive greedy algorithm in combination with the previous verified algorithm for efficient construction of an accurate reduced basis approximation. The efficiency and accuracy of the proposed algorithm are demonstrated by several numerical experiments.

  9. Structured adaptive grid generation using algebraic methods

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, Bharat K.; Roger, R. P.; Chan, Stephen C.

    1993-01-01

    The accuracy of a numerical algorithm depends not only on the formal order of approximation but also on the distribution of grid points in the computational domain. Grid adaptation is a procedure which allows optimal grid redistribution as the solution progresses. It offers the prospect of accurate flow field simulations without the use of an excessively fine, computationally expensive grid. Grid adaptive schemes are divided into two basic categories: differential and algebraic. The differential method is based on a variational approach in which a function containing a measure of grid smoothness, orthogonality, and volume variation is minimized by using a variational principle. This approach provides a solid mathematical basis for the adaptive method, but the Euler-Lagrange equations must be solved in addition to the original governing equations. On the other hand, the algebraic method requires much less computational effort, but the grid may not be smooth. The algebraic techniques are based on devising an algorithm in which the grid movement is governed by estimates of the local error in the numerical solution. This is achieved by requiring points in large-error regions to attract other points and points in low-error regions to repel other points. The development of a fast, efficient, and robust algebraic adaptive algorithm for structured flow simulation applications is presented. This development is accomplished in a three-step process. The first step is to define an adaptive weighting mesh (distribution mesh) on the basis of the equidistribution law applied to the flow field solution. The second, and probably the most crucial, step is to redistribute grid points in the computational domain according to the aforementioned weighting mesh. The third and last step is to reevaluate the flow property by an appropriate search/interpolate scheme at the new grid locations. The adaptive weighting mesh provides the information on the desired concentration
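
    A minimal one-dimensional sketch of the three steps (weighting mesh from the equidistribution law, redistribution, and re-evaluation of the flow property); the gradient-based weight function is an assumption, not the weighting used in the paper:

      import numpy as np

      # current grid and an interim flow-field solution (stand-ins for the CFD data)
      x = np.linspace(0.0, 1.0, 51)
      u = np.tanh(40.0 * (x - 0.3))

      # Step 1: adaptive weighting (distribution) mesh from the equidistribution law
      w = 1.0 + np.abs(np.gradient(u, x))                 # assumed weight: 1 + |du/dx|
      W = np.concatenate(([0.0], np.cumsum(0.5 * (w[:-1] + w[1:]) * np.diff(x))))
      W /= W[-1]                                          # normalized cumulative weight

      # Step 2: redistribute points so every interval carries an equal share of the weight
      xi = np.linspace(0.0, 1.0, x.size)
      x_new = np.interp(xi, W, x)

      # Step 3: re-evaluate the flow property at the new locations (search/interpolate)
      u_new = np.interp(x_new, x, u)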

  10. Grid quality improvement by a grid adaptation technique

    NASA Technical Reports Server (NTRS)

    Lee, K. D.; Henderson, T. L.; Choo, Y. K.

    1991-01-01

    A grid adaptation technique is presented which improves grid quality. The method begins with an assessment of grid quality by defining an appropriate grid quality measure. Then, undesirable grid properties are eliminated by a grid-quality-adaptive grid generation procedure. The same concept has been used for geometry-adaptive and solution-adaptive grid generation. The difference lies in the definition of the grid control sources; here, they are extracted from the distribution of a particular grid property. Several examples are presented to demonstrate the versatility and effectiveness of the method.

  11. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad, and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  12. LAPS Grid generation and adaptation

    NASA Astrophysics Data System (ADS)

    Pagliantini, Cecilia; Delzanno, Gia Luca; Guo, Zehua; Srinivasan, Bhuvana; Tang, Xianzhu; Chacon, Luis

    2011-10-01

    LAPS uses a common-data framework in which a general purpose grid generation and adaptation package in toroidal and simply connected domains is implemented. The initial focus is on implementing the Winslow/Laplace-Beltrami method for generating non-overlapping block structured grids. This is to be followed by a grid adaptation scheme based on the Monge-Kantorovich optimal transport method [Delzanno et al., J. Comput. Phys. 227 (2008), 9841-9864], which equidistributes an application-specified error. As an initial set of applications, we will lay out grids for an axisymmetric mirror, a field reversed configuration, and an entire poloidal cross section of a tokamak plasma reconstructed from a CMOD experimental shot. These grids will then be used for computing the plasma equilibrium and transport in accompanying presentations. A key issue for Monge-Kantorovich grid optimization is the choice of error or monitor function for equidistribution. We will compare the Operator Recovery Error Source Detector (ORESD) [Lapenta, Int. J. Num. Meth. Eng. 59 (2004), 2065-2087], the Tau method, and a strategy based on grid coarsening [Zhang et al., AIAA J. 39 (2001), 1706-1715] to find an 'optimal' grid. Work supported by DOE OFES.

  13. Load Balancing Sequences of Unstructured Adaptive Grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid

    1997-01-01

    Mesh adaption is a powerful tool for efficient unstructured grid computations but causes load imbalance on multiprocessor systems. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. This paper makes several important additions to our previous work. First, a new remapping cost model is presented and empirically validated on an SP2. Next, our load balancing strategy is applied to sequences of dynamically adapted unstructured grids. Results indicate that our framework is effective on many processors for both steady and unsteady problems with several levels of adaption. Additionally, we demonstrate that a coarse starting mesh produces high quality load balancing, at a fraction of the cost required for a fine initial mesh. Finally, we show that the data remapping overhead can be significantly reduced by applying our heuristic processor reassignment algorithm.

  14. Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David; Colella, Phillip

    1995-01-01

    To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.

  15. Near-Body Grid Adaption for Overset Grids

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2016-01-01

    A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.

  16. Adaptive EAGLE dynamic solution adaptation and grid quality enhancement

    NASA Technical Reports Server (NTRS)

    Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.

    1992-01-01

    In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.

  17. Conservative treatment of boundary interfaces for overlaid grids and multi-level grid adaptations

    NASA Technical Reports Server (NTRS)

    Moon, Young J.; Liou, Meng-Sing

    1989-01-01

    Conservative algorithms for boundary interfaces of overlaid grids are presented. The basic method is zeroth order, and is extended to a higher order method using interpolation and subcell decomposition. The present method, strictly based on a conservative constraint, is tested with overlaid grids for various applications of unsteady and steady supersonic inviscid flows with strong shock waves. The algorithm is also applied to a multi-level grid adaptation in which the next level finer grid is overlaid on the coarse base grid with an arbitrary orientation.

  18. Conservative treatment of boundary interfaces for overlaid grids and multi-level grid adaptations

    NASA Technical Reports Server (NTRS)

    Moon, Young J.; Liou, Meng-Sing

    1989-01-01

    Conservative algorithms for boundary interfaces of overlaid grids are presented. The basic method is zeroth order, and is extended to a higher order method using interpolation and subcell decomposition. The present method, strictly based on a conservative constraint, is tested with overlaid grids for various applications of unsteady and steady supersonic inviscid flows with strong shock waves. The algorithm is also applied to a multi-level grid adaptation in which the next level finer grid is overlaid on the coarse base grid with an arbitrary orientation.

  19. Interactive solution-adaptive grid generation

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Henderson, Todd L.

    1992-01-01

    TURBO-AD is an interactive solution-adaptive grid generation program under development. The program combines an interactive algebraic grid generation technique and a solution-adaptive grid generation technique into a single interactive solution-adaptive grid generation package. The control point form uses a sparse collection of control points to algebraically generate a field grid. This technique provides local grid control capability and is well suited to interactive work due to its speed and efficiency. A mapping from the physical domain to a parametric domain was used to alleviate difficulties that had been encountered near outwardly concave boundaries in the control point technique. Therefore, all grid modifications are performed on a unit square in the parametric domain, and the new adapted grid in the parametric domain is then mapped back to the physical domain. The grid adaptation is achieved by first adapting the control points to a numerical solution in the parametric domain using control sources obtained from flow properties. Then a new modified grid is generated from the adapted control net. This solution-adaptive grid generation process is efficient because the number of control points is much less than the number of grid points and the generation of a new grid from the adapted control net is an efficient algebraic process. TURBO-AD provides the user with both local and global grid controls.

  20. Elliptic Solvers for Adaptive Mesh Refinement Grids

    SciTech Connect

    Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.

    1999-06-03

    We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and other more AMR specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.

  1. Adaptive grid embedding for the two-dimensional Euler equations

    NASA Technical Reports Server (NTRS)

    Warren, Gary P.

    1990-01-01

    A numerical algorithm is presented for solving the two-dimensional flux-split Euler equations using a multigrid method with adaptive grid embedding. The method uses an unstructured data set along with a system of pointers for communication on the irregularly shaped grid topologies. An explicit two-stage time advancement scheme is implemented. A multigrid algorithm is used to provide grid level communication and to accelerate the convergence of the solution to steady state. Results are presented for an NACA 0012 airfoil in a freestream with Mach numbers of 0.95 and 1.054. Excellent resolution of the shock structures is obtained with the adaptive grid embedding method with significantly fewer grid points than the comparable structured grid.

  2. INITIAL APPLICATION OF THE ADAPTIVE GRID AIR POLLUTION MODEL

    EPA Science Inventory

    The paper discusses an adaptive-grid algorithm used in air pollution models. The algorithm reduces errors related to insufficient grid resolution by automatically refining the grid scales in regions of high interest. Meanwhile the grid scales are coarsened in other parts of the d...

  3. Solving Fluid Flow Problems on Moving and Adaptive Overlapping Grids

    SciTech Connect

    Henshaw, W

    2005-07-28

    Solution of fluid dynamics problems on overlapping grids will be discussed. An overlapping grid consists of a set of structured component grids that cover a domain and overlap where they meet. Overlapping grids provide an effective approach for developing efficient and accurate approximations for complex, possibly moving geometry. Topics to be addressed include the reactive Euler equations, the incompressible Navier-Stokes equations and elliptic equations solved with a multigrid algorithm. Recent developments coupling moving grids and adaptive mesh refinement and preliminary parallel results will also be presented.

  4. An Adaptive VOF Method on Unstructured Grid

    NASA Astrophysics Data System (ADS)

    Wu, L. L.; Huang, M.; Chen, B.

    2011-09-01

    In order to improve the accuracy of interface capturing while maintaining computational efficiency, an adaptive VOF method on unstructured grids is proposed in this paper. The volume fraction in each cell is regarded as the criterion for locally refining the interface cells. With the movement of the interface, new interface cells (0 < f < 1) are subdivided into child cells, while those child cells that no longer contain the interface are merged back into the original parent cell. In order to avoid the complicated redistribution of volume fraction during the subdivision and amalgamation procedure, a predictor-corrector algorithm is proposed so that subdivision and amalgamation are performed only in empty or full cells (f = 0 or 1). Thus the volume fraction in a new cell can take the value from the original cell directly, and interpolation of the interface is avoided. The advantage of this method is that re-generation of the whole grid system is not necessary, so its implementation is very efficient. Moreover, an advection test of a hollow square was performed, and the relative shape error of the result obtained with the adaptive mesh is smaller than that obtained with a non-refined grid, which verifies the validity of the method.
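
    A much simplified one-dimensional illustration of the subdivide/merge bookkeeping only (the VOF advection step that would change the volume fractions is omitted). Cells are (volume fraction, level) pairs; full or empty neighbours of interface cells are subdivided before the interface can move into them, and sibling cells that contain no interface are merged back. The 1-D setting and all names are illustrative assumptions:

      def is_interface(f, eps=1e-12):
          return eps < f < 1.0 - eps

      def predictor_refine(cells, max_level=2):
          """Subdivide full/empty neighbours of interface cells (f copies directly)."""
          out = []
          for i, (f, lvl) in enumerate(cells):
              near_interface = any(is_interface(cells[j][0])
                                   for j in (i - 1, i + 1) if 0 <= j < len(cells))
              if near_interface and not is_interface(f) and lvl < max_level:
                  out += [(f, lvl + 1), (f, lvl + 1)]   # two children inherit f = 0 or 1
              else:
                  out.append((f, lvl))
          return out

      def corrector_merge(cells):
          """Merge sibling pairs that do not contain the interface."""
          out, i = [], 0
          while i < len(cells):
              if (i + 1 < len(cells) and cells[i][1] == cells[i + 1][1] > 0
                      and cells[i][0] == cells[i + 1][0] and not is_interface(cells[i][0])):
                  out.append((cells[i][0], cells[i][1] - 1))
                  i += 2
              else:
                  out.append(cells[i])
                  i += 1
          return out

      cells = [(1.0, 0), (1.0, 0), (0.4, 0), (0.0, 0), (0.0, 0)]   # (volume fraction, level)
      cells = predictor_refine(cells)        # refine ahead of the moving interface
      # ... the omitted advection step would update the volume fractions here ...
      cells = corrector_merge(cells)         # merge children that stayed full or empty
      print(cells)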

  5. Adaptive link selection algorithms for distributed estimation

    NASA Astrophysics Data System (ADS)

    Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent

    2015-12-01

    This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state and tracking performance, and computational complexity. In comparison with existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained, and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.
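
    A toy sketch of the exhaustive-search idea at a single node: at every iteration the node tries each subset of its neighbour links, keeps the subset whose combined estimate best explains the local measurement, and then performs a combine-then-adapt LMS update. The network model, noise levels, and parameter names are assumptions for illustration, not the estimators analyzed in the paper:

      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      w_true = np.array([1.0, -0.5, 0.25])               # unknown parameter vector
      neighbors = {1: 0.05, 2: 0.05, 3: 2.0}             # link id -> noise level (link 3 is poor)
      w_node = np.zeros(3)                               # node's own estimate
      w_nbr = {k: np.zeros(3) for k in neighbors}        # neighbours' estimates
      mu = 0.05                                          # LMS step size

      for _ in range(500):
          x = rng.standard_normal(3)                     # regressor
          d = w_true @ x + 0.01 * rng.standard_normal()  # local measurement
          for k, sigma in neighbors.items():             # neighbours run their own (noisier) LMS
              dk = w_true @ x + sigma * rng.standard_normal()
              w_nbr[k] += mu * (dk - w_nbr[k] @ x) * x
          # exhaustive search: pick the link subset whose combination best explains d
          best, best_err = (), np.inf
          for r in range(len(neighbors) + 1):
              for subset in itertools.combinations(neighbors, r):
                  w_comb = np.mean([w_node] + [w_nbr[k] for k in subset], axis=0)
                  err = abs(d - w_comb @ x)
                  if err < best_err:
                      best, best_err = subset, err
          w_comb = np.mean([w_node] + [w_nbr[k] for k in best], axis=0)
          w_node = w_comb + mu * (d - w_comb @ x) * x    # combine-then-adapt LMS step

      print("estimate:", np.round(w_node, 3))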

  6. Interactive solution-adaptive grid generation procedure

    NASA Technical Reports Server (NTRS)

    Henderson, Todd L.; Choo, Yung K.; Lee, Ki D.

    1992-01-01

    TURBO-AD is an interactive solution adaptive grid generation program under development. The program combines an interactive algebraic grid generation technique and a solution adaptive grid generation technique into a single interactive package. The control point form uses a sparse collection of control points to algebraically generate a field grid. This technique provides local grid control capability and is well suited to interactive work due to its speed and efficiency. A mapping from the physical domain to a parametric domain was used to alleviate difficulties encountered near outwardly concave boundaries in the control point technique. Therefore, all grid modifications are performed on the unit square in the parametric domain, and the new adapted grid is then mapped back to the physical domain. The grid adaption is achieved by adapting the control points to a numerical solution in the parametric domain using control sources obtained from the flow properties. Then a new modified grid is generated from the adapted control net. This process is efficient because the number of control points is much less than the number of grid points and the generation of the grid is an efficient algebraic process. TURBO-AD provides the user with both local and global controls.

  7. Conservative Smoothing on an Adaptive Quadrilateral Grid

    NASA Astrophysics Data System (ADS)

    Sun, M.; Takayama, K.

    1999-03-01

    The Lax-Wendroff scheme can be freed of spurious oscillations by introducing conservative smoothing. In this paper the approach is first tested on one-dimensional model equations and then extended to multidimensional flows by the finite volume method. The scheme is discretized by a space-splitting method on an adaptive quadrilateral grid. The artificial viscosity coefficients in the conservative smoothing step are specially designed to capture slipstreams and vortices. Algorithms are programmed using a vectorizable data structure, under which not only the flow solver but also the adaptation procedure is well vectorized. The good resolution and high efficiency of the approach are demonstrated in calculating both unsteady and steady compressible flows with either weak or strong shock waves.
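
    A one-dimensional sketch of the idea for linear advection: a Lax-Wendroff step followed by a conservative smoothing pass written in flux-difference form, so the total mass is preserved exactly. The simple sign-switched viscosity coefficient is an assumption, not the specially designed coefficients of the paper:

      import numpy as np

      N, a, cfl = 200, 1.0, 0.8
      x = np.linspace(0.0, 1.0, N, endpoint=False)
      dx = x[1] - x[0]
      dt = cfl * dx / a
      u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)      # square pulse; plain LW would oscillate
      mass0 = u.sum()

      def lax_wendroff(u):
          c = a * dt / dx
          up, um = np.roll(u, -1), np.roll(u, 1)          # periodic neighbours
          return u - 0.5 * c * (up - um) + 0.5 * c * c * (up - 2.0 * u + um)

      def conservative_smoothing(u, q_max=0.5):
          du = np.roll(u, -1) - u                         # interface differences u[i+1] - u[i]
          # switch the artificial viscosity on only where the slope changes sign
          switch = 0.5 * (1.0 - np.sign(du * np.roll(du, 1)))
          flux = q_max * switch * du                      # smoothing flux through interface i+1/2
          return u + flux - np.roll(flux, 1)              # flux-difference form keeps the sum exact

      for _ in range(100):
          u = conservative_smoothing(lax_wendroff(u))

      print("mass drift:", abs(u.sum() - mass0))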

  8. A generic efficient adaptive grid scheme for rocket propulsion modeling

    NASA Technical Reports Server (NTRS)

    Mo, J. D.; Chow, Alan S.

    1993-01-01

    The objective of this research is to develop an efficient, time-accurate numerical algorithm to discretize the Navier-Stokes equations for the prediction of internal one-dimensional, two-dimensional, and axisymmetric flows. A generic, efficient, elliptic adaptive grid generator is implicitly coupled with the Lower-Upper factorization scheme in the development of the ALUNS computer code. Calculations of one-dimensional shock tube wave propagation and two-dimensional shock wave capture, wave-wave interactions, and shock wave-boundary interactions show that the developed scheme is stable, accurate, and extremely robust. The adaptive grid generator produced a very favorable grid network by means of a grid speed technique. This generic adaptive grid generator is also applied in the PARC and FDNS codes, and the computational results for solid rocket nozzle flowfield and crystal growth modeling by those codes will also be presented at the conference. This research work is being supported by NASA/MSFC.

  9. An adaptive grid with directional control

    NASA Technical Reports Server (NTRS)

    Brackbill, J. U.

    1993-01-01

    An adaptive grid generator for adaptive node movement is derived here by combining a variational formulation of Winslow's (1981) variable-diffusion method with a directional control functional. By applying harmonic-function theory, it becomes possible to define conditions under which unique solutions of the resulting elliptic equations exist. The results obtained by applying the grid generator to the complex problem posed by fluid instability-driven magnetic field reconnection demonstrate one-tenth the computational cost of either an Eulerian grid or an adaptive grid without directional control.

  10. Grid data extraction algorithm for ship routing

    NASA Astrophysics Data System (ADS)

    Li, Yuankui; Zhang, Yingjun; Yue, Xingwang; Gao, Zongjiang

    2015-05-01

    With the aim of extracting environmental data around routes, as the basis of ship routing optimization and other related studies, this paper, taking wind grid data as an example, proposes an algorithm that can effectively extract the grid data around rhumb lines. According to different ship courses, the algorithm calculates the wind grid index values in eight different situations, and a common computational formula is summarised. The wind grids around a ship route can be classified into 'best-fitting' grids and 'additional' grids, which are stored in such a way that, for example, when the data has a high-spacing resolution, only the 'best-fitting' grids around ship routes are extracted. Finally, the algorithm was implemented and simulated with MATLAB programming. As the simulation results indicate, the algorithm designed in this paper achieved wind grid data extraction in different situations and further resolved the extraction problem of meteorological and hydrogeological field grids around ship routes efficiently. Thus, it can provide strong support for optimal ship routing related to meteorological factors.
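
    A simplified sketch of the extraction step: sample a rhumb-line leg, map each sample to the wind-grid cell containing it ('best-fitting' cells), and optionally add the surrounding ring of cells ('additional' cells). The regular latitude/longitude grid, the linear sampling of the leg, and the eight-neighbour rule are assumptions for illustration, not the paper's eight-case index formula:

      import numpy as np

      def grid_index(lat, lon, lat0=-90.0, lon0=-180.0, dres=0.5):
          """Row/column of the regular wind-grid cell containing (lat, lon)."""
          return int((lat - lat0) // dres), int((lon - lon0) // dres)

      def extract_route_cells(p1, p2, dres=0.5, with_neighbors=True, samples=200):
          """Cells crossed by the leg p1 -> p2 (lat, lon in degrees), sampled linearly."""
          lats = np.linspace(p1[0], p2[0], samples)
          lons = np.linspace(p1[1], p2[1], samples)     # constant-bearing leg, approximated
          best = {grid_index(la, lo, dres=dres) for la, lo in zip(lats, lons)}
          if not with_neighbors:
              return best, set()
          extra = {(i + di, j + dj) for i, j in best
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)} - best
          return best, extra

      best, extra = extract_route_cells((30.0, 122.0), (31.5, 125.0))
      print(len(best), "best-fitting cells,", len(extra), "additional cells")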

  11. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  12. The fundamentals of adaptive grid movement

    NASA Technical Reports Server (NTRS)

    Eiseman, Peter R.

    1990-01-01

    Basic grid point movement schemes are studied. The schemes are referred to as adaptive grids. Weight functions and equidistribution in one dimension are treated. The specification of coefficients in the linear weight, attraction to a given grid or a curve, and evolutionary forces are considered. Curve-by-curve and finite volume methods are described. The temporal coupling of partial differential equation solvers and grid generators is discussed.

  13. Parallel algorithms for dynamically partitioning unstructured grids

    SciTech Connect

    Diniz, P.; Plimpton, S.; Hendrickson, B.; Leland, R.

    1994-10-01

    Grid partitioning is the method of choice for decomposing a wide variety of computational problems into naturally parallel pieces. In problems where computational load on the grid or the grid itself changes as the simulation progresses, the ability to repartition dynamically and in parallel is attractive for achieving higher performance. We describe three algorithms suitable for parallel dynamic load-balancing which attempt to partition unstructured grids so that computational load is balanced and communication is minimized. The execution time of algorithms and the quality of the partitions they generate are compared to results from serial partitioners for two large grids. The integration of the algorithms into a parallel particle simulation is also briefly discussed.

  14. SAGE: The Self-Adaptive Grid Code. 3

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1999-01-01

    The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.

  15. Self-Avoiding Walks Over Adaptive Triangular Grids

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)

    1999-01-01

    Space-filling curves are a popular approach, based on a geometric embedding, for linearizing computational meshes. We present a new O(n log n) combinatorial algorithm for constructing a self-avoiding walk through a two-dimensional mesh containing n triangles. We show that for hierarchical adaptive meshes, the algorithm can be locally adapted and easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the runtime partitioning and load balancing of adaptive unstructured grids.

  16. Fast adaptive composite grid methods on distributed parallel architectures

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Quinlan, Daniel

    1992-01-01

    The fast adaptive composite (FAC) grid method is compared with the asynchronous fast adaptive composite (AFAC) method under a variety of conditions, including vectorization and parallelization. Results are given for distributed memory multiprocessor architectures (SUPRENUM, Intel iPSC/2 and iPSC/860). It is shown that the good performance of AFAC and its superiority over FAC in a parallel environment is a property of the algorithm and not dependent on peculiarities of any machine.

  17. Grid adaptation using chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1994-01-01

    The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Application to the Euler equations for shock reflections and to shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.

  18. Grid adaptation using Chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.

  19. Grid adaption using Chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.

  20. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    NASA Technical Reports Server (NTRS)

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption. Non-uniqueness of the adapted grid is
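
    A sketch of a single coordinate-line pass of the spring analogy: tension constants proportional to the local flow gradient, with the equilibrium point positions found from the resulting tridiagonal system while the end points stay fixed. The torsion terms and the direction-dependent bookkeeping are omitted, and a dense solve stands in for a dedicated tridiagonal solver:

      import numpy as np

      x = np.linspace(0.0, 1.0, 31)                    # points along one coordinate line
      u = np.tanh(30.0 * (x - 0.6))                    # interim flow-field solution

      # tension proportional to the local flow gradient at each interval midpoint
      g = np.abs(np.diff(u) / np.diff(x))
      k = 1.0 + g / g.max()                            # spring constants k_{i+1/2}

      # equilibrium: k_{i-1/2}(x_i - x_{i-1}) = k_{i+1/2}(x_{i+1} - x_i), end points fixed
      n = x.size
      A = np.zeros((n, n))
      b = np.zeros(n)
      A[0, 0] = A[-1, -1] = 1.0
      b[0], b[-1] = x[0], x[-1]
      for i in range(1, n - 1):
          A[i, i - 1] = k[i - 1]
          A[i, i] = -(k[i - 1] + k[i])
          A[i, i + 1] = k[i]
      x_adapted = np.linalg.solve(A, b)                # points cluster where the gradient is high
      u_adapted = np.interp(x_adapted, x, u)           # carry the solution to the new points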

  1. Adaptive mesh and algorithm refinement using direct simulation Monte Carlo

    SciTech Connect

    Garcia, A.L.; Bell, J.B.; Crutchfield, W.Y.; Alder, B.J.

    1999-09-01

    Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.

  2. Dynamic Load Balancing for Adaptive Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Saini, Subhash (Technical Monitor)

    1998-01-01

    Dynamic mesh adaptation on unstructured grids is a powerful tool for computing unsteady three-dimensional problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture phenomena of interest, such procedures make standard computational methods more cost effective. Highly refined meshes are required to accurately capture shock waves, contact discontinuities, vortices, and shear layers in fluid flow problems. Adaptive meshes have also proved to be useful in several other areas of computational science and engineering like computer vision and graphics, semiconductor device modeling, and structural mechanics. Local mesh adaptation provides the opportunity to obtain solutions that are comparable to those obtained on globally-refined grids but at a much lower cost. Additional information is contained in the original extended abstract.

  3. Adaptive refinement tools for tetrahedral unstructured grids

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul (Inventor); Abdol-Hamid, Khaled S. (Inventor)

    2011-01-01

    An exemplary embodiment providing one or more improvements includes software which is robust, efficient, and has a very fast run time for user directed grid enrichment and flow solution adaptive grid refinement. All user selectable options (e.g., the choice of functions, the choice of thresholds, etc.), other than a pre-marked cell list, can be entered on the command line. The ease of application is an asset for flow physics research and preliminary design CFD analysis where fast grid modification is often needed to deal with unanticipated development of flow details.

  4. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  5. Fully implicit adaptive mesh refinement MHD algorithm

    NASA Astrophysics Data System (ADS)

    Philip, Bobby

    2005-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and to employ AMR-aware multilevel techniques, such as fast adaptive composite (FAC) algorithms, for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations will be presented on a variety of problems.

  6. Hierarchy-Direction Selective Approach for Locally Adaptive Sparse Grids

    SciTech Connect

    Stoyanov, Miroslav K

    2013-09-01

    We consider the problem of multidimensional adaptive hierarchical interpolation. We use sparse grid points and functions that are induced from a one-dimensional hierarchical rule via tensor products. The classical locally adaptive sparse grid algorithm uses an isotropic refinement from the coarser to the denser levels of the hierarchy. However, the multidimensional hierarchy provides a more complex structure that allows for various anisotropic and hierarchy-selective refinement techniques. We consider the more advanced refinement techniques and apply them to a number of simple test functions chosen to demonstrate the various advantages and disadvantages of each method. While there is no refinement scheme that is optimal for all functions, the fully adaptive family-direction-selective technique is usually more stable and requires fewer samples.

  7. A Multilevel Algorithm for the Solution of Second Order Elliptic Differential Equations on Sparse Grids

    NASA Technical Reports Server (NTRS)

    Pflaum, Christoph

    1996-01-01

    A multilevel algorithm is presented that solves general second order elliptic partial differential equations on adaptive sparse grids. The multilevel algorithm consists of several V-cycles. Suitable discretizations ensure that the discrete system of equations can be solved efficiently. Numerical experiments show a convergence rate of order O(1) for the multilevel algorithm.

  8. Adaptive protection algorithm and system

    DOEpatents

    Hedrick, Paul [Pittsburgh, PA; Toms, Helen L [Irwin, PA; Miller, Roger M [Mars, PA

    2009-04-28

    An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
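
    A minimal sketch of the idea in the abstract: trace the power flow through a small radial system, rank each breaker by the total downstream load it protects, and set its trip point a margin above that load. The feeder topology, the 25% margin, and all names are illustrative assumptions, not the patented method:

      # downstream topology: breaker -> list of entries (a load in kW, or a child breaker)
      feeders = {
          "main":      ["feeder_A", "feeder_B"],
          "feeder_A":  [120.0, 80.0],
          "feeder_B":  [200.0, "branch_B1"],
          "branch_B1": [60.0, 40.0],
      }

      def downstream_load(breaker):
          """Trace the flow of power: sum every load served below this breaker."""
          total = 0.0
          for item in feeders.get(breaker, []):
              total += downstream_load(item) if isinstance(item, str) else item
          return total

      # rank breakers by protected load and assign trip set points with a 25% margin
      ranked = sorted(feeders, key=downstream_load, reverse=True)
      trip_points = {b: 1.25 * downstream_load(b) for b in ranked}
      for rank, b in enumerate(ranked, start=1):
          print(rank, b, trip_points[b])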

  9. Self-Avoiding Walks over Adaptive Triangular Grids

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)

    1998-01-01

    In this paper, we present a new approach to constructing a "self-avoiding" walk through a triangular mesh. Unlike the popular approach of visiting mesh elements using space-filling curves, which is based on a geometric embedding, our approach is combinatorial in the sense that it uses the mesh connectivity only. We present an algorithm for constructing a self-avoiding walk which can be applied to any unstructured triangular mesh. The complexity of the algorithm is O(n log n), where n is the number of triangles in the mesh. We show that for hierarchical adaptive meshes, the algorithm can be easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the run-time partitioning and load balancing of adaptive unstructured grids.
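
    A brute-force sketch of the walk itself (not the O(n log n) combinatorial construction of the paper): consecutive steps of the walk may share an edge or just a vertex, and a backtracking search looks for an ordering that visits every triangle exactly once, using connectivity only. The small mesh and helper names are assumptions:

      # triangles of a small mesh, each a tuple of vertex ids
      triangles = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5), (1, 3, 5)]

      def adjacent(t1, t2):
          """Consecutive walk steps may share an edge or just a vertex."""
          return len(set(t1) & set(t2)) >= 1

      def self_avoiding_walk(tris):
          n = len(tris)
          def extend(path, used):
              if len(path) == n:
                  return path
              for i in range(n):
                  if i not in used and adjacent(tris[path[-1]], tris[i]):
                      result = extend(path + [i], used | {i})
                      if result:
                          return result
              return None
          for start in range(n):                     # try every starting triangle
              walk = extend([start], {start})
              if walk:
                  return [tris[i] for i in walk]
          return None

      print(self_avoiding_walk(triangles))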

  10. An adaptive mesh refinement algorithm for the discrete ordinates method

    SciTech Connect

    Jessee, J.P.; Fiveland, W.A.; Howell, L.H.; Colella, P.; Pember, R.B.

    1996-03-01

    The discrete ordinates form of the radiative transport equation (RTE) is spatially discretized and solved using an adaptive mesh refinement (AMR) algorithm. This technique permits local grid refinement to minimize the spatial discretization error of the RTE. An error estimator is applied to define regions for local grid refinement; overlapping refined grids are recursively placed in these regions; and the RTE is then solved over the entire domain. The procedure continues until the spatial discretization error has been reduced to a sufficient level. The following aspects of the algorithm are discussed: error estimation, grid generation, communication between refined levels, and solution sequencing. This initial formulation employs the step scheme, and is valid for absorbing and isotropically scattering media in two-dimensional enclosures. The utility of the algorithm is tested by comparing the convergence characteristics and accuracy to those of the standard single-grid algorithm for several benchmark cases. The AMR algorithm provides a reduction in memory requirements and maintains the convergence characteristics of the standard single-grid algorithm; however, the cases illustrate that the efficiency gains of the AMR algorithm will not be fully realized until three-dimensional geometries are considered.
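
    A generic flag-and-refine sketch of the outer loop described above, applied to a scalar field on a square: estimate the local error cell by cell, raise the refinement level where the (assumed first-order) estimate is still above tolerance, and stop at a maximum level. The gradient-based estimator and the level bookkeeping are stand-ins, not the RTE-specific estimator or the overlapping-grid machinery of the paper:

      import numpy as np

      def error_estimate(phi, h):
          """Crude estimator: local gradient magnitude times the cell size."""
          gy, gx = np.gradient(phi, h)
          return h * np.hypot(gx, gy)

      def amr_levels(f, h0=1.0 / 32, tol=0.02, max_level=3):
          """Assign a refinement level to every base-grid cell."""
          n = int(round(1.0 / h0))
          xc = (np.arange(n) + 0.5) * h0
          X, Y = np.meshgrid(xc, xc)
          err0 = error_estimate(f(X, Y), h0)
          level = np.zeros((n, n), dtype=int)
          for lvl in range(max_level):
              err = err0 * 0.5 ** lvl            # assume the estimate shrinks linearly with h
              flag = (level == lvl) & (err > tol)
              if not flag.any():
                  break
              level[flag] += 1                   # refined patches would be placed on these cells
          return level

      front = lambda X, Y: np.tanh(25.0 * (np.hypot(X - 0.5, Y - 0.5) - 0.3))
      print("cells per level:", np.bincount(amr_levels(front).ravel()))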

  11. Adaptive color image watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, Qiwei

    2008-03-01

    As a major method for protecting intellectual property rights, digital watermarking techniques have been widely studied and used. But due to the problems of data volume and color shift, watermarking techniques for color images have not been studied as widely, although color images are the principal content in multimedia usage. Considering the characteristics of the Human Visual System (HVS), an adaptive color image watermarking algorithm is proposed in this paper. In this algorithm, the HSI color model is adopted for both the host and the watermark image, the DCT coefficients of the intensity component (I) of the host color image are used for watermark data embedding, and while embedding the watermark the number of embedded bits is adaptively changed with the complexity of the host image. As to the watermark image, preprocessing is applied first, in which the watermark image is decomposed by a two-level wavelet transform. At the same time, to enhance the anti-attack ability and security of the watermarking algorithm, the watermark image is scrambled. According to their significance, some watermark bits are selected and some are deleted so as to form the actual embedded data. The experimental results show that the proposed watermarking algorithm is robust to several common attacks and has good perceptual quality at the same time.
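
    A much reduced sketch of the embedding step only: the intensity channel is processed in 8x8 blocks, a 2-D DCT is applied, and more watermark bits are embedded (by sign-coding mid-frequency coefficients) in blocks whose variance marks them as visually complex. The HSI conversion, wavelet preprocessing, and scrambling stages are omitted, and the block size, coefficient positions, and strength are assumptions:

      import numpy as np

      def dct_matrix(n=8):
          k = np.arange(n)[:, None]
          m = np.arange(n)[None, :]
          D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
          D[0, :] = np.sqrt(1.0 / n)                   # orthonormal DCT-II basis
          return D

      D = dct_matrix()
      POSITIONS = [(2, 3), (3, 2), (3, 3), (2, 4)]     # assumed mid-frequency coefficients

      def embed(intensity, bits, strength=6.0, var_threshold=40.0):
          """Embed watermark bits block by block; busy blocks carry more bits."""
          img = intensity.astype(float).copy()
          it = iter(bits)
          for r in range(0, img.shape[0] - 7, 8):
              for c in range(0, img.shape[1] - 7, 8):
                  block = img[r:r + 8, c:c + 8]
                  n_bits = len(POSITIONS) if block.var() > var_threshold else 1
                  coeffs = D @ block @ D.T             # forward 2-D DCT of the block
                  for pos in POSITIONS[:n_bits]:
                      bit = next(it, None)
                      if bit is None:
                          break
                      coeffs[pos] = strength if bit else -strength   # sign-coded bit
                  img[r:r + 8, c:c + 8] = D.T @ coeffs @ D           # inverse 2-D DCT
          return np.clip(img, 0, 255)

      host = np.random.default_rng(1).integers(0, 256, (64, 64))
      marked = embed(host, bits=[1, 0, 1, 1] * 32)
      print("max pixel change:", np.abs(marked - host).max())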

  12. Adaptive grid methods for RLV environment assessment and nozzle analysis

    NASA Technical Reports Server (NTRS)

    Thornburg, Hugh J.

    1996-01-01

    Rapid access to highly accurate data about complex configurations is needed for multi-disciplinary optimization and design. In order to efficiently meet these requirements a closer coupling between the analysis algorithms and the discretization process is needed. In some cases, such as free surface, temporally varying geometries, and fluid structure interaction, the need is unavoidable. In other cases the need is to rapidly generate and modify high quality grids. Techniques such as unstructured and/or solution-adaptive methods can be used to speed the grid generation process and to automatically cluster mesh points in regions of interest. Global features of the flow can be significantly affected by isolated regions of inadequately resolved flow. These regions may not exhibit high gradients and can be difficult to detect. Thus excessive resolution in certain regions does not necessarily increase the accuracy of the overall solution. Several approaches have been employed for both structured and unstructured grid adaption. The most widely used involve grid point redistribution, local grid point enrichment/derefinement or local modification of the actual flow solver. However, the success of any one of these methods ultimately depends on the feature detection algorithm used to determine solution domain regions which require a fine mesh for their accurate representation. Typically, weight functions are constructed to mimic the local truncation error and may require substantial user input. Most problems of engineering interest involve multi-block grids and widely disparate length scales. Hence, it is desirable that the adaptive grid feature detection algorithm be developed to recognize flow structures of different type as well as differing intensity, and adequately address scaling and normalization across blocks. These weight functions can then be used to construct blending functions for algebraic redistribution, interpolation functions for unstructured grid generation

  13. Parallel architectures for iterative methods on adaptive, block structured grids

    NASA Technical Reports Server (NTRS)

    Gannon, D.; Vanrosendale, J.

    1983-01-01

    A parallel computer architecture well suited to the solution of partial differential equations in complicated geometries is proposed. Algorithms for partial differential equations contain a great deal of parallelism. But this parallelism can be difficult to exploit, particularly on complex problems. One approach to extraction of this parallelism is the use of special purpose architectures tuned to a given problem class. The architecture proposed here is tuned to boundary value problems on complex domains. An adaptive elliptic algorithm which maps effectively onto the proposed architecture is considered in detail. Two levels of parallelism are exploited by the proposed architecture. First, by making use of the freedom one has in grid generation, one can construct grids which are locally regular, permitting a one to one mapping of grids to systolic style processor arrays, at least over small regions. All local parallelism can be extracted by this approach. Second, though there may be no regular global structure to the grids constructed, there will still be parallelism at this level. One approach to finding and exploiting this parallelism is to use an architecture having a number of processor clusters connected by a switching network. The use of such a network creates a highly flexible architecture which automatically configures to the problem being solved.

  14. Adaptive-grid methods for time-dependent partial differential equations

    SciTech Connect

    Hedstrom, G.W.; Rodrique, G.H.

    1981-01-01

    This paper contains a survey of recent developments of adaptive-grid algorithms for time-dependent partial differential equations. Two lines of research are discussed. One involves the automatic selection of moving grids to follow propagating waves. The other is based on stationary grids but uses local mesh refinement in both space and time. Advantages and disadvantages of both approaches are discussed. The development of adaptive-grid schemes shows promise of greatly increasing our ability to solve problems in several spatial dimensions.

  15. Shape optimization including finite element grid adaptation

    NASA Technical Reports Server (NTRS)

    Kikuchi, N.; Taylor, J. E.

    1984-01-01

    The prediction of optimal shape design for structures depends on having a sufficient level of precision in the computation of structural response. These requirements become critical in situations where the region to be designed includes stress concentrations or unilateral contact surfaces, for example. In the approach to shape optimization discussed here, a means to obtain grid adaptation is incorporated into the finite element procedures. This facility makes it possible to maintain a level of quality in the computational estimate of response that is surely adequate for the shape design problem.

  16. Fast transport simulation with an adaptive grid refinement.

    PubMed

    Haefner, Frieder; Boy, Siegrun

    2003-01-01

    One of the main difficulties in transport modeling and calibration is the extraordinarily long computing times necessary for simulation runs. Improved execution time is a prerequisite for calibration in transport modeling. In this paper we investigate the problem of code acceleration using an adaptive grid refinement, neglecting subdomains, and devising a method by which the Courant condition can be ignored while maintaining accurate solutions. Grid refinement is based on dividing selected cells into regular subcells and including the balance equations of subcells in the equation system. The connection of coarse and refined cells satisfies the mass balance with an interpolation scheme that is implicitly included in the equation system. The refined subdomain can move with the average transport velocity of the subdomain. Very small time steps are required on a fine or a refined grid, because of the combined effect of the Courant and Peclet conditions. Therefore, we have developed a special upwind technique in small grid cells with high velocities (velocity suppression). We have neglected grid subdomains with very small concentration gradients (zero suppression). The resulting software, MODCALIF, is a three-dimensional, modularly constructed FORTRAN code. For convenience, the package names used by the well-known MODFLOW and MT3D computer programs are adopted, and the same input file structure and format is used, but the program presented here is separate and independent. Also, MODCALIF includes algorithms for variable density modeling and model calibration. The method is tested by comparison with an analytical solution, and illustrated by means of a two-dimensional theoretical example and three-dimensional simulations of the variable-density Cape Cod and SALTPOOL experiments. Crossing from fine to coarse grid produces numerical dispersion when the whole subdomain of interest is refined; however, we show that accurate solutions can be obtained using a fraction of the
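
    The interplay of the Courant and Peclet conditions that motivates the velocity-suppression technique can be shown with a small calculation; the cell sizes, velocities, dispersion coefficient, and suppression threshold below are invented for illustration and are not taken from MODCALIF.

        import numpy as np

        dx = np.array([10.0, 10.0, 1.0, 1.0, 10.0])   # m; two refined cells in the middle
        v  = np.array([0.5, 0.6, 2.0, 2.5, 0.4])      # m/d; fast flow in the refined cells
        D  = 0.8                                        # m^2/d dispersion coefficient

        courant_dt = dx / v                 # per-cell step limit from Cr = v*dt/dx <= 1
        peclet     = v * dx / D             # grid Peclet number Pe = v*dx/D per cell
        dt_explicit = courant_dt.min()      # the small, fast cells dictate the global step

        # "Velocity suppression": cells whose Courant limit is far below the
        # desired transport step are flagged for the special upwind treatment.
        target_dt = 5.0                                 # d
        suppressed = courant_dt < 0.2 * target_dt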

  17. On Accuracy of Adaptive Grid Methods for Captured Shocks

    NASA Technical Reports Server (NTRS)

    Yamaleev, Nail K.; Carpenter, Mark H.

    2002-01-01

    The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
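
    For reference, grid redistribution is commonly driven by equidistributing a monitor (weight) function; the 1-D sketch below clusters points around a tanh profile standing in for a captured shock. The gradient-based monitor is a generic choice, not the specific adaptation criterion examined in the paper.

        import numpy as np

        def redistribute(x, u, n_new=None):
            """Move grid points so each interval carries an equal weight integral."""
            w = 1.0 + np.abs(np.gradient(u, x))          # monitor function, w >= 1
            W = np.concatenate(([0.0],
                                np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
            n_new = n_new or len(x)
            targets = np.linspace(0.0, W[-1], n_new)     # equal-weight stations
            return np.interp(targets, W, x)              # invert the cumulative weight

        x = np.linspace(-1.0, 1.0, 41)
        u = np.tanh(20.0 * x)                            # shock-like profile
        x_adapted = redistribute(x, u)                   # points cluster near x = 0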

  18. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical

  19. Adaptive numerical algorithms in space weather modeling

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-02-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit
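
    The block-adaptive idea can be illustrated by a small recursive sketch in which fixed-size blocks are split wherever a gradient-based criterion is exceeded. This shows the concept only; it has none of BATL's tree bookkeeping, load balancing, or message passing, and the test field and threshold are invented.

        import numpy as np

        def field(x, y):
            return np.tanh(10.0 * (x**2 + y**2 - 0.25))   # a circular "current sheet"

        def refine(x0, y0, size, level, max_level=4, threshold=2.0, cells=8):
            """Return leaf blocks as (x0, y0, size, level) tuples."""
            xs = np.linspace(x0, x0 + size, cells)
            ys = np.linspace(y0, y0 + size, cells)
            f = field(*np.meshgrid(xs, ys))
            grad = max(np.abs(np.diff(f, axis=0)).max(),
                       np.abs(np.diff(f, axis=1)).max()) / (size / cells)
            if level >= max_level or grad < threshold:
                return [(x0, y0, size, level)]            # keep this block as a leaf
            half = size / 2.0
            leaves = []
            for dx in (0.0, half):
                for dy in (0.0, half):
                    leaves += refine(x0 + dx, y0 + dy, half, level + 1,
                                     max_level, threshold, cells)
            return leaves

        blocks = refine(-1.0, -1.0, 2.0, level=0)   # finer blocks hug the sheet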

  20. Dynamic mesh adaption for triangular and tetrahedral grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger

    1993-01-01

    The following topics are discussed: requirements for dynamic mesh adaption; linked-list data structure; edge-based data structure; adaptive-grid data structure; three types of element subdivision; mesh refinement; mesh coarsening; additional constraints for coarsening; anisotropic error indicator for edges; unstructured-grid Euler solver; inviscid 3-D wing; and mesh quality for solution-adaptive grids. The discussion is presented in viewgraph form.

  1. Techniques for grid manipulation and adaptation. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.

    1992-01-01

    Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.

  2. A chimera grid scheme. [multiple overset body-conforming mesh system for finite difference adaptation to complex aircraft configurations

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Dougherty, F. C.; Benek, J. A.

    1983-01-01

    A mesh system composed of multiple overset body-conforming grids is described for adapting finite-difference procedures to complex aircraft configurations. In this so-called 'chimera mesh,' a major grid is generated about a main component of the configuration and overset minor grids are used to resolve all other features. Methods for connecting overset multiple grids and modifications of flow-simulation algorithms are discussed. Computational tests in two dimensions indicate that the use of multiple overset grids can simplify the task of grid generation without an adverse effect on flow-field algorithms and computer code complexity.

  3. A time-accurate multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Jespersen, D. C.

    1985-01-01

    A time-accurate multiple-grid algorithm is described. The algorithm allows one to take much larger time steps with an explicit time-marching scheme than would otherwise be the case. Sample calculations of a scalar advection equation and the Euler equations for an oscillating airfoil are shown. For the oscillating airfoil, time steps an order of magnitude larger than those of the single-grid algorithm are possible.

  4. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  5. Adaptive Multigrid Algorithm for the Lattice Wilson-Dirac Operator

    SciTech Connect

    Babich, R.; Brower, R. C.; Rebbi, C.; Brannick, J.; Clark, M. A.; Manteuffel, T. A.; McCormick, S. F.; Osborn, J. C.

    2010-11-12

    We present an adaptive multigrid solver for application to the non-Hermitian Wilson-Dirac system of QCD. The key components leading to the success of our proposed algorithm are the use of an adaptive projection onto coarse grids that preserves the near null space of the system matrix together with a simplified form of the correction based on the so-called γ5-Hermitian symmetry of the Dirac operator. We demonstrate that the algorithm nearly eliminates critical slowing down in the chiral limit and that it has weak dependence on the lattice volume.

  6. Adaptive multigrid algorithm for the lattice Wilson-Dirac operator.

    PubMed

    Babich, R; Brannick, J; Brower, R C; Clark, M A; Manteuffel, T A; McCormick, S F; Osborn, J C; Rebbi, C

    2010-11-12

    We present an adaptive multigrid solver for application to the non-Hermitian Wilson-Dirac system of QCD. The key components leading to the success of our proposed algorithm are the use of an adaptive projection onto coarse grids that preserves the near null space of the system matrix together with a simplified form of the correction based on the so-called γ5-Hermitian symmetry of the Dirac operator. We demonstrate that the algorithm nearly eliminates critical slowing down in the chiral limit and that it has weak dependence on the lattice volume. PMID:21231217
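
    A rough sketch of the adaptive projection idea on a toy problem: a near-null vector of a small 1-D Laplacian (standing in for the Wilson-Dirac matrix) is found by relaxation and then split over aggregates to form the columns of the prolongator, so the Galerkin coarse operator retains exactly the mode the smoother cannot remove. The sizes, smoother, and aggregation are illustrative assumptions, not the lattice-QCD implementation.

        import numpy as np

        n, agg = 16, 4
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

        # "Adaptively" compute a near-null vector by relaxing on A x = 0.
        x = np.random.default_rng(0).standard_normal(n)
        for _ in range(50):                      # weighted Jacobi smoothing
            x -= 0.6 * (A @ x) / np.diag(A)
        x /= np.linalg.norm(x)

        # Prolongator columns: the near-null vector restricted to each aggregate
        # of consecutive points, so the coarse space contains x exactly.
        P = np.zeros((n, n // agg))
        for j in range(n // agg):
            rows = slice(j * agg, (j + 1) * agg)
            P[rows, j] = x[rows] / np.linalg.norm(x[rows])

        A_coarse = P.T @ A @ P                   # Galerkin coarse-grid operator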

  7. Adaptively-refined overlapping grids for the numerical solution of systems of hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.

    1995-01-01

    Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods have been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.

  8. Moving and adaptive grid methods for compressible flows

    NASA Technical Reports Server (NTRS)

    Trepanier, Jean-Yves; Camarero, Ricardo

    1995-01-01

    This paper describes adaptive grid methods developed specifically for compressible flow computations. The basic flow solver is a finite-volume implementation of Roe's flux difference splitting scheme on arbitrarily moving unstructured triangular meshes. The grid adaptation is performed according to geometric and flow requirements. Some results are included to illustrate the potential of the methodology.

  9. Genetic-Annealing Algorithm in Grid Environment for Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Cruz-Chávez, Marco Antonio; Rodríguez-León, Abelardo; Ávila-Melgar, Erika Yesenia; Juárez-Pérez, Fredy; Cruz-Rosales, Martín H.; Rivera-López, Rafael

    This paper presents a parallel hybrid evolutionary algorithm executed in a grid environment. The algorithm executes local searches using simulated annealing within a Genetic Algorithm to solve the job shop scheduling problem. Experimental results of the algorithm obtained in the "Tarantula MiniGrid" are shown. Tarantula was implemented by linking two clusters from different geographic locations in Mexico (Morelos-Veracruz). The technique used to link the two clusters and configure the Tarantula MiniGrid is described. The effects of latency in communication between the two clusters are discussed. It is shown that the presented evolutionary algorithm is more efficient when working in Grid environments because it can carry out greater exploration and exploitation of the solution space.
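
    A minimal sketch of the hybrid structure, with simulated annealing used as the local search applied to GA offspring. A toy continuous objective replaces the job shop problem, and all parameters are placeholders unrelated to the Tarantula MiniGrid experiments.

        import math
        import random

        def objective(x):
            return sum(xi * xi for xi in x)              # minimize; optimum at 0

        def anneal(x, temp=1.0, cooling=0.9, steps=20):
            """Short simulated-annealing local search around one candidate."""
            state = list(x)
            for _ in range(steps):
                cand = [xi + random.gauss(0.0, 0.1) for xi in state]
                delta = objective(cand) - objective(state)
                if delta < 0.0 or random.random() < math.exp(-delta / temp):
                    state = cand
                temp *= cooling
            return state

        def genetic_annealing(pop_size=20, dims=5, generations=30):
            pop = [[random.uniform(-5, 5) for _ in range(dims)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=objective)
                parents = pop[:pop_size // 2]
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, dims)
                    children.append(anneal(a[:cut] + b[cut:]))   # crossover, then SA
                pop = parents + children
            return min(pop, key=objective)

        best = genetic_annealing()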

  10. A multigrid method for steady Euler equations on unstructured adaptive grids

    NASA Technical Reports Server (NTRS)

    Riemslagh, Kris; Dick, Erik

    1993-01-01

    A flux-difference splitting type algorithm is formulated for the steady Euler equations on unstructured grids. The polynomial flux-difference splitting technique is used. A vertex-centered finite volume method is employed on a triangular mesh. The multigrid method is in defect-correction form. A relaxation procedure with a first-order accurate inner iteration and a second-order correction performed only on the finest grid is used. A multi-stage Jacobi relaxation method is employed as a smoother. Since the grid is unstructured, a Jacobi type is chosen. The multi-staging is necessary to provide sufficient smoothing properties. The domain is discretized using a Delaunay triangular mesh generator. Three grids with a more or less uniform distribution of nodes but with different resolution are generated by successive refinement of the coarsest grid. Nodes of coarser grids appear in the finer grids. The multigrid method is started on these grids. As soon as the residual drops below a threshold value, an adaptive refinement is started. The solution on the adaptively refined grid is accelerated by a multigrid procedure. The coarser multigrid grids are generated by successive coarsening through point removal. The adaption cycle is repeated a few times. Results are given for the transonic flow over a NACA-0012 airfoil.

  11. Adaptive gridding strategies for Free-Lagrangian calculations of low speed flows

    NASA Astrophysics Data System (ADS)

    Fritts, Martin J.

    1988-01-01

    Free-Lagrangian methods have been employed in two-dimensional simulations of the long-term evolution of fluid instabilities for low speed flows. For example, calculations of the Rayleigh-Taylor instability have proceeded through the inversion and mixing of two fluid layers and simulations of droplet deformations have continued well beyond droplet shattering. The freedom to choose grid connections permits several important benefits for these calculations. 1. Mass conservation is enforced for all individual fluid elements. 2. Vertex movement is always Lagrangian. 3. Grid adjustments can be made automatically, with no user intervention. 4. Grid connections may be selected to ensure accuracy in the difference equations. 5. Adaptive gridding schemes are local, adding and deleting vertices as dictated by local accuracy estimators. 6. Any geometric configuration may be easily gridded, for any vertex distribution on the boundaries or in the interior of the fluids. This paper will review some two-dimensional results, with the emphasis on the adaptive gridding algorithms and the accuracy of the resultant difference templates for the mathematical operators. The relation of the triangular mesh to the Voronoi mesh will be explored, particularly for the case when they are dual meshes. Three-dimensional algorithms for adaptive gridding will be presented which are exact analogues to the two-dimensional case. Gridding efficiencies will be discussed for several schemes.

  12. Cosmos++: Relativistic Magnetohydrodynamics on Unstructured Grids with Local Adaptive Refinement

    SciTech Connect

    Anninos, P; Fragile, P C; Salmonson, J D

    2005-05-06

    A new code and methodology are introduced for solving the fully general relativistic magnetohydrodynamic (GRMHD) equations using time-explicit, finite-volume discretization. The code has options for solving the GRMHD equations using traditional artificial-viscosity (AV) or non-oscillatory central difference (NOCD) methods, or a new extended AV (eAV) scheme using artificial-viscosity together with a dual energy-flux-conserving formulation. The dual energy approach allows for accurate modeling of highly relativistic flows at boost factors well beyond what has been achieved to date by standard artificial viscosity methods. It provides the benefit of Godunov methods in capturing high Lorentz boosted flows but without complicated Riemann solvers, and the advantages of traditional artificial viscosity methods in their speed and flexibility. Additionally, the GRMHD equations are solved on an unstructured grid that supports local adaptive mesh refinement using a fully threaded oct-tree (in three dimensions) network to traverse the grid hierarchy across levels and immediate neighbors. A number of tests are presented to demonstrate robustness of the numerical algorithms and adaptive mesh framework over a wide spectrum of problems, boosts, and astrophysical applications, including relativistic shock tubes, shock collisions, magnetosonic shocks, Alfven wave propagation, blast waves, magnetized Bondi flow, and the magneto-rotational instability in Kerr black hole spacetimes.

  13. Stability of the DSI algorithm on a chevron grid

    SciTech Connect

    Brandon, S.T.; Rambo, P.W.

    1995-06-01

    The development of time domain electromagnetic solvers for nonorthogonal grids is an area of current research interest, stemming from the need to simulate complex geometries in a wide variety of applications. A notable example is the discrete surface integral (DSI) algorithm which solves the Maxwell curl equations in the time domain using a 3d, unstructured, mixed-polyhedral grid. Although this method is an extension of the time proven Yee algorithm, little is known about the numerical properties of the method when discretized on these more general grids. Dispersion relations for the DSI algorithm can be derived using 2d idealized grids, such as the skewed mesh analysis done by Ray and Rambo for both triangles and quadrilaterals. The present work applies the same techniques used for the skewed mesh analysis to another idealized, but nonorthogonal, 2d grid.

  14. Algorithms and data structures for adaptive multigrid elliptic solvers

    NASA Technical Reports Server (NTRS)

    Vanrosendale, J.

    1983-01-01

    Adaptive refinement and the complicated data structures required to support it are discussed. These data structures must be carefully tuned, especially in three dimensions where the time and storage requirements of algorithms are crucial. Another major issue is grid generation. The options available seem to be curvilinear fitted grids, constructed on iterative graphics systems, and unfitted Cartesian grids, which can be constructed automatically. On several grounds, including storage requirements, the second option seems preferable for the well-behaved scalar elliptic problems considered here. A variety of techniques for treatment of boundary conditions on such grids are reviewed. A new approach, which may overcome some of the difficulties encountered with previous approaches, is also presented.

  15. Dynamic multi DAG scheduling algorithm for optical grid environment

    NASA Astrophysics Data System (ADS)

    Zhu, Liying; Sun, Zhenyu; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng

    2007-11-01

    With the evolution of optical Grid technology, dynamic task scheduling can greatly improve the efficiency of the Grid environment under realistic conditions. We propose a Serve On Time (SOT) algorithm based on the idea of combining all the dynamic multiple tasks so that every task obtains the right to be served as soon as possible. We then introduce the basic First Come First Serve (FCFS) algorithm. A simulation shows the advantage of SOT.

  16. Extending the MODPATH Algorithm to Rectangular Unstructured Grids.

    PubMed

    Pollock, David W

    2016-01-01

    The recent release of MODFLOW-USG, which allows model grids to have irregular, unstructured connections, requires a modification of the particle-tracking algorithm used by MODPATH. This paper describes a modification of the semi-analytical particle-tracking algorithm used by MODPATH that allows it to be extended to rectangular-based unstructured grids by dividing grid cells with multi-cell face connections into sub-cells. The new method will be incorporated in the next version of MODPATH which is currently under development. PMID:25754305

  17. Extending the MODPATH algorithm to rectangular unstructured grids

    USGS Publications Warehouse

    Pollock, David W.

    2016-01-01

    The recent release of MODFLOW-USG, which allows model grids to have irregular, unstructured connections, requires a modification of the particle-tracking algorithm used by MODPATH. This paper describes a modification of the semi-analytical particle-tracking algorithm used by MODPATH that allows it to be extended to rectangular-based unstructured grids by dividing grid cells with multi-cell face connections into sub-cells. The new method will be incorporated in the next version of MODPATH which is currently under development.
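
    The semi-analytical step that underlies this kind of particle tracking can be written down for one coordinate direction of a single rectangular cell: with the face-normal velocity varying linearly across the cell, the time to reach the exit face has the closed form t = ln(v_exit / v_p) / A, where A is the velocity gradient. The sub-cell extension described above applies the same step inside each sub-cell of a divided face. The sketch below is an illustration under these assumptions, not the MODPATH source.

        import math

        def exit_time_1d(xp, x1, x2, v1, v2):
            """Time for a particle at xp to leave [x1, x2]; v1, v2 are face velocities."""
            A = (v2 - v1) / (x2 - x1)                  # linear velocity gradient
            vp = v1 + A * (xp - x1)                    # velocity at the particle
            if vp == 0.0:
                return math.inf                        # stagnant in this direction
            v_exit = v2 if vp > 0.0 else v1            # face the particle moves toward
            if abs(A) < 1e-12:                         # effectively uniform velocity
                x_exit = x2 if vp > 0.0 else x1
                return (x_exit - xp) / vp
            if v_exit * vp <= 0.0:                     # flow reverses before the face
                return math.inf
            return math.log(v_exit / vp) / A

        # Particle at x = 2.0 in a cell [0, 10] whose velocity grows from 1 to 3 m/d.
        t_exit = exit_time_1d(2.0, 0.0, 10.0, 1.0, 3.0)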

  18. Development of a dynamically adaptive grid method for multidimensional problems

    NASA Astrophysics Data System (ADS)

    Holcomb, J. E.; Hindman, R. G.

    1984-06-01

    An approach to solution adaptive grid generation for use with finite difference techniques, previously demonstrated on model problems in one space dimension, has been extended to multidimensional problems. The method is based on the popular elliptic steady grid generators, but is 'dynamically' adaptive in the sense that a grid is maintained at all times satisfying the steady grid law driven by a solution-dependent source term. Testing has been carried out on Burgers' equation in one and two space dimensions. Results appear encouraging both for inviscid wave propagation cases and viscous boundary layer cases, suggesting that application to practical flow problems is now possible. In the course of the work, obstacles relating to grid correction, smoothing of the solution, and elliptic equation solvers have been largely overcome. Concern remains, however, about grid skewness, boundary layer resolution and the need for implicit integration methods. Also, the method in 3-D is expected to be very demanding of computer resources.

  19. Workshop on adaptive grid methods for fusion plasmas

    SciTech Connect

    Wiley, J.C.

    1995-07-01

    The author describes a general 'hp' finite element method with adaptive grids. The code was based on the work of Oden et al. The term 'hp' refers to the method of spatial refinement (h), in conjunction with the order of polynomials used as a part of the finite element discretization (p). This finite element code seems to handle well the different mesh grid sizes occurring between abutted grids with different resolutions.

  20. QPSO-Based Adaptive DNA Computing Algorithm

    PubMed Central

    Karakose, Mehmet; Cigdem, Ugur

    2013-01-01

    DNA (deoxyribonucleic acid) computing, a new computation model that uses DNA molecules for information storage, has been increasingly used for optimization and data analysis in recent years. However, the DNA computing algorithm has some limitations in terms of convergence speed, adaptability, and effectiveness. In this paper, a new approach for the improvement of DNA computing is proposed. This new approach aims to perform the DNA computing algorithm with adaptive parameters towards the desired goal using quantum-behaved particle swarm optimization (QPSO). The contributions provided by the proposed QPSO-based adaptive DNA computing algorithm are as follows: (1) the parameters of population size, crossover rate, maximum number of operations, enzyme and virus mutation rate, and fitness function of the DNA computing algorithm are simultaneously tuned for the adaptive process, (2) the adaptive algorithm is performed using the QPSO algorithm for goal-driven progress, faster operation, and flexibility in data, and (3) a numerical realization of the DNA computing algorithm with the proposed approach is implemented for system identification. Two experiments with different systems were carried out to evaluate the performance of the proposed approach, with comparative results. Experimental results obtained with Matlab and FPGA demonstrate the ability to provide effective optimization, considerable convergence speed, and high accuracy compared to the standard DNA computing algorithm. PMID:23935409

  1. A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment

    NASA Technical Reports Server (NTRS)

    Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott

    1995-01-01

    The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaption of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Imbedded grid singular points are permitted, but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of and alignment with the solution. A mesh quality assessment criterion is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criterion is used to evaluate the grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criterion shows promise as a means of evaluating resolution.

  2. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2006-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.

  3. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2004-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reduction in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.

  4. Adaptive sensor fusion using genetic algorithms

    SciTech Connect

    Fitzgerald, D.S.; Adams, D.G.

    1994-08-01

    Past attempts at sensor fusion have used some form of Boolean logic to combine the sensor information. As an alternative, an adaptive "fuzzy" sensor fusion technique is described in this paper. This technique exploits the robust capabilities of fuzzy logic in the decision process as well as the optimization features of the genetic algorithm. This paper presents a brief background on fuzzy logic and genetic algorithms and how they are used in an online implementation of adaptive sensor fusion.

  5. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time-dependent linear problems in one and two space dimensions are presented.

  6. Volumetric Rendering of Geophysical Data on Adaptive Wavelet Grid

    NASA Astrophysics Data System (ADS)

    Vezolainen, A.; Erlebacher, G.; Vasilyev, O.; Yuen, D. A.

    2005-12-01

    Numerical modeling of geological phenomena frequently involves processes across a wide range of spatial and temporal scales. In the last several years, transport phenomena governed by the Navier-Stokes equations have been simulated in wavelet space using second generation wavelets [1], and most recently on fully adaptive meshes. Our objective is to visualize this time-dependent data using volume rendering while capitalizing on the available sparse data representation. We present a technique for volumetric ray casting of multi-scale datasets in wavelet space. Rather than working with the wavelets at the finest possible resolution, we perform a partial inverse wavelet transform as a preprocessing step to obtain scaling functions on a uniform grid at a user-prescribed resolution. As a result, a function in physical space is represented by a superposition of scaling functions on a coarse regular grid and wavelets on an adaptive mesh. An efficient and accurate ray casting algorithm is based just on these scaling functions. Additional detail is added during the ray tracing by taking an appropriate number of wavelets into account based on support overlap with the interpolation point, wavelet amplitude, and other characteristics, such as opacity accumulation (front to back ordering) and deviation from frontal viewing direction. Strategies for hardware implementation will be presented if available, inspired by the work in [2]. We will present error measures as a function of the number of scaling and wavelet functions used for interpolation. Data from mantle convection will be used to illustrate the method. [1] Vasilyev, O.V. and Bowman, C., Second Generation Wavelet Collocation Method for the Solution of Partial Differential Equations. J. Comp. Phys., 165, pp. 660-693, 2000. [2] Guthe, S., Wand, M., Gonser, J., and Straßer, W. Interactive rendering of large volume data sets. In Proceedings of the Conference on Visualization '02 (Boston, Massachusetts, October 27 - November

  7. Topology and grid adaption for high-speed flow computations

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid S.; Tiwari, Surendra N.

    1989-01-01

    This study investigates the effects of grid topology and grid adaptation on numerical solutions of the Navier-Stokes equations. In the first part of this study, a general procedure is presented for computation of high-speed flow over complex three-dimensional configurations. The flow field is simulated on the surface of a Butler wing in a uniform stream. Results are presented for Mach number 3.5 and a Reynolds number of 2,000,000. The O-type and H-type grids have been used for this study, and the results are compared with each other and with other theoretical and experimental results. The results demonstrate that while the H-type grid is suitable for the leading and trailing edges, a more accurate solution can be obtained for the middle part of the wing with an O-type grid. In the second part of this study, methods of grid adaption are reviewed and a method is developed with the capability of adapting to several variables. This method is based on a variational approach and is an algebraic method. Also, the method has been formulated in such a way that there is no need for any matrix inversion. This method is used in conjunction with the calculation of hypersonic flow over a blunt-nose body. A movie has been produced which shows simultaneously the transient behavior of the solution and the grid adaption.

  8. Application of a modified gradient least squares algorithm to an adaptive, actively quenched sound field system

    SciTech Connect

    Belyakov, A.A.; Mal`tsev, A.A.; Medvedev, S.Yu.

    1995-04-01

    A modified least squares algorithm, preventing the overflow of the discharge grid of weight coefficients of an adaptive transverse filter and guaranteeing stable system operation, is suggested for the tuning of an adaptive system of an actively quenched sound field. Experimental results are provided for an adaptive filter with a modified algorithm in a system of several harmonic components of an actively quenched sound field.

  9. Adaptivity and smart algorithms for fluid-structure interaction

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley

    1990-01-01

    This paper reviews new approaches in CFD which have the potential for significantly increasing current capabilities of modeling complex flow phenomena and of treating difficult problems in fluid-structure interaction. These approaches are based on the notions of adaptive methods and smart algorithms, which use instantaneous measures of the quality and other features of the numerical flowfields as a basis for making changes in the structure of the computational grid and of algorithms designed to function on the grid. The application of these new techniques to several problem classes are addressed, including problems with moving boundaries, fluid-structure interaction in high-speed turbine flows, flow in domains with receding boundaries, and related problems.

  10. Eddy-current NDE inverse problem with sparse grid algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Liming; Sabbagh, Harold A.; Sabbagh, Elias H.; Murphy, R. Kim; Bernacchi, William; Aldrin, John C.; Forsyth, David; Lindgren, Eric

    2016-02-01

    In model-based inverse problems, the unknown parameters (such as length, width, and depth) need to be estimated. When the unknown parameters are few, conventional mathematical methods are suitable, but as the number of unknown parameters increases, the computation becomes heavy. To reduce the burden of computation, the sparse grid algorithm was used in our work. As a result, we obtain a powerful interpolation method that requires significantly fewer support nodes than conventional interpolation on a full grid.

  11. Parallel grid generation algorithm for distributed memory computers

    NASA Technical Reports Server (NTRS)

    Moitra, Stuti; Moitra, Anutosh

    1994-01-01

    A parallel grid-generation algorithm and its implementation on the Intel iPSC/860 computer are described. The grid-generation scheme is based on an algebraic formulation of homotopic relations. Methods for utilizing the inherent parallelism of the grid-generation scheme are described, and the implementation of multiple levels of parallelism on multiple-instruction multiple-data machines is indicated. The algorithm is capable of providing near orthogonality and spacing control at solid boundaries while requiring minimal interprocessor communications. Results obtained on the Intel hypercube for a blended wing-body configuration are used to demonstrate the effectiveness of the algorithm. Fortran implementations based on the native programming model of the iPSC/860 computer and the Express system of software tools are reported. Computational gains in execution time speed-up ratios are given.

  12. Adaptive grid generation in a patient-specific cerebral aneurysm.

    PubMed

    Hodis, Simona; Kallmes, David F; Dragomir-Daescu, Dan

    2013-11-01

    Adapting grid density to flow behavior provides the advantage of increasing solution accuracy while decreasing the number of grid elements in the simulation domain, therefore reducing the computational time. One method for grid adaptation requires successive refinement of grid density based on observed solution behavior until the numerical errors between successive grids are negligible. However, such an approach is time consuming and it is often neglected by the researchers. We present a technique to calculate the grid size distribution of an adaptive grid for computational fluid dynamics (CFD) simulations in a complex cerebral aneurysm geometry based on the kinematic curvature and torsion calculated from the velocity field. The relationship between the kinematic characteristics of the flow and the element size of the adaptive grid leads to a mathematical equation to calculate the grid size in different regions of the flow. The adaptive grid density is obtained such that it captures the more complex details of the flow with locally smaller grid size, while less complex flow characteristics are calculated on locally larger grid size. The current study shows that kinematic curvature and torsion calculated from the velocity field in a cerebral aneurysm can be used to find the locations of complex flow where the computational grid needs to be refined in order to obtain an accurate solution. We found that the complexity of the flow can be adequately described by velocity and vorticity and the angle between the two vectors. For example, inside the aneurysm bleb, at the bifurcation, and at the major arterial turns the element size in the lumen needs to be less than 10% of the artery radius, while at the boundary layer, the element size should be smaller than 1% of the artery radius, for accurate results within a 0.5% relative approximation error. This technique of quantifying flow complexity and adaptive remeshing has the potential to improve results accuracy and reduce

  13. Adaptive grid generation in a patient-specific cerebral aneurysm

    NASA Astrophysics Data System (ADS)

    Hodis, Simona; Kallmes, David F.; Dragomir-Daescu, Dan

    2013-11-01

    Adapting grid density to flow behavior provides the advantage of increasing solution accuracy while decreasing the number of grid elements in the simulation domain, therefore reducing the computational time. One method for grid adaptation requires successive refinement of grid density based on observed solution behavior until the numerical errors between successive grids are negligible. However, such an approach is time consuming and it is often neglected by the researchers. We present a technique to calculate the grid size distribution of an adaptive grid for computational fluid dynamics (CFD) simulations in a complex cerebral aneurysm geometry based on the kinematic curvature and torsion calculated from the velocity field. The relationship between the kinematic characteristics of the flow and the element size of the adaptive grid leads to a mathematical equation to calculate the grid size in different regions of the flow. The adaptive grid density is obtained such that it captures the more complex details of the flow with locally smaller grid size, while less complex flow characteristics are calculated on locally larger grid size. The current study shows that kinematic curvature and torsion calculated from the velocity field in a cerebral aneurysm can be used to find the locations of complex flow where the computational grid needs to be refined in order to obtain an accurate solution. We found that the complexity of the flow can be adequately described by velocity and vorticity and the angle between the two vectors. For example, inside the aneurysm bleb, at the bifurcation, and at the major arterial turns the element size in the lumen needs to be less than 10% of the artery radius, while at the boundary layer, the element size should be smaller than 1% of the artery radius, for accurate results within a 0.5% relative approximation error. This technique of quantifying flow complexity and adaptive remeshing has the potential to improve results accuracy and reduce
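
    The kinematic quantity at the heart of the technique can be written directly from the velocity field: the streamline curvature is kappa = |v x a| / |v|^3, with a the acceleration. The sketch below combines that standard formula with an assumed rule that maps curvature to a target element size clipped between the 10% and 1% of artery radius figures quoted above; the mapping rule and sample values are illustrative, not the authors' equation.

        import numpy as np

        def curvature(v, a):
            """Streamline curvature from velocity v and acceleration a (3-vectors)."""
            speed = np.linalg.norm(v)
            return np.linalg.norm(np.cross(v, a)) / (speed**3 + 1e-30)

        def element_size(v, a, radius, coarse_frac=0.10, fine_frac=0.01):
            """Smaller elements where the flow bends sharply relative to the radius."""
            bend = min(curvature(v, a) * radius, 1.0)   # 0 = straight, 1 = very curved
            frac = coarse_frac - (coarse_frac - fine_frac) * bend
            return frac * radius

        v = np.array([0.30, 0.05, 0.0])                 # m/s, sampled velocity
        a = np.array([0.0, 2.0, 0.0])                   # m/s^2, centripetal acceleration
        h = element_size(v, a, radius=2.0e-3)           # artery radius of about 2 mm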

  14. Fully implicit adaptive mesh refinement algorithm for reduced MHD

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Pernice, Michael; Chacon, Luis

    2006-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002) B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)

  15. A new procedure for dynamic adaption of three-dimensional unstructured grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger

    1993-01-01

    A new procedure is presented for the simultaneous coarsening and refinement of three-dimensional unstructured tetrahedral meshes. This algorithm allows for localized grid adaption that is used to capture aerodynamic flow features such as vortices and shock waves in helicopter flowfield simulations. The mesh-adaption algorithm is implemented in the C programming language and uses a data structure consisting of a series of dynamically-allocated linked lists. These lists allow the mesh connectivity to be rapidly reconstructed when individual mesh points are added and/or deleted. The algorithm allows the mesh to change in an anisotropic manner in order to efficiently resolve directional flow features. The procedure has been successfully implemented on a single processor of a Cray Y-MP computer. Two sample cases are presented involving three-dimensional transonic flow. Computed results show good agreement with conventional structured-grid solutions for the Euler equations.

  16. Self-adaptive parameters in genetic algorithms

    NASA Astrophysics Data System (ADS)

    Pellerin, Eric; Pigeon, Luc; Delisle, Sylvain

    2004-04-01

    Genetic algorithms are powerful search algorithms that can be applied to a wide range of problems. Generally, parameter setting is accomplished prior to running a Genetic Algorithm (GA) and this setting remains unchanged during execution. The problem of interest to us here is the self-adaptive parameter adjustment of a GA. In this research, we propose an approach in which the control of a genetic algorithm's parameters can be encoded within the chromosome of each individual. The parameters' values are entirely dependent on the evolution mechanism and on the problem context. Our preliminary results show that a GA is able to learn and evaluate the quality of self-set parameters according to their degree of contribution to the resolution of the problem. These results are indicative of a promising approach to the development of GAs with self-adaptive parameter settings that do not require the user to pre-adjust parameters at the outset.
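
    A minimal sketch of encoding a control parameter within each chromosome, as described above: every individual carries its own mutation rate, which is itself mutated and therefore subject to selection. The toy objective, rate bounds, and update rule are illustrative assumptions.

        import random

        def fitness(genes):
            return -sum(g * g for g in genes)                 # maximize; optimum at 0

        def make_individual(dims=5):
            return {"genes": [random.uniform(-5, 5) for _ in range(dims)],
                    "mut_rate": random.uniform(0.01, 0.5)}    # self-adaptive parameter

        def mutate(parent):
            rate = parent["mut_rate"] * random.lognormvariate(0.0, 0.2)
            child = {"genes": list(parent["genes"]),
                     "mut_rate": min(0.9, max(0.005, rate))}  # the parameter evolves too
            for i in range(len(child["genes"])):
                if random.random() < child["mut_rate"]:
                    child["genes"][i] += random.gauss(0.0, 0.5)
            return child

        pop = [make_individual() for _ in range(30)]
        for _ in range(50):
            pop.sort(key=lambda ind: fitness(ind["genes"]), reverse=True)
            elite = pop[:15]
            pop = elite + [mutate(random.choice(elite)) for _ in range(15)]
        best = max(pop, key=lambda ind: fitness(ind["genes"]))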

  17. Scheduling in Sensor Grid Middleware for Telemedicine Using ABC Algorithm

    PubMed Central

    Vigneswari, T.; Mohamed, M. A. Maluk

    2014-01-01

    Advances in microelectromechanical systems (MEMS) and nanotechnology have enabled design of low power wireless sensor nodes capable of sensing different vital signs in our body. These nodes can communicate with each other to aggregate data and transmit vital parameters to a base station (BS). The data collected in the base station can be used to monitor health in real time. The patient wearing the sensors may be mobile, leading to aggregation of data from different BSs for processing. Processing real time data is compute-intensive and telemedicine facilities may not have appropriate hardware to process the real time data effectively. To overcome this, a sensor grid has been proposed in the literature wherein sensor data are integrated into the grid for processing. This work proposes a scheduling algorithm to efficiently process telemedicine data in the grid. The proposed algorithm uses the popular artificial bee colony (ABC) swarm intelligence algorithm for scheduling to overcome the NP-complete problem of grid scheduling. Results compared with other heuristic scheduling algorithms show the effectiveness of the proposed algorithm. PMID:25548557

  18. Methods for prismatic/tetrahedral grid generation and adaptation

    NASA Astrophysics Data System (ADS)

    Kallinderis, Y.

    1995-10-01

    The present work involves generation of hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is a method for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A High Speed Civil Transport (HSCT) type of aircraft geometry is considered. The generated hybrid grid required only 170 K tetrahedra instead of an estimated two million had a tetrahedral mesh been used in the prisms region as well. A solution adaptive scheme for viscous computations on hybrid grids is also presented. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples 3-D, isotropic division of tetrahedra and 2-D, directional division of prisms.

  19. Efficient Unstructured Grid Adaptation Methods for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Carter, Melissa B.; Deere, Karen A.; Waithe, Kenrick A.

    2008-01-01

    This paper examines the use of two grid adaptation methods to improve the accuracy of the near-to-mid field pressure signature prediction of supersonic aircraft computed using the USM3D unstructured grid flow solver. The first method (ADV) is an interactive adaptation process that uses grid movement rather than enrichment to more accurately resolve the expansion and compression waves. The second method (SSGRID) uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid with the pressure waves and reduce the cell count required to achieve an accurate signature prediction at a given distance from the vehicle. Both methods initially create negative volume cells that are repaired in a module in the ADV code. While both approaches provide significant improvements in the near field signature (< 3 body lengths) relative to a baseline grid without increasing the number of grid points, only the SSGRID approach allows the details of the signature to be accurately computed at mid-field distances (3-10 body lengths) for direct use with mid-field-to-ground boom propagation codes.

  20. Computation of Transient Nonlinear Ship Waves Using an Adaptive Algorithm

    NASA Astrophysics Data System (ADS)

    Çelebi, M. S.

    2000-04-01

    An indirect boundary integral method is used to solve transient nonlinear ship wave problems. A resulting mixed boundary value problem is solved at each time-step using a mixed Eulerian-Lagrangian time integration technique. Two dynamic node allocation techniques, which basically distribute nodes on an ever-changing body surface, are presented. Both two-sided hyperbolic tangent and variational grid generation algorithms are developed and compared on station curves. A ship hull form is generated in parametric space using a B-spline surface representation. Two-sided hyperbolic tangent and variational adaptive curve grid-generation methods are then applied on the hull station curves to generate effective node placement. The numerical algorithm in the first method uses two stretching parameters. In the second method, a conservative form of the parametric variational Euler-Lagrange equations is used to perform adaptive gridding on each station. The resulting unsymmetrical influence coefficient matrix is solved using both a restarted version of GMRES based on the modified Gram-Schmidt procedure and a line Jacobi method based on LU decomposition. The convergence rates of both matrix iteration techniques are improved with specially devised preconditioners. Numerical examples of node placements on typical hull cross-sections using both techniques are discussed, and fully nonlinear ship wave patterns and wave resistance computations are presented.
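
    As an aside on the first node-allocation technique, the snippet below sketches a symmetric two-sided hyperbolic tangent clustering of nodes on a unit parameter interval. The paper uses two stretching parameters to control each end independently; this illustration collapses them into a single parameter delta for brevity, so it is an assumption rather than the paper's exact mapping.

      import numpy as np

      def tanh_two_sided(n, delta=4.0):
          # Symmetric two-sided tanh clustering on [0, 1]: small delta gives a
          # nearly uniform distribution, large delta concentrates nodes at both ends.
          xi = np.linspace(0.0, 1.0, n)
          return 0.5 * (1.0 + np.tanh(delta * (xi - 0.5)) / np.tanh(0.5 * delta))

      # Example: 11 nodes clustered toward both ends of a station-curve parameter.
      print(np.round(tanh_two_sided(11, delta=5.0), 3))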

  1. Variational method for adaptive grid generation

    SciTech Connect

    Brackbill, J.U.

    1983-01-01

    A variational method for generating adaptive meshes is described. Functionals measuring smoothness, skewness, orientation, and the Jacobian are minimized to generate a mapping from a rectilinear domain in natural coordinates to an arbitrary domain in physical coordinates. From the mapping, a mesh is easily constructed. In using the method to adaptively zone computational problems, as few as one-third the number of mesh points are required in each coordinate direction compared with a uniformly zoned mesh.
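
    For orientation, representative forms of such variational measures are shown below in LaTeX notation for the 2-D case; the precise weights and normalizations used in the report may differ, so this is an illustration of the functional-minimization idea rather than its exact formulation. Here \xi and \eta denote the natural coordinates, J the Jacobian of the mapping, and w an adaptivity weight.

      % Illustrative 2-D functionals for variational grid generation.
      I_s = \int_V \left[ (\nabla \xi)^2 + (\nabla \eta)^2 \right] dV      % smoothness
      I_o = \int_V \left( \nabla \xi \cdot \nabla \eta \right)^2 dV        % orthogonality
      I_w = \int_V w(x,y)\, J \, dV                                        % weighted volume (adaptivity)
      I   = I_s + \lambda_o I_o + \lambda_w I_w                            % combined measure to be minimized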

  2. ICASE/LaRC Workshop on Adaptive Grid Methods

    NASA Technical Reports Server (NTRS)

    South, Jerry C., Jr. (Editor); Thomas, James L. (Editor); Vanrosendale, John (Editor)

    1995-01-01

    Solution-adaptive grid techniques are essential to the attainment of practical, user friendly, computational fluid dynamics (CFD) applications. In this three-day workshop, experts gathered together to describe state-of-the-art methods in solution-adaptive grid refinement, analysis, and implementation; to assess the current practice; and to discuss future needs and directions for research. This was accomplished through a series of invited and contributed papers. The workshop focused on a set of two-dimensional test cases designed by the organizers to aid in assessing the current state of development of adaptive grid technology. In addition, a panel of experts from universities, industry, and government research laboratories discussed their views of needs and future directions in this field.

  3. Adaptive Cuckoo Search Algorithm for Unconstrained Optimization

    PubMed Central

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases. PMID:25298971

  4. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases. PMID:25298971
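
    The two records above summarize the same work, and the exact adaptive step size rule is not given in the abstract. The sketch below is a generic Levy-flight cuckoo search in which the step scale simply shrinks with the iteration count, an illustrative stand-in for the paper's adaptation strategy; all parameter values and function names are assumptions.

      import math
      import random

      def levy_step(beta=1.5):
          # Mantegna's algorithm for a Levy-distributed step length.
          sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                   (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = random.gauss(0, sigma)
          v = random.gauss(0, 1)
          return u / abs(v) ** (1 / beta)

      def adaptive_cuckoo(f, dim, lo, hi, n_nests=15, iters=500, pa=0.25):
          nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
          fit = [f(x) for x in nests]
          best = min(range(n_nests), key=lambda i: fit[i])
          for t in range(1, iters + 1):
              # Illustrative adaptation: shrink the step scale as iterations proceed,
              # so early search explores widely and later search refines locally.
              alpha = 1.0 / math.sqrt(t)
              for i in range(n_nests):
                  trial = [min(hi, max(lo, x + alpha * levy_step() * (x - nests[best][j])))
                           for j, x in enumerate(nests[i])]
                  ft = f(trial)
                  if ft < fit[i]:
                      nests[i], fit[i] = trial, ft
              # Abandon a fraction pa of the worst nests and rebuild them randomly.
              worst = sorted(range(n_nests), key=lambda i: fit[i], reverse=True)[:int(pa * n_nests)]
              for i in worst:
                  nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
                  fit[i] = f(nests[i])
              best = min(range(n_nests), key=lambda i: fit[i])
          return nests[best], fit[best]

      # Example: minimize the sphere function in 5 dimensions.
      x, fx = adaptive_cuckoo(lambda v: sum(c * c for c in v), 5, -5.0, 5.0)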

  5. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method. PMID:19840985
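
    The recursive, subcycled advance described above can be pictured with a small driver like the one below. The Level class and its advance method are hypothetical stand-ins for the single-level update and the flux/data synchronization of the actual solver; only the control flow of the coarse-to-fine recursion is illustrated.

      from dataclasses import dataclass, field

      @dataclass
      class Level:
          name: str
          refinement_ratio: int = 2          # ratio to the next coarser level
          time: float = 0.0
          log: list = field(default_factory=list)

          def advance(self, dt):
              # Stand-in for the single-level update (hyperbolic + elliptic solves).
              self.time += dt
              self.log.append((self.name, round(self.time, 6), dt))

      def advance_hierarchy(levels, i, dt):
          # Advance level i by dt, then advance level i+1 by r subcycled steps of
          # dt/r so it reaches the same time, then synchronize (omitted here).
          levels[i].advance(dt)
          if i + 1 < len(levels):
              r = levels[i + 1].refinement_ratio
              for _ in range(r):
                  advance_hierarchy(levels, i + 1, dt / r)
              # levels[i].synchronize_with(levels[i + 1])  # flux and data sync step

      # Example: three levels with refinement ratio 2, one coarse step of dt = 1.0.
      hierarchy = [Level("coarse"), Level("medium"), Level("fine")]
      advance_hierarchy(hierarchy, 0, 1.0)
      print(hierarchy[2].log)   # the fine level takes four steps of 0.25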

  6. A Parallel Second-Order Adaptive Mesh Algorithm for Incompressible Flow in Porous Media

    SciTech Connect

    Pau, George Shu Heng; Almgren, Ann S.; Bell, John B.; Lijewski, Michael J.

    2008-04-01

    In this paper we present a second-order accurate adaptive algorithm for solving multiphase, incompressible flows in porous media. We assume a multiphase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting the total velocity, defined to be the sum of the phase velocities, is divergence-free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids, and the data at different levels are then synchronized. The single grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behavior of the method.

  7. A new algorithm for grid-based hydrologic analysis by incorporating stormwater infrastructure

    NASA Astrophysics Data System (ADS)

    Choi, Yosoon; Yi, Huiuk; Park, Hyeong-Dong

    2011-08-01

    We developed a new algorithm, the Adaptive Stormwater Infrastructure (ASI) algorithm, to incorporate ancillary data sets related to stormwater infrastructure into the grid-based hydrologic analysis. The algorithm simultaneously considers the effects of the surface stormwater collector network (e.g., diversions, roadside ditches, and canals) and underground stormwater conveyance systems (e.g., waterway tunnels, collector pipes, and culverts). The surface drainage flows controlled by the surface runoff collector network are superimposed onto the flow directions derived from a DEM. After examining the connections between inlets and outfalls in the underground stormwater conveyance system, the flow accumulation and delineation of watersheds are calculated based on recursive computations. Application of the algorithm to the Sangdong tailings dam in Korea revealed superior performance to that of a conventional D8 single-flow algorithm in terms of providing reasonable hydrologic information on watersheds with stormwater infrastructure.
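
    A minimal sketch of the superposition idea follows: terrain-derived D8 flow directions are overridden at inlet cells by infrastructure links, and flow accumulation is then computed recursively over the modified routing graph. This is an illustration of the concept, not the ASI implementation; cell identifiers, link pairs, and the accumulation measure are hypothetical.

      from functools import lru_cache

      def apply_stormwater_links(flow_dir, links):
          # flow_dir: dict cell -> downstream cell (None at outlets), from a D8 pass.
          # links: iterable of (inlet_cell, outfall_cell) pairs for pipes, culverts, ditches.
          # Infrastructure overrides the DEM-derived direction at each inlet cell.
          routed = dict(flow_dir)
          for inlet, outfall in links:
              routed[inlet] = outfall
          return routed

      def flow_accumulation(routed):
          # Number of cells draining through each cell (including itself),
          # computed recursively over the modified routing graph.
          upstream = {}
          for c, d in routed.items():
              upstream.setdefault(c, [])
              if d is not None:
                  upstream.setdefault(d, []).append(c)

          @lru_cache(maxsize=None)
          def acc(c):
              return 1 + sum(acc(u) for u in upstream[c])

          return {c: acc(c) for c in routed}

      # Example: chain of cells 0 -> 1 -> 2 -> 3 (outlet), with a pipe short-circuiting 1 -> 3.
      d8 = {0: 1, 1: 2, 2: 3, 3: None}
      routed = apply_stormwater_links(d8, [(1, 3)])
      print(flow_accumulation(routed))   # cell 2 now drains only itself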

  8. Adaptive hybrid prismatic-tetrahedral grids for viscous flows

    NASA Astrophysics Data System (ADS)

    Kallinderis, Yannis; Khawaja, Aly; McMorris, Harlan

    1995-03-01

    The paper presents generation of adaptive hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is an Automatic Receding Method (ARM) for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples division of tetrahedra, as well as 2-D directional division of prisms.

  9. Adaptive hybrid prismatic-tetrahedral grids for viscous flows

    NASA Technical Reports Server (NTRS)

    Kallinderis, Yannis; Khawaja, Aly; Mcmorris, Harlan

    1995-01-01

    The paper presents generation of adaptive hybrid prismatic/tetrahedral grids for complex 3-D geometries including multi-body domains. The prisms cover the region close to each body's surface, while tetrahedra are created elsewhere. Two developments are presented for hybrid grid generation around complex 3-D geometries. The first is a new octree/advancing front type of method for generation of the tetrahedra of the hybrid mesh. The main feature of the present advancing front tetrahedra generator that is different from previous such methods is that it does not require the creation of a background mesh by the user for the determination of the grid-spacing and stretching parameters. These are determined via an automatically generated octree. The second development is an Automatic Receding Method (ARM) for treating the narrow gaps in between different bodies in a multiply-connected domain. This method is applied to a two-element wing case. A hybrid grid adaptation scheme that employs both h-refinement and redistribution strategies is developed to provide optimum meshes for viscous flow computations. Grid refinement is a dual adaptation scheme that couples division of tetrahedra, as well as 2-D directional division of prisms.

  10. Genetic algorithms in adaptive fuzzy control

    NASA Technical Reports Server (NTRS)

    Karr, C. Lucas; Harper, Tony R.

    1992-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust fuzzy membership functions in response to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific computer-simulated chemical system is used to demonstrate the ideas presented.

  11. A novel hyperbolic grid generation procedure with inherent adaptive dissipation

    SciTech Connect

    Tai, C.H.; Yin, S.L.; Soong, C.Y.

    1995-01-01

    This paper reports a novel hyperbolic grid-generation procedure with inherent adaptive dissipation (HGAD), which is capable of alleviating the oscillation and overlapping of grid lines. In the present work, upwind differencing is applied to discretize the hyperbolic system and, thereby, to develop the adaptive dissipation coefficient. Complex configurations with the features of geometric discontinuity, exceptional concavity and convexity are used as the test cases for comparison of the present HGAD procedure with the conventional hyperbolic and elliptic ones. The results reveal that the HGAD method is superior in orthogonality and smoothness of the grid system. In addition, the computational efficiency of the flow solver may be improved by using the present HGAD procedure. 15 refs., 8 figs.

  12. A Grid Sourcing and Adaptation Study Using Unstructured Grids for Supersonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Deere, Karen A.

    2008-01-01

    NASA created the Supersonics Project as part of the NASA Fundamental Aeronautics Program to advance technology that will make supersonic flight over land viable. Computational flow solvers have lacked the ability to accurately predict sonic boom from the near to far field. The focus of this investigation was to establish gridding and adaptation techniques to predict near-to-mid-field (<10 body lengths below the aircraft) boom signatures at supersonic speeds using the USM3D unstructured grid flow solver. The study began by examining sources along the body of the aircraft, far-field sourcing, and far-field boundaries. The study then examined several techniques for grid adaptation. During the course of the study, volume sourcing was introduced as a new way to source grids using the grid generation code VGRID. Two different methods of using the volume sources were examined. The first method, based on manual insertion of the numerous volume sources, made great improvements in the prediction capability of USM3D for boom signatures. The second method (SSGRID), which uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid and pressure waves, showed similar results with a more automated approach. Due to SSGRID's results and ease of use, the rest of the study focused on developing a best practice using SSGRID. The best practice created by this study for boom predictions using the CFD code USM3D involved: 1) creating a small cylindrical outer boundary either 1 or 2 body lengths in diameter (depending on how far below the aircraft the boom prediction is required), 2) using a single volume source under the aircraft, and 3) using SSGRID to stretch and shear the grid to the desired length.

  13. The multidimensional Self-Adaptive Grid code, SAGE, version 2

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1995-01-01

    This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.

  14. Parallel Computation of Three-Dimensional Flows using Overlapping Grids with Adaptive Mesh Refinement

    SciTech Connect

    Henshaw, W; Schwendeman, D

    2007-11-15

    This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.
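
    As an illustration of the load-distribution step, the sketch below uses a simple greedy largest-first heuristic to assign per-grid work estimates to processors; the actual modified bin-packing algorithm in the paper may differ, and the block costs and processor counts here are placeholders.

      import heapq

      def partition_work(block_costs, n_procs):
          # Greedy largest-first assignment of grid blocks to processors: sort blocks
          # by estimated cost and repeatedly give the next block to the currently
          # least-loaded processor. Returns one list of block indices per processor.
          heap = [(0.0, p) for p in range(n_procs)]        # (load, processor id)
          heapq.heapify(heap)
          parts = [[] for _ in range(n_procs)]
          order = sorted(range(len(block_costs)), key=lambda b: block_costs[b], reverse=True)
          for b in order:
              load, p = heapq.heappop(heap)
              parts[p].append(b)
              heapq.heappush(heap, (load + block_costs[b], p))
          return parts

      # Example: 8 refinement blocks with uneven costs spread over 3 processors.
      print(partition_work([5, 3, 8, 1, 7, 2, 6, 4], 3))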

  15. The emergence of grid cells: Intelligent design or just adaptation?

    PubMed

    Kropff, Emilio; Treves, Alessandro

    2008-01-01

    Individual medial entorhinal cortex (mEC) 'grid' cells provide a representation of space that appears to be essentially invariant across environments, modulo simple transformations, in contrast to multiple, rapidly acquired hippocampal maps; it may therefore be established gradually during rodent development. We explore with a simplified mathematical model the possibility that the self-organization of multiple grid fields into a triangular grid pattern may be a single-cell process, driven by firing rate adaptation and slowly varying spatial inputs. A simple analytical derivation indicates that triangular grids are favored asymptotic states of the self-organizing system, and computer simulations confirm that such states are indeed reached during a model learning process, provided it is sufficiently slow to effectively average out fluctuations. The interactions among local ensembles of grid units serve solely to stabilize a common grid orientation. Spatial information, in the real mEC network, may be provided by any combination of feedforward cortical afferents and feedback hippocampal projections from place cells, since either input alone is likely sufficient to yield grid fields. PMID:19021261

  16. Efficient Load Balancing and Data Remapping for Adaptive Grid Calculations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak

    1997-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method to dynamically balance the processor workloads with a global view. This paper presents, for the first time, the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. Previous results indicated that mesh repartitioning and data remapping are potential bottlenecks for performing large-scale scientific calculations. We resolve these issues and demonstrate that our framework remains viable on a large number of processors.

  17. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2003-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  18. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2001-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  19. An adaptive guidance algorithm for aerospace vehicles

    NASA Astrophysics Data System (ADS)

    Bradt, J. E.; Hardtla, J. W.; Cramer, E. J.

    The specifications for proposed space transportation systems are placing more emphasis on developing reusable avionics subsystems which have the capability to respond to vehicle evolution and diverse missions, while at the same time reducing the cost of ground support for mission planning, contingency response, and verification and validation. An innovative approach to meeting these goals is to specify the guidance problem as a multi-point boundary value problem and solve that problem using modern control theory and nonlinear constrained optimization techniques. This approach has been implemented as Gamma Guidance (Hardtla, 1978) and has been successfully flown in the Inertial Upper Stage. The adaptive guidance algorithm described in this paper is a generalized formulation of Gamma Guidance. The basic equations are presented and then applied to four diverse aerospace vehicles to demonstrate the feasibility of using a reusable, explicit, adaptive guidance algorithm for diverse applications and vehicles.

  20. Optimal file-bundle caching algorithms for data-grids

    SciTech Connect

    Otoo, Ekow; Rotem, Doron; Romosan, Alexandru

    2004-04-24

    The file-bundle caching problem arises frequently in scientific applications where jobs need to process several files simultaneously. Consider a host system in a data-grid that maintains a staging disk or disk cache for servicing jobs of file requests. In this environment, a job can only be serviced if all its file requests are present in the disk cache. Files must be admitted into the cache or replaced in sets of file-bundles, i.e. the set of files that must all be processed simultaneously. In this paper we show that traditional caching algorithms based on file popularity measures do not perform well in such caching environments since they are not sensitive to the inter-file dependencies and may hold in the cache non-relevant combinations of files. We present and analyze a new caching algorithm for maximizing the throughput of jobs and minimizing data replacement costs to such data-grid hosts. We tested the new algorithm using a disk cache simulation model under a wide range of conditions such as file request distributions, relative cache size, file size distribution, etc. In all these cases, the results show significant improvement as compared with traditional caching algorithms.
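
    To make the bundle granularity concrete, the toy cache below admits and evicts whole file bundles and services a job only when its entire bundle fits; eviction is plain LRU over bundles, which is only a stand-in for the optimal replacement policy analyzed in the paper. Class and method names are hypothetical.

      class BundleCache:
          # Toy staging-disk cache that operates on whole file bundles.
          def __init__(self, capacity):
              self.capacity = capacity          # total size available on the staging disk
              self.bundles = {}                 # bundle id -> (files dict, last-use tick)
              self.tick = 0

          def _used(self):
              return sum(sum(files.values()) for files, _ in self.bundles.values())

          def request(self, bundle_id, files):
              # files: dict file name -> size. Returns True if the job can be serviced.
              self.tick += 1
              if bundle_id in self.bundles:                 # whole bundle already staged
                  self.bundles[bundle_id] = (files, self.tick)
                  return True
              need = sum(files.values())
              if need > self.capacity:
                  return False
              while self._used() + need > self.capacity:    # evict LRU bundles as units
                  victim = min(self.bundles, key=lambda b: self.bundles[b][1])
                  del self.bundles[victim]
              self.bundles[bundle_id] = (files, self.tick)
              return True

      # Example: two jobs sharing a 10-unit cache.
      cache = BundleCache(10)
      print(cache.request("job1", {"a": 4, "b": 3}))   # True, bundle staged
      print(cache.request("job2", {"c": 5, "d": 3}))   # True, job1's bundle evicted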

  1. Digital breast tomosynthesis reconstruction with an adaptive voxel grid

    NASA Astrophysics Data System (ADS)

    Claus, Bernhard; Chan, Heang-Ping

    2014-03-01

    In digital breast tomosynthesis (DBT), volume datasets are typically reconstructed with an anisotropic voxel size, where the in-plane voxel size usually reflects the detector pixel size (e.g., 0.1 mm), and the slice separation is generally between 0.5 and 1.0 mm. Increasing the tomographic angle is expected to give better 3D image quality; however, the slice spacing in the reconstruction should then be reduced, otherwise one may risk losing fine-scale image detail (e.g., small microcalcifications). An alternative strategy consists of reconstructing on an adaptive voxel grid, where the voxel height at each location is adapted based on the backprojected data at this location, with the goal of improving image quality for microcalcifications. In this paper we present an approach for generating such an adaptive voxel grid. This approach is based on an initial reconstruction step that is performed at a finer slice-spacing combined with a selection of an "optimal" height for each voxel. This initial step is followed by a (potentially iterative) reconstruction acting now on the adaptive grid only.

  2. Turbo LMS algorithm: supercharger meets adaptive filter

    NASA Astrophysics Data System (ADS)

    Meyer-Baese, Uwe

    2006-04-01

    Adaptive digital filters (ADFs) are, in general, the most sophisticated and resource-intensive components of modern digital signal processing (DSP) and communication systems. Improvements in the performance or complexity of ADFs can have a significant impact on the overall size, speed, and power properties of a complete system. The least mean square (LMS) algorithm is a popular algorithm for coefficient adaptation in ADFs because it is robust, easy to implement, and a close approximation to the optimal Wiener-Hopf least mean square solution. The main weakness of the LMS algorithm is its slow convergence, especially for non-Markov-1 colored noise input signals with high eigenvalue ratios (EVRs). Since its introduction in 1993, the turbo (supercharge) principle has been successfully applied in error correction decoding and has become very popular because it reaches the theoretical limits of communication capacity predicted 5 decades ago by Shannon. The turbo principle applied to LMS ADFs is analogous to the turbo principle used for error correction decoders: first, an "interleaver" is used to minimize cross-correlation; second, an iterative improvement that uses the same data set several times is implemented using the standard LMS algorithm. Results for 6 different interleaver schemes for EVR in the range 1-100 are presented.
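
    The sketch below shows a baseline LMS coefficient update together with the two ingredients the paper borrows from the turbo principle: shuffling (interleaving) the training indices to reduce correlation, and running several passes over the same data block. The interleaver here is a plain random permutation, not one of the six schemes evaluated in the paper, and all parameter values are illustrative.

      import numpy as np

      def lms(x, d, n_taps, mu=0.01, passes=1, interleave=False, seed=0):
          # Baseline LMS with optional index interleaving and multiple data passes.
          rng = np.random.default_rng(seed)
          w = np.zeros(n_taps)
          idx = np.arange(n_taps - 1, len(x))
          for _ in range(passes):
              order = rng.permutation(idx) if interleave else idx
              for n in order:
                  u = x[n - n_taps + 1:n + 1][::-1]      # most recent sample first
                  e = d[n] - w @ u                       # a priori error
                  w += mu * e * u                        # LMS coefficient update
          return w

      # Example: identify a 4-tap FIR channel from noisy observations.
      rng = np.random.default_rng(1)
      h = np.array([0.8, -0.4, 0.2, 0.1])
      x = rng.standard_normal(5000)
      d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
      w_hat = lms(x, d, n_taps=4, mu=0.05, passes=3, interleave=True)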

  3. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  4. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  5. New iterative gridding algorithm using conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Jiang, Xuguang; Thedens, Daniel

    2004-05-01

    Non-uniformly sampled data in MRI applications must be interpolated onto a regular Cartesian grid to perform fast image reconstruction using the FFT. The conventional method for this is gridding, which requires a density compensation function (DCF). The calculation of the DCF may be time-consuming, ambiguously defined, and not always reusable due to changes in k-space trajectories. A recently proposed reconstruction method that eliminates the requirement of a DCF is block uniform resampling (BURS), which uses singular value decomposition (SVD). However, the SVD is still computationally intensive. In this work, we present a modified BURS algorithm using the conjugate gradient method (CGM) in place of direct SVD calculation. Calculation of a block of grid point values in each iteration further reduces the computational load. The new method reduces the calculation complexity while maintaining a high-quality reconstruction result. For an n-by-n matrix, the time complexity per iteration is reduced from O(n^3) in SVD to O(n^2) in CGM. The time can be further reduced by stopping the CGM iteration early according to the norm of the residual vector. Using this method, the quality of the reconstructed image improves compared to regularized BURS. The reduced time complexity and improved reconstruction result make the new algorithm promising in dealing with large-sized images and 3D images.
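
    For reference, a plain conjugate gradient solver with early stopping on the residual norm is sketched below; in the modified BURS setting the system matrix would be the block normal-equation operator built from the gridding kernel, whereas here it is an arbitrary symmetric positive-definite matrix, so the example only illustrates the iteration being substituted for the SVD.

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-8, max_iter=None):
          # Plain conjugate gradient for a symmetric positive-definite system A x = b,
          # stopped early once the residual norm drops below tol.
          n = len(b)
          max_iter = max_iter or n
          x = np.zeros(n)
          r = b - A @ x
          p = r.copy()
          rs = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      # Example: solve a small SPD system.
      M = np.random.default_rng(0).standard_normal((50, 20))
      A = M.T @ M + 0.1 * np.eye(20)
      b = np.ones(20)
      x = conjugate_gradient(A, b)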

  6. A Cell-Centered Multigrid Algorithm for All Grid Sizes

    NASA Technical Reports Server (NTRS)

    Gjesdal, Thor

    1996-01-01

    Multigrid methods are optimal; that is, their rate of convergence is independent of the number of grid points, because they use a nested sequence of coarse grids to represent different scales of the solution. This nesting does, however, usually lead to certain restrictions of the permissible size of the discretised problem. In cases where the modeler is free to specify the whole problem, such constraints are of little importance because they can be taken into consideration from the outset. We consider the situation in which there are other competing constraints on the resolution. These restrictions may stem from the physical problem (e.g., if the discretised operator contains experimental data measured on a fixed grid) or from the need to avoid limitations set by the hardware. In this paper we discuss a modification to the cell-centered multigrid algorithm, so that it can be used for problems with any resolution. We discuss in particular a coarsening strategy and choice of intergrid transfer operators that can handle grids with either an even or odd number of cells. The method is described and applied to linear equations obtained by discretization of two- and three-dimensional second-order elliptic PDEs.

  7. Adaptive path planning: Algorithm and analysis

    SciTech Connect

    Chen, Pang C.

    1993-03-01

    Path planning has to be fast to support real-time robot programming. Unfortunately, current planning techniques are still too slow to be effective, as they often require several minutes, if not hours of computation. To alleviate this problem, we present a learning algorithm that uses past experience to enhance future performance. The algorithm relies on an existing path planner to provide solutions to difficult tasks. From these solutions, an evolving sparse network of useful subgoals is learned to support faster planning. The algorithm is suitable for both stationary and incrementally-changing environments. To analyze our algorithm, we use a previously developed stochastic model that quantifies experience utility. Using this model, we characterize the situations in which the adaptive planner is useful, and provide quantitative bounds to predict its behavior. The results are demonstrated with problems in manipulator planning. Our algorithm and analysis are sufficiently general that they may also be applied to task planning or other planning domains in which experience is useful.

  8. Unstructured Adaptive Grid Computations on an Array of SMPs

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Pramanick, Ira; Sohn, Andrew; Simon, Horst D.

    1996-01-01

    Dynamic load balancing is necessary for parallel adaptive methods to solve unsteady CFD problems on unstructured grids. We have presented such a dynamic load balancing framework, called JOVE, in this paper. Results on a four-POWERnode POWER CHALLENGEarray demonstrated that load balancing gives significant performance improvements over no load balancing for such adaptive computations. The parallel speedup of JOVE, implemented using MPI on the POWER CHALLENGEarray, was significant, being as high as 31 for 32 processors. An implementation of JOVE that exploits an 'array of SMPs' architecture was also studied; this hybrid JOVE outperformed flat JOVE by up to 28% on the meshes and adaption models tested. With large, realistic meshes and actual flow-solver and adaption phases incorporated into JOVE, hybrid JOVE can be expected to yield significant advantage over flat JOVE, especially as the number of processors is increased, thus demonstrating the scalability of an array of SMPs architecture.

  9. JPEG 2000 coding of image data over adaptive refinement grids

    NASA Astrophysics Data System (ADS)

    Gamito, Manuel N.; Dias, Miguel S.

    2003-06-01

    An extension of the JPEG 2000 standard is presented for non-conventional images resulting from an adaptive subdivision process. Samples, generated through adaptive subdivision, can have different sizes, depending on the amount of subdivision that was locally introduced in each region of the image. The subdivision principle allows each individual sample to be recursively subdivided into sets of four progressively smaller samples. Image datasets generated through adaptive subdivision find application in Computational Physics where simulations of natural processes are often performed over adaptive grids. It is also found that compression gains can be achieved for non-natural imagery, like text or graphics, if they first undergo an adaptive subdivision process. The representation of adaptive subdivision images is performed by first coding the subdivision structure into the JPEG 2000 bitstream, in a lossless manner, followed by the entropy coded and quantized transform coefficients. Due to the irregular distribution of sample sizes across the image, the wavelet transform must be applied on irregular image subsets that are nested across all the resolution levels. Using the conventional JPEG 2000 coding standard, adaptive subdivision images would first have to be upsampled to the smallest sample size in order to attain a uniform resolution. The proposed method for coding adaptive subdivision images is shown to perform better than conventional JPEG 2000 for medium to high bitrates.

  10. Large-Scale Liquid Simulation on Adaptive Hexahedral Grids.

    PubMed

    Ferstl, Florian; Westermann, Rudiger; Dick, Christian

    2014-10-01

    Regular grids are attractive for numerical fluid simulations because they give rise to efficient computational kernels. However, for simulating high resolution effects in complicated domains they are only of limited suitability due to memory constraints. In this paper we present a method for liquid simulation on an adaptive octree grid using a hexahedral finite element discretization, which reduces memory requirements by coarsening the elements in the interior of the liquid body. To impose free surface boundary conditions with second order accuracy, we incorporate a particular class of Nitsche methods enforcing the Dirichlet boundary conditions for the pressure in a variational sense. We then show how to construct a multigrid hierarchy from the adaptive octree grid, so that a time efficient geometric multigrid solver can be used. To improve solver convergence, we propose a special treatment of liquid boundaries via composite finite elements at coarser scales. We demonstrate the effectiveness of our method for liquid simulations that would require hundreds of millions of simulation elements in a non-adaptive regime. PMID:26357387

  11. Adaptive Harmonic Detection Control of Grid Interfaced Solar Photovoltaic Energy System with Power Quality Improvement

    NASA Astrophysics Data System (ADS)

    Singh, B.; Goel, S.

    2015-03-01

    This paper presents a grid interfaced solar photovoltaic (SPV) energy system with a novel adaptive harmonic detection control for power quality improvement at ac mains under balanced as well as unbalanced and distorted supply conditions. The SPV energy system is capable of compensation of linear and nonlinear loads with the objectives of load balancing, harmonics elimination, power factor correction and terminal voltage regulation. The proposed control increases the utilization of PV infrastructure and brings down its effective cost due to its other benefits. The adaptive harmonic detection control algorithm is used to detect the fundamental active power component of load currents which are subsequently used for reference source currents estimation. An instantaneous symmetrical component theory is used to obtain instantaneous positive sequence point of common coupling (PCC) voltages which are used to derive inphase and quadrature phase voltage templates. The proposed grid interfaced PV energy system is modelled and simulated in MATLAB Simulink and its performance is verified under various operating conditions.

  12. Adaptive Trajectory Prediction Algorithm for Climbing Flights

    NASA Technical Reports Server (NTRS)

    Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz

    2012-01-01

    Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.
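
    One way to picture the weight adjustment is through the energy-rate balance (T - D) V = W hdot + (W/g) V Vdot, which yields an observation-based weight estimate from track-derived climb rate and acceleration; the modeled weight can then be nudged toward that estimate. The snippet below is an illustrative first-order blend under that standard relation, not the paper's exact adaptation law; thrust and drag are assumed to come from the predictor's performance model, and all numerical values are placeholders.

      G = 9.81  # gravitational acceleration, m/s^2

      def implied_weight(thrust, drag, tas, climb_rate, accel):
          # From (T - D) * V = W * hdot + (W / g) * V * Vdot, solve for the weight W.
          # thrust, drag in N; tas in m/s; climb_rate in m/s; accel in m/s^2.
          return (thrust - drag) * tas / (climb_rate + tas * accel / G)

      def adapt_weight(current_w, observed_w, gain=0.3):
          # Blend the modeled weight toward each new observation-based estimate.
          return current_w + gain * (observed_w - current_w)

      # Example with placeholder values: 60 kN thrust, 30 kN drag, 150 m/s TAS,
      # 8 m/s climb rate, 0.3 m/s^2 acceleration, 40 t modeled mass.
      w_obs = implied_weight(60e3, 30e3, 150.0, 8.0, 0.3)
      w_new = adapt_weight(40e3 * G, w_obs, gain=0.3)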

  13. Sort-Mid tasks scheduling algorithm in grid computing.

    PubMed

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms for achieving optimality, and these have shown good performance for task scheduling with regard to resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to obtain the average value of each task's sorted list of completion times. Then, the maximum average is identified. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan. PMID:26644937
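
    A direct transcription of the steps described above is sketched below: each unassigned task is scored by the average of its sorted machine completion times, the highest-scoring task is placed on the machine giving its minimum completion time, and the loop repeats. Where the abstract is ambiguous (for example, whether the mid value or the mean of the sorted list is used), the sketch follows the wording literally, so details of the published Sort-Mid heuristic may differ.

      def sort_mid_schedule(exec_time):
          # exec_time[t][m]: execution time of task t on machine m.
          n_tasks, n_machines = len(exec_time), len(exec_time[0])
          ready = [0.0] * n_machines                 # machine available times
          schedule = {}
          remaining = set(range(n_tasks))
          while remaining:
              def completion(t):
                  return sorted(ready[m] + exec_time[t][m] for m in range(n_machines))
              # Task priority: average of its sorted completion-time list.
              t_star = max(remaining, key=lambda t: sum(completion(t)) / n_machines)
              m_star = min(range(n_machines), key=lambda m: ready[m] + exec_time[t_star][m])
              ready[m_star] += exec_time[t_star][m_star]
              schedule[t_star] = m_star
              remaining.remove(t_star)
          makespan = max(ready)
          return schedule, makespan

      # Example: 5 tasks on 3 heterogeneous machines.
      ET = [[4, 6, 9], [2, 3, 5], [8, 7, 6], [3, 5, 4], [6, 2, 7]]
      print(sort_mid_schedule(ET))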

  14. OMEGA: The operational multiscale environment model with grid adaptivity

    SciTech Connect

    Bacon, D.P.

    1995-07-01

    This review talk describes the OMEGA code, used for weather simulation and the modeling of aerosol transport through the atmosphere. OMEGA employs a 3D mesh of wedge-shaped elements (triangles when viewed from above) that adapt with time. Because wedges are laid out in layers of triangular elements, the scheme can utilize structured storage and differencing techniques along the elevation coordinate, and is thus a hybrid of structured and unstructured methods. The utility of adaptive gridding in this model, near geographic features such as coastlines, where material properties change discontinuously, is illustrated. Temporal adaptivity was used additionally to track moving internal fronts, such as clouds of aerosol contaminants. The author also discusses limitations specific to this problem, including manipulation of huge data bases and fixed turn-around times. In practice, the latter requires a carefully tuned optimization between accuracy and computation speed.

  15. Load Balancing Unstructured Adaptive Grids for CFD Problems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid

    1996-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.

  16. The multidimensional self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1992-01-01

    This report describes the multidimensional self-adaptive grid code SAGE. A two-dimensional version of this code was described in an earlier report by the authors. The formulation of the multidimensional version is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code and provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simplified input options make this a flexible and user-friendly code. The new SAGE code can accommodate both two-dimensional and three-dimensional flow problems.

  17. Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis

    SciTech Connect

    Perkó, Zoltán; Gilli, Luca; Lathouwers, Danny; Kloosterman, Jan Leen

    2014-03-01

    The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time it retains a similar accuracy as the original method. More importantly the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to further reduction in computational time since the high order grids necessary for accurately estimating the near zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistent good performance both

  18. A Solution Adaptive Technique Using Tetrahedral Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2000-01-01

    An adaptive unstructured grid refinement technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.

  19. Synaptic dynamics: linear model and adaptation algorithm.

    PubMed

    Yousefi, Ali; Dibazar, Alireza A; Berger, Theodore W

    2014-08-01

    In this research, temporal processing in brain neural circuitries is addressed by a dynamic model of synaptic connections in which the synapse model accounts for both pre- and post-synaptic processes determining its temporal dynamics and strength. Neurons, which are excited by the post-synaptic potentials of hundreds of synapses, build the computational engine capable of processing dynamic neural stimuli. Temporal dynamics in neural models with dynamic synapses will be analyzed, and learning algorithms for synaptic adaptation of neural networks with hundreds of synaptic connections are proposed. The paper starts by introducing a linear approximate model for the temporal dynamics of synaptic transmission. The proposed linear model substantially simplifies the analysis and training of spiking neural networks. Furthermore, it is capable of replicating the synaptic response of the non-linear facilitation-depression model with an accuracy better than 92.5%. In the second part of the paper, a supervised spike-in-spike-out learning rule for synaptic adaptation in dynamic synapse neural networks (DSNN) is proposed. The proposed learning rule is a biologically plausible process, and it is capable of simultaneously adjusting both pre- and post-synaptic components of individual synapses. The last section of the paper starts by presenting a rigorous analysis of the learning algorithm in a system identification task with hundreds of synaptic connections which confirms the learning algorithm's accuracy, repeatability and scalability. The DSNN is utilized to predict the spiking activity of cortical neurons and pattern recognition tasks. The DSNN model is demonstrated to be a generative model capable of producing different cortical neuron spiking patterns and CA1 Pyramidal neurons recordings. A single-layer DSNN classifier on a benchmark pattern recognition task outperforms a 2-Layer Neural Network and GMM classifiers while having fewer free parameters and

  20. Parallel S_n Sweeps on Unstructured Grids: Algorithms for Prioritization, Grid Partitioning, and Cycle Detection

    SciTech Connect

    Plimpton, Steven J.; Hendrickson, Bruce; Burns, Shawn P.; McLendon, William III; Rauchwerger, Lawrence

    2005-07-15

    The method of discrete ordinates is commonly used to solve the Boltzmann transport equation. The solution in each ordinate direction is most efficiently computed by sweeping the radiation flux across the computational grid. For unstructured grids this poses many challenges, particularly when implemented on distributed-memory parallel machines where the grid geometry is spread across processors. We present several algorithms relevant to this approach: (a) an asynchronous message-passing algorithm that performs sweeps simultaneously in multiple ordinate directions, (b) a simple geometric heuristic to prioritize the computational tasks that a processor works on, (c) a partitioning algorithm that creates columnar-style decompositions for unstructured grids, and (d) an algorithm for detecting and eliminating cycles that sometimes exist in unstructured grids and can prevent sweeps from successfully completing. Algorithms (a) and (d) are fully parallel; algorithms (b) and (c) can be used in conjunction with (a) to achieve higher parallel efficiencies. We describe our message-passing implementations of these algorithms within a radiation transport package. Performance and scalability results are given for unstructured grids with up to 3 million elements (500 million unknowns) running on thousands of processors of Sandia National Laboratories' Intel Tflops machine and DEC-Alpha CPlant cluster.

  1. CHARACTERIZATION OF DISCONTINUITIES IN HIGH-DIMENSIONAL STOCHASTIC PROBLEMS ON ADAPTIVE SPARSE GRIDS

    SciTech Connect

    Jakeman, John D; Archibald, Richard K; Xiu, Dongbin

    2011-01-01

    In this paper we present a set of efficient algorithms for detection and identification of discontinuities in high dimensional space. The method is based on an extension of polynomial annihilation for edge detection in low dimensions. Compared to the earlier work, the present method offers significant improvements for high dimensional problems. The core of the algorithms relies on adaptive refinement of sparse grids. It is demonstrated that in the commonly encountered cases where a discontinuity resides on a small subset of the dimensions, the present method becomes optimal, in the sense that the total number of points required for function evaluations depends linearly on the dimensionality of the space. The details of the algorithms will be presented and various numerical examples are utilized to demonstrate the efficacy of the method.

  2. Adaptive grid artifact reduction in the frequency domain with spatial properties for x-ray images

    NASA Astrophysics Data System (ADS)

    Kim, Dong Sik; Lee, Sanggyun

    2012-03-01

    By applying band-rejection filters (BRFs) in the frequency domain, we can efficiently reduce the grid artifacts, which are caused by using the antiscatter grid in obtaining x-ray digital images. However, if the frequency component of the grid artifact is relatively close to that of the object, then simply applying a BRF may seriously distort the object and cause ringing artifacts. Since the ringing artifacts are quite dependent on the shape of the object to be recovered in the spatial domain, the spatial property of the x-ray image should be considered in applying BRFs. In this paper, we propose an adaptive filtering scheme that can incorporate such differing spatial properties. In the spatial domain, we compare several approaches, such as the magnitude-, edge-, and frequency-modulation (FM) model-based algorithms, to detect the ringing artifact or the grid artifact component. In order to perform a robust detection whether the ringing artifact is strong or not, we employ the FM model for the extracted signal, which corresponds to a specific grid artifact. Detection of the position of the ringing artifact is then conducted based on the slope detection algorithm, which is commonly used as an FM discriminator in the communication area. However, the detected position of the ringing artifact is not accurate. Hence, in order to obtain an accurate detection result, we combine the edge-based approach with the FM model approach. Numerical results for real x-ray images show that applying BRFs in the frequency domain in conjunction with the spatial property of the ringing artifact can successfully remove the grid artifact while distorting the object less.
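
    For context, the plain frequency-domain baseline that the adaptive scheme improves upon can be sketched as a 1-D notch (band-rejection) filter applied along the direction of the grid lines; the artifact frequency f0, bandwidth, and synthetic test image below are illustrative and not taken from the paper.

      import numpy as np

      def notch_rows(image, f0, bw=0.01):
          # 1-D band-rejection filter along image columns via the FFT; f0 is the
          # grid-artifact frequency in cycles/pixel (0 < f0 < 0.5) and bw is the
          # half-width of the rejected band.
          F = np.fft.rfft(image, axis=0)
          freqs = np.fft.rfftfreq(image.shape[0])
          mask = np.ones_like(freqs)
          mask[np.abs(freqs - f0) < bw] = 0.0
          return np.fft.irfft(F * mask[:, None], n=image.shape[0], axis=0)

      # Example: remove a synthetic horizontal grid pattern at 0.2 cycles/pixel.
      rows = np.arange(256)
      img = np.random.default_rng(0).normal(100, 5, (256, 256)) \
            + 10 * np.sin(2 * np.pi * 0.2 * rows)[:, None]
      clean = notch_rows(img, f0=0.2)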

  3. An Adaptive Path Planning Algorithm for Cooperating Unmanned Air Vehicles

    SciTech Connect

    Cunningham, C.T.; Roberts, R.S.

    2000-09-12

    An adaptive path planning algorithm is presented for cooperating Unmanned Air Vehicles (UAVs) that are used to deploy and operate land-based sensor networks. The algorithm employs a global cost function to generate paths for the UAVs, and adapts the paths to exceptions that might occur. Examples are provided of the paths and adaptation.

  4. Adaptive path planning algorithm for cooperating unmanned air vehicles

    SciTech Connect

    Cunningham, C T; Roberts, R S

    2001-02-08

    An adaptive path planning algorithm is presented for cooperating Unmanned Air Vehicles (UAVs) that are used to deploy and operate land-based sensor networks. The algorithm employs a global cost function to generate paths for the UAVs, and adapts the paths to exceptions that might occur. Examples are provided of the paths and adaptation.

  5. Automation of assertion testing - Grid and adaptive techniques

    NASA Technical Reports Server (NTRS)

    Andrews, D. M.

    1985-01-01

    Assertions can be used to automate the process of testing software. Two methods for automating the generation of input test data are described in this paper. One method selects the input values of variables at regular intervals in a 'grid'. The other, adaptive testing, uses assertion violations as a measure of errors detected and generates new test cases based on test results. The important features of assertion testing are that: it can be used throughout the entire testing cycle; it provides automatic notification of error conditions; and it can be used with automatic input generation techniques which eliminate the subjectivity in choosing test data.

  6. Modeling scramjet combustor flowfields with a grid adaptation scheme

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, R.; Singh, D. J.

    1994-01-01

    The accurate description of flow features associated with the normal injection of fuel into supersonic primary flows is essential in the design of efficient engines for hypervelocity aerospace vehicles. The flow features in such injections are complex, with multiple interactions between shocks and between shocks and boundary layers. Numerical studies of perpendicular sonic N2 injection and mixing in a Mach 3.8 scramjet combustor environment are discussed. A dynamic grid adaptation procedure based on the equilibration of a spring-mass system is employed to enhance the description of the complicated flow features. Numerical results are compared with experimental measurements and indicate that the adaptation procedure enhances the capability of the modeling procedure to describe the flow features associated with scramjet combustor components.
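
    To make the spring analogy concrete, here is a hedged one-dimensional sketch (not the paper's 2-D/3-D procedure): each grid interval acts as a spring whose stiffness grows with the local solution gradient, and relaxing the system toward equilibrium clusters nodes inside the steep layer.

    ```python
    import numpy as np

    def spring_adapt_1d(x_orig, u_orig, n_iter=200, alpha=5.0):
        """Jacobi-style relaxation of a 1-D spring system: stiffer springs in
        high-gradient intervals pull interior nodes into regions of rapid change."""
        x = x_orig.copy()
        for _ in range(n_iter):
            u = np.interp(x, x_orig, u_orig)              # solution on current grid
            du = np.abs(np.diff(u)) / np.maximum(np.diff(x), 1e-12)
            k = 1.0 + alpha * du                          # one spring per interval
            # each interior node moves to the equilibrium of its two springs
            x[1:-1] = (k[:-1] * x[:-2] + k[1:] * x[2:]) / (k[:-1] + k[1:])
            x.sort()                                      # guard against crossings
        return x

    x0 = np.linspace(0.0, 1.0, 41)
    u0 = np.tanh(25.0 * (x0 - 0.5))                       # sharp layer at x = 0.5
    x_adapted = spring_adapt_1d(x0, u0)                   # nodes cluster near 0.5
    ```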

  7. A multilevel Cartesian non-uniform grid time domain algorithm

    SciTech Connect

    Meng Jun; Boag, Amir; Lomakin, Vitaliy; Michielssen, Eric

    2010-11-01

    A multilevel Cartesian non-uniform grid time domain algorithm (CNGTDA) is introduced to rapidly compute transient wave fields radiated by time dependent three-dimensional source constellations. CNGTDA leverages the observation that transient wave fields generated by temporally bandlimited and spatially confined source constellations can be recovered via interpolation from appropriately delay- and amplitude-compensated field samples. This property is used in conjunction with a multilevel scheme, in which the computational domain is hierarchically decomposed into subdomains with sparse non-uniform grids used to obtain the fields. For both surface and volumetric source distributions, the computational cost of CNGTDA to compute the transient field at N_s observation locations from N_s collocated sources for N_t discrete time instances scales as O(N_t N_s log N_s) and O(N_t N_s log^2 N_s) in the low- and high-frequency regimes, respectively. Coupled with marching-on-in-time (MOT) time domain integral equations, CNGTDA can facilitate efficient analysis of large scale time domain electromagnetic and acoustic problems.

  8. An efficient second-order accurate and continuous interpolation for block-adaptive grids

    NASA Astrophysics Data System (ADS)

    Borovikov, Dmitry; Sokolov, Igor V.; Tóth, Gábor

    2015-09-01

    In this paper we present a second-order and continuous interpolation algorithm for cell-centered adaptive-mesh-refinement (AMR) grids. The continuity requirement poses a non-trivial problem at resolution changes. We develop a classification of the resolution changes, which allows us to employ efficient and simple linear interpolation in the majority of the computational domain. The algorithm is well suited for massively parallel computations. Our interpolation algorithm allows extracting jump-free interpolated data distributions along lines and surfaces within the computational domain. This capability is important for various applications, including kinetic particle tracking in three-dimensional vector fields, visualization (i.e. surface extraction), and extracting variables along one-dimensional curves such as field lines, streamlines, and satellite trajectories. Particular examples are models for the acceleration of solar energetic particles (SEPs) along magnetic field lines. As such models are sensitive to sharp gradients and discontinuities, the capability to interpolate data from the AMR grid and pass it to the SEP model without numerically producing false gradients becomes crucial. We provide a complete description of the algorithm and make the code publicly available as a Fortran 90 library.

  9. An adaptive replacement algorithm for paged-memory computer systems.

    NASA Technical Reports Server (NTRS)

    Thorington, J. M., Jr.; Irwin, J. D.

    1972-01-01

    A general class of adaptive replacement schemes for use in paged memories is developed. One such algorithm, called SIM, is simulated using a probability model that generates memory traces, and the results of the simulation of this adaptive scheme are compared with those obtained using the best nonlookahead algorithms. A technique for implementing this type of adaptive replacement algorithm with state of the art digital hardware is also presented.

  10. Adaptive sparse grid expansions of the vibrational Hamiltonian

    NASA Astrophysics Data System (ADS)

    Strobusch, D.; Scheurer, Ch.

    2014-02-01

    The vibrational Hamiltonian involves two high dimensional operators, the kinetic energy operator (KEO), and the potential energy surface (PES). Both must be approximated for systems involving more than a few atoms. Adaptive approximation schemes are not only superior to truncated Taylor or many-body expansions (MBE), they also allow for error estimates, and thus operators of predefined precision. To this end, modified sparse grids (SG) are developed that can be combined with adaptive MBEs. This MBE/SG hybrid approach yields a unified, fully adaptive representation of the KEO and the PES. Refinement criteria, based on the vibrational self-consistent field (VSCF) and vibrational configuration interaction (VCI) methods, are presented. The combination of the adaptive MBE/SG approach and the VSCF plus VCI methods yields a black box like procedure to compute accurate vibrational spectra. This is demonstrated on a test set of molecules, comprising water, formaldehyde, methanimine, and ethylene. The test set is first employed to prove convergence for semi-empirical PM3-PESs and subsequently to compute accurate vibrational spectra from CCSD(T)-PESs that agree well with experimental values.

  11. Adaptive sparse grid expansions of the vibrational Hamiltonian

    SciTech Connect

    Strobusch, D.; Scheurer, Ch.

    2014-02-21

    The vibrational Hamiltonian involves two high dimensional operators, the kinetic energy operator (KEO), and the potential energy surface (PES). Both must be approximated for systems involving more than a few atoms. Adaptive approximation schemes are not only superior to truncated Taylor or many-body expansions (MBE), they also allow for error estimates, and thus operators of predefined precision. To this end, modified sparse grids (SG) are developed that can be combined with adaptive MBEs. This MBE/SG hybrid approach yields a unified, fully adaptive representation of the KEO and the PES. Refinement criteria, based on the vibrational self-consistent field (VSCF) and vibrational configuration interaction (VCI) methods, are presented. The combination of the adaptive MBE/SG approach and the VSCF plus VCI methods yields a black box like procedure to compute accurate vibrational spectra. This is demonstrated on a test set of molecules, comprising water, formaldehyde, methanimine, and ethylene. The test set is first employed to prove convergence for semi-empirical PM3-PESs and subsequently to compute accurate vibrational spectra from CCSD(T)-PESs that agree well with experimental values.

  12. The use of solution adaptive grids in solving partial differential equations

    NASA Technical Reports Server (NTRS)

    Anderson, D. A.; Rai, M. M.

    1982-01-01

    The grid point distribution used in solving a partial differential equation using a numerical method has a substantial influence on the quality of the solution. An adaptive grid which adjusts as the solution changes provides the best results when the number of grid points available for use during the calculation is fixed. Basic concepts used in generating and applying adaptive grids are reviewed in this paper, and examples illustrating applications of these concepts are presented.

  13. Multi-Hop Localization Algorithm Based on Grid-Scanning for Wireless Sensor Networks*

    PubMed Central

    Wan, Jiangwen; Guo, Xiaolei; Yu, Ning; Wu, Yinfeng; Feng, Renjian

    2011-01-01

    For large-scale wireless sensor networks (WSNs) with a minority of anchor nodes, multi-hop localization is a popular scheme for determining the geographical positions of the normal nodes. However, in practice existing multi-hop localization methods suffer from various kinds of problems, such as poor adaptability to irregular topology, high computational complexity, low positioning accuracy, etc. To address these issues in this paper, we propose a novel Multi-hop Localization algorithm based on Grid-Scanning (MLGS). First, the factors that influence the multi-hop distance estimation are studied and a more realistic multi-hop localization model is constructed. Then, the feasible regions of the normal nodes are determined according to the intersection of bounding square rings. Finally, a verifiably good approximation scheme based on grid-scanning is developed to estimate the coordinates of the normal nodes. Additionally, the positioning accuracy of the normal nodes can be improved through neighbors’ collaboration. Extensive simulations are performed in isotropic and anisotropic networks. The comparisons with some typical algorithms of node localization confirm the effectiveness and efficiency of our algorithm. PMID:22163828
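
    The record does not give MLGS's internals beyond the bounding-square-ring idea, so the following is only a schematic Python illustration of the grid-scanning step: candidate cells on a regular grid are kept if they satisfy every anchor's square-ring distance bounds, and the position estimate is the centroid of the surviving cells. The anchor positions, distance bounds, and the use of Chebyshev distance are assumptions for the example.

    ```python
    import numpy as np

    def grid_scan_estimate(anchors, d_bounds, cell=1.0, area=100.0):
        """Keep grid cells whose Chebyshev distance to anchor i lies within
        [d_min_i, d_max_i] for every anchor; return the centroid of that region."""
        xs = np.arange(0.0, area, cell)
        gx, gy = np.meshgrid(xs, xs, indexing="ij")
        feasible = np.ones_like(gx, dtype=bool)
        for (ax, ay), (dmin, dmax) in zip(anchors, d_bounds):
            cheb = np.maximum(np.abs(gx - ax), np.abs(gy - ay))
            feasible &= (cheb >= dmin) & (cheb <= dmax)
        if not feasible.any():
            return None                                   # inconsistent constraints
        return gx[feasible].mean(), gy[feasible].mean()

    anchors = [(10.0, 10.0), (80.0, 20.0), (40.0, 90.0)]
    d_bounds = [(30.0, 45.0), (30.0, 45.0), (35.0, 55.0)]  # hypothetical hop-distance bounds
    print(grid_scan_estimate(anchors, d_bounds))
    ```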

  14. Adaptive grid embedding for the two-dimensional flux-split Euler equations. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Warren, Gary Patrick

    1990-01-01

    A numerical algorithm is presented for solving the 2-D flux-split Euler equations using a multigrid method with adaptive grid embedding. The method uses an unstructured data set along with a system of pointers for communication on the irregularly shaped grid topologies. An explicit two-stage time advancement scheme is implemented. A multigrid algorithm is used to provide grid level communication and to accelerate the convergence of the solution to steady state. Results are presented for a subcritical airfoil and a transonic airfoil with 3 levels of adaptation. Comparisons are made with a structured upwind Euler code which uses the same flux integration techniques of the present algorithm. Good agreement is obtained with converged surface pressure coefficients. The lift coefficients of the adaptive code are within 2 1/2 percent of the structured code for the sub-critical case and within 4 1/2 percent of the structured code for the transonic case using approximately one-third the number of grid points.

  15. Parallel Implementation and Scaling of an Adaptive Mesh Discrete Ordinates Algorithm for Transport

    SciTech Connect

    Howell, L H

    2004-11-29

    Block-structured adaptive mesh refinement (AMR) uses a mesh structure built up out of locally-uniform rectangular grids. In the BoxLib parallel framework used by the Raptor code, each processor operates on one or more of these grids at each refinement level. The decomposition of the mesh into grids and the distribution of these grids among processors may change every few timesteps as a calculation proceeds. Finer grids use smaller timesteps than coarser grids, requiring additional work to keep the system synchronized and ensure conservation between different refinement levels. In a paper for NECDC 2002 I presented preliminary results on implementation of parallel transport sweeps on the AMR mesh, conjugate gradient acceleration, accuracy of the AMR solution, and scalar speedup of the AMR algorithm compared to a uniform fully-refined mesh. This paper continues with a more in-depth examination of the parallel scaling properties of the scheme, both in single-level and multi-level calculations. Both sweeping and setup costs are considered. The algorithm scales with acceptable performance to several hundred processors. Trends suggest, however, that this is the limit for efficient calculations with traditional transport sweeps, and that modifications to the sweep algorithm will be increasingly needed as job sizes in the thousands of processors become common.

  16. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.

  17. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems, and the algorithms for integrating in time are different. In addition, the code for each application is likely to have been developed on different architectures and tends to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.

  18. Improved zonal wavefront reconstruction algorithm for Hartmann type test with arbitrary grid patterns

    NASA Astrophysics Data System (ADS)

    Li, Mengyang; Li, Dahai; Zhang, Chen; E, Kewei; Hong, Zhihan; Li, Chengxu

    2015-08-01

    Zonal wavefront reconstruction by use of the well-known Southwell algorithm with rectangular grid patterns has been considered in the literature. However, when the grid patterns are nonrectangular, modal wavefront reconstruction has been used extensively. We propose an improved zonal wavefront reconstruction algorithm for Hartmann-type tests with arbitrary grid patterns. We develop the mathematical expressions to show that the wavefront over arbitrary grid patterns, such as misaligned, partly obscured, and non-square mesh grids, can be estimated well. Both the iterative solution and the least-squares solution for the proposed algorithm are described and compared. Numerical calculation shows that zonal wavefront reconstruction over a nonrectangular profile with the proposed algorithm results in a significant improvement in comparison with the Southwell algorithm.
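
    For orientation, the rectangular-grid baseline the paper improves upon can be written as a small least-squares problem; a sketch follows (dense numpy solve, assumed uniform spacing h, piston pinned to zero). Handling the paper's arbitrary patterns amounts, roughly, to dropping the equations that reference missing sample points, which is not shown here.

    ```python
    import numpy as np

    def southwell_lsq(sx, sy, h=1.0):
        """Zonal (Southwell-style) reconstruction: each adjacent node pair gives one
        equation (W_b - W_a)/h = (s_a + s_b)/2; solve all of them in least squares."""
        ny, nx = sx.shape
        idx = np.arange(ny * nx).reshape(ny, nx)
        rows, rhs = [], []
        for j in range(ny):
            for i in range(nx - 1):                       # x-direction slopes
                r = np.zeros(ny * nx)
                r[idx[j, i + 1]], r[idx[j, i]] = 1.0, -1.0
                rows.append(r)
                rhs.append(h * 0.5 * (sx[j, i] + sx[j, i + 1]))
        for j in range(ny - 1):
            for i in range(nx):                           # y-direction slopes
                r = np.zeros(ny * nx)
                r[idx[j + 1, i]], r[idx[j, i]] = 1.0, -1.0
                rows.append(r)
                rhs.append(h * 0.5 * (sy[j, i] + sy[j + 1, i]))
        pin = np.zeros(ny * nx)
        pin[0] = 1.0                                      # remove the piston ambiguity
        A, b = np.vstack(rows + [pin]), np.array(rhs + [0.0])
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return w.reshape(ny, nx)
    ```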

  19. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Units (GPUs). The presented algorithm is an improvement of our previous GPU-accelerated AIDW algorithm, achieved by adopting fast k-nearest neighbors (kNN) search. In AIDW, several nearest neighboring data points need to be found for each interpolated point to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, an even (uniform) grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm. PMID:27610308
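
    The abstract does not reproduce AIDW's exact density-to-power mapping, so the sketch below uses a hypothetical linear mapping and a brute-force kNN search purely to illustrate the structure of the two stages (kNN search, then power-adapted IDW); the uniform-grid search and GPU kernels of the paper are not represented.

    ```python
    import numpy as np

    def aidw_interpolate(pts, vals, queries, k=8, powers=(1.0, 5.0)):
        """Adaptive IDW sketch: the mean kNN distance of a query (relative to the
        global mean) sets its distance-decay exponent, then ordinary IDW is applied
        over the k neighbours with that exponent."""
        d_all = np.linalg.norm(pts[None, :, :] - queries[:, None, :], axis=2)
        knn_idx = np.argsort(d_all, axis=1)[:, :k]
        knn_d = np.take_along_axis(d_all, knn_idx, axis=1)
        sparsity = knn_d.mean(axis=1) / knn_d.mean()      # > 1 in sparse regions
        alpha = np.clip(powers[0] + (sparsity - 0.5) * (powers[1] - powers[0]),
                        powers[0], powers[1])             # hypothetical mapping
        w = 1.0 / np.maximum(knn_d, 1e-12) ** alpha[:, None]
        return (w * vals[knn_idx]).sum(axis=1) / w.sum(axis=1)

    rng = np.random.default_rng(0)
    pts = rng.random((200, 2))
    vals = np.sin(4 * pts[:, 0]) + np.cos(3 * pts[:, 1])
    print(aidw_interpolate(pts, vals, rng.random((5, 2))))
    ```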

  20. Anisotropic Solution Adaptive Unstructured Grid Generation Using AFLR

    NASA Technical Reports Server (NTRS)

    Marcum, David L.

    2007-01-01

    An existing volume grid generation procedure, AFLR3, was successfully modified to generate anisotropic tetrahedral elements using a directional metric transformation defined at source nodes. The procedure can be coupled with a solver and an error estimator as part of an overall anisotropic solution adaptation methodology. It is suitable for use with an error estimator based on an adjoint, optimization, sensitivity derivative, or related approach. This offers many advantages, including more efficient point placement along with robust and efficient error estimation. It also serves as a framework for true grid optimization wherein error estimation and computational resources can be used as cost functions to determine the optimal point distribution. Within AFLR3 the metric transformation is implemented using a set of transformation vectors and associated aspect ratios. The modified overall procedure is presented along with details of the anisotropic transformation implementation. Multiple two- and three-dimensional examples are also presented that demonstrate the capability of the modified AFLR procedure to generate anisotropic elements using a set of source nodes with anisotropic transformation metrics. The example cases presented use moderate levels of anisotropy and result in usable element quality. Future testing with various flow solvers and methods for obtaining transformation metric information is needed to determine practical limits and evaluate the efficacy of the overall approach.

  1. An Adaptive Unified Differential Evolution Algorithm for Global Optimization

    SciTech Connect

    Qiang, Ji; Mitchell, Chad

    2014-11-03

    In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
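
    The exact unified mutation expression is not quoted in the record, so the following Python sketch only illustrates the general idea under stated assumptions: a single mutation formula blending a best-directed and a random difference term, with per-individual control parameters (F1, F2, CR) that are occasionally re-drawn and inherited only when they produce a better trial.

    ```python
    import numpy as np

    def self_adaptive_de(f, bounds, pop_size=30, gens=200, seed=0):
        """DE sketch with one mutation expression, v = x + F1*(best-x) + F2*(r1-r2),
        and self-adapted control parameters carried by each population member."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        dim = lo.size
        pop = rng.uniform(lo, hi, (pop_size, dim))
        params = rng.uniform(0.1, 0.9, (pop_size, 3))     # F1, F2, CR per member
        fit = np.array([f(x) for x in pop])
        for _ in range(gens):
            best = pop[fit.argmin()]
            for i in range(pop_size):
                F1, F2, CR = params[i]
                if rng.random() < 0.1:                    # occasional re-draw
                    F1, F2, CR = rng.uniform(0.1, 0.9, 3)
                r1, r2 = rng.choice(pop_size, 2, replace=False)
                v = pop[i] + F1 * (best - pop[i]) + F2 * (pop[r1] - pop[r2])
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True           # keep at least one gene
                trial = np.clip(np.where(cross, v, pop[i]), lo, hi)
                f_trial = f(trial)
                if f_trial <= fit[i]:                     # greedy selection
                    pop[i], fit[i] = trial, f_trial
                    params[i] = F1, F2, CR                # keep successful parameters
        return pop[fit.argmin()], fit.min()

    x_best, f_best = self_adaptive_de(lambda x: float(np.sum(x * x)), [(-5, 5)] * 10)
    ```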

  2. A wavelet-optimized, very high order adaptive grid and order numerical method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Differencing operators of arbitrarily high order can be constructed by interpolating a polynomial through a set of data, differentiating this polynomial, and finally evaluating the polynomial at the point where a derivative approximation is desired. Furthermore, the interpolating polynomial can be constructed from algebraic, trigonometric, or perhaps exponential polynomials. This paper begins with a comparison of such differencing operator constructions. Next, the issue of proper grids for high order polynomials is addressed. Finally, an adaptive numerical method is introduced which adapts the numerical grid and the order of the differencing operator depending on the data. The numerical grid adaptation is performed on a Chebyshev grid. That is, at each level of refinement the grid is a Chebyshev grid, and this grid is refined locally based on wavelet analysis.
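
    The opening observation translates directly into a few lines of numpy, shown here as a simple algebraic-polynomial sketch (trigonometric or exponential bases, and the adaptive grid/order selection, are not included):

    ```python
    import numpy as np

    def poly_derivative(x_stencil, f_stencil, x_eval, order=1):
        """Interpolate a polynomial through the stencil data, differentiate it,
        and evaluate the derivative at x_eval; the stencil may be non-uniform."""
        coeffs = np.polyfit(x_stencil, f_stencil, deg=len(x_stencil) - 1)
        return np.polyval(np.polyder(coeffs, m=order), x_eval)

    # derivative of sin(x) from a 7-point Chebyshev-Lobatto stencil on [-1, 1]
    x = np.cos(np.pi * np.arange(7) / 6)
    print(poly_derivative(x, np.sin(x), 0.3), np.cos(0.3))   # approximation vs exact
    ```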

  3. Adaptive DNA Computing Algorithm by Using PCR and Restriction Enzyme

    NASA Astrophysics Data System (ADS)

    Kon, Yuji; Yabe, Kaoru; Rajaee, Nordiana; Ono, Osamu

    In this paper, we introduce an adaptive DNA computing algorithm using the polymerase chain reaction (PCR) and restriction enzymes. The adaptive algorithm is designed based on the Adleman-Lipton paradigm[3] of DNA computing. In this work, however, unlike the Adleman-Lipton architecture, a cutting operation is introduced into the algorithm, and a mechanism is devised by which the molecules used in a computation are fed back to the next cycle. Moreover, PCR amplification is performed on the molecules used for feedback, so that the concentration differences arising among the base sequences can be used again. By this operation the number of molecules that serve as solution candidates is reduced, and the optimal solution to the shortest-path problem is obtained. The validity of the proposed adaptive algorithm is examined by logical simulation, and finally we propose applying the adaptive algorithm to a chemical experiment using actual DNA molecules to solve an optimal network problem.

  4. Three-dimensional adaptive grid generation for body-fitted coordinate system

    NASA Astrophysics Data System (ADS)

    Chen, S. C.

    1988-08-01

    This report describes a numerical method for generating 3-D grids for general configurations. The basic method involves the solution of a set of quasi-linear elliptic partial differential equations via pointwise relaxation with a local relaxation factor. It allows specification of the grid spacing off the boundary surfaces and the grid orthogonality at the boundary surfaces. It includes adaptive mechanisms to improve smoothness, orthogonality, and flow resolution in the grid interior.

  5. Three-dimensional adaptive grid generation for body-fitted coordinate system

    NASA Astrophysics Data System (ADS)

    Chen, S. C.

    This report describes a numerical method for generating 3-D grids for general configurations. The basic method involves the solution of a set of quasi-linear elliptic partial differential equations via pointwise relaxation with a local relaxation factor. It allows specification of the grid spacing off the boundary surfaces and the grid orthogonality at the boundary surfaces. It includes adaptive mechanisms to improve smoothness, orthogonality, and flow resolution in the grid interior.

  6. FUN3D Grid Refinement and Adaptation Studies for the Ares Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Vasta, Veer; Carlson, Jan-Renee; Park, Mike; Mineck, Raymond E.

    2010-01-01

    This paper presents grid refinement and adaptation studies performed in conjunction with computational aeroelastic analyses of the Ares crew launch vehicle (CLV). The unstructured grids used in this analysis were created with GridTool and VGRID, while the adaptation was performed using the Computational Fluid Dynamics (CFD) code FUN3D with a feature-based adaptation software tool. GridTool was developed by ViGYAN, Inc., while the last three software suites were developed by NASA Langley Research Center. The feature-based adaptation software used here operates by aligning control volumes with shock and Mach line structures and by refining/de-refining where necessary. It does not redistribute node points on the surface. This paper assesses the sensitivity of the complex flow field about a launch vehicle to grid refinement. It also assesses the potential of feature-based grid adaptation to improve the accuracy of CFD analysis for a complex launch vehicle configuration. The feature-based adaptation shows the potential to improve the resolution of shocks and shear layers. Further development of the capability to adapt the boundary layer and surface grids of a tetrahedral grid is required for significant improvements in modeling the flow field.

  7. The use of the spectral method within the fast adaptive composite grid method

    SciTech Connect

    McKay, S.M.

    1994-12-31

    The use of efficient algorithms for the solution of partial differential equations has been sought for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on different grids with varying discretizations and using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the ensuing accuracy from this hybrid method outside of the subdomain will be investigated.

  8. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  9. Cartesian Off-Body Grid Adaption for Viscous Time- Accurate Flow Simulation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Pulliam, Thomas H.

    2011-01-01

    An improved solution adaption capability has been implemented in the OVERFLOW overset grid CFD code. Building on the Cartesian off-body approach inherent in OVERFLOW and the original adaptive refinement method developed by Meakin, the new scheme provides for automated creation of multiple levels of finer Cartesian grids. Refinement can be based on the undivided second difference of the flow solution variables, or on a specific flow quantity such as vorticity. Coupled with load balancing and an in-memory solution interpolation procedure, the adaption process provides very good performance for time-accurate simulations on parallel compute platforms. A method of using refined, thin body-fitted grids combined with adaption in the off-body grids is presented, which maximizes the part of the domain subject to adaption. Two- and three-dimensional examples are used to illustrate the effectiveness and performance of the adaption scheme.

  10. Analysis of a Major Electric Grid -- Stability and Adaptive Protection

    NASA Astrophysics Data System (ADS)

    Alanzi, Sultan

    Protective systems of the electric grid are designed to detect and mitigate the effects of faults and other disturbances that may occur. Distance relays are used extensively for the detection of faults on transmission lines. Out-of-step relays are used for generator protection to detect loss-of-synchronism conditions that result from disturbances on the electric grid. Also, when a disturbance occurs and generators may tend to lose synchronism with each other, it is beneficial to separate the overall system into several independent systems that can remain stable. Unfortunately, there have been cases, such as the 2003 Northeast blackout, where the operation of protective relays, namely the zone 3 distance relay used for transmission line protection, contributed to the cascading effect of the blackout. It is the objective of this dissertation to propose adaptive relays for both distance protection of transmission lines and out-of-step protection of generators. By being adaptive, the relays are made aware of the system operating conditions and can adjust their settings accordingly. Inputs to the adaptive logic can come from system or environmental conditions. As a result of this effort, a new distance relay operating characteristic is proposed, referred to as a mushroom relay, which is a combination of a quadrilateral relay and a Mho relay. Also, a new criterion for determining whether a power swing following a disturbance is stable or unstable is proposed. Distance protection of transmission lines is very important when discussing system responses to faults and disturbances. Distance relays are very common worldwide and, although they offer great protection, there are limitations that need to be addressed. Parallel line operations (infeed effect) and loadability limits are among the limitations that lead to improper response of relays. Adaptive Distance Relays (ADRs) offer great benefits to the protection scheme as their settings can be changed in accordance with prefault

  11. Analysis of a Major Electric Grid -- Stability and Adaptive Protection

    NASA Astrophysics Data System (ADS)

    Alanzi, Sultan

    Protective systems of the electric grid are designed to detect and mitigate the effects of faults and other disturbances that may occur. Distance relays are used extensively for the detection of faults on transmission lines. Out-of-step relays are used for generator protection to detect loss-of-synchronism conditions that result from disturbances on the electric grid. Also, when a disturbance occurs and generators may tend to lose synchronism with each other, it is beneficial to separate the overall system into several independent systems that can remain stable. Unfortunately, there have been cases, such as the 2003 Northeast blackout, where the operation of protective relays, namely the zone 3 distance relay used for transmission line protection, contributed to the cascading effect of the blackout. It is the objective of this dissertation to propose adaptive relays for both distance protection of transmission lines and out-of-step protection of generators. By being adaptive, the relays are made aware of the system operating conditions and can adjust their settings accordingly. Inputs to the adaptive logic can come from system or environmental conditions. As a result of this effort, a new distance relay operating characteristic is proposed, referred to as a mushroom relay, which is a combination of a quadrilateral relay and a Mho relay. Also, a new criterion for determining whether a power swing following a disturbance is stable or unstable is proposed. Distance protection of transmission lines is very important when discussing system responses to faults and disturbances. Distance relays are very common worldwide and, although they offer great protection, there are limitations that need to be addressed. Parallel line operations (infeed effect) and loadability limits are among the limitations that lead to improper response of relays. Adaptive Distance Relays (ADRs) offer great benefits to the protection scheme as their settings can be changed in accordance with prefault

  12. Adaptive data management in the ARC Grid middleware

    NASA Astrophysics Data System (ADS)

    Cameron, D.; Gholami, A.; Karpenko, D.; Konstantinov, A.

    2011-12-01

    The Advanced Resource Connector (ARC) Grid middleware was designed almost 10 years ago, and has proven to be an attractive distributed computing solution and successful in adapting to new data management and storage technologies. However, with an ever-increasing user base and scale of resources to manage, along with the introduction of more advanced data transfer protocols, some limitations in the current architecture have become apparent. The simple first-in first-out approach to data transfer leads to bottlenecks in the system, as does the built-in assumption that all data is immediately available from remote data storage. We present an entirely new data management architecture for ARC which aims to alleviate these problems, by introducing a three-layer structure. The top layer accepts incoming requests for data transfer and directs them to the middle layer, which schedules individual transfers and negotiates with various intermediate catalog and storage systems until the physical file is ready to be transferred. The lower layer performs all operations which use large amounts of bandwidth, i.e. the physical data transfer. Using such a layered structure allows more efficient use of the available bandwidth as well as enabling late-binding of jobs to data transfer slots based on a priority system. Here we describe in full detail the design and implementation of the new system.

  13. Self-adaptive genetic algorithms with simulated binary crossover.

    PubMed

    Deb, K; Beyer, H G

    2001-01-01

    Self-adaptation is an essential feature of natural evolution. However, in the context of function optimization, self-adaptation features of evolutionary search algorithms have been explored mainly with evolution strategy (ES) and evolutionary programming (EP). In this paper, we demonstrate the self-adaptive feature of real-parameter genetic algorithms (GAs) using a simulated binary crossover (SBX) operator and without any mutation operator. The connection between the working of self-adaptive ESs and real-parameter GAs with the SBX operator is also discussed. Thereafter, the self-adaptive behavior of real-parameter GAs is demonstrated on a number of test problems commonly used in the ES literature. The remarkable similarity in the working principle of real-parameter GAs and self-adaptive ESs shown in this study suggests the need for emphasizing further studies on self-adaptive GAs. PMID:11382356
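
    For readers unfamiliar with the operator, a minimal SBX sketch is given below (the standard published formulation; the GA loop, selection, and test functions of the paper are omitted). The distribution index eta controls how closely children cluster around their parents, which is the knob behind the self-adaptive behaviour discussed above.

    ```python
    import numpy as np

    def sbx_crossover(p1, p2, eta=2.0, rng=None):
        """Simulated binary crossover: per variable, draw a spread factor beta from
        a polynomial distribution controlled by eta and create two children placed
        symmetrically about the parents' mean."""
        rng = rng or np.random.default_rng()
        u = rng.random(p1.shape)
        beta = np.where(u <= 0.5,
                        (2.0 * u) ** (1.0 / (eta + 1.0)),
                        (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
        c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
        c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
        return c1, c2

    child_a, child_b = sbx_crossover(np.array([1.0, 2.0, 3.0]),
                                     np.array([2.0, 0.0, 5.0]), eta=2.0)
    ```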

  14. Generation and adaptation of 3-D unstructured grids for transient problems

    NASA Technical Reports Server (NTRS)

    Loehner, Rainald

    1990-01-01

    Grid generation and adaptive refinement techniques suitable for the simulation of strongly unsteady flows past geometrically complex bodies in 3-D are described. The grids are generated using the advancing front technique. Emphasis is placed on not generating elements that are too small, as this would severely increase the cost of simulations with explicit flow solvers. The grids are adapted to an evolving flowfield using simple h-refinement. A grid change is performed every 5 to 10 timesteps, and only one level of refinement/coarsening is allowed per mesh change.

  15. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrates that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
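
    As a rough illustration of the space-filling-curve idea (a Morton/Z-order stand-in, not the code's actual partitioner), cells can be ordered by interleaving the bits of their integer coordinates and then cut into equal contiguous chunks, which tends to keep each processor's share spatially compact:

    ```python
    def morton_key(ix, iy, iz, bits=10):
        """Interleave coordinate bits to get a Z-order (Morton) key."""
        key = 0
        for b in range(bits):
            key |= ((ix >> b) & 1) << (3 * b)
            key |= ((iy >> b) & 1) << (3 * b + 1)
            key |= ((iz >> b) & 1) << (3 * b + 2)
        return key

    def partition_cells(cells, n_parts):
        """Sort cells (integer (ix, iy, iz) tuples) along the Morton curve and cut
        the ordering into n_parts equal contiguous chunks."""
        order = sorted(range(len(cells)), key=lambda c: morton_key(*cells[c]))
        part = [0] * len(cells)
        for rank, cell_id in enumerate(order):
            part[cell_id] = rank * n_parts // len(cells)
        return part

    cells = [(i, j, k) for i in range(4) for j in range(4) for k in range(4)]
    print(partition_cells(cells, n_parts=4))
    ```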

  16. Automated Grid Disruption Response System: Robust Adaptive Topology Control (RATC)

    SciTech Connect

    2012-03-01

    GENI Project: The RATC research team is using topology control as a mechanism to improve system operations and manage disruptions within the electric grid. The grid is subject to interruption from cascading faults caused by extreme operating conditions, malicious external attacks, and intermittent electricity generation from renewable energy sources. The RATC system is capable of detecting, classifying, and responding to grid disturbances by reconfiguring the grid in order to maintain economically efficient operations while guaranteeing reliability. The RATC system would help prevent future power outages, which account for roughly $80 billion in losses for businesses and consumers each year. Minimizing the time it takes for the grid to respond to expensive interruptions will also make it easier to integrate intermittent renewable energy sources into the grid.

  17. Adaptive path planning: Algorithm and analysis

    SciTech Connect

    Chen, Pang C.

    1995-03-01

    To address the need for a fast path planner, we present a learning algorithm that improves path planning by using past experience to enhance future performance. The algorithm relies on an existing path planner to provide solutions to difficult tasks. From these solutions, an evolving sparse network of useful robot configurations is learned to support faster planning. More generally, the algorithm provides a framework in which a slow but effective planner may be improved both cost-wise and capability-wise by a faster but less effective planner coupled with experience. We analyze the algorithm by formalizing the concept of improvability and deriving conditions under which a planner can be improved within the framework. The analysis is based on two stochastic models, one pessimistic (on task complexity), the other randomized (on experience utility). Using these models, we derive quantitative bounds to predict the learning behavior. We use these estimation tools to characterize the situations in which the algorithm is useful and to provide bounds on the training time. In particular, we show how to predict the maximum achievable speedup. Additionally, our analysis techniques are elementary and should be useful for studying other types of probabilistic learning as well.

  18. Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockhard, David P.

    2013-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for the sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine grid solutions and experimental data. In addition, the grid adaption strategy discussed here simplifies the grid generation process and results in improved computational efficiency of CFD simulations.

  19. Optimal Pid Controller Design Using Adaptive Vurpso Algorithm

    NASA Astrophysics Data System (ADS)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization (VURPSO) algorithm. The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. Then, an optimal design of a Proportional-Integral-Derivative (PID) controller is obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate a trade-off between the global and the local exploration abilities in the proposed algorithm. This operation helps the system to reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm over the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster, in less computation time, to a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization problems such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning. The proposed AVURPSO algorithm is efficiently used to design an optimal PID controller.

  20. An adaptive inverse kinematics algorithm for robot manipulators

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.; Seraji, H.

    1990-01-01

    An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.

  1. The parallelization of an advancing-front, all-quadrilateral meshing algorithm for adaptive analysis

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Cairncross, R.A.

    1995-11-01

    The ability to perform effective adaptive analysis has become a critical issue in the area of physical simulation. Of the multiple technologies required to realize a parallel adaptive analysis capability, automatic mesh generation is an enabling technology, filling a critical need in the appropriate discretization of a problem domain. The paving algorithm's unique ability to generate a function-following quadrilateral grid is a substantial advantage in Sandia's pursuit of a modified h-method adaptive capability. This characteristic, combined with a strong transitioning ability, allows the paving algorithm to place elements where an error function indicates more mesh resolution is needed. Although the original paving algorithm is highly serial, a two-stage approach has been designed to parallelize the algorithm while retaining the nice qualities of the serial algorithm. The authors' approach also allows the subdomain decomposition used by the meshing code to be shared with the finite element physics code, eliminating the need for data transfer across the processors between the analysis and remeshing steps. In addition, the meshed subdomains are adjusted with a dynamic load balancer to improve the original decomposition and maintain load efficiency each time the mesh has been regenerated. This initial parallel implementation assumes an approach of restarting the physics problem from time zero at each iteration, with a refined mesh adapting to the previous iteration's objective function. The remeshing tools are being developed to enable real-time remeshing and geometry regeneration. Progress on the redesign of the paving algorithm for parallel operation is discussed, including extensions allowing adaptive control and geometry regeneration.

  2. Simulation of the dispersion of nuclear contamination using an adaptive Eulerian grid model.

    PubMed

    Lagzi, I; Kármán, D; Turányi, T; Tomlin, A S; Haszpra, L

    2004-01-01

    Application of an Eulerian model using layered adaptive unstructured grids coupled to a meso-scale meteorological model is presented for modelling the dispersion of nuclear contamination following an accidental release from a single but strong source to the atmosphere. The model automatically places a finer resolution grid, adaptively in time, in regions where high spatial numerical error is expected. The high-resolution grid region follows the movement of the contaminated air over time. Using this method, grid resolutions of the order of 6 km can be achieved in a computationally effective way. The concept is illustrated by the simulation of hypothetical nuclear accidents at the Paks NPP, in Central Hungary. The paper demonstrates that the adaptive model can achieve accuracy comparable to that of a high-resolution Eulerian model using significantly fewer grid points and less computer simulation time. PMID:15149762

  3. Synchronization Algorithms for Co-Simulation of Power Grid and Communication Networks

    SciTech Connect

    Ciraci, Selim; Daily, Jeffrey A.; Agarwal, Khushbu; Fuller, Jason C.; Marinovici, Laurentiu D.; Fisher, Andrew R.

    2014-09-11

    The ongoing modernization of power grids consists of integrating them with communication networks in order to achieve robust and resilient control of grid operations. To understand the operation of the new smart grid, one approach is to use simulation software. Unfortunately, current power grid simulators at best utilize inadequate approximations to simulate communication networks, if at all. Cooperative simulation of specialized power grid and communication network simulators promises to more accurately reproduce the interactions of real smart grid deployments. However, co-simulation is a challenging problem. A co-simulation must manage the exchange of information, including the synchronization of simulator clocks, between all simulators while maintaining adequate computational performance. This paper describes two new conservative algorithms for reducing the overhead of time synchronization, namely Active Set Conservative and Reactive Conservative. We provide a detailed analysis of their performance characteristics with respect to the current state of the art including both conservative and optimistic synchronization algorithms. In addition, we provide guidelines for selecting the appropriate synchronization algorithm based on the requirements of the co-simulation. The newly proposed algorithms are shown to achieve as much as 14% and 63% improvement, respectively, over the existing conservative algorithm.

  4. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GA's.

  5. A Novel Hybrid Self-Adaptive Bat Algorithm

    PubMed Central

    Fister, Iztok; Brest, Janez

    2014-01-01

    Nature-inspired algorithms attract many researchers worldwide for solving the hardest optimization problems. One of the newest members of this extensive family is the bat algorithm. To date, many variants of this algorithm have emerged for solving continuous as well as combinatorial problems. One of the more promising variants, a self-adaptive bat algorithm, has recently been proposed that enables self-adaptation of its control parameters. In this paper, we have hybridized this algorithm using different DE strategies and applied these as local search heuristics for improving the current best solution, directing the swarm of solutions towards the better regions within the search space. The results of exhaustive experiments were promising and have encouraged us to invest more effort into developing in this direction. PMID:25187904

  6. An adaptive algorithm for low contrast infrared image enhancement

    NASA Astrophysics Data System (ADS)

    Liu, Sheng-dong; Peng, Cheng-yuan; Wang, Ming-jia; Wu, Zhi-guo; Liu, Jia-qi

    2013-08-01

    An adaptive infrared image enhancement algorithm for low-contrast images is proposed in this paper, to deal with the problem that conventional image enhancement algorithms cannot effectively identify the regions of interest when the dynamic range of the image is large. The algorithm starts from the characteristics of human visual perception and takes account of both global adaptive image enhancement and local feature boosting, so that not only is the contrast of the image raised, but the texture of the picture also becomes more distinct. Firstly, the global dynamic range of the image is adjusted: the dynamic range of the original image and the display grayscale are placed in a corresponding relationship, and the gray level of bright objects is raised while the gray level of dark targets is reduced, to improve the overall image contrast. Secondly, a corresponding filtering algorithm is applied to the current point and its neighborhood pixels to extract image texture information and adjust the brightness of the current point, in order to enhance the local contrast of the image. The algorithm overcomes the drawback of traditional edge detection algorithms that outlines easily become blurred, and ensures the distinctness of texture detail in the enhanced image. Lastly, the globally adjusted luminance image and the locally adjusted brightness image are normalized to ensure a smooth transition of image details. Many experiments are made to compare the algorithm proposed in this paper with other conventional image enhancement algorithms, using two groups of blurred IR images. The experiments show that the contrast of the picture is boosted after histogram equalization but the details are not clear, whereas the details can be distinguished after processing with the Retinex algorithm; the image processed with the self-adaptive enhancement algorithm proposed in this paper becomes clear in its details, and the image contrast is markedly improved compared with Retinex

  7. An adaptive, lossless data compression algorithm and VLSI implementations

    NASA Technical Reports Server (NTRS)

    Venbrux, Jack; Zweigle, Greg; Gambles, Jody; Wiseman, Don; Miller, Warner H.; Yeh, Pen-Shu

    1993-01-01

    This paper first provides an overview of an adaptive, lossless, data compression algorithm originally devised by Rice in the early '70s. It then reports the development of a VLSI encoder/decoder chip set developed which implements this algorithm. A recent effort in making a space qualified version of the encoder is described along with several enhancements to the algorithm. The performance of the enhanced algorithm is compared with those from other currently available lossless compression techniques on multiple sets of test data. The results favor our implemented technique in many applications.
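
    The core of the Rice coder is compact enough to sketch here; the version below shows only the quotient-in-unary plus k remainder bits split for non-negative integers, whereas the flight algorithm and chip set also include the preprocessing/prediction stage and the adaptive per-block choice of k (assume k >= 1 in this toy version).

    ```python
    def rice_encode(values, k):
        """Rice coding: v -> (v >> k) ones, a zero, then the k low-order bits of v."""
        out = []
        for v in values:
            q, r = v >> k, v & ((1 << k) - 1)
            out.append("1" * q + "0" + format(r, f"0{k}b"))
        return "".join(out)

    def rice_decode(bits, k, count):
        """Invert rice_encode for `count` values."""
        values, pos = [], 0
        for _ in range(count):
            q = 0
            while bits[pos] == "1":
                q, pos = q + 1, pos + 1
            pos += 1                                  # skip the terminating zero
            values.append((q << k) | int(bits[pos:pos + k], 2))
            pos += k
        return values

    encoded = rice_encode([3, 0, 7, 2], k=2)
    assert rice_decode(encoded, k=2, count=4) == [3, 0, 7, 2]
    ```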

  8. An Algorithm for Converting Contours to Elevation Grids.

    ERIC Educational Resources Information Center

    Reid-Green, Keith S.

    Some of the test questions for the National Council of Architectural Registration Boards deal with the site, including drainage, regrading, and the like. Some questions are most easily scored by examining contours, but others, such as water flow questions, are best scored from a grid in which each element is assigned its average elevation. This…

  9. Three-dimensional algorithms for grid restructuring in Free-Lagrangian calculations

    NASA Technical Reports Server (NTRS)

    Fritts, M.

    1985-01-01

    Grid restructuring algorithms which lower the price of three-dimensional Free-Lagrange calculations are presented. The algorithms are first given for the case of planar triangulated surfaces embedded in and spanning a three-dimensional region. The tetrahedra generated by this technique form a Delaunay mesh if the interplane spacing is comparable to the resolution within the planes. The algorithm can therefore be used for efficient determinations of Voronoi connections for initial grids. Modifications of the algorithm for the case of closely spaced surfaces are demonstrated in the context of restructuring algorithms which can accommodate colliding surfaces. Then, the restriction to planar surfaces is removed and regular surfaces are examined. The basic algorithm is the same, with an additional operation to project the vertices of one surface onto another. Finally, vertices on the surface are allowed to migrate anywhere in space.

  10. Adaptive image contrast enhancement algorithm for point-based rendering

    NASA Astrophysics Data System (ADS)

    Xu, Shaoping; Liu, Xiaoping P.

    2015-03-01

    Surgical simulation is a major application in computer graphics and virtual reality, and most of the existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a fast enough graphic update rate (at least 30 Hz) on commodity PC hardware by utilizing traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform the traditional triangle-based rendering in speed when it is applied to highly complex soft tissue cutting models. Nevertheless, the PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in a visual quality comparable to existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared to representative algorithms, and experiments conducted on the latest hardware demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm can be utilized in (or compatible with) various variants of the conventional PBR algorithm.

  11. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    NASA Technical Reports Server (NTRS)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.

  12. An Adaptive Hybrid Algorithm for Global Network Alignment.

    PubMed

    Xie, Jiang; Xiang, Chaojuan; Ma, Jin; Tan, Jun; Wen, Tieqiao; Lei, Jinzhi; Nie, Qing

    2016-01-01

    It is challenging to obtain a reliable and optimal mapping between networks for alignment algorithms when both nodal and topological structures are taken into consideration, owing to the underlying NP-hard problem. Here, we introduce an adaptive hybrid algorithm that combines the classical Hungarian algorithm and the Greedy algorithm (HGA) for the global alignment of biomolecular networks. With this hybrid algorithm, pairs of nodes, one from each network, are first aligned based on node information (e.g., their sequence attributes), followed by an adaptive and convergent iterative procedure for aligning the topological connections in the networks. For four well-studied protein interaction networks, i.e., C. elegans, yeast, D. melanogaster, and human, applications of HGA lead to improved alignments in acceptable running time. The mapping between the yeast and human PINs obtained by the new algorithm has the largest number of common gene ontology (GO) terms compared with those obtained by other existing algorithms, while it still has a lower mean normalized entropy (MNE) and good performance on several other measures. Overall, the adaptive HGA is effective and capable of providing good mappings between aligned networks in which the biological properties of both the nodes and the connections are important. PMID:27295633
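
    As a rough sketch of the node-information stage described above, the Hungarian step can be expressed with SciPy's linear_sum_assignment applied to a node-similarity matrix. The similarity scores below are random placeholders, and the adaptive topological iteration of the actual HGA is omitted.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(0)
        similarity = rng.random((5, 5))   # placeholder node similarities (e.g., sequence scores)

        # Hungarian step: one-to-one node mapping that maximizes total similarity.
        rows, cols = linear_sum_assignment(-similarity)   # negate: the solver minimizes cost
        print(dict(zip(rows.tolist(), cols.tolist())))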

  13. AN ADAPTIVE GRID ALGORITHM FOR AIR QUALITY MODELING. (R827028)

    EPA Science Inventory

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  14. Adaptive sensor tasking using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Shea, Peter J.; Kirk, Joe; Welchons, Dave

    2007-04-01

    Today's battlefield environment contains a large number of sensors and sensor types onboard multiple platforms. The set of sensor types includes SAR, EO/IR, GMTI, AMTI, HSI, MSI, and video, and for each sensor type there may be multiple sensing modalities to select from. In an attempt to maximize sensor performance, today's sensors employ either static tasking approaches or require an operator to manually change sensor tasking operations. In a highly dynamic environment this leads to a situation whereby the sensors become less effective as the sensing environment deviates from the assumed conditions. Through a Phase I SBIR effort we developed a system architecture and a common tasking approach for solving the sensor tasking problem for a multiple sensor mix. As part of our sensor tasking effort we developed a genetic algorithm based task scheduling approach and demonstrated the ability to automatically task and schedule sensors in an end-to-end closed loop simulation. Our approach allows for multiple sensors as well as system and sensor constraints. This provides a solid foundation for our future efforts including incorporation of other sensor types. This paper will describe our approach for scheduling using genetic algorithms to solve the sensor tasking problem in the presence of resource constraints and required task linkage. We will conclude with a discussion of results for a sample problem and of the path forward.

  15. Locally-adaptive and memetic evolutionary pattern search algorithms.

    PubMed

    Hart, William E

    2003-01-01

    Recent convergence analyses of evolutionary pattern search algorithms (EPSAs) have shown that these methods have a weak stationary point convergence theory for a broad class of unconstrained and linearly constrained problems. This paper describes how the convergence theory for EPSAs can be adapted to allow each individual in a population to have its own mutation step length (similar to the design of evolutionary programming and evolution strategies algorithms). These are called locally-adaptive EPSAs (LA-EPSAs) since each individual's mutation step length is independently adapted in different local neighborhoods. The paper also describes a variety of standard formulations of evolutionary algorithms that can be used for LA-EPSAs. Further, it is shown how this convergence theory can be applied to memetic EPSAs, which use local search to refine points within each iteration. PMID:12804096

  16. Adaptive-mesh algorithms for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Roe, Philip L.; Quirk, James

    1993-01-01

    The basic goal of adaptive-mesh algorithms is to distribute computational resources wisely by increasing the resolution of 'important' regions of the flow and decreasing the resolution of regions that are less important. While this goal is one that is worthwhile, implementing schemes that have this degree of sophistication remains more of an art than a science. In this paper, the basic pieces of adaptive-mesh algorithms are described and some of the possible ways to implement them are discussed and compared. These basic pieces are the data structure to be used, the generation of an initial mesh, the criterion to be used to adapt the mesh to the solution, and the flow-solver algorithm on the resulting mesh. Each of these is discussed, with particular emphasis on methods suitable for the computation of compressible flows.

  17. Development of a Dynamic Operational Scheduling Algorithm for an Independent Micro-Grid with Renewable Energy

    NASA Astrophysics Data System (ADS)

    Obara, Shin'ya

    A micro-grid with the capacity for sustainable energy is expected to be a distributed energy system that exhibits quite a small environmental impact. In an independent micro-grid, “green energy,” which is typically thought of as unstable, can be utilized effectively by introducing a battery. In a previous study, the production-of-electricity prediction algorithm (PAS) of the solar cell was developed. In PAS, a layered neural network is made to learn based on past weather data, and the operation plan of the compound system of a solar cell and other energy systems was examined using this prediction algorithm. In this paper, a dynamic operational scheduling algorithm is developed using a neural network (PAS) and a genetic algorithm (GA) to provide predictions for solar cell power output. We also present a case study in which we use this algorithm to plan the operation of a system that connects nine houses in Sapporo to a micro-grid composed of power equipment and a polycrystalline silicon solar cell. In this work, the relationship between the accuracy of output prediction of the solar cell and the operation plan of the micro-grid was clarified. Moreover, we found that operating the micro-grid according to the plan derived with PAS was far superior, in terms of equipment hours of operation, to that using past average weather data.

  18. An Efficient Means of Adaptive Refinement Within Systems of Overset Grids

    NASA Technical Reports Server (NTRS)

    Meakin, Robert L.

    1996-01-01

    An efficient means of adaptive refinement within systems of overset grids is presented. Problem domains are segregated into near-body and off-body fields. Near-body fields are discretized via overlapping body-fitted grids that extend only a short distance from body surfaces. Off-body fields are discretized via systems of overlapping uniform Cartesian grids of varying levels of refinement. A novel off-body grid generation and management scheme provides the mechanism for carrying out adaptive refinement of off-body flow dynamics and solid body motion. The scheme allows for very efficient use of memory resources, and flow solvers and domain connectivity routines that can exploit the structure inherent to uniform Cartesian grids.

  19. Adaptive learning algorithms for vibration energy harvesting

    NASA Astrophysics Data System (ADS)

    Ward, John K.; Behrens, Sam

    2008-06-01

    By scavenging energy from their local environment, portable electronic devices such as MEMS devices, mobile phones, radios and wireless sensors can achieve greater run times with potentially lower weight. Vibration energy harvesting is one such approach where energy from parasitic vibrations can be converted into electrical energy through the use of piezoelectric and electromagnetic transducers. Parasitic vibrations come from a range of sources such as human movement, wind, seismic forces and traffic. Existing approaches to vibration energy harvesting typically utilize a rectifier circuit, which is tuned to the resonant frequency of the harvesting structure and the dominant frequency of vibration. We have developed a novel approach to vibration energy harvesting, including adaptation to non-periodic vibrations so as to extract the maximum amount of vibration energy available. Experimental results from an apparatus using an off-the-shelf transducer (i.e., a speaker coil) show mechanical-vibration-to-electrical-energy conversion efficiencies of 27-34%.

  20. Adaptive NUC algorithm for uncooled IRFPA based on neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Ziji; Jiang, Yadong; Lv, Jian; Zhu, Hongbin

    2010-10-01

    With developments in uncooled infrared focal plane array (UFPA) technology, many advanced uncooled infrared sensors are now used in defensive weapons, scientific research, industry and commercial applications. A major difference between an IRFPA imaging system and a visible CCD camera is that an IRFPA requires non-uniformity correction (NUC) and dead-pixel compensation, usually referred to as infrared image pre-processing. Commonly used calibration-based two-point or multi-point correction algorithms can correct IRFPA non-uniformity, but they are limited by pixel nonlinearity and instability. Adaptive non-uniformity correction techniques have therefore been developed; the two most widely discussed are based on a temporal high-pass filter and on a neural network, respectively. In this paper, a new NUC algorithm based on an improved neural network is introduced, and its results are compared with those of other adaptive correction techniques from several angles, including correction quality, computational efficiency and hardware implementation. The results and discussion indicate that the adaptive algorithm offers improved performance compared with traditional calibration-based techniques, providing better sensitivity and a larger system dynamic range. As sensor applications expand, it will be very useful in future infrared imaging systems.
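
    The neural-network NUC family mentioned above typically updates per-pixel gain and offset terms by gradient descent toward a local spatial average of the corrected frame, which serves as the desired output. The snippet below is a minimal single-iteration sketch of that idea with an assumed learning rate and window size; it is not the improved network proposed in the paper.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def nuc_step(frame, gain, offset, lr=0.05, win=3):
            """One adaptive NUC iteration: per-pixel gain/offset follow the local average (sketch)."""
            corrected = gain * frame + offset
            desired = uniform_filter(corrected, size=win)   # neighborhood average as the target
            error = corrected - desired
            gain = gain - lr * error * frame                # gradient-descent style updates
            offset = offset - lr * error
            return corrected, gain, offset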

  1. Extended TA Algorithm for Adapting a Situation Ontology

    NASA Astrophysics Data System (ADS)

    Zweigle, Oliver; Häussermann, Kai; Käppeler, Uwe-Philipp; Levi, Paul

    In this work we introduce an improved version of a learning algorithm for the automatic adaptation of a situation ontology (TAA) [1], which extends the basic principle of the learning algorithm. The approach is based on the assumption of uncertain data and includes elements from the domains of Bayesian networks and machine learning. It is embedded in the cluster of excellence Nexus at the University of Stuttgart, which aims to build a distributed context-aware system for sharing context data.

  2. An adaptive algorithm for modifying hyperellipsoidal decision surfaces

    SciTech Connect

    Kelly, P.M.; Hush, D.R.; White, J.M.

    1992-05-01

    The LVQ algorithm is a common method which allows a set of reference vectors for a distance classifier to adapt to a given training set. We have developed a similar learning algorithm, LVQ-MM, which manipulates hyperellipsoidal cluster boundaries as opposed to reference vectors. Regions of the input feature space are first enclosed by ellipsoidal decision boundaries, and then these boundaries are iteratively modified to reduce classification error. Results obtained by classifying the Iris data set are provided.

  3. An adaptive algorithm for modifying hyperellipsoidal decision surfaces

    SciTech Connect

    Kelly, P.M.; Hush, D.R. (Dept. of Electrical and Computer Engineering); White, J.M.

    1992-01-01

    The LVQ algorithm is a common method which allows a set of reference vectors for a distance classifier to adapt to a given training set. We have developed a similar learning algorithm, LVQ-MM, which manipulates hyperellipsoidal cluster boundaries as opposed to reference vectors. Regions of the input feature space are first enclosed by ellipsoidal decision boundaries, and then these boundaries are iteratively modified to reduce classification error. Results obtained by classifying the Iris data set are provided.

  4. An efficient algorithm for mapping imaging data to 3D unstructured grids in computational biomechanics.

    PubMed

    Einstein, Daniel R; Kuprat, Andrew P; Jiao, Xiangmin; Carson, James P; Einstein, David M; Jacob, Richard E; Corley, Richard A

    2013-01-01

    Geometries for organ scale and multiscale simulations of organ function are now routinely derived from imaging data. However, medical images may also contain spatially heterogeneous information other than geometry that is relevant to such simulations, either as initial conditions or in the form of model parameters. In this manuscript, we present an algorithm for the efficient and robust mapping of such data to imaging-based unstructured polyhedral grids in parallel. We then illustrate the application of our mapping algorithm to three different mapping problems: (i) the mapping of MRI diffusion tensor data to an unstructured ventricular grid; (ii) the mapping of serial cryosection histology data to an unstructured mouse brain grid; and (iii) the mapping of computed tomography-derived volumetric strain data to an unstructured multiscale lung grid. Execution times and parallel performance are reported for each case. PMID:23293066

  5. An Efficient Algorithm for Mapping Imaging Data to 3D Unstructured Grids in Computational Biomechanics

    SciTech Connect

    Einstein, Daniel R.; Kuprat, Andrew P.; Jiao, Xiangmin; Carson, James P.; Einstein, David M.; Corley, Richard A.; Jacob, Rick E.

    2013-01-01

    Geometries for organ scale and multiscale simulations of organ function are now routinely derived from imaging data. However, medical images may also contain spatially heterogeneous information other than geometry that is relevant to such simulations, either as initial conditions or in the form of model parameters. In this manuscript, we present an algorithm for the efficient and robust mapping of such data to imaging-based unstructured polyhedral grids in parallel. We then illustrate the application of our mapping algorithm to three different mapping problems: 1) the mapping of MRI diffusion tensor data to an unstructured ventricular grid; 2) the mapping of serial cryosection histology data to an unstructured mouse brain grid; and 3) the mapping of CT-derived volumetric strain data to an unstructured multiscale lung grid. Execution times and parallel performance are reported for each case.

  6. A grid generating algorithm for simulating a fluctuating water table boundary in heterogeneous unconfined aquifers

    NASA Astrophysics Data System (ADS)

    Crowe, A. S.; Shikaze, S. G.; Schwartz, F. W.

    An algorithm is presented for generating finite element grids that can be used to calculate the position of a fluctuating water table and the formation of seepage faces within a heterogeneous unconfined aquifer. Our approach overcomes limitations with existing techniques by allowing the water table to rise or decline through hydrostratigraphic boundaries yet maintains numerical and conceptual accuracy with respect to hydrostratigraphic geometry. The algorithm involves (1) limited stretching or shrinking of elements along the water table if the change in the position of the water table is small with respect to the vertical grid spacing, and (2) the addition or removal of nodes and elements in the finite element mesh along the water table as the change becomes large with respect to the vertical grid spacing. This technique is applicable to any 2-D or 3-D finite element code that contains an automatic finite-element grid generator.
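
    A one-dimensional caricature of the node bookkeeping described above is given below for a single vertical column of nodes: small water-table movements only stretch the top element, while larger movements add (or drop) nodes. The thresholds and the column representation are assumptions made for illustration only.

        def adapt_column(z_nodes, z_table, dz):
            """Sketch: adapt a vertical column of node elevations to a new water-table height."""
            # Drop nodes that are now at or above the water table (table decline).
            z = [zn for zn in sorted(z_nodes) if zn < z_table - 0.5 * dz]
            # Insert nodes if the water table rose far above the current top node.
            while z_table - z[-1] > 1.5 * dz:
                z.append(z[-1] + dz)
            # The topmost node always tracks the water table itself.
            return z + [z_table]

        print(adapt_column([0.0, 1.0, 2.0, 3.0], 5.2, 1.0))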

  7. Multi-agent coordination algorithms for control of distributed energy resources in smart grids

    NASA Astrophysics Data System (ADS)

    Cortes, Andres

    Sustainable energy is a top-priority for researchers these days, since electricity and transportation are pillars of modern society. Integration of clean energy technologies such as wind, solar, and plug-in electric vehicles (PEVs), is a major engineering challenge in operation and management of power systems. This is due to the uncertain nature of renewable energy technologies and the large amount of extra load that PEVs would add to the power grid. Given the networked structure of a power system, multi-agent control and optimization strategies are natural approaches to address the various problems of interest for the safe and reliable operation of the power grid. The distributed computation in multi-agent algorithms addresses three problems at the same time: i) it allows for the handling of problems with millions of variables that a single processor cannot compute, ii) it allows certain independence and privacy to electricity customers by not requiring any usage information, and iii) it is robust to localized failures in the communication network, being able to solve problems by simply neglecting the failing section of the system. We propose various algorithms to coordinate storage, generation, and demand resources in a power grid using multi-agent computation and decentralized decision making. First, we introduce a hierarchical vehicle-one-grid (V1G) algorithm for coordination of PEVs under usage constraints, where energy only flows from the grid in to the batteries of PEVs. We then present a hierarchical vehicle-to-grid (V2G) algorithm for PEV coordination that takes into consideration line capacity constraints in the distribution grid, and where energy flows both ways, from the grid in to the batteries, and from the batteries to the grid. Next, we develop a greedy-like hierarchical algorithm for management of demand response events with on/off loads. Finally, we introduce distributed algorithms for the optimal control of distributed energy resources, i

  8. Data-adaptive algorithms for calling alleles in repeat polymorphisms.

    PubMed

    Stoughton, R; Bumgarner, R; Frederick, W J; McIndoe, R A

    1997-01-01

    Data-adaptive algorithms are presented for separating overlapping signatures of heterozygotic allele pairs in electrophoresis data. Application is demonstrated for human microsatellite CA-repeat polymorphisms in LiCor 4000 and ABI 373 data. The algorithms allow overlapping alleles to be called correctly in almost every case where a trained observer could do so, and provide a fast automated objective alternative to human reading of the gels. The algorithm also supplies an indication of confidence level which can be used to flag marginal cases for verification by eye, or as input to later stages of statistical analysis. PMID:9059812

  9. A Hierarchical and Distributed Approach for Mapping Large Applications to Heterogeneous Grids using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Sanyal, Soumya; Jain, Amit; Das, Sajal K.; Biswas, Rupak

    2003-01-01

    In this paper, we propose a distributed approach for mapping a single large application to a heterogeneous grid environment. To minimize the execution time of the parallel application, we distribute the mapping overhead to the available nodes of the grid. This approach not only provides a fast mapping of tasks to resources but is also scalable. We adopt a hierarchical grid model and accomplish the job of mapping tasks to this topology using a scheduler tree. Results show that our three-phase algorithm provides high quality mappings, and is fast and scalable.

  10. Adaptive clustering algorithm for community detection in complex networks.

    PubMed

    Ye, Zhenqing; Hu, Songnian; Yu, Jun

    2008-10-01

    Community structure is common in various real-world networks; methods and algorithms for detecting such communities in complex networks have attracted great attention in recent years. We introduce a different adaptive clustering algorithm capable of extracting modules from complex networks with considerable accuracy and robustness. In this approach, each node in a network acts as an autonomous agent demonstrating flocking behavior, with vertices always traveling toward their preferred neighboring groups. An optimal modular structure can emerge from a collection of these active nodes during a self-organization process in which vertices constantly regroup. In addition, we show through intensive evaluation that our algorithm is advantageous over other competing methods (e.g., the Newman fast algorithm). Applications to three real-world networks demonstrate the superiority of our algorithm in finding communities that match the actual organization observed in reality. PMID:18999501

  11. COLLABORATIVE RESEARCH: CONTINUOUS DYNAMIC GRID ADAPTATION IN A GLOBAL ATMOSPHERIC MODEL: APPLICATION AND REFINEMENT

    SciTech Connect

    Gutowski, William J.; Prusa, Joseph M.; Smolarkiewicz, Piotr K.

    2012-05-08

    This project had goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the "physics" of the NCAR Community Atmospheric Model (CAM). Effort discussed below shows how we improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate the grid stretching and ability to simulate very well a wide range of scales, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work summarized here. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited. EULAG advances: EULAG is a non-hydrostatic, parallel computational model for all-scale geophysical flows. EULAG's name derives from its two computational options: EULerian (flux form) or semi-LAGrangian (advective form). The model combines nonoscillatory forward-in-time (NFT) numerical algorithms with a robust elliptic Krylov solver. A signature feature of EULAG is that it is formulated in generalized time-dependent curvilinear coordinates. In particular, this enables grid adaptivity. In total, these features give EULAG novel advantages over many existing dynamical cores. For EULAG itself, numerical advances included refining boundary conditions and filters for optimizing model performance in polar regions. We also added flexibility to the model's underlying formulation, allowing it to work with the pseudo-compressible equation set of Durran in addition to EULAG's standard anelastic formulation. Work in collaboration with others also extended the demonstrated range of

  12. Game and Balance Multicast Architecture Algorithms for Sensor Grid

    PubMed Central

    Fan, Qingfeng; Wu, Qiongli; Magoulés, Frèdèric; Xiong, Naixue; Vasilakos, Athanasios V.; He, Yanxiang

    2009-01-01

    We propose a scheme to attain shorter multicast delay and higher efficiency in sensor-grid data transfer. Within each cluster, the scheme locates the central node and calculates space and data weight vectors, then seeks a new vector formed as a linear combination of the two. Requiring equal correlation coefficients between the new vector and each of the old ones identifies the point of game and balance between the space and data factors; from this we build a simple binary equation, solve for the linear parameters, and generate a least-weight path tree. The issue is thus handled quantitatively rather than qualitatively: considering both the space and data factors, we build the mathematical model, establish the game-and-balance relationship, and finally resolve the linear indexes, which improves the transmission efficiency of the sensor grid. Extended simulation results indicate that our scheme attains lower average multicast delay and uses fewer links than other well-known existing schemes. PMID:22399992

  13. The Kernel Adaptive Autoregressive-Moving-Average Algorithm.

    PubMed

    Li, Kan; Príncipe, José C

    2016-02-01

    In this paper, we present a novel kernel adaptive recurrent filtering algorithm based on the autoregressive-moving-average (ARMA) model, which is trained with recurrent stochastic gradient descent in the reproducing kernel Hilbert spaces. This kernelized recurrent system, the kernel adaptive ARMA (KAARMA) algorithm, brings together the theories of adaptive signal processing and recurrent neural networks (RNNs), extending the current theory of kernel adaptive filtering (KAF) using the representer theorem to include feedback. Compared with classical feedforward KAF methods, the KAARMA algorithm provides general nonlinear solutions for complex dynamical systems in a state-space representation, with a deferred teacher signal, by propagating forward the hidden states. We demonstrate its capabilities to provide exact solutions with compact structures by solving a set of benchmark nondeterministic polynomial-complete problems involving grammatical inference. Simulation results show that the KAARMA algorithm outperforms equivalent input-space recurrent architectures using first- and second-order RNNs, demonstrating its potential as an effective learning solution for the identification and synthesis of deterministic finite automata. PMID:25935049

  14. An Adaptive Tradeoff Algorithm for Multi-issue SLA Negotiation

    NASA Astrophysics Data System (ADS)

    Son, Seokho; Sim, Kwang Mong

    Since participants in a Cloud may be independent bodies, mechanisms are necessary for resolving differing preferences in leasing Cloud services. Whereas mechanisms currently exist to support service-level agreement negotiation, there is little or no negotiation support for concurrent price and timeslot negotiation for Cloud service reservations. Such negotiation requires a tradeoff algorithm that generates and evaluates proposals consisting of both a price and a timeslot. The contribution of this work is thus the design of an adaptive tradeoff algorithm for a multi-issue negotiation mechanism. The tradeoff algorithm, referred to as "adaptive burst mode," is designed to increase negotiation speed and total utility and to reduce computational load by adaptively generating a concurrent set of proposals. Empirical results from simulations carried out on a testbed suggest that, with the concurrent price and timeslot negotiation mechanism and the adaptive tradeoff algorithm: 1) both agents achieve the best performance in terms of negotiation speed and utility; and 2) the number of evaluations of each proposal is lower than in the previous scheme (burst-N).

  15. Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Modiano, David; Colella, Phillip

    1994-01-01

    A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.

  16. A Modified Rife Algorithm for Off-Grid DOA Estimation Based on Sparse Representations.

    PubMed

    Chen, Tao; Wu, Huanxin; Guo, Limin; Liu, Lutao

    2015-01-01

    In this paper we address the problem of off-grid direction of arrival (DOA) estimation based on sparse representations in the multiple measurement vector (MMV) setting. A novel sparse DOA estimation method that converts the MMV problem to a single measurement vector (SMV) problem is proposed. The method uses sparse representations based on weighted eigenvectors (SRBWEV) to deal with the MMV problem: the MMV problem is converted to an SMV problem by using a linear combination of the eigenvectors of the array covariance matrix in the signal subspace as a new SMV for the sparse solution calculation. The complexity of the proposed algorithm is therefore lower than that of other MMV DOA estimation algorithms. Meanwhile, it can overcome the limitation of conventional sparsity-based DOA estimation approaches that the unknown directions must belong to a predefined discrete angular grid, and so can further improve the DOA estimation accuracy. The modified Rife algorithm for DOA estimation (MRife-DOA) is simulated on the basis of the SRBWEV algorithm. In the proposed algorithm, the largest and second-largest inner products between the measurement vector (or its residual) and the atoms of the dictionary are used to further refine the DOA estimate, following the principle of the Rife algorithm and the basic idea of coarse-to-fine estimation. Finally, simulation experiments show that the proposed algorithm is effective and can reduce the DOA estimation error caused by the grid effect with lower complexity. PMID:26610521

  17. A Modified Rife Algorithm for Off-Grid DOA Estimation Based on Sparse Representations

    PubMed Central

    Chen, Tao; Wu, Huanxin; Guo, Limin; Liu, Lutao

    2015-01-01

    In this paper we address the problem of off-grid direction of arrival (DOA) estimation based on sparse representations in the multiple measurement vector (MMV) setting. A novel sparse DOA estimation method that converts the MMV problem to a single measurement vector (SMV) problem is proposed. The method uses sparse representations based on weighted eigenvectors (SRBWEV) to deal with the MMV problem: the MMV problem is converted to an SMV problem by using a linear combination of the eigenvectors of the array covariance matrix in the signal subspace as a new SMV for the sparse solution calculation. The complexity of the proposed algorithm is therefore lower than that of other MMV DOA estimation algorithms. Meanwhile, it can overcome the limitation of conventional sparsity-based DOA estimation approaches that the unknown directions must belong to a predefined discrete angular grid, and so can further improve the DOA estimation accuracy. The modified Rife algorithm for DOA estimation (MRife-DOA) is simulated on the basis of the SRBWEV algorithm. In the proposed algorithm, the largest and second-largest inner products between the measurement vector (or its residual) and the atoms of the dictionary are used to further refine the DOA estimate, following the principle of the Rife algorithm and the basic idea of coarse-to-fine estimation. Finally, simulation experiments show that the proposed algorithm is effective and can reduce the DOA estimation error caused by the grid effect with lower complexity. PMID:26610521

  18. An Adaptive Immune Genetic Algorithm for Edge Detection

    NASA Astrophysics Data System (ADS)

    Li, Ying; Bai, Bendu; Zhang, Yanning

    An adaptive immune genetic algorithm (AIGA) based on a cost minimization technique is proposed for edge detection. The proposed AIGA uses adaptive probabilities of crossover, mutation and immune operation, together with a geometric annealing schedule in the immune operator, to realize the twin goals of maintaining diversity in the population and sustaining a fast convergence rate when solving complex problems such as edge detection. Furthermore, AIGA can effectively exploit prior knowledge of the local edge structure in the edge image to construct vaccines, giving it much better local search ability than the canonical genetic algorithm. Experimental results on gray-scale images show that the proposed algorithm performs well in terms of the quality of the final edge image, rate of convergence and robustness to noise.
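
    One common way to realize the adaptive crossover and mutation probabilities mentioned above (in the spirit of Srinivas and Patnaik's adaptive GA, not necessarily the exact rule used by the AIGA) is to lower the rates for above-average individuals and keep them high for below-average ones:

        def adaptive_rates(f, f_max, f_avg, k1=1.0, k2=0.5):
            """Adaptive crossover (pc) and mutation (pm) probabilities: fitter individuals
            are perturbed less, preserving good solutions while keeping diversity (sketch)."""
            if f >= f_avg:
                scale = (f_max - f) / max(f_max - f_avg, 1e-12)
                return k1 * scale, k2 * scale
            return k1, k2

        print(adaptive_rates(f=0.9, f_max=1.0, f_avg=0.6))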

  19. Flight data processing with the F-8 adaptive algorithm

    NASA Technical Reports Server (NTRS)

    Hartmann, G.; Stein, G.; Petersen, K.

    1977-01-01

    An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer and surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.

  20. A new adaptive GMRES algorithm for achieving high accuracy

    SciTech Connect

    Sosonkina, M.; Watson, L.T.; Kapania, R.K.; Walker, H.F.

    1996-12-31

    GMRES(k) is widely used for solving nonsymmetric linear systems. However, it is inadequate either when it converges only for k close to the problem size or when numerical error in the modified Gram-Schmidt process used in the GMRES orthogonalization phase dramatically affects the algorithm performance. An adaptive version of GMRES(k) which tunes the restart value k based on criteria estimating the GMRES convergence rate for the given problem is proposed here. The essence of the adaptive GMRES strategy is to adapt the parameter k to the problem, similar in spirit to how a variable-order ODE algorithm tunes the order k. With FORTRAN 90, which provides pointers and dynamic memory management, dealing with the variable storage requirements implied by varying k is not too difficult. The parameter k can be both increased and decreased; an increase-only strategy is described next, followed by pseudocode.
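
    A minimal version of the increase-only strategy mentioned at the end of this abstract can be written around SciPy's gmres: re-solve with a larger restart value k whenever GMRES(k) fails to converge within the allotted iterations. The growth increment and iteration limits below are placeholder choices.

        import numpy as np
        from scipy.sparse.linalg import gmres

        def adaptive_gmres(A, b, k0=10, k_max=100, step=10):
            """Increase-only adaptive restart: grow k until GMRES(k) converges (sketch)."""
            k, x = k0, np.zeros_like(b, dtype=float)
            while k <= k_max:
                x, info = gmres(A, b, x0=x, restart=k, maxiter=50)  # SciPy's default tolerance
                if info == 0:          # converged
                    return x, k
                k += step              # convergence too slow: enlarge the restart value
            return x, k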

  1. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  2. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  3. Developing Information Power Grid Based Algorithms and Software

    NASA Technical Reports Server (NTRS)

    Dongarra, Jack

    1998-01-01

    This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.

  4. SIMULATING MAGNETOHYDRODYNAMICAL FLOW WITH CONSTRAINED TRANSPORT AND ADAPTIVE MESH REFINEMENT: ALGORITHMS AND TESTS OF THE AstroBEAR CODE

    SciTech Connect

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2009-06-15

    A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code is demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.

  5. An optimal and efficient new gridding algorithm using singular value decomposition.

    PubMed

    Rosenfeld, D

    1998-07-01

    The problem of handling data that falls on a nonequally spaced grid occurs in numerous fields of science, ranging from radio-astronomy to medical imaging. In MRI, this condition arises when sampling under time-varying gradients in sequences such as echo-planar imaging (EPI), spiral scans, or radial scans. The technique currently being used to interpolate the nonuniform samples onto a Cartesian grid is called the gridding algorithm. In this paper, a new method for uniform resampling is presented that is both optimal and efficient. It is first shown that the resampling problem can be formulated as a problem of solving a set of linear equations Ax = b, where x and b are vectors of the uniform and nonuniform samples, respectively, and A is a matrix of the sinc interpolation coefficients. In a procedure called Uniform Re-Sampling (URS), this set of equations is given an optimal solution using the pseudoinverse matrix which is computed using singular value decomposition (SVD). In large problems, this solution is neither practical nor computationally efficient. Another method is presented, called the Block Uniform Re-Sampling (BURS) algorithm, which decomposes the problem into solving a small set of linear equations for each uniform grid point. These equations are a subset of the original equations Ax = b and are once again solved using SVD. The final result is both optimal and computationally efficient. The results of the new method are compared with those obtained using the conventional gridding algorithm via simulations. PMID:9660548
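
    A toy one-dimensional version of the URS step described above can be built directly: form the sinc-interpolation matrix A between the uniform grid and the non-uniform sample locations, then recover the uniform samples with the SVD-based pseudoinverse. The signal, sample count, and sample positions below are arbitrary illustrative choices.

        import numpy as np

        rng = np.random.default_rng(1)
        n_uniform, n_samples = 32, 48
        t = np.sort(rng.uniform(0, n_uniform - 1, n_samples))      # non-uniform sample locations

        # A[i, j]: sinc interpolation coefficient from uniform grid point j to sample i.
        A = np.sinc(t[:, None] - np.arange(n_uniform)[None, :])

        x_true = np.cos(2 * np.pi * 3 * np.arange(n_uniform) / n_uniform)  # band-limited test signal
        b = A @ x_true                                              # simulated non-uniform samples

        # URS: optimal uniform samples via the pseudoinverse (computed through the SVD).
        x_urs = np.linalg.pinv(A) @ b
        print(np.max(np.abs(x_urs - x_true)))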

  6. FLAG: A multi-dimensional adaptive free-Lagrange code for fully unstructured grids

    SciTech Connect

    Burton, D.E.; Miller, D.S.; Palmer, T.

    1995-07-01

    The authors describe FLAG, a 3D adaptive free-Lagrange method for unstructured grids. The grid elements are 3D polygons that move with the flow and are refined or reconnected as necessary to achieve uniform accuracy. The authors stress that they were able to construct a 3D hydro version of this code in three months using an object-oriented FORTRAN approach.

  7. A Lagrangian-Eulerian finite element method with adaptive gridding for advection-dispersion problems

    SciTech Connect

    Ijiri, Y.; Karasaki, K.

    1994-02-01

    In the present paper, a Lagrangian-Eulerian finite element method with adaptive gridding for solving advection-dispersion equations is described. The code creates new grid points in the vicinity of sharp fronts at every time step in order to reduce numerical dispersion. The code yields quite accurate solutions for a wide range of mesh Peclet numbers and for mesh Courant numbers well in excess of 1.

  8. Unstructured Grid Adaptation: Status, Potential Impacts, and Recommended Investments Toward CFD Vision 2030

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Krakos, Joshua A.; Michal, Todd; Loseille, Adrien; Alonso, Juan J.

    2016-01-01

    Unstructured grid adaptation is a powerful tool to control discretization error for Computational Fluid Dynamics (CFD). It has enabled key increases in the accuracy, automation, and capacity of some fluid simulation applications. Slotnick et al. provide a number of case studies in the CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences to illustrate the current state of CFD capability and capacity. The authors forecast the potential impact of emerging High Performance Computing (HPC) environments anticipated for the year 2030 and identify that mesh generation and adaptivity continue to be significant bottlenecks in the CFD work flow. These bottlenecks may persist because very little government investment has been targeted in these areas. To motivate investment, the impacts of improved grid adaptation technologies are identified. The CFD Vision 2030 Study roadmap and anticipated capabilities in complementary disciplines are quoted to provide context for the progress made in grid adaptation in the past fifteen years, current status, and a forecast for the next fifteen years with recommended investments. These investments are specific to mesh adaptation and impact other aspects of the CFD process. Finally, a strategy is identified to diffuse grid adaptation technology into production CFD work flows.

  9. Adaptive Flocking of Robot Swarms: Algorithms and Properties

    NASA Astrophysics Data System (ADS)

    Lee, Geunho; Chong, Nak Young

    This paper presents a distributed approach for adaptive flocking of swarms of mobile robots that enables them to navigate autonomously in complex environments populated with obstacles. Based on observation of the swimming behavior of a school of fish, we propose an integrated algorithm that allows a swarm of robots to navigate in a coordinated manner, split into multiple swarms, or merge with other swarms according to the environmental conditions. We prove the convergence of the proposed algorithm using Lyapunov stability theory. We also verify the effectiveness of the algorithm through extensive simulations, where a swarm of robots repeats the process of splitting and merging while passing around multiple stationary and moving obstacles. The simulation results show that the proposed algorithm is scalable and robust to variations in the sensing capability of individual robots.

  10. Higher-order schemes with CIP method and adaptive Soroban grid towards mesh-free scheme

    NASA Astrophysics Data System (ADS)

    Yabe, Takashi; Mizoe, Hiroki; Takizawa, Kenji; Moriki, Hiroshi; Im, Hyo-Nam; Ogata, Youichi

    2004-02-01

    A new class of body-fitted grid system that can maintain third-order accuracy in time and space is proposed with the help of the CIP (constrained interpolation profile/cubic interpolated propagation) method. The grid system consists of straight lines and of grid points moving along these lines like the beads of an abacus (Soroban in Japanese). The length of each line and the number of grid points on each line can differ. The CIP scheme is well suited to this mesh system, and calculations at large CFL numbers (>10) on a locally refined mesh are easily performed. Mesh generation and the search for the upstream departure point are very simple, and an almost mesh-free treatment is possible. Adaptive grid movement and local mesh refinement are demonstrated.

  11. Grid coupling mechanism in the semi-implicit adaptive Multi-Level Multi-Domain method

    NASA Astrophysics Data System (ADS)

    Innocenti, M. E.; Tronci, C.; Markidis, S.; Lapenta, G.

    2016-05-01

    The Multi-Level Multi-Domain (MLMD) method is a semi-implicit adaptive method for Particle-In-Cell plasma simulations. It has previously been demonstrated in simulations of Maxwellian plasmas, electrostatic and electromagnetic instabilities, plasma expansion in vacuum, and magnetic reconnection [1, 2, 3]. On several occasions, the coupling between the coarse- and refined-grid solutions has been remarked upon, but the coupling mechanism itself has never been explored in depth. Here, we investigate the theoretical basis of grid coupling in the MLMD system. We obtain an evolution law for the electric field solution in the overlap area which highlights a dependence on the densities and currents from both the coarse and the refined grid, rather than from the coarse grid alone: grid coupling is obtained via densities and currents.

  12. Error-measure for anisotropic grid-adaptation in turbulence-resolving simulations

    NASA Astrophysics Data System (ADS)

    Toosi, Siavash; Larsson, Johan

    2015-11-01

    Grid-adaptation requires an error-measure that identifies where the grid should be refined. In the case of turbulence-resolving simulations (DES, LES, DNS), a simple error-measure is the small-scale resolved energy, which scales with both the modeled subgrid-stresses and the numerical truncation errors in many situations. Since this is a scalar measure, it does not carry any information on the anisotropy of the optimal grid-refinement. The purpose of this work is to introduce a new error-measure for turbulence-resolving simulations that is capable of predicting nearly-optimal anisotropic grids. Turbulent channel flow at Reτ ~ 300 is used to assess the performance of the proposed error-measure. The formulation is geometrically general, applicable to any type of unstructured grid.

  13. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  14. A self-adaptive-grid method with application to airfoil flow

    NASA Technical Reports Server (NTRS)

    Nakahashi, K.; Deiwert, G. S.

    1985-01-01

    A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
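
    The spring analogy can be caricatured in one dimension: each interval behaves like a spring whose stiffness follows a weight function (for example, the local solution gradient), so grid points cluster where the weight is large, subject to minimum and maximum spacing limits analogous to the user-specified parameters above. All constants here are assumptions.

        import numpy as np

        def spring_redistribute(x, weight, n_iter=200, dx_min=1e-3, dx_max=0.2):
            """1D spring-analogy grid adaptation sketch: spacing varies inversely with the weight."""
            x = x.copy()
            for _ in range(n_iter):
                w = 0.5 * (weight(x[:-1]) + weight(x[1:]))   # spring stiffness per interval
                dx = np.clip(1.0 / w, dx_min, dx_max)        # target spacings, bounded
                dx *= (x[-1] - x[0]) / dx.sum()              # rescale to preserve the domain length
                x[1:-1] = x[0] + np.cumsum(dx)[:-1]          # move the interior points
            return x

        x0 = np.linspace(0.0, 1.0, 21)
        weight = lambda s: 1.0 + 50.0 * np.exp(-((s - 0.5) / 0.05) ** 2)  # cluster near s = 0.5
        print(spring_redistribute(x0, weight))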

  15. Adaptive sensor array algorithm for structural health monitoring of helmet

    NASA Astrophysics Data System (ADS)

    Zou, Xiaotian; Tian, Ye; Wu, Nan; Sun, Kai; Wang, Xingwei

    2011-04-01

    The adaptive neural network is a standard technique used in nonlinear system estimation and learning applications for dynamic models. In this paper, we introduce an adaptive sensor fusion algorithm for a helmet structural health monitoring system. The system is used to study the effects of ballistic/blast events on the helmet and human skull. Installed inside the helmet is an optical fiber pressure sensor array. After implementing the adaptive estimation algorithm in the helmet system, a dynamic model for the sensor array has been developed. The dynamic response characteristics of the sensor network are estimated from the pressure data by applying an adaptive control algorithm using an artificial neural network. With the estimated parameters and position data from the dynamic model, the pressure distribution over the whole helmet can be calculated following the Bézier surface interpolation method. The distribution pattern inside the helmet will be very helpful for improving helmet design to provide better protection to soldiers from head injuries.

  16. A Solution Adaptive Structured/Unstructured Overset Grid Flow Solver with Applications to Helicopter Rotor Flows

    NASA Technical Reports Server (NTRS)

    Duque, Earl P. N.; Biswas, Rupak; Strawn, Roger C.

    1995-01-01

    This paper summarizes a method that solves both the three dimensional thin-layer Navier-Stokes equations and the Euler equations using overset structured and solution adaptive unstructured grids with applications to helicopter rotor flowfields. The overset structured grids use an implicit finite-difference method to solve the thin-layer Navier-Stokes/Euler equations while the unstructured grid uses an explicit finite-volume method to solve the Euler equations. Solutions on a helicopter rotor in hover show the ability to accurately convect the rotor wake. However, isotropic subdivision of the tetrahedral mesh rapidly increases the overall problem size.

  17. Estimating meme fitness in adaptive memetic algorithms for combinatorial problems.

    PubMed

    Smith, J E

    2012-01-01

    Among the most promising and active research areas in heuristic optimisation is the field of adaptive memetic algorithms (AMAs). These gain much of their reported robustness by adapting the probability with which each of a set of local improvement operators is applied, according to an estimate of their current value to the search process. This paper addresses the issue of how the current value should be estimated. Assuming the estimate occurs over several applications of a meme, we consider whether the extreme or mean improvements should be used, and whether this aggregation should be global, or local to some part of the solution space. To investigate these issues, we use the well-established COMA framework that coevolves the specification of a population of memes (representing different local search algorithms) alongside a population of candidate solutions to the problem at hand. Two very different memetic algorithms are considered: the first using adaptive operator pursuit to adjust the probabilities of applying a fixed set of memes, and a second which applies genetic operators to dynamically adapt and create memes and their functional definitions. For the latter, especially on combinatorial problems, credit assignment mechanisms based on historical records, or on notions of landscape locality, will have limited application, and it is necessary to estimate the value of a meme via some form of sampling. The results on a set of binary encoded combinatorial problems show that both methods are very effective, and that for some problems it is necessary to use thousands of variables in order to tease apart the differences between different reward schemes. However, for both memetic algorithms, a significant pattern emerges that reward based on mean improvement is better than that based on extreme improvement. This contradicts recent findings from adapting the parameters of operators involved in global evolutionary search. The results also show that local reward schemes

  18. Adaptive grid finite element model of the tokamak scrapeoff layer

    SciTech Connect

    Kuprat, A.P.; Glasser, A.H.

    1995-07-01

    The authors discuss unstructured grids for application to transport in the tokamak edge scrape-off layer (SOL). They have developed a new metric with which to judge element elongation and resolution requirements. Using this metric, the authors apply a standard moving finite element technique to advance the SOL equations while dynamically inserting/deleting nodes that violate an elongation criterion. In a tokamak plasma, this method achieves a more uniform accuracy and results in highly stretched triangular finite elements, except near the separatrix X-point, where transport is more isotropic.

  19. Minimizing the grid-resolution dependence of flow-routing algorithms for geomorphic applications

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.

    2010-10-01

    The results of flow-routing methods currently used in the geomorphic literature depend on grid resolution. This poses a problem for landscape evolution models, which must be independent of grid resolution to the greatest extent possible. Here I illustrate a refinement of currently-used flow-routing algorithms that yields unit contributing areas (i.e. contributing areas per unit width of flow) with minimal grid-resolution effects. I illustrate the application of this method in idealized topography, in high-resolution Digital Elevation Models (DEMs) of real-world topography, and by integration into a landscape evolution model for ridge-and-valley topography. The landscape evolution model produces grid-resolution-independent results in a more straightforward way than previous models for this type of landscape.
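
    The refinement itself is specific to the paper, but the baseline it improves on is standard D8 flow routing. As a minimal, hypothetical sketch (not Pelletier's method), the code below accumulates D8 contributing area on a tiny synthetic DEM and divides by the cell width as a crude contributing area per unit width.

    ```python
    # Minimal sketch: D8 flow accumulation on a small DEM, with accumulated area divided
    # by the cell width as a crude "unit contributing area".
    import numpy as np

    def d8_unit_contributing_area(dem, cell_size=1.0):
        ny, nx = dem.shape
        area = np.full((ny, nx), cell_size**2)          # each cell contributes its own area
        order = np.argsort(dem, axis=None)[::-1]        # process from highest to lowest cell
        for idx in order:
            i, j = divmod(idx, nx)
            best, target = 0.0, None                    # steepest-descent (D8) receiver
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == dj == 0:
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < ny and 0 <= nj < nx:
                        dist = cell_size * (2**0.5 if di and dj else 1.0)
                        slope = (dem[i, j] - dem[ni, nj]) / dist
                        if slope > best:
                            best, target = slope, (ni, nj)
            if target is not None:
                area[target] += area[i, j]              # pass accumulated area downslope
        return area / cell_size                          # contributing area per unit width

    dem = np.array([[5.0, 4.5, 4.0],
                    [4.8, 3.9, 3.5],
                    [4.6, 3.2, 1.0]])
    print(d8_unit_contributing_area(dem))
    ```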

  20. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among the wavefront control algorithms used in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes by pre-measuring the relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, and it offers excellent real-time performance and stability. However, as the number of wavefront sensor sub-apertures and deformable mirror actuators increases, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltage of each actuator is obtained by iteration, giving a significant advantage in computation and storage. For an AO system with thousands of actuators, the computational complexity of the direct gradient wavefront control algorithm is about O(n^2)-O(n^3), while that of the iterative wavefront control algorithm is about O(n)-O(n^(3/2)), where n is the number of actuators of the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage of the iterative wavefront control algorithm. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
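
    As a schematic comparison only (synthetic matrices, not an AO system model), the sketch below contrasts the direct least-squares reconstruction of actuator voltages from slopes with an iterative conjugate-gradient solve of the same normal equations; the iterative route needs only a few matrix-vector products per frame and no stored inverse.

    ```python
    # Sketch: direct least-squares reconstruction versus an iterative conjugate-gradient
    # solve of the normal equations D^T D v = D^T s (D and s are synthetic).
    import numpy as np

    rng = np.random.default_rng(0)
    n_slopes, n_act = 200, 50
    D = rng.normal(size=(n_slopes, n_act))   # synthetic actuator-to-slope influence matrix
    s = rng.normal(size=n_slopes)            # synthetic measured slopes

    # Direct method: a precomputed reconstructor is applied as a dense matrix-vector product.
    R = np.linalg.pinv(D)
    v_direct = R @ s

    # Iterative method: conjugate gradient on the normal equations.
    def conjugate_gradient(A, b, iters=100, tol=1e-10):
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        for _ in range(iters):
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol:
                break
            p = r_new + ((r_new @ r_new) / (r @ r)) * p
            r = r_new
        return x

    v_iter = conjugate_gradient(D.T @ D, D.T @ s)
    print(np.allclose(v_direct, v_iter, atol=1e-6))   # both give the least-squares voltages
    ```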

  1. Efficient implementation of the adaptive scale pixel decomposition algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Bhatnagar, S.; Rau, U.; Zhang, M.

    2016-08-01

    Context. Most popular algorithms in use to remove the effects of a telescope's point spread function (PSF) in radio astronomy are variants of the CLEAN algorithm. Most of these algorithms model the sky brightness using the delta-function basis, which results in undesired artefacts when used to image extended emission. The adaptive scale pixel decomposition (Asp-Clean) algorithm models the sky brightness on a scale-sensitive basis and thus gives a significantly better imaging performance when imaging fields that contain both resolved and unresolved emission. Aims: However, the runtime cost of Asp-Clean is higher than that of scale-insensitive algorithms. In this paper, we identify the most expensive step in the original Asp-Clean algorithm and present an efficient implementation of it, which significantly reduces the computational cost while keeping the imaging performance comparable to the original algorithm. The PSF sidelobe levels of modern wide-band telescopes are significantly reduced, allowing us to make approximations to reduce the computational cost, which in turn allows for the deconvolution of larger images on reasonable timescales. Methods: As in the original algorithm, scales in the image are estimated through function fitting. Here we introduce an analytical method to model extended emission, and a modified method for estimating the initial values used for the fitting procedure, which ultimately leads to a lower computational cost. Results: The new implementation was tested with simulated EVLA data and the imaging performance compared well with the original Asp-Clean algorithm. Tests show that the current algorithm can recover features at different scales with lower computational cost.

  2. Fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1986-01-01

    A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time-varying, and the noise can be non-Gaussian, non-stationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time. Such knowledge would be required for batch processing techniques, such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real-time confidence measure as to the accuracy of the estimator.
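
    The paper's algorithm is more general, but the flavour of recursive frequency estimation can be sketched with a scalar recursive least-squares update that tracks the linear-prediction coefficient a = 2*cos(omega) of a noisy sinusoid; all constants below are illustrative.

    ```python
    # Sketch: recursive least-squares tracking of a = 2*cos(omega) for a noisy sinusoid,
    # using the identity x[k] + x[k-2] = 2*cos(omega) * x[k-1]; the frequency estimate
    # improves as each new sample arrives, with no batch FFT and no stored observations.
    import numpy as np

    rng = np.random.default_rng(1)
    omega_true = 0.3                      # rad/sample
    n = 500
    x = np.cos(omega_true * np.arange(n)) + 0.05 * rng.normal(size=n)

    a_hat, P, lam = 0.0, 1e3, 0.99        # RLS state: estimate, inverse covariance, forgetting factor
    for k in range(2, n):
        phi = x[k - 1]                    # regressor
        y = x[k] + x[k - 2]               # target
        gain = P * phi / (lam + phi * P * phi)
        a_hat += gain * (y - phi * a_hat)
        P = (P - gain * phi * P) / lam

    omega_est = np.arccos(np.clip(a_hat / 2.0, -1.0, 1.0))
    print(omega_true, omega_est)
    ```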

  3. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM

    SciTech Connect

    Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.

  4. A Domain-Decomposed Multi-Level Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.; Nixon, David (Technical Monitor)

    1998-01-01

    The work presents a new on-the-fly domain decomposition technique for mapping grids and solution algorithms to parallel machines; it is applicable to both shared-memory and message-passing architectures. It will be demonstrated on the Cray T3E, HP Exemplar, and SGI Origin 2000. Computing time has been secured on all these platforms. The decomposition technique is an outgrowth of techniques used in computational physics for simulations of N-body problems and the event horizons of black holes, and has not previously been used by the CFD community. Since the technique offers on-the-fly partitioning, it provides a substantial increase in flexibility for computing in heterogeneous environments, where the number of available processors may not be known at the time of job submission. In addition, since it is dynamic, it permits the job to be repartitioned without global communication in cases where additional processors become available after the simulation has begun, or in cases where dynamic mesh adaptation changes the mesh size during the course of a simulation. The platform for this partitioning strategy is a completely new Cartesian Euler solver targeted at parallel machines, which may be used in conjunction with Ames' "Cart3D" arbitrary geometry simulation package.

  5. An efficient Bayesian inference approach to inverse problems based on an adaptive sparse grid collocation method

    NASA Astrophysics Data System (ADS)

    Ma, Xiang; Zabaras, Nicholas

    2009-03-01

    A new approach to modeling inverse problems using a Bayesian inference method is introduced. The Bayesian approach considers the unknown parameters as random variables and seeks the probabilistic distribution of the unknowns. By introducing the concept of the stochastic prior state space to the Bayesian formulation, we reformulate the deterministic forward problem as a stochastic one. The adaptive hierarchical sparse grid collocation (ASGC) method is used for constructing an interpolant to the solution of the forward model in this prior space which is large enough to capture all the variability/uncertainty in the posterior distribution of the unknown parameters. This solution can be considered as a function of the random unknowns and serves as a stochastic surrogate model for the likelihood calculation. Hierarchical Bayesian formulation is used to derive the posterior probability density function (PPDF). The spatial model is represented as a convolution of a smooth kernel and a Markov random field. The state space of the PPDF is explored using Markov chain Monte Carlo algorithms to obtain statistics of the unknowns. The likelihood calculation is performed by directly sampling the approximate stochastic solution obtained through the ASGC method. The technique is assessed on two nonlinear inverse problems: source inversion and permeability estimation in flow through porous media.

  6. Landsat ecosystem disturbance adaptive processing system (LEDAPS) algorithm description

    USGS Publications Warehouse

    Schmidt, Gail; Jenkerson, Calli; Masek, Jeffrey; Vermote, Eric; Gao, Feng

    2013-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) software was originally developed by the National Aeronautics and Space Administration–Goddard Space Flight Center and the University of Maryland to produce top-of-atmosphere reflectance from Landsat Thematic Mapper and Enhanced Thematic Mapper Plus Level 1 digital numbers and to apply atmospheric corrections to generate a surface-reflectance product. The U.S. Geological Survey (USGS) has adopted the LEDAPS algorithm for producing the Landsat Surface Reflectance Climate Data Record. This report discusses the LEDAPS algorithm as implemented by the USGS.

  7. Adaptive Grid Based Localized Learning for Multidimensional Data

    ERIC Educational Resources Information Center

    Saini, Sheetal

    2012-01-01

    Rapid advances in data-rich domains of science, technology, and business have amplified the computational challenges of "Big Data" synthesis needed to slow the widening gap between the rate at which data are collected and the rate at which they are analyzed for knowledge. This has led to a renewed need for efficient and accurate algorithms, framework,…

  8. Adapting a commercial power system simulator for smart grid based system study and vulnerability assessment

    NASA Astrophysics Data System (ADS)

    Navaratne, Uditha Sudheera

    The smart grid is the future of the power grid. Smart meters and the associated network play a major role in the distributed system of the smart grid. Advanced Metering Infrastructure (AMI) can enhance the reliability of the grid, create opportunities for efficient energy management, and enable many innovations around the future smart grid. These innovations require intense research not only on the AMI network itself but also on the influence an AMI network can have on the rest of the power grid. This research describes a smart meter testbed with hardware in the loop that can facilitate future research on AMI networks. The smart meters in the testbed were developed so that their functionality can be customized to simulate any given scenario, such as integrating new hardware components into a smart meter or developing new encryption algorithms in firmware. These smart meters were integrated into the power system simulator to simulate the power flow variation in the power grid under different AMI activities. Each smart meter in the network also provides a communication interface to the home area network. This research delivers a testbed for emulating AMI activities and monitoring their effect on the smart grid.

  9. A De-Centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Arora, Manish; Das, Sajal K.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is de-centralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.

  10. A De-centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Arora, Manish; Das, Sajal K.; Biswas, Rupak

    2002-01-01

    In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper, we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is decentralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.

  11. Adaptive experiments with a multivariate Elo-type algorithm.

    PubMed

    Doebler, Philipp; Alavash, Mohsen; Giessing, Carsten

    2015-06-01

    The present article introduces the multivariate Elo-type algorithm (META), which is inspired by the Elo rating system, a tool for the measurement of the performance of chess players. The META is intended for adaptive experiments with correlated traits. The relationship of the META to other existing procedures is explained, and useful variants and modifications are discussed. The META was investigated within three simulation studies. The gain in efficiency of the univariate Elo-type algorithm was compared to standard univariate procedures; the impact of using correlational information in the META was quantified; and the adaptability to learning and fatigue was investigated. Our results show that the META is a powerful tool to efficiently control task performance in a short time period and to assess correlated traits. The R code of the simulations, the implementation of the META in MATLAB, and an example of how to use the META in the context of neuroscience are provided in supplemental materials. PMID:24878597
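
    The META itself is multivariate; as a hedged, univariate illustration of the underlying Elo-type update, the sketch below adapts an item difficulty while tracking a participant's ability from simulated binary responses (all parameter values hypothetical).

    ```python
    # Sketch of a univariate Elo-style update for adaptive experiments: the ability
    # estimate and the item difficulty are nudged in opposite directions after each trial.
    import math, random

    def elo_update(theta, delta, correct, k=0.1):
        """One Elo-type step: theta = ability, delta = item difficulty, correct in {0, 1}."""
        p_expected = 1.0 / (1.0 + math.exp(-(theta - delta)))   # logistic expectation
        theta += k * (correct - p_expected)
        delta -= k * (correct - p_expected)
        return theta, delta

    random.seed(0)
    true_ability, theta, delta = 1.2, 0.0, 0.0
    for trial in range(200):
        p_true = 1.0 / (1.0 + math.exp(-(true_ability - delta)))
        correct = 1 if random.random() < p_true else 0
        theta, delta = elo_update(theta, delta, correct)
    print(theta)   # drifts towards the neighbourhood of the true ability
    ```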

  12. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.

  13. Parallel Implementation of an Adaptive Scheme for 3D Unstructured Grids on the SP2

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Strawn, Roger C.

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10% of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  14. Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2

    NASA Technical Reports Server (NTRS)

    Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  15. On some limitations of adaptive feedback measurement algorithm

    NASA Astrophysics Data System (ADS)

    Opalski, Leszek J.

    2015-09-01

    The brilliant idea of Adaptive Feedback Control Systems (AFCS) makes possible the creation of highly efficient adaptive systems for estimation, identification and filtering of signals and physical processes. The research problem considered in this paper is how the performance of an AFCS changes if some of the assumptions used to formulate the iterative estimation algorithm are not fulfilled exactly. To limit the scope of the research, a particular implementation of the AFCS concept was considered, namely an adaptive feedback measurement system (AFMS). The iterative measurement algorithm used was derived under some idealized conditions, notably perfect knowledge of the system model and Gaussian communication channels. The selected non-idealities of interest are the non-zero mean value of the noise processes and the non-ideal calibration of the transmission gain in the forward channel, because these are related to intrinsic non-idealities of the analog building blocks used for the AFMS implementation. The presented original analysis of the iterative measurement algorithm provides quantitative information on the speed of convergence and the limiting behavior. The analysis should be useful for AFCS implementers in the measurement area, since the results are presented in terms of the accuracy and precision of the iterative measurement process.

  16. A kernel adaptive algorithm for quaternion-valued inputs.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations. PMID:25594982
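
    Quat-KLMS operates on quaternion inputs in a quaternion RKHS; the sketch below is only the real-valued Gaussian-kernel LMS skeleton it builds on (predict from a growing dictionary of past inputs, then store the scaled error as the new weight), with a synthetic nonlinear target.

    ```python
    # Sketch of a real-valued kernel LMS filter (not the quaternion Quat-KLMS).
    import numpy as np

    def gaussian_kernel(x, y, sigma=0.5):
        return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

    def klms(inputs, targets, eta=0.5):
        dictionary, weights, errors = [], [], []
        for x, d in zip(inputs, targets):
            y = sum(w * gaussian_kernel(x, c) for w, c in zip(weights, dictionary))
            e = d - y                      # prediction error
            dictionary.append(x)           # store the input as a new kernel centre
            weights.append(eta * e)        # its weight is the scaled error (KLMS update)
            errors.append(e)
        return np.array(errors)

    rng = np.random.default_rng(2)
    X = rng.uniform(-1, 1, size=(300, 2))
    d = np.sin(np.pi * X[:, 0]) * X[:, 1] + 0.01 * rng.normal(size=300)   # nonlinear target
    err = klms(X, d)
    # mean absolute error typically shrinks as the filter learns
    print(abs(err[:20]).mean(), abs(err[-20:]).mean())
    ```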

  17. Adaptive Load-Balancing Algorithms Using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Biswas, Rupak; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In a distributed-computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three novel SBN-based load-balancing algorithms, and implement them on an SP2. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that these algorithms are very effective in balancing system load while minimizing processor idle time. They also compare favorably with several other existing load-balancing techniques. Additional experiments performed with real data demonstrate that the SBN approach is effective in adaptive computational science and engineering applications where dynamic load balancing is crucial.

  18. A local adaptive discretization algorithm for Smoothed Particle Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Spreng, Fabian; Schnabel, Dirk; Mueller, Alexandra; Eberhard, Peter

    2014-06-01

    In this paper, an extension to the Smoothed Particle Hydrodynamics (SPH) method is proposed that allows for an adaptation of the discretization level of a simulated continuum at runtime. By combining a local adaptive refinement technique with a newly developed coarsening algorithm, one is able to improve the accuracy of the simulation results while reducing the required computational cost at the same time. For this purpose, the number of particles is, on the one hand, adaptively increased in critical areas of a simulation model. Typically, these are areas that show a relatively low particle density and high gradients in stress or temperature. On the other hand, the number of SPH particles is decreased for domains with a high particle density and low gradients. Besides a brief introduction to the basic principle of the SPH discretization method, the extensions to the original formulation providing such a local adaptive refinement and coarsening of the modeled structure are presented in this paper. After having introduced its theoretical background, the applicability of the enhanced formulation, as well as the benefit gained from the adaptive model discretization, is demonstrated in the context of four different simulation scenarios focusing on solid continua. While presenting the results found for these examples, several properties of the proposed adaptive technique are discussed, e.g. the conservation of momentum as well as the existing correlation between the chosen refinement and coarsening patterns and the observed quality of the results.

  19. Adaptive Firefly Algorithm: Parameter Analysis and its Application

    PubMed Central

    Cheung, Ngaam J.; Ding, Xue-Ming; Shen, Hong-Bin

    2014-01-01

    As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate the parameter selection and adaptation strategies in a modified firefly algorithm - the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa, including (1) a distance-based light absorption coefficient; (2) a gray coefficient enabling fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated over widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to the real-world problem of protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with average root mean square deviations of less than 0.4 Å and 1.5 Å from the native constraints under noise-free and 10% Gaussian white noise conditions, respectively. PMID:25397812

  20. Discrete-time minimal control synthesis adaptive algorithm

    NASA Astrophysics Data System (ADS)

    di Bernardo, M.; di Gennaro, F.; Olm, J. M.; Santini, S.

    2010-12-01

    This article proposes a discrete-time Minimal Control Synthesis (MCS) algorithm for a class of single-input single-output discrete-time systems written in controllable canonical form. As it happens with the continuous-time MCS strategy, the algorithm arises from the family of hyperstability-based discrete-time model reference adaptive controllers introduced in (Landau, Y. (1979), Adaptive Control: The Model Reference Approach, New York: Marcel Dekker, Inc.) and is able to ensure tracking of the states of a given reference model with minimal knowledge about the plant. The control design shows robustness to parameter uncertainties, slow parameter variation and matched disturbances. Furthermore, it is proved that the proposed discrete-time MCS algorithm can be used to control discretised continuous-time plants with the same performance features. Contrary to previous discrete-time implementations of the continuous-time MCS algorithm, here a formal proof of asymptotic stability is given for generic n-dimensional plants in controllable canonical form. The theoretical approach is validated by means of simulation results.

  1. Adaptive firefly algorithm: parameter analysis and its application.

    PubMed

    Cheung, Ngaam J; Ding, Xue-Ming; Shen, Hong-Bin

    2014-01-01

    As a nature-inspired search algorithm, the firefly algorithm (FA) has several control parameters, which may have great effects on its performance. In this study, we investigate the parameter selection and adaptation strategies in a modified firefly algorithm - the adaptive firefly algorithm (AdaFa). There are three strategies in AdaFa, including (1) a distance-based light absorption coefficient; (2) a gray coefficient enabling fireflies to share difference information from attractive ones efficiently; and (3) five different dynamic strategies for the randomization parameter. Promising selections of parameters in the strategies are analyzed to guarantee the efficient performance of AdaFa. AdaFa is validated over widely used benchmark functions, and the numerical experiments and statistical tests yield useful conclusions on the strategies and the parameter selections affecting the performance of AdaFa. When applied to the real-world problem of protein tertiary structure prediction, the results demonstrated that the improved variants can rebuild the tertiary structure with average root mean square deviations of less than 0.4 Å and 1.5 Å from the native constraints under noise-free and 10% Gaussian white noise conditions, respectively. PMID:25397812
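
    AdaFa's adaptation rules are described in the paper; the sketch below is a plain, non-adaptive firefly algorithm showing where the parameters AdaFa adapts (absorption gamma, attractiveness beta0, randomization alpha) enter the canonical position update. The test function and settings are illustrative.

    ```python
    # Minimal standard firefly algorithm (not AdaFa itself) minimizing a test function.
    import numpy as np

    def firefly_minimize(f, dim=2, n=20, iters=200, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, size=(n, dim))
        fitness = np.apply_along_axis(f, 1, x)
        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if fitness[j] < fitness[i]:             # move i towards brighter firefly j
                        r2 = np.sum((x[i] - x[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                        x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                        fitness[i] = f(x[i])
        best = np.argmin(fitness)
        return x[best], fitness[best]

    sphere = lambda v: float(np.sum(v ** 2))
    print(firefly_minimize(sphere))   # best position approaches the origin
    ```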

  2. Grid-Adapted FUN3D Computations for the Second High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Rumsey, C. L.; Park, M. A.

    2014-01-01

    Contributions of the unstructured Reynolds-averaged Navier-Stokes code FUN3D to the 2nd AIAA CFD High Lift Prediction Workshop are described, and detailed comparisons are made with experimental data. Using workshop-supplied grids, results for the clean wing configuration are compared with results from the structured code CFL3D. Using the same turbulence model, both codes compare reasonably well in terms of total forces and moments, and the maximum lift is similarly over-predicted by both codes compared to experiment. By including more representative geometry features such as slat and flap brackets and slat pressure tube bundles, FUN3D captures the general effects of the Reynolds number variation, but under-predicts maximum lift on workshop-supplied grids in comparison with the experimental data, due to excessive separation. However, when output-based, off-body grid adaptation in FUN3D is employed, results improve considerably. In particular, when the geometry includes both brackets and the pressure tube bundles, grid adaptation results in a more accurate prediction of lift near stall in comparison with the wind-tunnel data. Furthermore, a rotation-corrected turbulence model shows improved pressure predictions on the outboard span when using adapted grids.

  3. Generalized pattern search algorithms with adaptive precision function evaluations

    SciTech Connect

    Polak, Elijah; Wetter, Michael

    2003-05-14

    In the literature on generalized pattern search algorithms, convergence to a stationary point of a once continuously differentiable cost function is established under the assumption that the cost function can be evaluated exactly. However, there is a large class of engineering problems where the numerical evaluation of the cost function involves the solution of systems of differential algebraic equations. Since the termination criteria of the numerical solvers often depend on the design parameters, computer code for solving these systems usually defines a numerical approximation to the cost function that is discontinuous with respect to the design parameters. Standard generalized pattern search algorithms have been applied heuristically to such problems, but no convergence properties have been stated. In this paper we extend a class of generalized pattern search algorithms to a form that uses adaptive precision approximations to the cost function. These numerical approximations need not define a continuous function. Our algorithms can be used for solving linearly constrained problems with cost functions that are at least locally Lipschitz continuous. Assuming that the cost function is smooth, we prove that our algorithms converge to a stationary point. Under the weaker assumption that the cost function is only locally Lipschitz continuous, we show that our algorithms converge to points at which the Clarke generalized directional derivatives are nonnegative in predefined directions. An important feature of our adaptive precision scheme is the use of coarse approximations in the early iterations, with the approximation precision controlled by a test. Such an approach leads to substantial time savings in minimizing computationally expensive functions.

  4. Application of a solution adaptive grid scheme, SAGE, to complex three-dimensional flows

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1991-01-01

    A new three-dimensional (3D) adaptive grid code based on the algebraic, solution-adaptive scheme of Nakahashi and Deiwert is developed and applied to a variety of problems. The new computer code, SAGE, is an extension of the same-named two-dimensional (2D) solution-adaptive program that has already proven to be a powerful tool in computational fluid dynamics applications. The new code has been applied to a range of complex three-dimensional, supersonic and hypersonic flows. Examples discussed are a tandem-slot fuel injector, the hypersonic forebody of the Aeroassist Flight Experiment (AFE), the 3D base flow behind the AFE, the supersonic flow around a 3D swept ramp and a generic, hypersonic, 3D nozzle-plume flow. The associated adapted grids and the solution enhancements resulting from the grid adaption are presented for these cases. Three-dimensional adaption is more complex than its 2D counterpart, and the complexities unique to the 3D problems are discussed.

  5. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  6. Self-adaptive Fault-Tolerance of HLA-Based Simulations in the Grid Environment

    NASA Astrophysics Data System (ADS)

    Huang, Jijie; Chai, Xudong; Zhang, Lin; Li, Bo Hu

    The objects of an HLA-based simulation can access model services to update their attributes. However, the grid server may become overloaded and refuse to let the model service handle object accesses. Because these objects accessed the model service during the last simulation loop and their intermediate state is stored on that server, such a refusal may terminate the simulation. A fault-tolerance mechanism must therefore be introduced into the simulations. The traditional fault-tolerance methods cannot meet this need, however, because the transmission latency between a federate and the RTI in a grid environment varies from several hundred milliseconds to several seconds. By adding model service URLs to the OMT and expanding the HLA services and model services with some interfaces, this paper proposes a self-adaptive fault-tolerance mechanism for simulations based on the characteristics of federates accessing model services. Benchmark experiments indicate that the expanded HLA/RTI enables simulations to run self-adaptively in the grid environment.

  7. AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT

    EPA Science Inventory

    A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. Application of this approach to two benchmark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...

  8. White Light Schlieren Optics Using Bacteriorhodopsin as an Adaptive Image Grid

    NASA Technical Reports Server (NTRS)

    Peale, Robert; Ruffin, Boh; Donahue, Jeff; Barrett, Carolyn

    1996-01-01

    A Schlieren apparatus using a bacteriorhodopsin film as an adaptive image grid with white light illumination is demonstrated for the first time. The time dependent spectral properties of the film are characterized. Potential applications include a single-ended Schlieren system for leak detection.

  9. Algebraic grid adaptation method using non-uniform rational B-spline surface modeling

    NASA Technical Reports Server (NTRS)

    Yang, Jiann-Cherng; Soni, B. K.

    1992-01-01

    An algebraic adaptive grid system based on the equidistribution law and utilizing a Non-Uniform Rational B-Spline (NURBS) surface for redistribution is presented. A weight function, utilizing a properly weighted Boolean sum of various flow field characteristics, is developed. Computational examples are presented to demonstrate the success of this technique.
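
    The NURBS redistribution step is specific to the paper, but the equidistribution law itself is easy to sketch in one dimension: move grid points so that each cell carries an equal share of the integrated weight function. The weight below is a hypothetical stand-in for a flow-field feature.

    ```python
    # Minimal 1D equidistribution sketch: redistribute grid points so the integral of a
    # weight function is equal in every cell (the NURBS surface redistribution is omitted).
    import numpy as np

    def equidistribute(x_old, weight, n_new=None):
        """Redistribute points x_old so that the integral of `weight` is equal per cell."""
        n_new = n_new or len(x_old)
        w = weight(x_old)
        # cumulative "mass" of the weight function along the old grid (trapezoidal rule)
        cum = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x_old))))
        targets = np.linspace(0.0, cum[-1], n_new)
        return np.interp(targets, cum, x_old)       # invert the cumulative mass

    # weight mimicking a steep solution gradient near x = 0.5
    weight = lambda x: 1.0 + 50.0 * np.exp(-200.0 * (x - 0.5) ** 2)
    x_uniform = np.linspace(0.0, 1.0, 41)
    x_adapted = equidistribute(x_uniform, weight)
    print(np.min(np.diff(x_adapted)), np.max(np.diff(x_adapted)))  # points cluster near x = 0.5
    ```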

  10. Generalized Monge-Kantorovich optimization for grid generation and adaptation in LP

    SciTech Connect

    Delzanno, G L; Finn, J M

    2009-01-01

    The Monge-Kantorovich grid generation and adaptation scheme is generalized from a variational principle based on L2 to a variational principle based on Lp. A generalized Monge-Ampère (MA) equation is derived and its properties are discussed. Results for p > 1 are obtained and compared in terms of the quality of the resulting grid. We conclude that, for the grid generation application, the formulation based on Lp for p close to unity leads to serious problems associated with the boundary. Results for 1.5 ≲ p ≲ 2.5 are quite good, but there is a fairly narrow range around p = 2 where the results are close to optimal with respect to grid distortion. Furthermore, the Newton-Krylov methods used to solve the generalized MA equation perform best for p = 2.

  11. GENOCOP Algorithm and Hierarchical Grid Transformation for Image Warping of Two-Dimensional Gel Electrophoretic Maps.

    PubMed

    Robotti, Elisa; Marengo, Emilio; Demartini, Marco

    2016-01-01

    Hierarchical grid transformation is a powerful hierarchical approach to 2-D map warping, able to model both global and local deformations. The algorithm can be stopped when a desired degree of accuracy in the image alignment is obtained. The deformed image is warped and aligned to the target image using a grid in which the number of nodes increases at each step of the algorithm. The numerical optimization of the positions of the grid nodes can be performed efficiently by genetic algorithms, ensuring that the optimal node positions are found at a low computational cost with respect to other methods. Here, the optimization of the node positions is carried out by GENOCOP (genetic algorithm for numerical optimization of constrained problems), refined by a subsequent conjugate gradient optimization step. The modeling of the warped space is then achieved by a spline model in which some constraints are introduced in the choice of the nodes that are moved. The whole procedure can be viewed as an evolutionary method that models the deformation of the gel map at different levels of detail. PMID:26611415

  12. Efficient solution of the Euler and Navier-Stokes equations with a vectorized multiple-grid algorithm

    NASA Technical Reports Server (NTRS)

    Chima, R. V.; Johnson, G. M.

    1983-01-01

    A multiple-grid algorithm for use in efficiently obtaining steady solutions to the Euler and Navier-Stokes equations is presented. The convergence of the explicit MacCormack algorithm on a fine grid is accelerated by propagating transients out of the domain using a sequence of successively coarser grids. Both the fine and coarse grid schemes are readily vectorizable. The combination of multiple-gridding and vectorization results in substantially reduced computational times for the numerical solution of a wide range of flow problems. Results are presented for subsonic, transonic, and supersonic inviscid flows and for subsonic attached and separated laminar viscous flows. Work reduction factors over a scalar, single-grid algorithm range as high as 76.8. Previously announced in STAR as N83-24467

  13. The Design of Flux-Corrected Transport (FCT) Algorithms for Structured Grids

    NASA Astrophysics Data System (ADS)

    Zalesak, Steven T.

    A given flux-corrected transport (FCT) algorithm consists of three components: (1) a high order algorithm to which it reduces in smooth parts of the flow; (2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and (3) a flux limiter which calculates the weights assigned to the high and low order fluxes in various regions of the flow field. One way of optimizing an FCT algorithm is to optimize each of these three components individually. We present some of the ideas that have been developed over the past 30 years toward this end. These include the use of very high order spatial operators in the design of the high order fluxes, non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy. This chapter confines itself to the design of FCT algorithms for structured grids, using a finite volume formalism, for this is the area with which the present author is most familiar. The reader will find excellent material on the design of FCT algorithms for unstructured grids, using both finite volume and finite element formalisms, in the chapters by Professors Löhner, Baum, Kuzmin, Turek, and Möller in the present volume.
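
    As a compact, hedged illustration of the three components named above (not any specific scheme from the chapter), the sketch below performs one FCT step for 1-D linear advection on a periodic grid: an upwind low-order flux, a Lax-Wendroff high-order flux, and a Zalesak-style limiter on the antidiffusive fluxes.

    ```python
    # One FCT step for u_t + c u_x = 0 on a periodic grid (c > 0), illustrative only.
    import numpy as np

    def fct_step(u, nu):
        """nu = c*dt/dx is the Courant number, 0 < nu <= 1."""
        up1, um1 = np.roll(u, -1), np.roll(u, 1)
        f_low = u                                    # upwind flux / c at interface i+1/2
        f_high = u + 0.5 * (1.0 - nu) * (up1 - u)    # Lax-Wendroff flux / c
        a = nu * (f_high - f_low)                    # antidiffusive flux, already times dt/dx

        u_td = u - nu * (f_low - np.roll(f_low, 1))  # low-order (monotone) update

        # local bounds from the old and transported-diffused solutions
        u_max = np.maximum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1), um1, u, up1])
        u_min = np.minimum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1), um1, u, up1])

        a_m1 = np.roll(a, 1)
        p_plus = np.maximum(a_m1, 0.0) - np.minimum(a, 0.0)     # antidiffusion into cell i
        p_minus = np.maximum(a, 0.0) - np.minimum(a_m1, 0.0)    # antidiffusion out of cell i
        with np.errstate(divide="ignore", invalid="ignore"):
            r_plus = np.where(p_plus > 0.0, np.minimum(1.0, (u_max - u_td) / p_plus), 0.0)
            r_minus = np.where(p_minus > 0.0, np.minimum(1.0, (u_td - u_min) / p_minus), 0.0)

        c_lim = np.where(a >= 0.0,
                         np.minimum(np.roll(r_plus, -1), r_minus),
                         np.minimum(r_plus, np.roll(r_minus, -1)))
        return u_td - (c_lim * a - np.roll(c_lim * a, 1))

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)    # square pulse
    for _ in range(100):
        u = fct_step(u, nu=0.5)
    print(u.min(), u.max())                           # solution stays within [0, 1]
    ```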

  14. G/SPLINES: A hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's genetic algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, David

    1991-01-01

    G/SPLINES are a hybrid of Friedman's Multivariate Adaptive Regression Splines (MARS) algorithm with Holland's Genetic Algorithm. In this hybrid, the incremental search is replaced by a genetic search. The G/SPLINE algorithm exhibits performance comparable to that of the MARS algorithm, requires fewer least squares computations, and allows significantly larger problems to be considered.

  15. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

    SciTech Connect

    Jablonowski, Christiane

    2015-07-14

    The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway for modeling these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest, like tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project

  16. Emergent Adaptive Noise Reduction from Communal Cooperation of Sensor Grid

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Jones, Michael G.; Nark, Douglas M.; Lodding, Kenneth N.

    2010-01-01

    In the last decade, the realization of small, inexpensive, and powerful devices with sensors, computers, and wireless communication has promised the development of massive sensor networks, with dense deployments over large areas, capable of high-fidelity situational assessments. However, most management models have been based on centralized control, and research has concentrated on methods for passing data from sensor devices to the central controller. Most implementations have been small, but because it is not scalable, this methodology is insufficient for massive deployments. Here, a specific application of a large sensor network for adaptive noise reduction demonstrates a new paradigm in which communities of sensor/computer devices assess local conditions and make local decisions from which a global behaviour emerges. This approach obviates many of the problems of centralized control, as it is not prone to a single point of failure and is more scalable, efficient, robust, and fault tolerant.

  17. Carving and adaptive drainage enforcement of grid digital elevation models

    NASA Astrophysics Data System (ADS)

    Soille, Pierre; Vogt, Jürgen; Colombo, Roberto

    2003-12-01

    An effective and widely used method for removing spurious pits in digital elevation models consists of filling them until they overflow. However, this method sometimes creates large flat regions which in turn pose a problem for the determination of accurate flow directions. In this study, we propose to suppress each pit by creating a descending path from it to the nearest point having a lower elevation value. This is achieved by carving, i.e., lowering, the terrain elevations along the detected path. Carving paths are identified through a flooding simulation starting from the river outlets. The proposed approach allows for adaptive drainage enforcement whereby river networks coming from other data sources are imposed to the digital elevation model only in places where the automatic river network extraction deviates substantially from the known networks. An improvement to methods for routing flow over flat regions is also introduced. Detailed results are presented over test areas of the Danube basin.

  18. A dynamically adaptive multigrid algorithm for the incompressible Navier-Stokes equations: Validation and model problems

    NASA Technical Reports Server (NTRS)

    Thompson, C. P.; Leaf, G. K.; Vanrosendale, J.

    1991-01-01

    An algorithm is described for the solution of the laminar, incompressible Navier-Stokes equations. The basic algorithm is a multigrid based on a robust, box-based smoothing step. Its most important feature is the incorporation of automatic, dynamic mesh refinement. This algorithm supports generalized simple domains. The program is based on a standard staggered-grid formulation of the Navier-Stokes equations for robustness and efficiency. Special grid transfer operators were introduced at grid interfaces in the multigrid algorithm to ensure discrete mass conservation. Results are presented for three models: the driven-cavity, a backward-facing step, and a sudden expansion/contraction.

  19. Algorithms for the automatic generation of 2-D structured multi-block grids

    NASA Technical Reports Server (NTRS)

    Schoenfeld, Thilo; Weinerfelt, Per; Jenssen, Carl B.

    1995-01-01

    Two different approaches to the fully automatic generation of structured multi-block grids in two dimensions are presented. The work aims to simplify the user interactivity necessary for the definition of a multiple block grid topology. The first approach is based on an advancing front method commonly used for the generation of unstructured grids. The original algorithm has been modified toward the generation of large quadrilateral elements. The second method is based on the divide-and-conquer paradigm with the global domain recursively partitioned into sub-domains. For either method each of the resulting blocks is then meshed using transfinite interpolation and elliptic smoothing. The applicability of these methods to practical problems is demonstrated for typical geometries of fluid dynamics.

  20. Analysis of adaptive algorithms for an integrated communication network

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Barr, Matthew; Chong-Kwon, Kim

    1985-01-01

    Techniques were examined that trade communication bandwidth for decreased transmission delays. When the network is lightly used, these schemes attempt to use additional network resources to decrease communication delays. As the network utilization rises, the schemes degrade gracefully, still providing service but with minimal use of the network. Because the schemes use a combination of circuit and packet switching, they should respond to variations in the types and amounts of network traffic. Also, a combination of circuit and packet switching to support the widely varying traffic demands imposed on an integrated network was investigated. The packet switched component is best suited to bursty traffic where some delays in delivery are acceptable. The circuit switched component is reserved for traffic that must meet real time constraints. Selected packet routing algorithms that might be used in an integrated network were simulated. An integrated traffic places widely varying workload demands on a network. Adaptive algorithms were identified, ones that respond to both the transient and evolutionary changes that arise in integrated networks. A new algorithm was developed, hybrid weighted routing, that adapts to workload changes.

  1. Solving large-scale real-world telecommunication problems using a grid-based genetic algorithm

    NASA Astrophysics Data System (ADS)

    Luna, Francisco; Nebro, Antonio; Alba, Enrique; Durillo, Juan

    2008-11-01

    This article analyses the use of a grid-based genetic algorithm (GrEA) to solve a real-world instance of a problem from the telecommunication domain. The problem, known as automatic frequency planning (AFP), is used in a global system for mobile communications (GSM) networks to assign a number of fixed frequencies to a set of GSM transceivers located in the antennae of a cellular phone network. Real data instances of the AFP are very difficult to solve owing to the NP-hard nature of the problem, so combining grid computing and metaheuristics turns out to be a way to provide satisfactory solutions in a reasonable amount of time. GrEA has been deployed on a grid with up to 300 processors to solve an AFP instance of 2612 transceivers. The results not only show that significant running time reductions are achieved, but that the search capability of GrEA clearly outperforms that of the equivalent non-grid algorithm.

  2. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    Both the propagation simulation method and the choice of mesh grid are very important for obtaining correct results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer dictated by the propagation method but is freely alterable. However, the choice of mesh grid on the target board directly influences the validity of the simulation results. Therefore, an adaptive mesh-choosing method based on wave characteristics is proposed together with the introduced propagation method. Appropriate mesh grids on the target board can be calculated to obtain satisfactory results. For a complex initial wave field, or for propagation through inhomogeneous media, the mesh grid can likewise be calculated and set rationally according to the above method. Finally, comparison with theoretical results shows that simulations with the proposed method coincide with theory. Comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method is able to adapt to a wider range of Fresnel number conditions; that is, it can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity. It can therefore provide better support for wave propagation applications such as atmospheric optics, laser propagation, and so on.
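
    The adaptive mesh selection is the paper's contribution; the fixed-grid baseline it builds on is the standard FFT-based angular spectrum propagator, sketched below with illustrative grid and beam parameters.

    ```python
    # Fixed-grid angular spectrum propagation via FFT (baseline, not the adaptive method).
    import numpy as np

    def angular_spectrum_propagate(u0, wavelength, dx, z):
        """Propagate the complex field u0 (square grid, spacing dx) a distance z."""
        n = u0.shape[0]
        k = 2.0 * np.pi / wavelength
        fx = np.fft.fftfreq(n, d=dx)
        fxx, fyy = np.meshgrid(fx, fx)
        kz_sq = k**2 - (2.0 * np.pi * fxx) ** 2 - (2.0 * np.pi * fyy) ** 2
        kz = np.sqrt(np.maximum(kz_sq, 0.0))
        transfer = np.exp(1j * kz * z) * (kz_sq > 0.0)   # evanescent components suppressed
        return np.fft.ifft2(np.fft.fft2(u0) * transfer)

    # illustrative example: a Gaussian beam propagated 0.1 m at 633 nm
    n, dx = 512, 10e-6
    x = (np.arange(n) - n / 2) * dx
    xx, yy = np.meshgrid(x, x)
    u0 = np.exp(-(xx**2 + yy**2) / (0.5e-3) ** 2)
    u1 = angular_spectrum_propagate(u0, 633e-9, dx, z=0.1)
    print(np.abs(u1).max())
    ```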

  3. Vortical Flow Prediction using an Adaptive Unstructured Grid Method. Chapter 11

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2009-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving vortical flow problems. The first test case concerns vortex flow over a simple 65-degree delta wing with different values of leading-edge radius. Although the geometry is quite simple, it poses a challenging problem for computing vortices originating from blunt leading edges. The second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  4. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We investigate first the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
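
    The selection–mutation loop described above can be illustrated with a toy adaptive multilevel splitting (AMS) implementation; the double-well Langevin model, the reaction coordinate ξ(x) = x, and all parameters below are invented for illustration and are not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def step_path(x0, dt=1e-3, beta=3.0, nmax=5000, a=-1.2, b=1.0):
        """Overdamped Langevin path in the double well V(x) = (x^2 - 1)^2,
        run until it reaches B (x >= b) or returns deep into A (x <= a)."""
        xs = [x0]
        x = x0
        for _ in range(nmax):
            x += -4.0 * x * (x * x - 1.0) * dt + np.sqrt(2.0 * dt / beta) * rng.normal()
            xs.append(x)
            if x >= b or x <= a:
                break
        return np.array(xs)

    def ams(n_traj=30, b=1.0, max_iter=500):
        """Toy adaptive multilevel splitting with reaction coordinate xi(x) = x:
        kill the path with the lowest maximum level, rebranch it from a
        surviving path at that level, and accumulate the splitting weight."""
        paths = [step_path(-1.0) for _ in range(n_traj)]
        weight = 1.0
        for _ in range(max_iter):
            levels = np.array([p.max() for p in paths])
            if levels.min() >= b:                       # every path has reached B
                break
            worst = int(np.argmin(levels))
            z = levels[worst]                           # level being discarded
            weight *= (n_traj - 1) / n_traj
            donor = paths[rng.choice([i for i in range(n_traj) if i != worst])]
            k = int(np.argmax(donor >= z))              # first crossing of level z
            paths[worst] = np.concatenate([donor[:k + 1], step_path(donor[k])[1:]])
        return weight                                   # rough estimate of P(reach B before deep A)

    print("transition probability estimate:", ams())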

  5. Characterization of atmospheric contaminant sources using adaptive evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Cervone, Guido; Franzese, Pasquale; Grajdeanu, Adrian

    2010-10-01

    The characteristics of an unknown source of emissions in the atmosphere are identified using an Adaptive Evolutionary Strategy (AES) methodology based on ground concentration measurements and a Gaussian plume model. The AES methodology selects an initial set of source characteristics including position, size, mass emission rate, and wind direction, from which a forward dispersion simulation is performed. The error between the simulated concentrations from the tentative source and the observed ground measurements is calculated. Then the AES algorithm prescribes the next tentative set of source characteristics. The iteration proceeds towards minimum error, corresponding to convergence towards the real source. The proposed methodology was used to identify the source characteristics of 12 releases from the Prairie Grass field experiment of dispersion, two for each atmospheric stability class, ranging from very unstable to stable atmosphere. The AES algorithm was found to have advantages over a simple canonical ES and a Monte Carlo (MC) method which were used as benchmarks.
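
    A minimal sketch of the general idea, not the paper's AES methodology: a (1+1) evolution strategy with a simple step-size adaptation rule (the 1/5th success rule) fits the source position and emission rate by minimizing the misfit between a simplified ground-level Gaussian plume and synthetic observations; the plume parameterization, receptor layout and numerical values are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def plume(params, receptors, u=5.0):
        """Very simplified ground-level Gaussian plume (full reflection,
        sigma_y = sigma_z = 0.1 * downwind distance); params = (x, y, Q)."""
        xs, ys, q = params
        dx = receptors[:, 0] - xs                      # downwind distance
        dy = receptors[:, 1] - ys                      # crosswind offset
        sig = np.maximum(0.1 * dx, 1e-3)
        c = q / (np.pi * u * sig * sig) * np.exp(-dy**2 / (2.0 * sig**2))
        return np.where(dx > 0.0, c, 0.0)              # nothing upwind of the source

    def one_plus_one_es(observed, receptors, n_iter=2000):
        """(1+1) evolution strategy with a 1/5th-success-rule step-size update."""
        x = np.array([0.0, 0.0, 1.0])                  # initial guess: position, rate
        step = np.array([50.0, 50.0, 0.5])
        err = np.sum((plume(x, receptors) - observed) ** 2)
        for _ in range(n_iter):
            cand = x + step * rng.normal(size=3)
            cand_err = np.sum((plume(cand, receptors) - observed) ** 2)
            success = cand_err < err
            if success:
                x, err = cand, cand_err
            step *= 1.5 if success else 1.5 ** -0.25   # stationary near 1/5 success rate
        return x

    receptors = rng.uniform([100.0, -200.0], [1000.0, 200.0], size=(30, 2))
    true_source = np.array([20.0, 10.0, 3.0])
    observed = plume(true_source, receptors)
    print("estimated source (x, y, Q):", one_plus_one_es(observed, receptors))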

  6. A brief comparison between grid based real space algorithms and spectrum algorithms for electronic structure calculations

    SciTech Connect

    Wang, Lin-Wang

    2006-12-01

    Quantum mechanical ab initio calculations constitute the biggest portion of the computer time in materials science and chemical science simulations. For a computer center like NERSC, to better serve these communities, it is very useful to have a prediction of the future trends of ab initio calculations in these areas. Such a prediction can help us to decide what future computer architectures will be most useful for these communities, and what should be emphasized in future supercomputer procurement. As the size of the computers and the size of the simulated physical systems increase, there is a renewed interest in using the real space grid method in electronic structure calculations. This is fueled by two factors. First, it is generally assumed that the real space grid method is more suitable for parallel computation because of its limited communication requirement, compared with spectral methods where a global FFT is required. Second, as the size N of the calculated system increases together with the computer power, O(N) scaling approaches become more favorable than the traditional direct O(N^3) scaling methods. These O(N) methods are usually based on localized orbitals in real space, which can be described more naturally by a real space basis. In this report, the author compares the real space methods with the traditional plane wave (PW) spectral methods, considering their technical pros and cons and possible future trends. For the real space methods, the author focuses on the regular grid finite difference (FD) method and the finite element (FE) method. These are the methods used mostly in materials science simulation. As for chemical science, the predominant methods are still the Gaussian basis method, and sometimes the atomic orbital basis method. These two basis sets are localized in real space, and there is no indication that their roles in quantum chemical simulation will change anytime soon. The author focuses on density functional theory (DFT), which is the

  7. The development and application of the self-adaptive grid code, SAGE

    NASA Astrophysics Data System (ADS)

    Davies, Carol B.

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  8. The development and application of the self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.

    1993-01-01

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  9. A novel LTE scheduling algorithm for green technology in smart grid.

    PubMed

    Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid

    2015-01-01

    Smart grid (SG) applications are being used nowadays to meet the demand of increasing power consumption. SG applications are considered a perfect solution for combining renewable energy resources and the electrical grid by means of creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy systems, namely distribution automation (DA), distributed energy system-storage (DER) and electric vehicle (EV), are investigated in order to study their suitability in a Long Term Evolution (LTE) network. To compensate for the weaknesses in the existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on the application's priority, whereas the algorithm makes scheduling decisions based on dynamic weighting factors of multiple criteria to satisfy the quality-of-service demands (delay, past average throughput and instantaneous transmission rate). Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and lower packet loss rate for DA and DER as well as providing a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7%, and 9% better performance compared to the exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively. PMID:25830703
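
    The scheduling decision described above can be pictured with a generic multi-criteria priority metric in the spirit of M-LWDF-style schedulers; this is not the paper's algorithm, and the flow fields, weights and numbers below are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Flow:
        name: str
        hol_delay: float      # head-of-line packet delay [s]
        delay_budget: float   # QoS delay budget [s]
        inst_rate: float      # instantaneous achievable rate [bit/s]
        avg_rate: float       # past average throughput [bit/s]
        priority: float       # application priority weight (e.g. DA > DER > EV)

    def metric(f: Flow) -> float:
        """Dynamic multi-criteria priority: urgency x spectral efficiency x priority."""
        urgency = f.hol_delay / f.delay_budget           # grows as the deadline nears
        efficiency = f.inst_rate / max(f.avg_rate, 1.0)  # proportional-fair term
        return f.priority * urgency * efficiency

    flows = [
        Flow("DA", 0.08, 0.10, 2e6, 1e6, 3.0),
        Flow("DER", 0.20, 0.50, 1e6, 8e5, 2.0),
        Flow("EV", 0.50, 5.00, 3e6, 2e6, 1.0),
    ]
    # assign the next resource block to the flow with the largest metric
    print("scheduled flow:", max(flows, key=metric).name)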

  10. A Novel LTE Scheduling Algorithm for Green Technology in Smart Grid

    PubMed Central

    Hindia, Mohammad Nour; Reza, Ahmed Wasif; Noordin, Kamarul Ariffin; Chayon, Muhammad Hasibur Rashid

    2015-01-01

    Smart grid (SG) applications are being used nowadays to meet the demand of increasing power consumption. SG applications are considered a perfect solution for combining renewable energy resources and the electrical grid by means of creating a bidirectional communication channel between the two systems. In this paper, three SG applications applicable to renewable energy systems, namely distribution automation (DA), distributed energy system-storage (DER) and electric vehicle (EV), are investigated in order to study their suitability in a Long Term Evolution (LTE) network. To compensate for the weaknesses in the existing scheduling algorithms, a novel bandwidth estimation and allocation technique and a new scheduling algorithm are proposed. The technique allocates available network resources based on the application's priority, whereas the algorithm makes scheduling decisions based on dynamic weighting factors of multiple criteria to satisfy the quality-of-service demands (delay, past average throughput and instantaneous transmission rate). Finally, the simulation results demonstrate that the proposed mechanism achieves higher throughput, lower delay and lower packet loss rate for DA and DER as well as providing a degree of service for EV. In terms of fairness, the proposed algorithm shows 3%, 7%, and 9% better performance compared to the exponential rule (EXP-Rule), modified-largest weighted delay first (M-LWDF) and exponential/PF (EXP/PF), respectively. PMID:25830703

  11. An interactive adaptive remeshing algorithm for the two-dimensional Euler equations

    NASA Technical Reports Server (NTRS)

    Slack, David C.; Walters, Robert W.; Lohner, R.

    1990-01-01

    An interactive adaptive remeshing algorithm utilizing a frontal grid generator and a variety of time integration schemes for the two-dimensional Euler equations on unstructured meshes is presented. Several device dependent interactive graphics interfaces have been developed along with a device independent DI-3000 interface which can be employed on any computer that has the supporting software including the Cray-2 supercomputers Voyager and Navier. The time integration methods available include: an explicit four stage Runge-Kutta and a fully implicit LU decomposition. A cell-centered finite volume upwind scheme utilizing Roe's approximate Riemann solver is developed. To obtain higher order accurate results a monotone linear reconstruction procedure proposed by Barth is utilized. Results for flow over a transonic circular arc and flow through a supersonic nozzle are examined.

  12. Path Planning Algorithms for the Adaptive Sensor Fleet

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric; Hosler, Jeff

    2005-01-01

    The Adaptive Sensor Fleet (ASF) is a general purpose fleet management and planning system being developed by NASA in coordination with NOAA. The current mission of ASF is to provide the capability for autonomous cooperative survey and sampling of dynamic oceanographic phenomena such as current systems and algae blooms. Each ASF vessel is a software model that represents a real world platform that carries a variety of sensors. The OASIS platform will provide the first physical vessel, outfitted with the systems and payloads necessary to execute the oceanographic observations described in this paper. The ASF architecture is being designed for extensibility to accommodate heterogeneous fleet elements, and is not limited to using the OASIS platform to acquire data. This paper describes the path planning algorithms developed for the acquisition phase of a typical ASF task. Given a polygonal target region to be surveyed, the region is subdivided according to the number of vessels in the fleet. The subdivision algorithm seeks a solution in which all subregions have equal area and minimum mean radius. Once the subregions are defined, a dynamic programming method is used to find a minimum-time path for each vessel from its initial position to its assigned region. This path plan includes the effects of water currents as well as avoidance of known obstacles. A fleet-level planning algorithm then shuffles the individual vessel assignments to find the overall solution which puts all vessels in their assigned regions in the minimum time. This shuffle algorithm may be described as a process of elimination on the sorted list of permutations of a cost matrix. All these path planning algorithms are facilitated by discretizing the region of interest onto a hexagonal tiling.
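
    The fleet-level shuffle step described above amounts to searching permutations of a vessel-to-region cost matrix for the assignment that minimizes the time until the last vessel is on station; the sketch below illustrates that idea with an invented transit-time matrix and is not the ASF implementation.

    import itertools
    import numpy as np

    # transit_time[i, j]: time for vessel i to reach region j (illustrative values)
    transit_time = np.array([
        [4.0, 9.0, 7.0],
        [8.0, 3.0, 6.0],
        [5.0, 7.0, 2.0],
    ])

    def best_assignment(cost):
        """Shuffle vessel-to-region assignments (all permutations of the cost
        matrix columns) and keep the one minimizing the time at which the last
        vessel reaches its assigned region."""
        n = cost.shape[0]
        best_perm, best_makespan = None, np.inf
        for perm in itertools.permutations(range(n)):
            makespan = max(cost[i, perm[i]] for i in range(n))
            if makespan < best_makespan:
                best_perm, best_makespan = perm, makespan
        return best_perm, best_makespan

    perm, makespan = best_assignment(transit_time)
    print("region for each vessel:", perm, "- all on station after:", makespan)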

  13. Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceangraphic Data

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan

    1997-01-01

    A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous Ftp a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.

  14. Radiation transport calculations on unstructured grids using a spatially decomposed and threaded algorithm

    SciTech Connect

    Nemanic, M K; Nowak, P

    1999-04-12

    We consider the solution of time-dependent, energy-dependent, discrete ordinates, and nonlinear radiative transfer problems on three-dimensional unstructured spatial grids. We discuss the solution of this class of transport problems, using the code TETON, on large distributed-memory multinode computers having multiple processors per "node" (e.g. the IBM-SP). We discuss the use of both spatial decomposition using message passing between "nodes" and a threading algorithm in angle on each "node". We present timing studies to show how this algorithm scales to hundreds and thousands of processors. We also present an energy group "batching" algorithm that greatly enhances cache performance. Our conclusion, after considering cache performance, storage limitations and dependencies inherent in the physics, is that a model that uses a combination of message-passing and threading is superior to one that uses message-passing alone. We present numerical evidence to support our conclusion.

  15. Wavefront sensors and algorithms for adaptive optical systems

    NASA Astrophysics Data System (ADS)

    Lukin, V. P.; Botygina, N. N.; Emaleev, O. N.; Konyaev, P. A.

    2010-07-01

    The results of recent work related to techniques and algorithms for wave-front (WF) measurement using Shack-Hartmann sensors show their high efficiency in solving very different problems of applied optics. The goal of this paper was to develop a sensitive Shack-Hartmann sensor with high-precision WF measurement capability on the basis of modern optical element fabrication technology and new efficient methods and computational algorithms for WF reconstruction. Shack-Hartmann sensors sensitive to small WF aberrations are used in adaptive optical systems to compensate the wave distortions caused by atmospheric turbulence. A high-precision Shack-Hartmann WF sensor has been developed on the basis of a low-aperture off-axis diffraction lens array. The device is capable of measuring WF slopes at array sub-apertures of size 640×640 μm with an error not exceeding 4.80 arcsec (0.15 pixel), which corresponds to a standard deviation of 0.017λ for the reconstructed WF at wavelength λ. A modification of this sensor for the adaptive system of a solar telescope, using extended scenes such as sunspots, pores, solar granulation and the limb as tracking objects, is also presented. The software package developed for the proposed WF sensors includes three algorithms for local WF slope estimation (modified centroids, normalized cross-correlation and fast Fourier demodulation), as well as three methods of WF reconstruction (modal Zernike polynomial expansion, deformable mirror response function expansion and phase unwrapping), which can be selected during operation in accordance with the application.
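
    A minimal sketch of the background-thresholded (modified) centroid slope estimate, one of the three local-slope algorithms named above; the subaperture layout, threshold fraction and test data are illustrative assumptions rather than the sensor's actual processing chain.

    import numpy as np

    def subaperture_slopes(frame, ref, nsub, thresh_frac=0.1):
        """Background-thresholded centroiding over an nsub x nsub lenslet array:
        returns x/y spot displacements (pixels) relative to reference centroids
        `ref` of shape (nsub, nsub, 2)."""
        h = frame.shape[0] // nsub                        # pixels per subaperture
        slopes = np.zeros((nsub, nsub, 2))
        for i in range(nsub):
            for j in range(nsub):
                sub = frame[i*h:(i+1)*h, j*h:(j+1)*h].astype(float)
                sub -= sub.min()
                sub[sub < thresh_frac * sub.max()] = 0.0  # suppress background
                total = sub.sum()
                if total == 0.0:
                    continue                              # dark subaperture: no spot
                yy, xx = np.mgrid[0:h, 0:h]
                cy, cx = (sub * yy).sum() / total, (sub * xx).sum() / total
                slopes[i, j] = (cx - ref[i, j, 0], cy - ref[i, j, 1])
        return slopes

    # illustrative use on a noisy synthetic frame with reference spots assumed
    # at the centre of each 8x8-pixel subaperture
    frame = np.random.default_rng(0).poisson(5.0, size=(64, 64))
    ref = np.full((8, 8, 2), (64 // 8 - 1) / 2.0)
    print(subaperture_slopes(frame, ref, nsub=8).shape)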

  16. A novel adaptive multi-resolution combined watermarking algorithm

    NASA Astrophysics Data System (ADS)

    Feng, Gui; Lin, QiWei

    2008-04-01

    The rapid development of IT and WWW techniques means that people frequently confront various kinds of authorization and identification problems, especially the copyright problem of digital products. The digital watermarking technique emerged as one kind of solution. The balance between robustness and imperceptibility is the objective always sought by researchers in this field. In order to address this trade-off between robustness and imperceptibility, a novel adaptive multi-resolution combined digital image watermarking algorithm is proposed in this paper. In the proposed algorithm, the watermark is first decomposed into several sub-bands, and each sub-band is embedded, according to its significance, into different DWT coefficients of the carrier image. The human visual system (HVS) is taken into account during embedding, so a larger watermark capacity can be embedded while preserving image quality. The experimental results show that the proposed algorithm has better performance in terms of robustness and security, and that for the same visual quality the technique has larger capacity. Thus the unification of robustness and imperceptibility is achieved.

  17. Double-layer evolutionary algorithm for distributed optimization of particle detection on the Grid

    NASA Astrophysics Data System (ADS)

    Padée, Adam; Kurek, Krzysztof; Zaremba, Krzysztof

    2013-08-01

    Reconstruction of particle tracks from information collected by position-sensitive detectors is an important procedure in HEP experiments. It is usually controlled by a set of numerical parameters which have to be manually optimized. This paper proposes an automatic approach to this task by utilizing an evolutionary algorithm (EA) operating on both real-valued and binary representations. Because of the computational complexity of the task, a special distributed architecture of the algorithm is proposed, designed to be run in a grid environment. It is a two-level hierarchical hybrid utilizing an asynchronous master-slave EA at the level of clusters and an island-model EA at the level of the grid. The technical aspects of using production grid infrastructure are covered, including communication protocols on both levels. The paper also deals with the problem of heterogeneity of the resources, presenting efficiency tests on a benchmark function. These tests confirm that even relatively small islands (clusters) can be beneficial to the optimization process when connected to the larger ones. Finally, a real-life usage example is presented, which is an optimization of track reconstruction in the Large Angle Spectrometer of the NA-58 COMPASS experiment at CERN, using a sample of Monte Carlo simulated data. The overall reconstruction efficiency gain achieved by the proposed method is more than 4%, compared to the manually optimized parameters.

  18. Grid digital elevation model based algorithms for determination of hillslope width functions through flow distance transforms

    NASA Astrophysics Data System (ADS)

    Liu, Jintao; Chen, Xi; Zhang, Xingnan; Hoagland, Kyle D.

    2012-04-01

    Recently developed hillslope storage dynamics theory can represent the essential physical behavior of a natural system by accounting explicitly for the plan shape of a hillslope in an elegant and simple way. As a result, this theory is promising for improving catchment-scale hydrologic modeling. In this study, grid digital elevation model (DEM) based algorithms for determination of hillslope geometric characteristics (e.g., hillslope units and width functions in hillslope storage dynamics models) are presented. This study further develops a method for hillslope partitioning, established by Fan and Bras (1998), by applying it on a grid network. On the basis of hillslope unit derivation, a flow distance transforms method (TD∞) is suggested in order to decrease the systematic error of grid DEM-based flow distance calculation caused by flow direction approximation to streamlines. Hillslope width transfer functions are then derived to convert the probability density functions of flow distance into hillslope width functions. These algorithms are applied and evaluated on five abstract hillslopes, and detailed tests and analyses are carried out by comparing the derivation results with theoretical width functions. The results demonstrate that the TD∞ improves estimations of the flow distance and thus hillslope width function. As the proposed procedures are further applied in a natural catchment, we find that the natural hillslope width function can be well fitted by the Gaussian function. This finding is very important for applying the newly developed hillslope storage dynamics models in a real catchment.

  19. Applying the uniform resampling (URS) algorithm to a lissajous trajectory: fast image reconstruction with optimal gridding.

    PubMed

    Moriguchi, H; Wendt, M; Duerk, J L

    2000-11-01

    Various kinds of nonrectilinear Cartesian k-space trajectories have been studied, such as spiral, circular, and rosette trajectories. Although the nonrectilinear Cartesian sampling techniques generally have the advantage of fast data acquisition, the gridding process prior to 2D-FFT image reconstruction usually requires a number of additional calculations, thus necessitating an increase in the computation time. Further, the reconstructed image often exhibits artifacts resulting from both the k-space sampling pattern and the gridding procedure. To date, it has been demonstrated in only a few studies that the special geometric sampling patterns of certain specific trajectories facilitate fast image reconstruction. In other words, the inherent link among the trajectory, the sampling scheme, and the associated complexity of the regridding/reconstruction process has been investigated to only a limited extent. In this study, it is demonstrated that a Lissajous trajectory has the special geometric characteristics necessary for rapid reconstruction of nonrectilinear Cartesian k-space trajectories with constant sampling time intervals. Because of the applicability of a uniform resampling (URS) algorithm, a high-quality reconstructed image is obtained in a short reconstruction time when compared to other gridding algorithms. PMID:11064412

  20. The generalized frequency-domain adaptive filtering algorithm as an approximation of the block recursive least-squares algorithm

    NASA Astrophysics Data System (ADS)

    Schneider, Martin; Kellermann, Walter

    2016-01-01

    Acoustic echo cancellation (AEC) is a well-known application of adaptive filters in communication acoustics. To implement AEC for multichannel reproduction systems, powerful adaptation algorithms like the generalized frequency-domain adaptive filtering (GFDAF) algorithm are required for satisfactory convergence behavior. In this paper, the GFDAF algorithm is rigorously derived as an approximation of the block recursive least-squares (RLS) algorithm. Thereby, the original formulation of the GFDAF algorithm is generalized while avoiding an error present in the original derivation. The presented algorithm formulation is applied to pruned transform-domain loudspeaker-enclosure-microphone models in a mathematically consistent manner. Such pruned models have recently been proposed to cope with the tremendous computational demands of massive multichannel AEC. Beyond its generalization, a regularization of the GFDAF is shown to have a close relation to the well-known block least-mean-squares algorithm.

  1. LC-Grid: a linear global contact search algorithm for finite element analysis

    NASA Astrophysics Data System (ADS)

    Chen, Hu; Lei, Zhou; Zang, Mengyan

    2014-11-01

    Contact searching is computationally intensive and its memory requirement is highly demanding; therefore, it is important to develop an efficient contact search algorithm that requires less memory. In this paper, we propose an efficient global contact search algorithm with linear complexity in terms of computational cost and memory requirement for the finite element analysis of contact problems. This algorithm is named LC-Grid (Lei devised the algorithm and Chen implemented it). The contact space is decomposed; thereafter, all contact nodes and segments are first mapped onto layers, then onto rows and lastly onto cells. At each mapping level, the linked-list technique is used for efficient storage and retrieval of contact nodes and segments. The contact detection is performed in each non-empty cell along non-empty rows in each non-empty layer, and moves to the next non-empty layer once a layer is completed. The use of a migration strategy makes the algorithm insensitive to mesh size. The properties of this algorithm are investigated and its cost is numerically verified to be linearly proportional to the number of contact segments. In addition, the ideal ranges of two significant scale factors, the cell size and the buffer zone, which strongly affect computational efficiency, are determined via an illustrative example.
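
    The cell-mapping idea behind LC-Grid can be illustrated with a simple uniform-cell broad-phase search that hashes nodes and segment midpoints into cells and tests only same-or-neighbouring cells; the dictionary-of-buckets structure below stands in for the layer/row/cell linked lists of the paper, and all names and data are invented.

    from collections import defaultdict
    import numpy as np

    def cell_contact_candidates(nodes, segments, cell_size):
        """Broad-phase search: hash segment midpoints into uniform cells and,
        for every node, test only segments registered in the same or the 26
        neighbouring cells. (A production code would register long segments in
        every cell they overlap, as LC-Grid effectively does.)"""
        buckets = defaultdict(list)                       # cell index -> segment ids
        for s, seg in enumerate(segments):
            key = tuple(np.floor(seg.mean(axis=0) / cell_size).astype(int))
            buckets[key].append(s)

        offsets = [(i, j, k) for i in (-1, 0, 1) for j in (-1, 0, 1) for k in (-1, 0, 1)]
        candidates = []
        for n, p in enumerate(nodes):
            base = np.floor(p / cell_size).astype(int)
            for off in offsets:
                key = (base[0] + off[0], base[1] + off[1], base[2] + off[2])
                for s in buckets.get(key, ()):
                    candidates.append((n, s))             # narrow-phase test would follow
        return candidates

    rng = np.random.default_rng(0)
    nodes = rng.uniform(0.0, 10.0, size=(200, 3))
    segments = rng.uniform(0.0, 10.0, size=(300, 2, 3))
    print("candidate pairs:", len(cell_contact_candidates(nodes, segments, cell_size=1.0)))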

  2. Iso-deviant 2D gridding with efficient adaptive gridder for littoral environments (EAGLE)

    NASA Astrophysics Data System (ADS)

    Rike, Erik R.; Delbalzo, Donald R.

    2005-09-01

    Transmission loss (TL) computations in littoral areas require a dense spatial and azimuthal grid to achieve acceptable accuracy and detail. The computational cost of accurate predictions led to a new concept, OGRES (Objective Grid/Radials using Environmentally-sensitive Selection), which produces sparse, irregular acoustic grids, with controlled accuracy. Recent work to further increase accuracy and efficiency with better metrics and interpolation led to EAGLE (Efficient Adaptive Gridder for Littoral Environments). On each iteration, EAGLE produces grids with approximately constant spatial uncertainty (hence, iso-deviance), yielding predictions with ever-increasing resolution and accuracy. The EAGLE point-selection mechanism is tested using the predictive error metric and 2D synthetic data sets created from combinations of simple signal functions (e.g., polynomials, sines, cosines, exponentials), along with white and chromatic noise. The speed, efficiency, fidelity, and iso-deviance of EAGLE are determined for each combination of signal, noise, and interpolator. The results show significant efficiency enhancements compared to uniform grids of the same accuracy. [Work sponsored by NAVAIR.]

  3. A Competency-Based Guided-Learning Algorithm Applied on Adaptively Guiding E-Learning

    ERIC Educational Resources Information Center

    Hsu, Wei-Chih; Li, Cheng-Hsiu

    2015-01-01

    This paper presents a new algorithm called competency-based guided-learning algorithm (CBGLA), which can be applied on adaptively guiding e-learning. Computational process analysis and mathematical derivation of competency-based learning (CBL) were used to develop the CBGLA. The proposed algorithm could generate an effective adaptively guiding…

  4. A Patched-Grid Algorithm for Complex Configurations Directed Towards the F/A-18 Aircraft

    NASA Technical Reports Server (NTRS)

    Thomas, James L.; Walters, Robert W.; Reu, Taekyu; Ghaffari, Farhad; Weston, Robert P.; Luckring, James M.

    1989-01-01

    A patched-grid algorithm for the analysis of complex configurations with an implicit, upwind-biased Navier-Stokes solver is presented. Results from both a spatial-flux and a time-flux conservation approach to patching across zonal boundaries are presented. A generalized coordinate transformation with a biquadratic geometric element is used at the zonal interface in order to treat highly stretched viscous grids and arbitrarily-shaped zonal boundaries. Applications are made to the F-18 forebody-strake configuration at subsonic, high-alpha conditions. Computed surface flow patterns compare well with ground-based and flight-test results; the large effect of Reynolds number on the forebody flow-field is shown.

  5. Time-domain analysis of planar microstrip devices using a generalized Yee-algorithm based on unstructured grids

    NASA Technical Reports Server (NTRS)

    Gedney, Stephen D.; Lansing, Faiza

    1993-01-01

    The generalized Yee-algorithm is presented for the temporal full-wave analysis of planar microstrip devices. This algorithm has the significant advantage over the traditional Yee-algorithm in that it is based on unstructured and irregular grids. The strength of the generalized Yee-algorithm is that structures containing curved conductors or complex three-dimensional geometries can be modeled more accurately, and much more conveniently, using standard automatic grid generation techniques. The generalized Yee-algorithm is based on the time-marching solution of the discrete form of Maxwell's equations in their integral form. To this end, the electric and magnetic fields are discretized over a dual, irregular, and unstructured grid. The primary grid is assumed to be composed of general fitted polyhedra distributed throughout the volume. The secondary grid (or dual grid) is built up of the closed polyhedra whose edges connect the centroids of adjacent primary cells, penetrating shared faces. Faraday's law and Ampere's law are used to update the fields normal to the primary and secondary grid faces, respectively. Subsequently, a correction scheme is introduced to project the normal fields onto the grid edges. It is shown that this scheme is stable, maintains second-order accuracy, and preserves the divergenceless nature of the flux densities. Finally, for computational efficiency the algorithm is structured as a series of sparse matrix-vector multiplications. Based on this scheme, the generalized Yee-algorithm has been implemented on vector and parallel high performance computers in a highly efficient manner.

  6. Temporal-adaptive Euler/Navier-Stokes algorithm for unsteady aerodynamic analysis of airfoils using unstructured dynamic meshes

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Williams, Marc H.; Batina, John T.

    1990-01-01

    A temporal adaptive algorithm for the time-integration of the two-dimensional Euler or Navier-Stokes equations is presented. The flow solver involves an upwind flux-split spatial discretization for the convective terms and central differencing for the shear-stress and heat flux terms on an unstructured mesh of triangles. The temporal adaptive algorithm is a time-accurate integration procedure which allows flows with high spatial and temporal gradients to be computed efficiently by advancing each grid cell near its maximum allowable time step. Results indicate that an appreciable computational savings can be achieved for both inviscid and viscous unsteady airfoil problems using unstructured meshes without degrading spatial or temporal accuracy.

  7. X3D moving grid methods for semiconductor applications

    SciTech Connect

    Kuprat, A.; Cartwright, D.; Gammel, J.T.; George, D.; Kendrick, B.; Kilcrease, D.; Trease, H.; Walker, R.

    1997-11-01

    The Los Alamos 3D grid toolbox handles grid maintenance chores and provides access to a sophisticated set of optimization algorithms for unstructured grids. The application of these tools to semiconductor problems is illustrated in three examples: grain growth, topographic deposition and electrostatics. These examples demonstrate adaptive smoothing, front tracking, and automatic, adaptive refinement/derefinement.

  8. An adaptive quadrature-free implementation of the high-order spectral volume method on unstructured grids

    NASA Astrophysics Data System (ADS)

    Harris, Robert Evan

    2008-10-01

    An efficient implementation of the high-order spectral volume (SV) method is presented for multi-dimensional conservation laws on unstructured grids. In the SV method, each simplex cell is called a spectral volume (SV), and the SV is further subdivided into polygonal (2D), or polyhedral (3D) control volumes (CVs) to support high-order data reconstructions. In the traditional implementation, Gauss quadrature formulas are used to approximate the flux integrals on all faces. In the new approach, a nodal set is selected and used to reconstruct a high-order polynomial approximation for the flux vector, and then the flux integrals on the internal faces are computed analytically, without the need for Gauss quadrature formulas. This gives a significant advantage over the traditional SV method in efficiency and ease of implementation. Fundamental properties of the new SV implementation are studied and high-order accuracy is demonstrated for linear and nonlinear advection equations, and the Euler equations. The new quadrature-free approach is then extended to handle local adaptive hp-refinement (grid and order refinement). Efficient edge-based adaptation utilizing a binary tree search algorithm is employed. Several different adaptation criteria which focus computational effort near high gradient regions are presented. Both h- and p-refinements are presented in a general framework where it is possible to perform either or both on any grid cell at any time. Several well-known inviscid flow test cases, subjected to various levels of adaptation, are utilized to demonstrate the effectiveness of the method. An analysis of the accuracy and stability properties of the spectral volume (SV) method is then presented. The current work seeks to address the issue of stability, as well as polynomial quality, in the design of SV partitions. A new approach is presented, which efficiently locates stable partitions by means of constrained minimization. Once stable partitions are located, a

  9. Analysis of the Multi Strategy Goal Programming for Micro-Grid Based on Dynamic ant Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Qiu, J. P.; Niu, D. X.

    Micro-grids are one of the key technologies for future energy supplies. Taking the economics, reliability, and environmental impact of micro-grid planning as a basis, multi-strategy objective programming problems are analyzed for a micro-grid containing wind power, solar power, batteries and a micro gas turbine. Mathematical models of the generation characteristics and energy dissipation of each source are established, and the multi-objective micro-grid planning function under different operating strategies is converted into a single-objective model based on the AHP method. An example analysis shows that the optimal power output of this model can be obtained using the combined dynamic ant genetic algorithm.
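
    A minimal sketch of the AHP step implied above: the principal eigenvector of a pairwise comparison matrix gives the criterion weights, which then collapse the multi-objective planning costs into a single scalar objective; the comparison matrix and cost values are purely illustrative.

    import numpy as np

    # pairwise comparison of the criteria (economy, reliability, environment)
    # on the 1-9 Saaty scale; the entries below are purely illustrative
    A = np.array([
        [1.0, 3.0, 5.0],
        [1.0 / 3.0, 1.0, 2.0],
        [1.0 / 5.0, 1.0 / 2.0, 1.0],
    ])

    def ahp_weights(pairwise):
        """Criterion weights = normalized principal eigenvector of the matrix."""
        vals, vecs = np.linalg.eig(pairwise)
        principal = np.real(vecs[:, np.argmax(np.real(vals))])
        return principal / principal.sum()

    w = ahp_weights(A)

    # normalized per-criterion costs of one candidate dispatch (illustrative)
    costs = np.array([0.42, 0.15, 0.30])       # economy, reliability, environment
    single_objective = float(w @ costs)        # scalar objective to be minimized
    print("weights:", np.round(w, 3), "objective:", round(single_objective, 3))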

  10. Adaptive centroid-finding algorithm for freeform surface measurements.

    PubMed

    Guo, Wenjiang; Zhao, Liping; Tong, Chin Shi; I-Ming, Chen; Joshi, Sunil Chandrakant

    2013-04-01

    Wavefront sensing systems measure the slope or curvature of a surface by calculating the centroid displacement of two focal spot images. Accurately finding the centroid of each focal spot determines the measurement results. This paper studied several widely used centroid-finding techniques and observed that thresholding is the most critical factor affecting the centroid-finding accuracy. Since the focal spot image of a freeform surface usually suffers from various types of image degradation, it is difficult and sometimes impossible to set a best threshold value for the whole image. We propose an adaptive centroid-finding algorithm to tackle this problem and have experimentally proven its effectiveness in measuring freeform surfaces. PMID:23545985

  11. An adaptive genetic algorithm for crystal structure prediction

    SciTech Connect

    Wu, Shunqing; Ji, Min; Wang, Cai-Zhuang; Nguyen, Manh Cuong; Zhao, Xin; Umemoto, K.; Wentzcovitch, R. M.; Ho, Kai-Ming

    2013-12-18

    We present a genetic algorithm (GA) for structural search that combines the speed of structure exploration by classical potentials with the accuracy of density functional theory (DFT) calculations in an adaptive and iterative way. This strategy increases the efficiency of the DFT-based GA by several orders of magnitude. This gain allows a considerable increase in the size and complexity of systems that can be studied by first principles. The performance of the method is illustrated by successful structure identifications of complex binary and ternary intermetallic compounds with 36 and 54 atoms per cell, respectively. The discovery of a multi-TPa Mg-silicate phase with unit cell containing up to 56 atoms is also reported. Such a phase is likely to be an essential component of terrestrial exoplanetary mantles.

  12. Self-adaptive closed constrained solution algorithms for nonlinear conduction

    NASA Technical Reports Server (NTRS)

    Padovan, J.; Tovichakchaikul, S.

    1982-01-01

    Self-adaptive solution algorithms are developed for nonlinear heat conduction problems encountered in analyzing materials for use in high temperature or cryogenic conditions. The nonlinear effects are noted to occur due to convection and radiation effects, as well as temperature-dependent properties of the materials. Incremental successive substitution (ISS) and Newton-Raphson (NR) procedures are treated as extrapolation schemes which have solution projections bounded by a hyperline with an externally applied thermal load vector arising from internal heat generation and boundary conditions. Closed constraints are formulated which improve the efficiency and stability of the procedures by employing closed ellipsoidal surfaces to control the size of successive iterations. Governing equations are defined for nonlinear finite element models, and comparisons are made of results using the new method and the ISS and NR schemes for epoxy, PVC, and CuGe.

  13. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.

  14. A Fast and Robust Poisson-Boltzmann Solver Based on Adaptive Cartesian Grids.

    PubMed

    Boschitsch, Alexander H; Fenley, Marcia O

    2011-05-10

    An adaptive Cartesian grid (ACG) concept is presented for the fast and robust numerical solution of the 3D Poisson-Boltzmann Equation (PBE) governing the electrostatic interactions of large-scale biomolecules and highly charged multi-biomolecular assemblies such as ribosomes and viruses. The ACG offers numerous advantages over competing grid topologies such as regular 3D lattices and unstructured grids. For very large biological molecules and multi-biomolecule assemblies, the total number of grid-points is several orders of magnitude less than that required in a conventional lattice grid used in the current PBE solvers thus allowing the end user to obtain accurate and stable nonlinear PBE solutions on a desktop computer. Compared to tetrahedral-based unstructured grids, ACG offers a simpler hierarchical grid structure, which is naturally suited to multigrid, relieves indirect addressing requirements and uses fewer neighboring nodes in the finite difference stencils. Construction of the ACG and determination of the dielectric/ionic maps are straightforward, fast and require minimal user intervention. Charge singularities are eliminated by reformulating the problem to produce the reaction field potential in the molecular interior and the total electrostatic potential in the exterior ionic solvent region. This approach minimizes grid-dependency and alleviates the need for fine grid spacing near atomic charge sites. The technical portion of this paper contains three parts. First, the ACG and its construction for general biomolecular geometries are described. Next, a discrete approximation to the PBE upon this mesh is derived. Finally, the overall solution procedure and multigrid implementation are summarized. Results obtained with the ACG-based PBE solver are presented for: (i) a low dielectric spherical cavity, containing interior point charges, embedded in a high dielectric ionic solvent - analytical solutions are available for this case, thus allowing rigorous

  15. An a-posteriori finite element error estimator for adaptive grid computation of viscous incompressible flows

    NASA Astrophysics Data System (ADS)

    Wu, Heng

    2000-10-01

    In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator e_θ, which is based on the spatial derivative of the velocity direction field, is designed and constructed. The a-posteriori error estimator corresponds to the antisymmetric part of the deformation-rate-tensor, and it is sensitive to the second derivative of the velocity angle field. Rationality discussions reveal that the velocity angle error estimator is a curvature error estimator, and its value reflects the accuracy of streamline curves. It is also found that the velocity angle error estimator contains the nonlinear convective term of the Navier-Stokes equations, and it identifies and computes the direction difference when the convective acceleration direction and the flow velocity direction have a disparity. Through benchmarking computed variables with the analytic solution of Kovasznay flow or the finest grid of cavity flow, it is demonstrated that the velocity angle error estimator has a better performance than the strain error estimator. The benchmarking work also shows that the computed profile obtained by using e_θ can achieve the best matching outcome with the true θ field, and that it is asymptotic to the true θ variation field, with a promise of fewer unknowns. Unstructured grids are adapted by employing local cell division as well as unrefinement of transition cells. Using element class and node class can efficiently construct a hierarchical data structure which provides cell and node inter-reference at each adaptive level. Employing element pointers and node pointers can dynamically maintain the connection of adjacent elements and adjacent nodes, and thus avoids time-consuming search processes. The adaptive scheme is applied to viscous incompressible flow at different

  16. A hybrid multi-loop genetic-algorithm/simplex/spatial-grid method for locating the optimum orientation of an adsorbed protein on a solid surface

    NASA Astrophysics Data System (ADS)

    Wei, Tao; Mu, Shengjing; Nakano, Aiichiro; Shing, Katherine

    2009-05-01

    Atomistic simulation of protein adsorption on a solid surface in aqueous environment is computationally demanding; therefore, the determination of preferred protein orientations on the solid surface usually serves as an initial step in simulation studies. We have developed a hybrid multi-loop genetic-algorithm/simplex/spatial-grid method to search for low adsorption-energy orientations of a protein molecule on a solid surface. In this method, the surface and the protein molecule are treated as rigid bodies, whereas the bulk fluid is represented by spatial grids. For each grid point, an effective interaction region in the surface is defined by a cutoff distance, and the possible interaction energy between an atom at the grid point and the surface is calculated and recorded in a database. In searching for the optimum position and orientation, the protein molecule is translated and rotated as a rigid body with the configuration obtained from a previous Molecular Dynamics simulation. The orientation-dependent protein-surface interaction energy is obtained using the generated database of grid energies. The hybrid search procedure consists of two interlinked loops. In the first loop A, a genetic algorithm (GA) is applied to identify promising regions for the global energy minimum and a local optimizer with the derivative-free Nelder-Mead simplex method is used to search for the lowest-energy orientation within the identified regions. In the second loop B, a new population for the GA is generated and the competitive solution from loop A is improved. Switching between the two loops is adaptively controlled by the use of similarity analysis. We test the method for lysozyme adsorption on a hydrophobic hydrogen-terminated silicon (110) surface in implicit water (i.e., a continuum distance-dependent dielectric constant). The results show that the hybrid search method has faster convergence and better solution accuracy compared with the conventional genetic algorithm.

  17. Design of infrasound-detection system via adaptive LMSTDE algorithm

    NASA Technical Reports Server (NTRS)

    Khalaf, C. S.; Stoughton, J. W.

    1984-01-01

    A proposed solution to an aviation safety problem is based on passive detection of turbulent weather phenomena through their infrasonic emission. This thesis describes a system design that is adequate for detection and bearing evaluation of infrasounds. An array of four sensors, with the appropriate hardware, is used for the detection part. Bearing evaluation is based on estimates of time delays between sensor outputs. The generalized cross correlation (GCC), as the conventional time-delay estimation (TDE) method, is first reviewed. An adaptive TDE approach, using the least mean square (LMS) algorithm, is then discussed. A comparison between the two techniques is made and the advantages of the adaptive approach are listed. The behavior of the GCC, as a Roth processor, is examined for the anticipated signals. It is shown that the Roth processor has the desired effect of sharpening the peak of the correlation function. It is also shown that the LMSTDE technique is an equivalent implementation of the Roth processor in the time domain. A LMSTDE lead-lag model, with a variable stability coefficient and a convergence criterion, is designed.
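
    The adaptive LMSTDE idea can be sketched as follows: an FIR filter adapted by (normalized) LMS models one sensor channel as a filtered version of the other, and the index of the dominant tap approximates the inter-sensor time delay; the signal model, step size and delay below are illustrative assumptions, not the thesis design.

    import numpy as np

    rng = np.random.default_rng(0)

    def lms_tde(x_ref, x_del, n_taps=64, mu=0.5):
        """Normalized-LMS time-delay estimation: adapt an FIR filter so that the
        delayed channel is predicted from the reference channel; the index of
        the dominant tap approximates the inter-sensor delay in samples."""
        w = np.zeros(n_taps)
        for n in range(n_taps - 1, len(x_ref)):
            u = x_ref[n - n_taps + 1:n + 1][::-1]    # u[k] = x_ref[n - k]
            e = x_del[n] - w @ u                     # prediction error
            w += mu * e * u / (u @ u + 1e-12)        # normalized LMS update
        return int(np.argmax(np.abs(w))), w

    # broadband test signals; the second sensor lags the first by 17 samples
    true_delay = 17
    sig = rng.normal(size=4000)
    x1 = sig + 0.05 * rng.normal(size=sig.size)
    x2 = np.roll(sig, true_delay) + 0.05 * rng.normal(size=sig.size)
    delay, _ = lms_tde(x1, x2)
    print("estimated delay:", delay, "samples (true:", true_delay, ")")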

  18. A wavelet packet adaptive filtering algorithm for enhancing manatee vocalizations.

    PubMed

    Gur, M Berke; Niezrecki, Christopher

    2011-04-01

    Approximately a quarter of all West Indian manatee (Trichechus manatus latirostris) mortalities are attributed to collisions with watercraft. A boater warning system based on the passive acoustic detection of manatee vocalizations is one possible solution to reduce manatee-watercraft collisions. The success of such a warning system depends on effective enhancement of the vocalization signals in the presence of high levels of background noise, in particular, noise emitted from watercraft. Recent research has indicated that wavelet domain pre-processing of the noisy vocalizations is capable of significantly improving the detection ranges of passive acoustic vocalization detectors. In this paper, an adaptive denoising procedure, implemented on the wavelet packet transform coefficients obtained from the noisy vocalization signals, is investigated. The proposed denoising algorithm is shown to improve the manatee detection ranges by a factor ranging from two (minimum) to sixteen (maximum) compared to high-pass filtering alone, when evaluated using real manatee vocalization and background noise signals of varying signal-to-noise ratios (SNR). Furthermore, the proposed method is also shown to outperform a previously suggested feedback adaptive line enhancer (FALE) filter on average 3.4 dB in terms of noise suppression and 0.6 dB in terms of waveform preservation. PMID:21476661

  19. Extension of a streamwise upwind algorithm to a moving grid system

    NASA Technical Reports Server (NTRS)

    Obayashi, Shigeru; Goorjian, Peter M.; Guruswamy, Guru P.

    1990-01-01

    A new streamwise upwind algorithm was derived to compute unsteady flow fields with the use of a moving-grid system. The temporally nonconservative LU-ADI (lower-upper-factored, alternating-direction-implicit) method was applied for time marching computations. A comparison of the temporally nonconservative method with a time-conservative implicit upwind method indicates that the solutions are insensitive to the conservative properties of the implicit solvers when practical time steps are used. Using this new method, computations were made for an oscillating wing at a transonic Mach number. The computed results confirm that the present upwind scheme captures the shock motion better than the central-difference scheme based on the Beam-Warming algorithm. The new upwind option of the code allows larger time steps and thus is more efficient, even though it requires slightly more computational time per time step than the central-difference option.

  20. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    DOE PAGESBeta

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; Burkardt, John V.

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.
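
    The core reduction described above, evaluating the discontinuity hypersurface radius direction-by-direction by solving a one-dimensional detection problem, can be sketched with simple bisection along rays; the test function, the uniform angular grid (a stand-in for the adaptive sparse grid), and the tolerances are illustrative assumptions.

    import numpy as np

    def f(x):
        """Discontinuous quantity of interest: jumps across the sphere |x| = 1.3."""
        return 1.0 if np.linalg.norm(x) < 1.3 else 0.0

    def radial_jump(direction, r_max=3.0, tol=1e-6):
        """One-dimensional discontinuity detection along a ray from the origin:
        bisect on the radius until the jump is bracketed to within tol."""
        lo, hi = 0.0, r_max
        f_lo = f(lo * direction)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(mid * direction) == f_lo:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # evaluate the discontinuity radius on a small uniform angular grid
    # (a stand-in for the paper's adaptive sparse grid in hyperspherical coordinates)
    for theta in np.linspace(0.1, np.pi - 0.1, 5):
        for phi in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False):
            d = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
            assert abs(radial_jump(d) - 1.3) < 1e-4
    print("recovered the discontinuity radius 1.3 along all sampled directions")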

  1. A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D; Burkardt, John V

    2014-03-01

    This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  2. A class of staggered grid algorithms and analysis for time-domain Maxwell systems

    NASA Astrophysics Data System (ADS)

    Charlesworth, Alexander E.

    We describe, implement, and analyze a class of staggered grid algorithms for efficient simulation and analysis of time-domain Maxwell systems in the case of heterogeneous, conductive, and nondispersive, isotropic, linear media. We provide the derivation of a continuous mathematical model from the Maxwell equations in vacuum; however, the complexity of this system necessitates the use of computational methods for approximately solving for the physical unknowns. The finite difference approximation has been used for partial differential equations and the Maxwell Equations in particular for many years. We develop staggered grid based finite difference discrete operators as a class of approximations to continuous operators based on second order in time and various order approximations to the electric and magnetic field at staggered grid locations. A generalized parameterized operator which can be specified to any of this class of discrete operators is then applied to the Maxwell system and hence we develop discrete approximations through various choices of parameters in the approximation. We describe analysis of the resulting discrete system as an approximation to the continuous system. Using the comparison of dispersion analysis for the discrete and continuous systems, we derive a third difference approximation, in addition to the known (2, 2) and (2, 4) schemes. We conclude by providing the comparison of these three methods by simulating the Maxwell system for several choices of parameters in the system.
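
    For concreteness, the simplest member of the class discussed above is the one-dimensional (2, 2) staggered-grid (Yee) update in vacuum, sketched below; the grid size, time step and soft source are illustrative choices, not the thesis's parameterized operator.

    import numpy as np

    # 1D staggered-grid (Yee) update for Maxwell's equations in vacuum, (2, 2) scheme
    c0 = 299792458.0
    nz, n_steps = 400, 800
    dz = 1e-3
    dt = 0.99 * dz / c0                      # CFL-limited time step
    eps0, mu0 = 8.8541878128e-12, 4e-7 * np.pi

    ez = np.zeros(nz)                        # E at integer grid points, integer times
    hy = np.zeros(nz - 1)                    # H at half grid points, half time steps

    for n in range(n_steps):
        # update H from the curl of E (staggered in space and time)
        hy += dt / (mu0 * dz) * (ez[1:] - ez[:-1])
        # update E from the curl of H (interior points only; PEC boundaries)
        ez[1:-1] += dt / (eps0 * dz) * (hy[1:] - hy[:-1])
        # soft Gaussian-pulse source at the centre of the grid
        ez[nz // 2] += np.exp(-((n - 60) / 20.0) ** 2)

    print("peak |Ez| after", n_steps, "steps:", float(np.max(np.abs(ez))))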

  3. An adaptive sparse-grid high-order stochastic collocation method for Bayesian inference in groundwater reactive transport modeling

    SciTech Connect

    Zhang, Guannan; Webster, Clayton G; Gunzburger, Max D

    2012-09-01

    Although Bayesian analysis has become vital to the quantification of prediction uncertainty in groundwater modeling, its application has been hindered due to the computational cost associated with numerous model executions needed for exploring the posterior probability density function (PPDF) of model parameters. This is particularly the case when the PPDF is estimated using Markov Chain Monte Carlo (MCMC) sampling. In this study, we develop a new approach that improves computational efficiency of Bayesian inference by constructing a surrogate system based on an adaptive sparse-grid high-order stochastic collocation (aSG-hSC) method. Unlike previous works using first-order hierarchical basis, we utilize a compactly supported higher-order hierarchical basis to construct the surrogate system, resulting in a significant reduction in the number of computational simulations required. In addition, we use hierarchical surplus as an error indicator to determine adaptive sparse grids. This allows local refinement in the uncertain domain and/or anisotropic detection with respect to the random model parameters, which further improves computational efficiency. Finally, we incorporate a global optimization technique and propose an iterative algorithm for building the surrogate system for the PPDF with multiple significant modes. Once the surrogate system is determined, the PPDF can be evaluated by sampling the surrogate system directly with very little computational cost. The developed method is evaluated first using a simple analytical density function with multiple modes and then using two synthetic groundwater reactive transport models. The groundwater models represent different levels of complexity; the first example involves coupled linear reactions and the second example simulates nonlinear uranium surface complexation. The results show that the aSG-hSC is an effective and efficient tool for Bayesian inference in groundwater modeling in comparison with conventional

  4. Sparsity-Cognizant Algorithms with Applications to Communications, Signal Processing, and the Smart Grid

    NASA Astrophysics Data System (ADS)

    Zhu, Hao

    Sparsity plays an instrumental role in a plethora of scientific fields, including statistical inference for variable selection, parsimonious signal representations, and solving under-determined systems of linear equations - which has led to the ground-breaking result of compressive sampling (CS). This Thesis leverages exciting ideas of sparse signal reconstruction to develop sparsity-cognizant algorithms, and analyze their performance. The vision is to devise tools exploiting the 'right' form of sparsity for the 'right' application domain of multiuser communication systems, array signal processing systems, and the emerging challenges in the smart power grid. Two important power system monitoring tasks are addressed first by capitalizing on the hidden sparsity. To robustify power system state estimation, a sparse outlier model is leveraged to capture the possible corruption in every datum, while the problem nonconvexity due to nonlinear measurements is handled using the semidefinite relaxation technique. Different from existing iterative methods, the proposed algorithm approximates well the global optimum regardless of the initialization. In addition, for enhanced situational awareness, a novel sparse overcomplete representation is introduced to capture (possibly multiple) line outages, and real-time algorithms are developed for solving the combinatorially complex identification problem. The proposed algorithms exhibit near-optimal performance while incurring only linear complexity in the number of lines, which makes it possible to quickly bring contingencies to attention. This Thesis also accounts for two basic issues in CS, namely fully-perturbed models and the finite alphabet property. The sparse total least-squares (S-TLS) approach is proposed to furnish CS algorithms for fully-perturbed linear models, leading to statistically optimal and computationally efficient solvers. The S-TLS framework is well motivated for grid-based sensing applications and exhibits higher

  5. Adaptive finite-volume WENO schemes on dynamically redistributed grids for compressible Euler equations

    NASA Astrophysics Data System (ADS)

    Pathak, Harshavardhana S.; Shukla, Ratnesh K.

    2016-08-01

    A high-order adaptive finite-volume method is presented for simulating inviscid compressible flows on time-dependent redistributed grids. The method achieves dynamic adaptation through a combination of time-dependent mesh node clustering in regions characterized by strong solution gradients and an optimal selection of the order of accuracy and the associated reconstruction stencil in a conservative finite-volume framework. This combined approach maximizes spatial resolution in discontinuous regions that require low-order approximations for oscillation-free shock capturing. Over smooth regions, high-order discretization through finite-volume WENO schemes minimizes numerical dissipation and provides excellent resolution of intricate flow features. The method including the moving mesh equations and the compressible flow solver is formulated entirely on a transformed time-independent computational domain discretized using a simple uniform Cartesian mesh. Approximations for the metric terms that enforce discrete geometric conservation law while preserving the fourth-order accuracy of the two-point Gaussian quadrature rule are developed. Spurious Cartesian grid induced shock instabilities such as carbuncles that feature in a local one-dimensional contact capturing treatment along the cell face normals are effectively eliminated through upwind flux calculation using a rotated Harten-Lax-van Leer contact resolving (HLLC) approximate Riemann solver for the Euler equations in generalized coordinates. Numerical experiments with the fifth and ninth-order WENO reconstructions at the two-point Gaussian quadrature nodes, over a range of challenging test cases, indicate that the redistributed mesh effectively adapts to the dynamic flow gradients thereby improving the solution accuracy substantially even when the initial starting mesh is non-adaptive. The high adaptivity combined with the fifth and especially the ninth-order WENO reconstruction allows remarkably sharp capture of
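
    The mesh-redistribution idea, clustering nodes where the solution has strong gradients, can be illustrated with a much simpler one-dimensional sketch than the paper's scheme: equidistribute a gradient-based monitor function and place the new nodes by inverse interpolation. The monitor form and the parameter alpha are assumptions made for this example only.

      import numpy as np

      def redistribute_nodes(x, u, alpha=1.0):
          """Move the 1-D grid x so the monitor w = sqrt(1 + alpha*(du/dx)^2) is
          equidistributed; nodes cluster where the solution u varies rapidly."""
          w = np.sqrt(1.0 + alpha * np.gradient(u, x) ** 2)
          # Cumulative integral of the monitor (trapezoidal rule), normalized to [0, 1].
          W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
          W /= W[-1]
          # New nodes sit at equal increments of W, found by inverse interpolation.
          return np.interp(np.linspace(0.0, 1.0, len(x)), W, x)

      x = np.linspace(0.0, 1.0, 41)
      u = np.tanh(50.0 * (x - 0.5))                  # steep front at x = 0.5
      x_new = redistribute_nodes(x, u, alpha=100.0)  # nodes concentrate near the front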

  6. Evaluating two sparse grid surrogates and two adaptation criteria for groundwater Bayesian uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Zeng, Xiankui; Ye, Ming; Burkardt, John; Wu, Jichun; Wang, Dong; Zhu, Xiaobin

    2016-04-01

    Sparse grid (SG) stochastic collocation methods have been recently used to build accurate but cheap-to-run surrogates for groundwater models to reduce the computational burden of Bayesian uncertainty analysis. The surrogates can be built for either a log-likelihood function or state variables such as hydraulic head and solute concentration. Using a synthetic groundwater flow model, this study evaluates the log-likelihood and head surrogates in terms of the computational cost of building them, the accuracy of the surrogates, and the accuracy of the distributions of model parameters and predictions obtained using the surrogates. The head surrogates outperform the log-likelihood surrogates for the following four reasons: (1) the shape of the head response surface is smoother than that of the log-likelihood response surface in parameter space, (2) the head variation is smaller than the log-likelihood variation in parameter space, (3) the interpolation error of the head surrogates does not accumulate to be larger than the interpolation error of the log-likelihood surrogates, and (4) the model simulations needed for building one head surrogate can be recycled for building others. For both log-likelihood and head surrogates, adaptive sparse grids are built using two indicators: absolute error and relative error. The adaptive head surrogates are insensitive to the error indicators, because the ratio between the two indicators is hydraulic head, which has small variation in the parameter space. The adaptive log-likelihood surrogates based on the relative error indicators outperform those based on the absolute error indicators, because adaptation based on the relative error indicator puts more sparse-grid nodes in the areas in the parameter space where the log-likelihood is high. While our numerical study suggests building state-variable surrogates and using the relative error indicator for building log-likelihood surrogates, selecting appropriate type of surrogates and
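
    The two adaptation criteria compared in the study can be made concrete with a small sketch: a sparse-grid node is refined when its hierarchical surplus exceeds either an absolute tolerance or a tolerance scaled by the local function value. The function and argument names here are assumptions for illustration only.

      def should_refine(surplus, value, tol, criterion="relative", eps=1e-12):
          """Flag a sparse-grid node for refinement from its hierarchical surplus using
          either an absolute or a relative error indicator."""
          if criterion == "absolute":
              return abs(surplus) > tol
          # Relative criterion: scale the surplus by the magnitude of the local value,
          # so refinement concentrates where the interpolated quantity itself is large.
          return abs(surplus) > tol * max(abs(value), eps)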

  7. Time-dependent grid adaptation for meshes of triangles and tetrahedra

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.

    1993-01-01

    This paper presents in viewgraph form a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich in regions of relatively large errors and to locally coarsen in regions of relatively small errors. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high aspect ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown, but in general requires user supervision for a more efficient solution.

  8. Hybrid Grid Generation Using NW Grid

    SciTech Connect

    Jones-Oliveira, Janet B.; Oliveira, Joseph S.; Trease, Lynn L.; Trease, Harold E.; B.K. Soni, J. Hauser, J.F. Thompson, P.R. Eiseman

    2000-09-01

    We describe the development and use of a hybrid n-dimensional grid generation system called NWGRID. The Applied Mathematics Group at Pacific Northwest National Laboratory (PNNL) is developing this tool to support the Laboratory's computational science efforts in chemistry, biology, engineering and environmental (subsurface and atmospheric) modeling. NWGRID is a grid generation system designed for multi-scale, multi-material, multi-physics, time-dependent, 3-D, hybrid grids that are either statically adapted or evolved in time. NWGRID's capabilities include static and dynamic grids, hybrid grids, managing colliding surfaces, and grid optimization [using reconnections, smoothing, and adaptive mesh refinement (AMR) algorithms]. NWGRID's data structure can manage an arbitrary number of grid objects, each with an arbitrary number of grid attributes. NWGRID uses surface geometry to build volumes by using combinations of Boolean operators and order relations. Point distributions can be input directly, generated using ray-shooting techniques, or defined point-by-point. Connectivity matrices are then generated automatically for all variations of hybrid grids.

  9. A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy, and significantly detailed surface information on terrain and surface objects within a short time, from which a high-quality Digital Elevation Model (DEM) can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms so as to separate the terrain points from non-terrain points, followed by a procedure of interpolating the selected points to turn them into DEM data. Because of the high point density, the whole procedure takes a long time and large computing resources, which has been the focus of a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner was utilized as the original data to generate a DEM by a Hadoop-based algorithm implemented on Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithm's efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is large enough, but the Hadoop implementation on multiple nodes achieves a higher performance-cost ratio when the point set is very large.
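
    The Map/Reduce decomposition described above can be mimicked in plain Python to show the structure of the computation (the real implementation would run the map and reduce steps on a Hadoop cluster over HDFS); the cell size, the point format, and the use of mean elevation per cell are assumptions for illustration.

      from collections import defaultdict

      def map_points_to_cells(points, cell_size):
          """Map step: emit (grid-cell key, elevation) pairs for each LiDAR point."""
          for x, y, z in points:
              yield (int(x // cell_size), int(y // cell_size)), z

      def reduce_cells(pairs):
          """Reduce step: aggregate elevations per cell; here the DEM value is the mean."""
          cells = defaultdict(list)
          for key, z in pairs:
              cells[key].append(z)
          return {key: sum(zs) / len(zs) for key, zs in cells.items()}

      points = [(0.3, 0.4, 12.1), (0.7, 0.2, 12.5), (3.1, 2.8, 15.0)]   # (x, y, elevation)
      dem = reduce_cells(map_points_to_cells(points, cell_size=1.0))
      # {(0, 0): 12.3, (3, 2): 15.0}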

  10. Evaluating Knowledge Structure-Based Adaptive Testing Algorithms and System Development

    ERIC Educational Resources Information Center

    Wu, Huey-Min; Kuo, Bor-Chen; Yang, Jinn-Min

    2012-01-01

    In recent years, many computerized test systems have been developed for diagnosing students' learning profiles. Nevertheless, it remains a challenging issue to find an adaptive testing algorithm to both shorten testing time and precisely diagnose the knowledge status of students. In order to find a suitable algorithm, four adaptive testing…

  11. Micro Benchmarking, Performance Assertions and Sensitivity Analysis: A Technique for Developing Adaptive Grid Applications

    SciTech Connect

    Corey, I R; Johnson, J R; Vetter, J S

    2002-02-25

    This study presents a technique that can significantly improve the performance of a distributed application by allowing the application to locally adapt to architectural characteristics of distinct resources in a distributed system. Application performance is sensitive to application parameter--system architecture pairings. In a distributed or Grid-enabled application, a single parameter configuration for the whole application will not always be optimal for every participating resource. In particular, some configurations can significantly degrade performance. Furthermore, the behavior of a system may change during the course of the run. The technique described here provides an automated mechanism for run-time adaptation of application parameters to the local system architecture. Using a simulation of a Monte Carlo physics code, the authors demonstrate that this technique can achieve speedups of 18%-37% on individual resources in a distributed environment.

  12. Adaptable Particle-in-Cell Algorithms for Graphical Processing Units

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor; Singh, Tajendra

    2010-11-01

    Emerging computer architectures consist of an increasing number of shared memory computing cores in a chip, often with vector (SIMD) co-processors. Future exascale high performance systems will consist of a hierarchy of such nodes, which will require different algorithms at different levels. Since no one knows exactly how the future will evolve, we have begun development of an adaptable Particle-in-Cell (PIC) code, whose parameters can match different hardware configurations. The data structures reflect three levels of parallelism, contiguous vectors and non-contiguous blocks of vectors, which can share memory, and groups of blocks which do not. Particles are kept ordered at each time step, and the size of a sorting cell is an adjustable parameter. We have implemented a simple 2D electrostatic skeleton code whose inner loop (containing 6 subroutines) runs entirely on the NVIDIA Tesla C1060. We obtained speedups of about 16-25 compared to a 2.66 GHz Intel i7 (Nehalem), depending on the plasma temperature, with an asymptotic limit of 40 for a frozen plasma. We expect speedups of about 70 for a 2D electromagnetic code and about 100 for a 3D electromagnetic code, which have higher computational intensities (more flops/memory access).
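
    The particle ordering the abstract mentions, keeping particles grouped by their sorting cell so that the deposit and gather loops stay memory-local, can be sketched as below; the 2-D layout, cell size, and array format are assumptions rather than the skeleton code's actual data structures.

      import numpy as np

      def sort_particles_by_cell(positions, cell_size, nx, ny):
          """Reorder particles so that those in the same sorting cell are contiguous in
          memory, keeping the deposit/gather loops of a PIC step cache- and SIMD-friendly."""
          ix = np.clip((positions[:, 0] // cell_size).astype(int), 0, nx - 1)
          iy = np.clip((positions[:, 1] // cell_size).astype(int), 0, ny - 1)
          cell_index = iy * nx + ix
          order = np.argsort(cell_index, kind="stable")
          return positions[order], cell_index[order]

      positions = np.random.rand(10_000, 2) * 64.0                   # a 64x64-cell domain
      sorted_pos, cells = sort_particles_by_cell(positions, 1.0, 64, 64)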

  13. An Implicit Upwind Algorithm for Computing Turbulent Flows on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Anderson, W. Kyle; Bonhaus, Daryl L.

    1994-01-01

    An implicit, Navier-Stokes solution algorithm is presented for the computation of turbulent flow on unstructured grids. The inviscid fluxes are computed using an upwind algorithm and the solution is advanced in time using a backward-Euler time-stepping scheme. At each time step, the linear system of equations is approximately solved with a point-implicit relaxation scheme. This methodology provides a viable and robust algorithm for computing turbulent flows on unstructured meshes. Results are shown for subsonic flow over a NACA 0012 airfoil and for transonic flow over a RAE 2822 airfoil exhibiting a strong upper-surface shock. In addition, results are shown for 3 element and 4 element airfoil configurations. For the calculations, two one equation turbulence models are utilized. For the NACA 0012 airfoil, a pressure distribution and force data are compared with other computational results as well as with experiment. Comparisons of computed pressure distributions and velocity profiles with experimental data are shown for the RAE airfoil and for the 3 element configuration. For the 4 element case, comparisons of surface pressure distributions with experiment are made. In general, the agreement between the computations and the experiment is good.

  14. Fair Energy Scheduling for Vehicle-to-Grid Networks Using Adaptive Dynamic Programming.

    PubMed

    Xie, Shengli; Zhong, Weifeng; Xie, Kan; Yu, Rong; Zhang, Yan

    2016-08-01

    Research on the smart grid is being given enormous support worldwide due to its great significance in solving environmental and energy crises. Electric vehicles (EVs), which are powered by clean energy, are being adopted increasingly year by year. It is predictable that the huge charge load caused by high EV penetration will have a considerable impact on the reliability of the smart grid. Therefore, fair energy scheduling for EV charge and discharge is proposed in this paper. By using the vehicle-to-grid technology, the scheduler controls the electricity loads of EVs considering fairness in the residential distribution network. We propose contribution-based fairness, in which EVs with high contributions have high priorities to obtain charge energy. The contribution value is defined by both the charge/discharge energy and the timing of the action. EVs can achieve higher contribution values when discharging during the load peak hours. However, charging during this time will decrease the contribution values seriously. We formulate the fair energy scheduling problem as an infinite-horizon Markov decision process. The methodology of adaptive dynamic programming is employed to maximize the long-term fairness by processing online network training. The numerical results illustrate that the proposed EV energy scheduling is able to mitigate and flatten the peak load in the distribution network. Furthermore, contribution-based fairness achieves a fast recovery of EV batteries that have been deeply discharged and guarantees fairness in the full charge time of all EVs. PMID:26930694

  15. Implementations of the optimal multigrid algorithm for the cell-centered finite difference on equilateral triangular grids

    SciTech Connect

    Ewing, R.E.; Saevareid, O.; Shen, J.

    1994-12-31

    A multigrid algorithm for the cell-centered finite difference on equilateral triangular grids for solving second-order elliptic problems is proposed. This finite difference is a four-point star stencil in a two-dimensional domain and a five-point star stencil in a three-dimensional domain. According to the authors' analysis, the advantages of this finite difference are that it is an O(h{sup 2})-order accurate numerical scheme for both the solution and derivatives on equilateral triangular grids, the structure of the scheme is perhaps the simplest, and its corresponding multigrid algorithm is easily constructed with an optimal convergence rate. They are interested in relaxation of the equilateral triangular grid condition to certain general triangular grids and the application of this multigrid algorithm as a numerically reasonable preconditioner for the lowest-order Raviart-Thomas mixed triangular finite element method. Numerical test results are presented to demonstrate their analytical results and to investigate the applications of this multigrid algorithm on general triangular grids.
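
    For readers unfamiliar with the multigrid building blocks (smooth, restrict the residual, correct on the coarse grid, smooth again), the following is a generic one-dimensional V-cycle for the Poisson problem, not the cell-centered equilateral-triangle scheme of the abstract; the injection restriction and Gauss-Seidel smoother are simplifying assumptions.

      import numpy as np

      def smooth(u, f, h, sweeps=2):
          """Gauss-Seidel sweeps for -u'' = f with homogeneous Dirichlet boundaries."""
          for _ in range(sweeps):
              for i in range(1, len(u) - 1):
                  u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
          return u

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
          return r

      def v_cycle(u, f, h):
          """One V-cycle on a grid with 2**k + 1 points: smooth, restrict the residual,
          recurse for a coarse-grid correction, prolongate, and smooth again."""
          u = smooth(u, f, h)
          if len(u) <= 3:
              return u
          r = residual(u, f, h)
          r_coarse = r[::2].copy()                    # injection restriction
          e_coarse = v_cycle(np.zeros_like(r_coarse), r_coarse, 2.0 * h)
          e = np.zeros_like(u)                        # linear prolongation of the correction
          e[::2] = e_coarse
          e[1::2] = 0.5 * (e_coarse[:-1] + e_coarse[1:])
          return smooth(u + e, f, h)

      # Solve -u'' = pi^2 sin(pi x), whose exact solution is sin(pi x).
      n = 129
      x = np.linspace(0.0, 1.0, n)
      f = np.pi ** 2 * np.sin(np.pi * x)
      u = np.zeros(n)
      for _ in range(10):
          u = v_cycle(u, f, x[1] - x[0])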

  16. Genetic algorithms for optimal reactive power compensation planning on the national grid system

    NASA Astrophysics Data System (ADS)

    Pilgrim, J. D.

    This work investigates the use of Genetic Algorithms (GAs) for optimal Reactive power Compensation Planning (RCP) of practical power systems. In particular, RCP of the transmission system of England and Wales as owned and operated by National Grid is considered. The GA is used to simultaneously solve both the siting problem---optimisation of the installation of new devices---and the operational problem---optimisation of preventive transformer taps and the controller characteristics of dynamic compensation devices. A computer package called Genetic Compensation Placement (GCP) has been developed which uses an Integer coded GA (IGA) to solve the RCP problem. The RCP problem is implemented as a multi-objective optimisation: in the interests of security, the number of system and operational constraint violations and the deviation of the busbar voltages from the ideal are all minimised for the base (intact) case and the contingent cases. In the interests of cost reduction, the reactive power cost is minimised for the base case. The reactive power cost encompasses the costs incurred from the installation of reactive power sources and the utilisation of new and existing dynamic reactive power compensation devices. GCP is compared to SCORPION (a planning program currently being used by National Grid) which uses a combination of linear programming and heuristic back-tracking. Results are presented for a practical test system developed with the cooperation of National Grid, and it is found that GCP produces solutions that are cheaper than solutions found by SCORPION and perform extremely well: an improvement in voltage profiles, a decrease in complex power mismatches, and a reduction in MVolt Amps-reactive (VAr) utilisation were observed.

  17. Fast algorithms for visualizing fluid motion in steady flow on unstructured grids

    NASA Technical Reports Server (NTRS)

    Ueng, S. K.; Sikorski, K.; Ma, Kwan-Liu

    1995-01-01

    The plotting of streamlines is an effective way of visualizing fluid motion in steady flows. Additional information about the flowfield, such as local rotation and expansion, can be shown by drawing the streamlines in the form of ribbons or tubes. In this paper, we present efficient algorithms for the construction of streamlines, streamribbons and streamtubes on unstructured grids. A specialized version of the Runge-Kutta method has been developed to speed up the integration of particle paths. We have also derived closed-form solutions for calculating angular rotation rate and radius to construct streamribbons and streamtubes, respectively. According to our analysis and test results, these formulations are two to four times better in performance than previous numerical methods. As a large number of traces are calculated, the improved performance could be significant.
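
    The core streamline construction, integrating dx/dt = v(x) with a Runge-Kutta method, can be sketched as follows. The paper develops a specialized Runge-Kutta variant and interpolates velocities on the unstructured grid; here a plain fixed-step RK4 with a user-supplied velocity function stands in for both.

      import numpy as np

      def trace_streamline(velocity, seed, dt=0.01, n_steps=500):
          """Integrate dx/dt = velocity(x) from a seed point with classical RK4; the
          velocity callable stands in for interpolation of the grid's velocity field."""
          path = [np.asarray(seed, dtype=float)]
          for _ in range(n_steps):
              x = path[-1]
              k1 = velocity(x)
              k2 = velocity(x + 0.5 * dt * k1)
              k3 = velocity(x + 0.5 * dt * k2)
              k4 = velocity(x + dt * k3)
              path.append(x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4))
          return np.array(path)

      swirl = lambda x: np.array([-x[1], x[0]])        # a steady circulating flow
      line = trace_streamline(swirl, seed=(1.0, 0.0))  # traces a circle of radius 1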

  18. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm.

    PubMed

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation in the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of the optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm. PMID:24697395
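
    A loose sketch of the hybrid idea, Lévy-flight cuckoo moves with an adaptively decaying step size and a simulated-annealing acceptance rule, is given below. The Lévy exponent, decay schedules, and abandonment rule are generic choices, not the paper's exact update formulas.

      import math
      import numpy as np

      def levy_step(dim, rng, beta=1.5):
          """Mantegna's algorithm for a Levy-distributed random step of exponent beta."""
          sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                   (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0.0, sigma, dim)
          v = rng.normal(0.0, 1.0, dim)
          return u / np.abs(v) ** (1 / beta)

      def adaptive_cuckoo_sa(cost, bounds, n_nests=20, n_iter=500, pa=0.25, t0=1.0, seed=0):
          """Cuckoo search with an adaptively decaying Levy step size and a simulated
          annealing acceptance rule for replacing nests."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          nests = rng.uniform(lo, hi, (n_nests, len(lo)))
          fitness = np.array([cost(x) for x in nests])
          best = nests[fitness.argmin()].copy()
          for k in range(n_iter):
              alpha = 0.5 * (1.0 - k / n_iter)            # adaptive step-size decay
              temp = t0 * (1.0 - k / n_iter) + 1e-12      # annealing temperature
              for i in range(n_nests):
                  step = alpha * levy_step(len(lo), rng) * (nests[i] - best)
                  trial = np.clip(nests[i] + step, lo, hi)
                  delta = cost(trial) - fitness[i]
                  # Always accept improvements; occasionally accept worse moves (SA rule).
                  if delta < 0 or rng.random() < math.exp(-delta / temp):
                      nests[i], fitness[i] = trial, fitness[i] + delta
              # Abandon a fraction pa of the worst nests and rebuild them at random.
              n_drop = max(1, int(pa * n_nests))
              worst = fitness.argsort()[-n_drop:]
              nests[worst] = rng.uniform(lo, hi, (n_drop, len(lo)))
              fitness[worst] = [cost(x) for x in nests[worst]]
              best = nests[fitness.argmin()].copy()
          return best, fitness.min()

      best, best_cost = adaptive_cuckoo_sa(lambda x: float(np.sum(x ** 2)), bounds=[(-5, 5)] * 3)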

  19. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    SciTech Connect

    Sheng, Zheng; Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2014-03-15

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation in the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of the optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  20. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation in the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. In addition, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of the optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  1. A Power Grid Optimization Algorithm by Observing Timing Error Risk by IR Drop

    NASA Astrophysics Data System (ADS)

    Kawakami, Yoshiyuki; Terao, Makoto; Fukui, Masahiro; Tsukiyama, Shuji

    With the advent of the deep-submicron age, circuit performance is strongly impacted by process variations, and the influence of the power-supply voltage on the circuit delay increases more and more as CMOS feature sizes shrink. Power grid optimization that considers the timing error risk caused by these variations and IR drop becomes very important for stable and high-speed operation of a system-on-chip. Conventionally, many power grid optimization algorithms have been proposed, and most of them use IR drop as their objective function. However, IR drop is an indirect metric, and we suspect that it is a vague metric for the real goal of LSI design. In this paper, first, we propose an approach which uses the “timing error risk caused by IR drop” as a direct objective function. Second, the critical path map is introduced to express the existence of critical paths distributed over the entire chip. The timing error risk is decreased by using the critical path map and the new objective function. Experimental results show the effectiveness of the approach.

  2. An Adaptive RFID Anti-Collision Algorithm Based on Dynamic Framed ALOHA

    NASA Astrophysics Data System (ADS)

    Lee, Chang Woo; Cho, Hyeonwoo; Kim, Sang Woo

    The collision of ID signals from a large number of colocated passive RFID tags is a serious problem; to realize practical RFID systems we need an effective anti-collision algorithm. This letter presents an adaptive algorithm to minimize the total number of time slots and the number of rounds required for identifying the tags within the RFID reader's interrogation zone. The proposed algorithm is based on the framed ALOHA protocol, and the frame size is adaptively updated each round. Simulation results show that our proposed algorithm is more efficient than conventional algorithms based on framed ALOHA.
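
    In a dynamic framed-ALOHA scheme, the reader re-sizes the next frame from the outcome of the previous one. A common heuristic (not necessarily the letter's exact rule) estimates the remaining backlog from the number of collided slots using Schoute's factor of about 2.39 tags per collision and sets the next frame size to that estimate:

      def next_frame_size(n_empty, n_success, n_collision, c=2.39):
          """Choose the next ALOHA frame size from the slot outcomes of the last round.
          Schoute's factor c (about 2.39 tags per collided slot) estimates the backlog of
          still-unidentified tags; the next frame is sized to match that estimate."""
          backlog = round(c * n_collision)
          return max(1, backlog)

      # One round observed 12 empty, 9 successful, and 11 collided slots.
      print(next_frame_size(12, 9, 11))   # -> 26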

  3. An improved bi-level algorithm for partitioning dynamic grid hierarchies.

    SciTech Connect

    Deiterding, Ralf (California Institute of Technology, Pasadena, CA); Johansson, Henrik (Uppsala University, Uppsala, Sweden); Steensland, Johan; Ray, Jaideep

    2006-05-01

    Structured adaptive mesh refinement methods are being widely used for computer simulations of various physical phenomena. Parallel implementations potentially offer realistic simulations of complex three-dimensional applications. But achieving good scalability for large-scale applications is non-trivial. Performance is limited by the partitioner's ability to efficiently use the underlying parallel computer's resources. Designed on sound SAMR principles, Nature+Fable is a hybrid, dedicated SAMR partitioning tool that brings together the advantages of both domain-based and patch-based techniques while avoiding their drawbacks. But the original bi-level partitioning approach in Nature+Fable is insufficient, as for realistic applications it regards frequently occurring bi-levels as "impossible" and fails. This document describes an improved bi-level partitioning algorithm that successfully copes with all possible bi-levels. The improved algorithm uses the original approach side-by-side with a new, complementing approach. By using a new, customized classification method, the improved algorithm switches automatically between the two approaches. This document describes the algorithms, discusses implementation issues, and presents experimental results. The improved version of Nature+Fable was found to be able to handle realistic applications and also to generate less imbalance, a similar box count, but more communication compared to the native, domain-based partitioner in the SAMR framework AMROC.

  4. An improved bi-level algorithm for partitioning dynamic structured grid hierarchies.

    SciTech Connect

    Deiterding, Ralf; Steensland, Johan; Ray, Jaideep

    2006-02-01

    Structured adaptive mesh refinement methods are being widely used for computer simulations of various physical phenomena. Parallel implementations potentially offer realistic simulations of complex three-dimensional applications. But achieving good scalability for large-scale applications is non-trivial. Performance is limited by the partitioner's ability to efficiently use the underlying parallel computer's resources. Designed on sound SAMR principles, Nature+Fable is a hybrid, dedicated SAMR partitioning tool that brings together the advantages of both domain-based and patch-based techniques while avoiding their drawbacks. But the original bi-level partitioning approach in Nature+Fable is insufficient, as for realistic applications it regards frequently occurring bi-levels as 'impossible' and fails. This document describes an improved bi-level partitioning algorithm that successfully copes with all possible bi-levels. The improved algorithm uses the original approach side-by-side with a new, complementing approach. By using a new, customized classification method, the improved algorithm switches automatically between the two approaches. This document describes the algorithms, discusses implementation issues, and presents experimental results. The improved version of Nature+Fable was found to be able to handle realistic applications and also to generate less imbalance, a similar box count, but more communication compared to the native, domain-based partitioner in the SAMR framework AMROC.

  5. An Adaptable Power System with Software Control Algorithm

    NASA Technical Reports Server (NTRS)

    Castell, Karen; Bay, Mike; Hernandez-Pellerano, Amri; Ha, Kong

    1998-01-01

    A low cost, flexible and modular spacecraft power system design was developed in response to a call for an architecture that could accommodate multiple missions in the small to medium load range. Three upcoming satellites will use this design, with one launch date in 1999 and two in the year 2000. The design consists of modular hardware that can be scaled up or down, without additional cost, to suit missions in the 200 to 600 Watt orbital average load range. The design will be applied to satellite orbits that are circular, polar elliptical and a libration point orbit. Mission unique adaptations are accomplished in software and firmware. In designing this advanced, adaptable power system, the major goals were reduction in weight, volume and cost. This power system design represents reductions in weight of 78 percent, volume of 86 percent and cost of 65 percent from previous comparable systems. The efforts to miniaturize the electronics without sacrificing performance have created streamlined power electronics with control functions residing in the system microprocessor. The power system design can handle any battery size up to 50 Amp-hours and any battery technology. The three current implementations will use both nickel cadmium and nickel hydrogen batteries ranging in size from 21 to 50 Amp-hours. Multiple batteries can be used by adding another battery module. Any solar cell technology can be used and various array layouts can be incorporated with no change in Power System Electronics (PSE) hardware. Other features of the design are the standardized interfaces between cards and subsystems and immunity to radiation effects up to 30 krad Total Ionizing Dose (TID) and 35 MeV/cm(exp 2)-kg for Single Event Effects (SEE). The control algorithm for the power system resides in a radiation-hardened microprocessor. A table driven software design allows for flexibility in mission specific requirements. By storing critical power system constants in memory, modifying the system

  6. TURBOGRID - Turbomachinery applications of grid generation

    NASA Astrophysics Data System (ADS)

    Soni, Bharat K.; Shih, Ming-Hsin

    1990-07-01

    A numerical grid generation algorithm for the field region about turbomachinery systems is presented. The algorithm is incorporated as a module, TIGER (Turbomachinery Interactive Grid genERation), of the modular general-purpose computer code GENIE. Interactive definition of the mathematical description of blades, hub, and shroud with minimal user interaction, adaptation of the weighted transfinite interpolation technique for efficient generation of grid blocks/zones, automatic construction of Bezier curves to accomplish slope continuity, and efficient utilization of IRIS graphics capabilities are the salient features of this algorithm, which result in significant time savings for a given turbomachinery geometry-grid application.
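
    The transfinite-interpolation step that TIGER adapts (in weighted form) can be illustrated with the basic unweighted two-dimensional version: four boundary curves are blended into an interior grid. The boundary curves in the usage example are invented for illustration and are not from the TIGER/GENIE codes.

      import numpy as np

      def transfinite_grid(bottom, top, left, right):
          """Blend four boundary curves into an interior grid via basic (unweighted)
          transfinite interpolation. bottom/top are (n, 2) arrays, left/right are (m, 2)
          arrays, and the curve endpoints must share the four corner points."""
          n, m = len(bottom), len(left)
          s = np.linspace(0.0, 1.0, n)[:, None, None]    # parameter along bottom/top
          t = np.linspace(0.0, 1.0, m)[None, :, None]    # parameter along left/right
          return ((1 - t) * bottom[:, None, :] + t * top[:, None, :]
                  + (1 - s) * left[None, :, :] + s * right[None, :, :]
                  - (1 - s) * (1 - t) * bottom[0] - s * (1 - t) * bottom[-1]
                  - (1 - s) * t * top[0] - s * t * top[-1])    # shape (n, m, 2)

      # Illustrative boundary curves: a unit square whose top edge bulges upward.
      u, v = np.linspace(0.0, 1.0, 21), np.linspace(0.0, 1.0, 11)
      bottom = np.stack([u, np.zeros_like(u)], axis=1)
      top = np.stack([u, 1.0 + 0.2 * np.sin(np.pi * u)], axis=1)
      left = np.stack([np.zeros_like(v), v], axis=1)
      right = np.stack([np.ones_like(v), v], axis=1)
      xy = transfinite_grid(bottom, top, left, right)         # (21, 11, 2) grid points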

  7. Grid generation research at OSU

    NASA Technical Reports Server (NTRS)

    Nakamura, S.

    1992-01-01

    In the last two years, effort was concentrated on: (1) surface modeling; (2) surface grid generation; and (3) 3-D flow space grid generation. Surface modeling shares the same objectives as surface modeling in computer-aided design (CAD), so software available for CAD can in principle be used for solid modeling. Unfortunately, however, CAD software cannot easily be used in practice for grid generation purposes, because it is not designed to provide an appropriate database for grid generation. Therefore, we started developing generalized surface modeling software from scratch that provides the database for surface grid generation. Generating a surface grid is an important step in generating a grid for the 3-D flow space. To generate a surface grid on a given surface representation, we developed a unique algorithm that works on any non-smooth surface. Once the surface grid is generated, a 3-D space grid can be generated. For this purpose, we also developed a new algorithm, which is a hybrid of the hyperbolic and the elliptic grid generation methods. With this hybrid method, orthogonality of the grid near the solid boundary can be easily achieved without introducing empirical fudge factors. Work to develop 2-D and 3-D grids for turbomachinery blade geometries was performed, and as an extension of this research we are planning to develop an adaptive grid procedure with an interactive grid environment.

  8. New Approach for IIR Adaptive Lattice Filter Structure Using Simultaneous Perturbation Algorithm

    NASA Astrophysics Data System (ADS)

    Martinez, Jorge Ivan Medina; Nakano, Kazushi; Higuchi, Kohji

    Adaptive infinite impulse response (IIR), or recursive, filters are less attractive mainly because of stability problems and the difficulties associated with their adaptive algorithms. Therefore, in this paper adaptive IIR lattice filters are studied in order to devise algorithms that preserve the stability of the corresponding direct-form schemes. We analyze the local properties of stationary points, and a transformation achieving this goal is suggested, which gives algorithms that can be efficiently implemented. Application to the Steiglitz-McBride (SM) and Simple Hyperstable Adaptive Recursive Filter (SHARF) algorithms is presented. A modified version of Simultaneous Perturbation Stochastic Approximation (SPSA) is also presented in order to obtain the coefficients in lattice form more efficiently and with lower computational cost and complexity. The results are compared with previous lattice versions of these algorithms. These previous lattice versions may fail to preserve the stability of stationary points.
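
    For reference, the plain SPSA recursion that the paper's modified version builds on estimates the gradient from just two loss evaluations per step using a random simultaneous perturbation; the gain schedules below are the standard textbook choices, not the authors' modification.

      import numpy as np

      def spsa_minimize(loss, theta0, n_iter=200, a=0.1, c=0.1, seed=0):
          """Plain SPSA: approximate the gradient from two loss evaluations per step
          using a random simultaneous perturbation, then take a decaying gradient step."""
          rng = np.random.default_rng(seed)
          theta = np.array(theta0, dtype=float)
          for k in range(1, n_iter + 1):
              ak, ck = a / k ** 0.602, c / k ** 0.101            # standard gain schedules
              delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Bernoulli +/-1 perturbation
              g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2.0 * ck * delta)
              theta -= ak * g_hat
          return theta

      # Example: minimize a simple quadratic; the optimum is theta = 3 in every coordinate.
      theta = spsa_minimize(lambda w: float(np.sum((w - 3.0) ** 2)), theta0=np.zeros(4))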

  9. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements of odometry and inertial sensors. Based on a new derivation where the omnidirectional projection can be linearly parameterized by the positions of the robot and natural feature points, we propose a novel adaptive algorithm, which is similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position by using the tracked feature points in image sequence, the robot's velocity, and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm. PMID:25265622

  10. Vectorizable algorithms for adaptive schemes for rapid analysis of SSME flows

    NASA Technical Reports Server (NTRS)

    Oden, J. Tinsley

    1987-01-01

    An initial study into vectorizable algorithms for use in adaptive schemes for various types of boundary value problems is described. The focus is on two key aspects of adaptive computational methods which are crucial in the use of such methods (for complex flow simulations such as those in the Space Shuttle Main Engine): the adaptive scheme itself and the applicability of element-by-element matrix computations in a vectorizable format for rapid calculations in adaptive mesh procedures.

  11. A Freestream-Preserving High-Order Finite-Volume Method for Mapped Grids with Adaptive-Mesh Refinement

    SciTech Connect

    Guzik, S; McCorquodale, P; Colella, P

    2011-12-16

    A fourth-order accurate finite-volume method is presented for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Novel considerations for formulating the semi-discrete system of equations in computational space combined with detailed mechanisms for accommodating the adapting grids ensure that conservation is maintained and that the divergence of a constant vector field is always zero (freestream-preservation property). Advancement in time is achieved with a fourth-order Runge-Kutta method.

  12. Two general methods for population pharmacokinetic modeling: non-parametric adaptive grid and non-parametric Bayesian

    PubMed Central

    Neely, Michael; Bartroff, Jay; van Guilder, Michael; Yamada, Walter; Bayard, David; Jelliffe, Roger; Leary, Robert; Chubatiuk, Alyona; Schumitzky, Alan

    2013-01-01

    Population pharmacokinetic (PK) modeling methods can be statistically classified as either parametric or nonparametric (NP). Each classification can be divided into maximum likelihood (ML) or Bayesian (B) approaches. In this paper we discuss the nonparametric case using both maximum likelihood and Bayesian approaches. We present two nonparametric methods for estimating the unknown joint population distribution of model parameter values in a pharmacokinetic/pharmacodynamic (PK/PD) dataset. The first method is the NP Adaptive Grid (NPAG). The second is the NP Bayesian (NPB) algorithm with a stick-breaking process to construct a Dirichlet prior. Our objective is to compare the performance of these two methods using a simulated PK/PD dataset. Our results showed excellent performance of NPAG and NPB in a realistically simulated PK study. This simulation allowed us to have benchmarks in the form of the true population parameters to compare with the estimates produced by the two methods, while incorporating challenges like unbalanced sample times and sample numbers as well as the ability to include the covariate of patient weight. We conclude that both NPML and NPB can be used in realistic PK/PD population analysis problems. The advantages of one versus the other are discussed in the paper. NPAG and NPB are implemented in R and freely available for download within the Pmetrics package from www.lapk.org. PMID:23404393

  13. An adaptive discretization of incompressible flow using a multitude of moving Cartesian grids

    NASA Astrophysics Data System (ADS)

    English, R. Elliot; Qiu, Linhai; Yu, Yue; Fedkiw, Ronald

    2013-12-01

    We present a novel method for discretizing the incompressible Navier-Stokes equations on a multitude of moving and overlapping Cartesian grids each with an independently chosen cell size to address adaptivity. Advection is handled with first and second order accurate semi-Lagrangian schemes in order to alleviate any time step restriction associated with small grid cell sizes. Likewise, an implicit temporal discretization is used for the parabolic terms including Navier-Stokes viscosity which we address separately through the development of a method for solving the heat diffusion equations. The most intricate aspect of any such discretization is the method used in order to solve the elliptic equation for the Navier-Stokes pressure or that resulting from the temporal discretization of parabolic terms. We address this by first removing any degrees of freedom which duplicately cover spatial regions due to overlapping grids, and then providing a discretization for the remaining degrees of freedom adjacent to these regions. We observe that a robust second order accurate symmetric positive definite readily preconditioned discretization can be obtained by constructing a local Voronoi region on the fly for each degree of freedom in question in order to obtain both its stencil (logically connected neighbors) and stencil weights. Internal curved boundaries such as at solid interfaces are handled using a simple immersed boundary approach which is directly applied to the Voronoi mesh in both the viscosity and pressure solves. We independently demonstrate each aspect of our approach on test problems in order to show efficacy and convergence before finally addressing a number of common test cases for incompressible flow with stationary and moving solid bodies.

  14. Adaptive-Grid Methods for Phase Field Models of Microstructure Development

    NASA Technical Reports Server (NTRS)

    Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.

    1999-01-01

    In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.

  15. Practical improvements of multi-grid iteration for adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Miyashita, Hisashi; Yamada, Yoshiyuki

    2005-03-01

    Adaptive mesh refinement (AMR) is a powerful tool to efficiently solve multi-scaled problems. However, the vanilla AMR method has a well-known critical demerit, i.e., it cannot be applied to non-local problems. Although multi-grid iteration (MGI) can be regarded as a good remedy for a non-local problem such as the Poisson equation, we observed fundamental difficulties in applying the MGI technique in AMR to realistic problems under complicated mesh layouts because it does not converge or it requires too many iterations even if it does converge. To cope with the problem, when updating the next approximation in the MGI process, we calculate the precise total corrections that are relatively accurate to the current residual by introducing a new iteration for such a total correction. This procedure greatly accelerates the MGI convergence speed especially under complicated mesh layouts.

  16. Adaptive multi-grid method for a periodic heterogeneous medium in 1-D

    SciTech Connect

    Fish, J.; Belsky, V.

    1995-12-31

    A multi-grid method for a periodic heterogeneous medium in 1-D is presented. Based on the homogenization theory, special intergrid connection operators have been developed to imitate the low-frequency response of the differential equations with oscillatory coefficients. The proposed multi-grid method has been proved to have a fast rate of convergence governed by the ratio q/(4-q). Based on this theory, an adaptive multiscale computational scheme is developed. By this technique a computational model entirely constructed on the scale of material heterogeneity is only used where it is necessary to do so, or as indicated by so-called Microscale Reduction Error (MRE) indicators, while in the remaining portion of the problem domain, the medium is treated as homogeneous with effective properties. Such a posteriori MRE indicators and estimators are developed on the basis of assessing the validity of the two-scale asymptotic expansion.

  17. Application of Open Loop H-Adaptation to an Unstructured Grid Tidal Flat Model

    NASA Astrophysics Data System (ADS)

    Cowles, G. W.

    2008-12-01

    The complex topology of tidal flats presents a challenge to coastal ocean models. Recently, several models have been developed employing unstructured grids, which can provide the flexibility in mesh resolution required to resolve the complex bathymetry and coastline. However, the distribution of element size in the initial mesh can be somewhat arbitrary, and is in general the product of the operator tailoring the resolution to the underlying bathymetry and regions of interest. In this work, the flow solution from an idealized tidal flat application is used to drive an open loop h-adaptation of the mesh. The model used for this work is the Finite Volume Coastal Ocean Model (FVCOM), an open source, terrain following model. A background length scale distribution derived from model output is used to generate a new initial mesh for the model run, thus defining an iteration of the procedure. Several metrics for computing the background length scale will be examined. These include direct estimation of spatial discretization error using Richardson's extrapolation from a sequence of meshes as well as heuristics derived from gradients in the primitive variables. Examination of grid independence, computational efficiency, and performance of the scheme for idealized tidal flats with inclusion of morphodynamics will be discussed.
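
    The Richardson-extrapolation error estimate mentioned as one of the candidate metrics can be computed from solutions on three systematically refined meshes; the sketch below assumes a constant refinement ratio r and monotone convergence, and the sample values are made up for illustration.

      import math

      def richardson_estimate(f_coarse, f_medium, f_fine, r=2.0):
          """Observed order of accuracy and discretization-error estimate on the fine grid
          from a quantity computed on three grids refined by a constant ratio r."""
          p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
          error_fine = (f_fine - f_medium) / (r ** p - 1.0)
          return p, error_fine

      # Made-up sample values of some output functional on coarse, medium, and fine meshes.
      p, err = richardson_estimate(1.120, 1.032, 1.008)
      # p is roughly 1.9; f_fine + err gives the Richardson-extrapolated estimate.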

  18. An Adaptive Digital Image Watermarking Algorithm Based on Morphological Haar Wavelet Transform

    NASA Astrophysics Data System (ADS)

    Huang, Xiaosheng; Zhao, Sujuan

    At present, most wavelet-based digital watermarking algorithms are based on linear wavelet transforms and fewer on non-linear wavelet transforms. In this paper, we propose an adaptive digital image watermarking algorithm based on a non-linear wavelet transform--the Morphological Haar Wavelet Transform. In the algorithm, the original image and the watermark image are each decomposed with a multi-scale morphological wavelet transform. Then the watermark information is adaptively embedded into the original image at different resolutions, taking into account the features of the Human Visual System (HVS). The experimental results show that our method is more robust and effective than ordinary wavelet transform algorithms.

  19. Comparative study of adaptive-noise-cancellation algorithms for intrusion detection systems

    SciTech Connect

    Claassen, J.P.; Patterson, M.M.

    1981-01-01

    Some intrusion detection systems are susceptible to nonstationary noise resulting in frequent nuisance alarms and poor detection when the noise is present. Adaptive inverse filtering for single-channel systems and adaptive noise cancellation for two-channel systems have both demonstrated good potential in removing correlated noise components prior to detection. For such noise-susceptible systems the suitability of a noise reduction algorithm must be established in a trade-off study weighing algorithm complexity against performance. The performance characteristics of several distinct classes of algorithms are established through comparative computer studies using real signals. The relative merits of the different algorithms are discussed in the light of the nature of intruder and noise signals.
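
    As a concrete example of the two-channel case, the classic LMS noise canceller adapts an FIR filter driven by the noise-only reference channel so that its output tracks the correlated noise in the primary channel; this generic sketch is not one of the specific algorithms compared in the report, and the tap count and step size are assumptions.

      import numpy as np

      def lms_noise_canceller(primary, reference, n_taps=32, mu=0.01):
          """Two-channel adaptive noise cancellation: an FIR filter driven by the
          reference (noise-only) channel is adapted with LMS so that its output tracks
          the correlated noise in the primary channel; the error is the cleaned signal."""
          primary = np.asarray(primary, dtype=float)
          reference = np.asarray(reference, dtype=float)
          w = np.zeros(n_taps)
          cleaned = np.zeros(len(primary))
          for n in range(n_taps, len(primary)):
              x = reference[n - n_taps:n][::-1]     # most recent reference samples
              noise_hat = w @ x                     # filter's estimate of the noise
              e = primary[n] - noise_hat            # residual = signal estimate
              w += 2.0 * mu * e * x                 # LMS weight update
              cleaned[n] = e
          return cleaned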

  20. Dynamic grid refinement for partial differential equations on parallel computers

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids to provide adaptive resolution and fast solution of PDEs. An asynchronous version of FAC, called AFAC, that completely eliminates the bottleneck to parallelism is presented. This paper describes the advantage that this algorithm has in adaptive refinement for moving singularities on multiprocessor computers. This work is applicable to the parallel solution of two- and three-dimensional shock tracking problems.

  1. Binocular self-calibration performed via adaptive genetic algorithm based on laser line imaging

    NASA Astrophysics Data System (ADS)

    Apolinar Muñoz Rodríguez, J.; Mejía Alanís, Francisco Carlos

    2016-07-01

    An accurate technique to perform binocular self-calibration by means of an adaptive genetic algorithm based on a laser line is presented. In this calibration, the genetic algorithm computes the vision parameters through simulated binary crossover (SBX). To carry it out, the genetic algorithm constructs an objective function from the binocular geometry of the laser line projection. Then, the SBX minimizes the objective function via chromosome recombination. In this algorithm, the adaptive procedure determines the search space via the line position to obtain the minimum convergence. Thus, the chromosomes of vision parameters provide the minimization. The approach of the proposed adaptive genetic algorithm is to calibrate and recalibrate the binocular setup without references and physical measurements. This procedure improves on traditional genetic algorithms, which calibrate the vision parameters by means of references and an unknown search space, because the proposed adaptive algorithm avoids errors produced by the absence of references. Additionally, the three-dimensional vision is carried out based on the laser line position and the vision parameters. The contribution of the proposed algorithm is corroborated by an evaluation of the accuracy of the binocular calibration, which is compared with that obtained via traditional genetic algorithms.
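
    The simulated binary crossover (SBX) operator at the heart of the recombination step can be sketched in its standard form; the distribution index eta and the omission of a per-variable crossover probability are simplifications relative to typical implementations.

      import numpy as np

      def sbx_crossover(p1, p2, eta=2.0, seed=None):
          """Simulated binary crossover (SBX): children are spread around the two real-
          valued parents with a polynomial distribution controlled by the index eta."""
          rng = np.random.default_rng(seed)
          u = rng.random(len(p1))
          beta = np.where(u <= 0.5,
                          (2.0 * u) ** (1.0 / (eta + 1.0)),
                          (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
          c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
          c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
          return c1, c2

      child1, child2 = sbx_crossover(np.array([1.0, 4.0]), np.array([2.0, 3.0]))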

  2. A novel algorithm for real-time adaptive signal detection and identification

    SciTech Connect

    Sleefe, G.E.; Ladd, M.D.; Gallegos, D.E.; Sicking, C.W.; Erteza, I.A.

    1998-04-01

    This paper describes a novel digital signal processing algorithm for adaptively detecting and identifying signals buried in noise. The algorithm continually computes and updates the long-term statistics and spectral characteristics of the background noise. Using this noise model, a set of adaptive thresholds and matched digital filters are implemented to enhance and detect signals that are buried in the noise. The algorithm furthermore automatically suppresses coherent noise sources and adapts to time-varying signal conditions. Signal detection is performed in both the time-domain and the frequency-domain, thereby permitting the detection of both broad-band transients and narrow-band signals. The detection algorithm also provides for the computation of important signal features such as amplitude, timing, and phase information. Signal identification is achieved through a combination of frequency-domain template matching and spectral peak picking. The algorithm described herein is well suited for real-time implementation on digital signal processing hardware. This paper presents the theory of the adaptive algorithm, provides an algorithmic block diagram, and demonstrates its implementation and performance with real-world data. The computational efficiency of the algorithm is demonstrated through benchmarks on specific DSP hardware. The applications for this algorithm, which range from vibration analysis to real-time image processing, are also discussed.

  3. Coupling a local adaptive grid refinement technique with an interface sharpening scheme for the simulation of two-phase flow and free-surface flows using VOF methodology

    NASA Astrophysics Data System (ADS)

    Malgarinos, Ilias; Nikolopoulos, Nikolaos; Gavaises, Manolis

    2015-11-01

    This study presents the implementation of an interface sharpening scheme on the basis of the Volume of Fluid (VOF) method, as well as its application in a number of theoretical and real cases usually modelled in the literature. More specifically, the solution of an additional sharpening equation along with the standard VOF model equations is proposed, offering the advantage of "restraining" interface numerical diffusion, while also keeping a quite smooth induced velocity field around the interface. This sharpening equation is solved right after volume fraction advection; however, a novel method for its coupling with the momentum equation has been applied in order to save computational time. The advantages of the proposed sharpening scheme lie in the facts that a) it is mass conservative, so its application does not have a negative impact on one of the most important benefits of the VOF method, and b) it can be used on coarser grids, as the suppression of the numerical diffusion is grid independent. The coupling of the solved equation with an adaptive local grid refinement technique is used to further decrease computational time, while keeping high levels of accuracy in the area of maximum interest (the interface). The numerical algorithm is initially tested against two theoretical benchmark cases for interface tracking methodologies, followed by its validation for the case of a free-falling water droplet accelerated by gravity, as well as the normal liquid droplet impingement onto a flat substrate. Results indicate that the coupling of the interface sharpening equation with the HRIC discretization scheme used for the volume fraction flux term not only decreases the interface numerical diffusion, but also allows the induced velocity field to be less perturbed by spurious velocities across the liquid-gas interface. With the use of the proposed algorithmic flow path, coarser grids can replace finer ones at a slight expense of accuracy.

  4. Axisymmetric modeling of cometary mass loading on an adaptively refined grid: MHD results

    NASA Technical Reports Server (NTRS)

    Gombosi, Tamas I.; Powell, Kenneth G.; De Zeeuw, Darren L.

    1994-01-01

    The first results of an axisymmetric magnetohydrodynamic (MHD) model of the interaction of an expanding cometary atmosphere with the solar wind are presented. The model assumes that far upstream the plasma flow lines are parallel to the magnetic field vector. The effects of mass loading and ion-neutral friction are taken into account by the governing equations, which are solved on an adaptively refined unstructured grid using a Monotone Upstream Centered Schemes for Conservative Laws (MUSCL)-type numerical technique. The combination of the adaptive refinement with the MUSCL scheme allows the entire cometary atmosphere to be modeled, while still resolving both the shock and the near-nucleus region of the comet. The main findings are the following: (1) A shock is formed approximately 0.45 Mkm upstream of the comet (its location is controlled by the sonic and Alfvenic Mach numbers of the ambient solar wind flow and by the cometary mass addition rate). (2) A contact surface is formed approximately 5,600 km upstream of the nucleus separating an outward expanding cometary ionosphere from the nearly stagnating solar wind flow. The location of the contact surface is controlled by the upstream flow conditions, the mass loading rate and the ion-neutral drag. The contact surface is also the boundary of the diamagnetic cavity. (3) A closed inner shock terminates the supersonic expansion of the cometary ionosphere. This inner shock is closer to the nucleus on the dayside than on the nightside.

  5. Finite-difference modeling with variable grid-size and adaptive time-step in porous media

    NASA Astrophysics Data System (ADS)

    Liu, Xinxin; Yin, Xingyao; Wu, Guochen

    2014-04-01

    Forward modeling of elastic wave propagation in porous media is of great importance for understanding and interpreting the influence of rock properties on the characteristics of the seismic wavefield. However, the finite-difference forward-modeling method is usually implemented with a global spatial grid-size and time-step; it incurs a large computational cost when small-scale oil/gas-bearing structures or large velocity contrasts exist underground. To overcome this handicap, this paper developed a staggered-grid finite-difference scheme for elastic wave modeling in porous media that combines variable grid-size and variable time-step. Variable finite-difference coefficients and wavefield interpolation were used to realize the transition of wave propagation between regions of different grid-size. The accuracy and efficiency of the algorithm were shown by numerical examples. The proposed method achieves low computational cost in elastic wave simulation for heterogeneous oil/gas reservoirs.

  6. Moving Overlapping Grids with Adaptive Mesh Refinement for High-Speed Reactive and Non-reactive Flow

    SciTech Connect

    Henshaw, W D; Schwendeman, D W

    2005-08-30

    We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid, which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows in order to demonstrate the use and accuracy of the numerical approach.

  7. Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.

  8. Design and analysis of closed-loop decoder adaptation algorithms for brain-machine interfaces.

    PubMed

    Dangi, Siddharth; Orsborn, Amy L; Moorman, Helene G; Carmena, Jose M

    2013-07-01

    Closed-loop decoder adaptation (CLDA) is an emerging paradigm for achieving rapid performance improvements in online brain-machine interface (BMI) operation. Designing an effective CLDA algorithm requires making multiple important decisions, including choosing the timescale of adaptation, selecting which decoder parameters to adapt, crafting the corresponding update rules, and designing CLDA parameters. These design choices, combined with the specific settings of CLDA parameters, will directly affect the algorithm's ability to make decoder parameters converge to values that optimize performance. In this article, we present a general framework for the design and analysis of CLDA algorithms and support our results with experimental data of two monkeys performing a BMI task. First, we analyze and compare existing CLDA algorithms to highlight the importance of four critical design elements: the adaptation timescale, selective parameter adaptation, smooth decoder updates, and intuitive CLDA parameters. Second, we introduce mathematical convergence analysis using measures such as mean-squared error and KL divergence as a useful paradigm for evaluating the convergence properties of a prototype CLDA algorithm before experimental testing. By applying these measures to an existing CLDA algorithm, we demonstrate that our convergence analysis is an effective analytical tool that can ultimately inform and improve the design of CLDA algorithms. PMID:23607558
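
    To make the "smooth decoder updates" and convergence-measure ideas above concrete, here is a minimal sketch of an exponentially weighted decoder-parameter update together with a mean-squared-error convergence measure. The blend parameter rho, the parameter vectors, and the use of a simple convex combination are assumptions for illustration, not the specific CLDA rules analyzed in the paper.

```python
import numpy as np

def smooth_update(theta_old, theta_batch, rho=0.5):
    """One smooth closed-loop decoder update: blend the previous decoder
    parameters with parameters fit on the most recent data batch.
    rho controls the adaptation timescale (rho -> 1 means slow updates)."""
    return rho * theta_old + (1.0 - rho) * theta_batch

def mse_to_optimum(theta, theta_star):
    """Mean-squared error between the current and reference ('optimal')
    decoder parameters, one of the convergence measures mentioned above."""
    return float(np.mean((theta - theta_star) ** 2))
```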

  9. The study of key technology on spectral reflectance reconstruction based on the algorithm of adaptive compressive sensing

    NASA Astrophysics Data System (ADS)

    Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang; Zilan, Pan; Dawei, Zhang; Xiuhua, Ma

    2016-04-01

    In order to improve the reconstruction accuracy and reduce the workload, the algorithm of compressive sensing based on iterative thresholding is combined with a method of adaptive selection of the training sample, and a new adaptive compressive sensing algorithm is put forward. Three kinds of training samples are used to reconstruct the spectral reflectance of the testing sample based on the compressive sensing algorithm and the adaptive compressive sensing algorithm, and the color differences and errors are compared. The experimental results show that the spectral reconstruction precision based on the adaptive compressive sensing algorithm is better than that based on the standard compressive sensing algorithm.
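
    The abstract does not give the iterative-threshold step explicitly; a generic iterative soft-thresholding reconstruction, one common form of threshold-based compressive sensing, is sketched below. The measurement matrix Phi, the regularization weight lam, the step size, and the iteration count are illustrative assumptions, and the adaptive training-sample selection described in the paper is not reproduced.

```python
import numpy as np

def ist_reconstruct(y, Phi, lam=0.01, step=None, n_iter=200):
    """Iterative soft-thresholding recovery of a sparse vector x from
    measurements y = Phi @ x + noise (minimizes 0.5*||Phi x - y||^2 + lam*||x||_1)."""
    if step is None:
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # safe gradient step size
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)               # gradient of the data term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return x
```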

  10. Adaption of unstructured meshes using node movement

    SciTech Connect

    Carpenter, J.G.; McRae, V.D.S.

    1996-12-31

    The adaption algorithm of Benson and McRae is modified for application to unstructured grids. The weight function generation is adapted accordingly, and node movement is limited to prevent grid crossover. A NACA 0012 airfoil is used as a test case to evaluate the modified algorithm when applied to unstructured grids, and the results are compared to those obtained by Warren. An adaptive mesh solution for the Sudhoo and Hall four-element airfoil is included as a demonstration case.

  11. A hybrid adaptive routing algorithm for event-driven wireless sensor networks.

    PubMed

    Figueiredo, Carlos M S; Nakamura, Eduardo F; Loureiro, Antonio A F

    2009-01-01

    Routing is a basic function in wireless sensor networks (WSNs). For these networks, routing algorithms depend on the characteristics of the applications and, consequently, there is no self-contained algorithm suitable for every case. In some scenarios, the network behavior (traffic load) may vary considerably, as in an event-driven application, favoring different algorithms at different instants. This work presents a hybrid and adaptive algorithm for routing in WSNs, called Multi-MAF, that adapts its behavior autonomously in response to the variation of network conditions. In particular, the proposed algorithm applies both reactive and proactive strategies for routing infrastructure creation, and uses an event-detection estimation model to switch between the strategies and save energy. To show the advantages of the proposed approach, it is evaluated through simulations. Comparisons with independent reactive and proactive algorithms show improvements in energy consumption. PMID:22423207

  12. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm

    PubMed Central

    Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang

    2016-01-01

    Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a parameter dynamic adaptive operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor was used to control the parameter dynamic adaptive adjusting process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938

  13. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm.

    PubMed

    Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang

    2016-01-01

    Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a parameter dynamic adaptive operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor was used to control the parameter dynamic adaptive adjusting process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938

  14. Research of adaptive threshold edge detection algorithm based on statistics canny operator

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Wang, Huaisuo; Huang, Hua

    2015-12-01

    The traditional Canny operator cannot obtain the optimal threshold in different scenes; on this basis, an improved Canny edge detection algorithm based on an adaptive threshold is proposed. The results on the experimental pictures indicate that the improved algorithm can obtain a reasonable threshold and achieves better accuracy and precision in edge detection.
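
    The paper's statistics-based threshold selection is not reproduced here; as a rough stand-in, a widely used heuristic derives the Canny hysteresis thresholds from the median image intensity, as sketched below. The sigma value and the use of OpenCV are illustrative assumptions.

```python
import cv2
import numpy as np

def adaptive_canny(gray, sigma=0.33):
    """Pick Canny hysteresis thresholds from the image's median intensity
    instead of fixed values, so the detector adapts to the scene.
    gray is expected to be an 8-bit grayscale image."""
    v = float(np.median(gray))
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(gray, lower, upper)
```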

  15. Adaptive Generation of Multimaterial Grids from imaging data for Biomedical Lagrangian Fluid-Structure Simulations

    SciTech Connect

    Carson, James P.; Kuprat, Andrew P.; Jiao, Xiangmin; Dyedov, Volodymyr; del Pin, Facundo; Guccione, Julius M.; Ratcliffe, Mark B.; Einstein, Daniel R.

    2010-04-01

    Spatial discretization of complex imaging-derived fluid-solid geometries, such as the cardiac environment, is a critical but often overlooked challenge in biomechanical computations. This is particularly true in problems with Lagrangian interfaces, where the fluid and solid phases must match geometrically. For simplicity and better accuracy, it is also highly desirable for the two phases to share the same surface mesh at the interface between them. We outline a method for solving this problem, and illustrate the approach with a 3D fluid-solid mesh of the mouse heart. An MRI perfusion-fixed dataset of a mouse heart with 50μm isotropic resolution was semi-automatically segmented using a customized multimaterial connected-threshold approach that divided the volume into non-overlapping regions of blood, tissue and background. Subsequently, a multimaterial marching cubes algorithm was applied to the segmented data to produce two detailed, compatible isosurfaces, one for blood and one for tissue. Both isosurfaces were simultaneously smoothed with a multimaterial smoothing algorithm that exactly conserves the volume for each phase. Using these two isosurfaces, we developed and applied novel automated meshing algorithms to generate anisotropic hybrid meshes on arbitrary biological geometries with the number of layers and the desired element anisotropy for each phase as the only input parameters. Since our meshes adapt to the local feature sizes and include boundary layer prisms, they are more efficient and accurate than non-adaptive, isotropic meshes, and the fluid-structure interaction computations will tend to have relative error equilibrated over the whole mesh.

  16. Parallelization of an Adaptive Multigrid Algorithm for Fast Solution of Finite Element Structural Problems

    SciTech Connect

    Crane, N K; Parsons, I D; Hjelmstad, K D

    2002-03-21

    Adaptive mesh refinement selectively subdivides the elements of a coarse user supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundreds of processors.

  17. Neural network based adaptive control of nonlinear plants using random search optimization algorithms

    NASA Technical Reports Server (NTRS)

    Boussalis, Dhemetrios; Wang, Shyh J.

    1992-01-01

    This paper presents a method for utilizing artificial neural networks for direct adaptive control of dynamic systems with poorly known dynamics. The neural network weights (controller gains) are adapted in real time using state measurements and a random search optimization algorithm. The results are demonstrated via simulation using two highly nonlinear systems.

  18. Adaptive algorithm for cloud cover estimation from all-sky images over the sea

    NASA Astrophysics Data System (ADS)

    Krinitskiy, M. A.; Sinitsyn, A. V.

    2016-05-01

    A new algorithm for cloud cover estimation has been formulated and developed based on the synthetic control index, called the grayness rate index, and an additional algorithm step of adaptive filtering of the Mie scattering contribution. A setup for automated cloud cover estimation has been designed, assembled, and tested under field conditions. The results show a significant advantage of the new algorithm over currently commonly used procedures.

  19. Mean-shift tracking algorithm based on adaptive fusion of multi-feature

    NASA Astrophysics Data System (ADS)

    Yang, Kai; Xiao, Yanghui; Wang, Ende; Feng, Junhui

    2015-10-01

    The classic mean-shift tracking algorithm has achieved success in the field of computer vision because of its speed and efficiency. However, the classic mean-shift tracking algorithm can fail to track in complicated conditions, such as when parts of the target are occluded, when there is little color difference between the target and the background, or when the illumination changes suddenly. In order to solve these problems, an improved algorithm is proposed based on the mean-shift tracking algorithm and adaptive fusion of features. Color, edges and corners of the target are used to describe the target in the feature space, and a method for measuring the discrimination of the various features is presented to make feature selection adaptive. The improved mean-shift tracking algorithm is then introduced based on the fusion of these features. Because the mean-shift tracking algorithm with a single color feature is vulnerable to sudden changes of illumination, we eliminate these effects by fusing an affine illumination model with the color feature space, which ensures the correctness and stability of target tracking in that condition. Using a group of videos to test the proposed algorithm, the results show that the tracking correctness and stability of this algorithm are better than those of the mean-shift tracking algorithm with a single feature space. Furthermore, the proposed algorithm is more robust than the classic algorithm under occlusion, similarity between target and background, or illumination change.

  20. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique

    PubMed Central

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, and is thus more prone to converge to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performances to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032

  1. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique.

    PubMed

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, and is thus more prone to converge to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performances to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032

  2. An Adaptive Data Collection Algorithm Based on a Bayesian Compressed Sensing Framework

    PubMed Central

    Liu, Zhi; Zhang, Mengmeng; Cui, Jian

    2014-01-01

    For Wireless Sensor Networks, energy efficiency is always a key consideration in system design. Compressed sensing is a new theory which has promising prospects in WSNs. However, how to construct a sparse projection matrix is a problem. In this paper, based on a Bayesian compressed sensing framework, a new adaptive algorithm which can integrate routing and data collection is proposed. By introducing new target node selection metrics, embedding the routing structure and maximizing the differential entropy for each collection round, an adaptive projection vector is constructed. Simulations show that compared to reference algorithms, the proposed algorithm can decrease computation complexity and improve energy efficiency. PMID:24818659

  3. Exact charge-conserving scatter-gather algorithm for particle-in-cell simulations on unstructured grids: A geometric perspective

    NASA Astrophysics Data System (ADS)

    Moon, Haksu; Teixeira, Fernando L.; Omelchenko, Yuri A.

    2015-09-01

    We describe a charge-conserving scatter-gather algorithm for particle-in-cell simulations on unstructured grids. Charge conservation is obtained from first principles, i.e., without the need for any post-processing or correction steps. This algorithm recovers, at a fundamental level, the scatter-gather algorithms presented recently by Campos-Pinto et al. (2014) (to first-order) and by Squire et al. (2012), but it is derived here in a streamlined fashion from a geometric viewpoint. Some ingredients reflecting this viewpoint are (1) the use of (discrete) differential forms of various degrees to represent fields, currents, and charged particles and provide localization rules for the degrees of freedom thereof on the various grid elements (nodes, edges, facets), (2) use of Whitney forms as basic interpolants from discrete differential forms to continuum space, and (3) use of a Galerkin formula for the discrete Hodge star operators (i.e., "mass matrices" incorporating the metric datum of the grid) applicable to generally irregular, unstructured grids. The expressions obtained for the scatter charges and scatter currents are very concise and do not involve numerical quadrature rules. Appropriate fractional areas within each grid element are identified that represent scatter charges and scatter currents within the element, and a simple geometric representation for the (exact) charge conservation mechanism is obtained by such identification. The field update is based on the coupled first-order Maxwell's curl equations to avoid spurious modes with secular growth (otherwise present in formulations that discretize the second-order wave equation). Examples are provided to verify preservation of discrete Gauss' law for all times.

  4. Formulation and implementation of nonstationary adaptive estimation algorithm with applications to air-data reconstruction

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.

    1985-01-01

    The dynamics model and data sources used to perform air-data reconstruction are discussed, as well as the Kalman filter. The need for adaptive determination of the noise statistics of the process is indicated. The filter innovations are presented as a means of developing the adaptive criterion, which is based on the true mean and covariance of the filter innovations. A method for the numerical approximation of the mean and covariance of the filter innovations is presented. The algorithm as developed is applied to air-data reconstruction for the Space Shuttle, and data obtained from the third landing are presented. To verify the performance of the adaptive algorithm, the reconstruction is also performed using a constant covariance Kalman filter. The results of the reconstructions are compared, and the adaptive algorithm exhibits better performance.
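
    A minimal sketch of the innovation-based adaptation idea follows: a standard Kalman filter in which the measurement-noise covariance is periodically re-estimated from the sample covariance of recent innovations. The state-space matrices, the moving-window estimator, and the window length are illustrative assumptions; the Shuttle air-data models themselves are not shown.

```python
import numpy as np

def adaptive_kalman(zs, F, H, Q, R0, x0, P0, window=50):
    """Kalman filter whose measurement-noise covariance R is re-estimated
    from the sample covariance of recent filter innovations.

    zs: sequence of measurement vectors; F, H, Q, R0: system matrices;
    x0, P0: initial state estimate and covariance (all illustrative)."""
    x, P, R = np.array(x0, float), np.array(P0, float), np.array(R0, float)
    innovations, estimates = [], []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # innovation (measurement residual)
        nu = z - H @ x
        innovations.append(nu)
        if len(innovations) >= window:
            # the empirical innovation covariance approximates S = H P H^T + R,
            # so back out R and floor its diagonal for numerical safety
            C = np.atleast_2d(np.cov(np.asarray(innovations[-window:]).T, bias=True))
            R = C - H @ P @ H.T
            R[np.diag_indices_from(R)] = np.maximum(np.diag(R), 1e-9)
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ nu
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```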

  5. Formulation and implementation of nonstationary adaptive estimation algorithm with applications to air-data reconstruction

    NASA Technical Reports Server (NTRS)

    Whitmore, S. A.

    1985-01-01

    The dynamics model and data sources used to perform air-data reconstruction are discussed, as well as the Kalman filter. The need for adaptive determination of the noise statistics of the process is indicated. The filter innovations are presented as a means of developing the adaptive criterion, which is based on the true mean and covariance of the filter innovations. A method for the numerical approximation of the mean and covariance of the filter innovations is presented. The algorithm as developed is applied to air-data reconstruction for the Space Shuttle, and data obtained from the third landing are presented. To verify the performance of the adaptive algorithm, the reconstruction is also performed using a constant covariance Kalman filter. The results of the reconstructions are compared, and the adaptive algorithm exhibits better performance.

  6. Establishing a Dynamic Self-Adaptation Learning Algorithm of the BP Neural Network and Its Applications

    NASA Astrophysics Data System (ADS)

    Li, Xiaofeng; Xiang, Suying; Zhu, Pengfei; Wu, Min

    2015-12-01

    In order to avoid the inherent deficiencies of the traditional BP neural network, such as slow convergence speed, a tendency to fall into local minima, poor generalization ability and difficulty in determining the network structure, a dynamic self-adaptive learning algorithm for the BP neural network is put forward to improve the performance of the BP neural network. The new algorithm combines the merits of principal component analysis, particle swarm optimization, correlation analysis and a self-adaptive model, and hence can effectively solve the problems of selecting the structural parameters, initial connection weights and thresholds, and learning rates of the BP neural network. This new algorithm not only reduces human intervention, optimizes the topological structure of BP neural networks and improves the network generalization ability, but also accelerates the convergence speed of the network, avoids trapping into local minima, and enhances network adaptation and prediction ability. The dynamic self-adaptive learning algorithm of the BP neural network is used to forecast the total retail sales of consumer goods of Sichuan Province, China. Empirical results indicate that the new algorithm is superior to the traditional BP network algorithm in prediction accuracy and time consumption, which shows the feasibility and effectiveness of the new algorithm.

  7. Algorithms and architectures for adaptive least squares signal processing, with applications in magnetoencephalography

    SciTech Connect

    Lewis, P.S.

    1988-10-01

    Least squares techniques are widely used in adaptive signal processing. While algorithms based on least squares are robust and offer rapid convergence properties, they also tend to be complex and computationally intensive. To enable the use of least squares techniques in real-time applications, it is necessary to develop adaptive algorithms that are efficient and numerically stable, and can be readily implemented in hardware. The first part of this work presents a uniform development of general recursive least squares (RLS) algorithms, and multichannel least squares lattice (LSL) algorithms. RLS algorithms are developed for both direct estimators, in which a desired signal is present, and for mixed estimators, in which no desired signal is available, but the signal-to-data cross-correlation is known. In the second part of this work, new and more flexible techniques of mapping algorithms to array architectures are presented. These techniques, based on the synthesis and manipulation of locally recursive algorithms (LRAs), have evolved from existing data dependence graph-based approaches, but offer the increased flexibility needed to deal with the structural complexities of the RLS and LSL algorithms. Using these techniques, various array architectures are developed for each of the RLS and LSL algorithms and the associated space/time tradeoffs presented. In the final part of this work, the application of these algorithms is demonstrated by their employment in the enhancement of single-trial auditory evoked responses in magnetoencephalography. 118 refs., 49 figs., 36 tabs.
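
    As a concrete example of the direct-estimator RLS family discussed above, a textbook exponentially weighted RLS filter is sketched below. The filter order, forgetting factor, and initialization constant are illustrative parameters rather than values from the report, and the lattice (LSL) and array-architecture material is not represented.

```python
import numpy as np

class RLSFilter:
    """Exponentially weighted recursive least squares (RLS) adaptive filter."""

    def __init__(self, order, lam=0.99, delta=1e3):
        self.w = np.zeros(order)            # filter weights
        self.P = delta * np.eye(order)      # inverse correlation matrix
        self.lam = lam                      # forgetting factor

    def update(self, u, d):
        """u: input vector (length = order), d: desired sample."""
        Pu = self.P @ u
        k = Pu / (self.lam + u @ Pu)        # gain vector
        e = d - self.w @ u                  # a priori error
        self.w = self.w + k * e
        self.P = (self.P - np.outer(k, Pu)) / self.lam
        return e
```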

  8. Computation of shock waves in media with an interphase boundary by the CIP-CUP method on an adaptive grid

    NASA Astrophysics Data System (ADS)

    Guseva, T. S.

    2016-01-01

    A numerical technique for computing shock waves in compressible media with movable, deforming interphase boundaries, including those of the gas-liquid type, has been developed. An approach without explicit separation of the interphase boundary is applied. The CIP-CUP method is used for integrating the equations of gas dynamics. A special kind of adaptive grid (the soroban grid) is utilized. Some results of testing the technique on one- and two-dimensional problems are given. Results of computing the impact of a jet on a thin liquid layer on a wall are presented.

  9. Efficient Algorithm for Locating and Sizing Series Compensation Devices in Large Transmission Grids: Solutions and Applications (PART II)

    SciTech Connect

    Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael

    2014-01-14

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~2700 nodes and ~3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of the Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  10. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    SciTech Connect

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~ 2700 nodes and ~ 3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  11. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript (Frolov et al 2014 New J. Phys. 16 art. no.) , we developed a novel optimization method for the placement, sizing, and operation of flexible alternating current transmission system (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide series compensation (SC) via modification of line inductance. In this sequel manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (˜2700 nodes and ˜3300 lines). The results from the 30-bus network are used to study the general properties of the solutions, including nonlocality and sparsity. The Polish grid is used to demonstrate the computational efficiency of the heuristics that leverage sequential linearization of power flow constraints, and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, we can use the algorithm to solve a Polish transmission grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (i) uniform load growth, (ii) multiple overloaded configurations, and (iii) sequential generator retirements.

  12. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: II. Solutions and applications

    DOE PAGES Beta

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    In a companion manuscript, we developed a novel optimization method for placement, sizing, and operation of Flexible Alternating Current Transmission System (FACTS) devices to relieve transmission network congestion. Specifically, we addressed FACTS that provide Series Compensation (SC) via modification of line inductance. In this manuscript, this heuristic algorithm and its solutions are explored on a number of test cases: a 30-bus test network and a realistically-sized model of the Polish grid (~ 2700 nodes and ~ 3300 lines). The results on the 30-bus network are used to study the general properties of the solutions including non-locality and sparsity. The Polish grid is used as a demonstration of the computational efficiency of the heuristics that leverages sequential linearization of power flow constraints and cutting plane methods that take advantage of the sparse nature of the SC placement solutions. Using these approaches, the algorithm is able to solve an instance of Polish grid in tens of seconds. We explore the utility of the algorithm by analyzing transmission networks congested by (a) uniform load growth, (b) multiple overloaded configurations, and (c) sequential generator retirements.

  13. Optimization of Spherical Roller Bearing Design Using Artificial Bee Colony Algorithm and Grid Search Method

    NASA Astrophysics Data System (ADS)

    Tiwari, Rajiv; Waghole, Vikas

    2015-07-01

    Bearing standards impose restrictions on the internal geometry of spherical roller bearings. Geometrical and strength constraint conditions have been formulated for the optimization of the bearing design. Long fatigue life is one of the most important criteria in the optimum design of a bearing. The life is directly proportional to the dynamic capacity; hence, the objective function has been chosen as the maximization of the dynamic capacity. The effects of speed and static loads acting on the bearing are also taken into account. Design variables for the bearing include five geometrical parameters: the roller diameter, the roller length, the bearing pitch diameter, the number of rollers, and the contact angle. A few design constraint parameters are also included in the optimization, the bounds of which are obtained by initial runs of the optimization. The optimization program is run for different values of these design constraint parameters, and a range of the parameters is obtained for which the objective function has a higher value. The artificial bee colony algorithm (ABCA) has been used to solve the constrained optimization problem, and the optimum design is compared with the one obtained from the grid search method (GSM), both operating independently. The ABCA and the GSM have finally been combined to reach the global optimum point. A constraint violation study has also been carried out to give priority to the constraints having a greater possibility of violation. Optimized bearing designs show better performance parameters than those specified in bearing catalogs. A sensitivity analysis of the bearing parameters has also been carried out to assess the effect of manufacturing tolerances on the objective function.
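
    The bearing model itself is not reproduced here, but the grid search method (GSM) side of the comparison amounts to exhaustively evaluating the objective on a regular grid of design variables and keeping the best feasible point, roughly as sketched below. The objective, bounds, constraint functions, and grid resolution are placeholders, not the paper's formulation.

```python
import itertools
import numpy as np

def grid_search(objective, bounds, n_points=10, constraints=()):
    """Brute-force grid search: evaluate the objective (e.g., dynamic
    capacity) on a regular grid over the design variables and keep the
    best point satisfying all constraints g(x) <= 0."""
    axes = [np.linspace(lo, hi, n_points) for lo, hi in bounds]
    best_x, best_val = None, -np.inf
    for x in itertools.product(*axes):
        x = np.asarray(x)
        if all(g(x) <= 0.0 for g in constraints):   # feasibility check
            val = objective(x)
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val
```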

  14. Adaptive inpainting algorithm based on DCT induced wavelet regularization.

    PubMed

    Li, Yan-Ran; Shen, Lixin; Suter, Bruce W

    2013-02-01

    In this paper, we propose an image inpainting optimization model whose objective function is a smoothed ℓ1 norm of the weighted nondecimated discrete cosine transform (DCT) coefficients of the underlying image. By identifying the objective function of the proposed model as a sum of a differentiable term and a nondifferentiable term, we present a basic algorithm inspired by Beck and Teboulle's recent work on the model. Based on this basic algorithm, we propose an automatic way to determine the weights involved in the model and update them in each iteration. The DCT as an orthogonal transform is used in various applications. We view the rows of a DCT matrix as the filters associated with a multiresolution analysis. Nondecimated wavelet transforms with these filters are explored in order to analyze the images to be inpainted. Our numerical experiments verify that under the proposed framework, the filters from a DCT matrix demonstrate promise for the task of image inpainting. PMID:23060331
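
    The paper works with weighted nondecimated DCT-induced wavelet frames and adaptive weights; the much simpler sketch below only illustrates the underlying idea of alternating transform-domain sparsification with data consistency on the known pixels, using a plain global DCT. The threshold lam, the iteration count, and the mask convention are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_inpaint(image, mask, lam=0.1, n_iter=100):
    """Simplified transform-domain inpainting: alternately soft-threshold
    the image's DCT coefficients and re-impose the known pixels.
    mask is True where pixels are observed; image is a float array."""
    x = np.where(mask, image, 0.0)
    for _ in range(n_iter):
        c = dctn(x, norm='ortho')
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)   # soft threshold
        x = idctn(c, norm='ortho')
        x = np.where(mask, image, x)                        # keep observed pixels
    return x
```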

  15. A General Hybrid Radiation Transport Scheme for Star Formation Simulations on an Adaptive Grid

    NASA Astrophysics Data System (ADS)

    Klassen, Mikhail; Kuiper, Rolf; Pudritz, Ralph E.; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars

    2014-12-01

    Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.

  16. Features of CPB: A Poisson-Boltzmann Solver that Uses an Adaptive Cartesian Grid

    PubMed Central

    Harris, Robert C.; Mackoy, Travis

    2014-01-01

    The capabilities of an adaptive Cartesian grid (ACG)-based Poisson-Boltzmann (PB) solver (CPB) are demonstrated. CPB solves various PB equations with an ACG, built from a hierarchical octree decomposition of the computational domain. This procedure decreases the number of points required, thereby reducing computational demands. Inside the molecule, CPB solves for the reaction-field component (ϕrf) of the electrostatic potential (ϕ), eliminating the charge-induced singularities in ϕ. CPB can also use a least-squares reconstruction method to improve estimates of ϕ at the molecular surface. All surfaces, which include solvent excluded, Gaussians and others, are created analytically, eliminating errors associated with triangulated surfaces. These features allow CPB to produce detailed surface maps of ϕ and compute polar solvation and binding free energies for large biomolecular assemblies, such as ribosomes and viruses, with reduced computational demands compared to other PBE solvers. The reader is referred to http://www.continuum-dynamics.com/solution-mm.html for how to obtain the CPB software. PMID:25430617

  17. A general hybrid radiation transport scheme for star formation simulations on an adaptive grid

    SciTech Connect

    Klassen, Mikhail; Pudritz, Ralph E.; Kuiper, Rolf; Peters, Thomas; Banerjee, Robi; Buntemeyer, Lars

    2014-12-10

    Radiation feedback plays a crucial role in the process of star formation. In order to simulate the thermodynamic evolution of disks, filaments, and the molecular gas surrounding clusters of young stars, we require an efficient and accurate method for solving the radiation transfer problem. We describe the implementation of a hybrid radiation transport scheme in the adaptive grid-based FLASH general magnetohydrodynamics code. The hybrid scheme splits the radiative transport problem into a raytracing step and a diffusion step. The raytracer captures the first absorption event, as stars irradiate their environments, while the evolution of the diffuse component of the radiation field is handled by a flux-limited diffusion solver. We demonstrate the accuracy of our method through a variety of benchmark tests including the irradiation of a static disk, subcritical and supercritical radiative shocks, and thermal energy equilibration. We also demonstrate the capability of our method for casting shadows and calculating gas and dust temperatures in the presence of multiple stellar sources. Our method enables radiation-hydrodynamic studies of young stellar objects, protostellar disks, and clustered star formation in magnetized, filamentary environments.

  18. Features of CPB: a Poisson-Boltzmann solver that uses an adaptive Cartesian grid.

    PubMed

    Fenley, Marcia O; Harris, Robert C; Mackoy, Travis; Boschitsch, Alexander H

    2015-02-01

    The capabilities of an adaptive Cartesian grid (ACG)-based Poisson-Boltzmann (PB) solver (CPB) are demonstrated. CPB solves various PB equations with an ACG, built from a hierarchical octree decomposition of the computational domain. This procedure decreases the number of points required, thereby reducing computational demands. Inside the molecule, CPB solves for the reaction-field component (ϕrf ) of the electrostatic potential (ϕ), eliminating the charge-induced singularities in ϕ. CPB can also use a least-squares reconstruction method to improve estimates of ϕ at the molecular surface. All surfaces, which include solvent excluded, Gaussians, and others, are created analytically, eliminating errors associated with triangulated surfaces. These features allow CPB to produce detailed surface maps of ϕ and compute polar solvation and binding free energies for large biomolecular assemblies, such as ribosomes and viruses, with reduced computational demands compared to other Poisson-Boltzmann equation solvers. The reader is referred to http://www.continuum-dynamics.com/solution-mm.html for how to obtain the CPB software. PMID:25430617

  19. The Multi Level Multi Domain (MLMD) method: a semi-implicit adaptive algorithm for Particle In Cell plasma simulations

    NASA Astrophysics Data System (ADS)

    Innocenti, Maria Elena; Beck, Arnaud; Markidis, Stefano; Lapenta, Giovanni

    2013-10-01

    Particle in Cell (PIC) simulations of plasmas are no longer bound by the stability constraints of explicit algorithms. Semi-implicit and fully implicit methods allow the use of larger grid spacings and time steps. Adaptive Mesh Refinement (AMR) techniques permit the simulation resolution to be changed locally. The code proposed in Innocenti et al., 2013 and Beck et al., 2013 is, however, the first to combine the advantages of both. The use of the Implicit Moment Method allows the resolution used in each level to be tailored to the physical scales of interest and high Refinement Factors (RF) to be used between the levels. The Multi Level Multi Domain (MLMD) structure, where all levels are simulated as complete domains, combines algorithmic and practical advantages. The different levels evolve according to the local dynamics and achieve optimal level interlocking. Also, the capabilities of the Object Oriented programming model are fully exploited. The MLMD algorithm is demonstrated with magnetic reconnection and collisionless shock simulations with very high RFs between the levels. Notable computational gains are achieved with respect to simulations performed on the entire domain with the higher resolution. Beck A. et al. (2013). submitted. Innocenti M. E. et al. (2013). JCP, 238(0):115-140.

  20. Simulation of Biochemical Pathway Adaptability Using Evolutionary Algorithms

    SciTech Connect

    Bosl, W J

    2005-01-26

    The systems approach to genomics seeks quantitative and predictive descriptions of cells and organisms. However, both the theoretical and experimental methods necessary for such studies still need to be developed. We are far from understanding even the simplest collective behavior of biomolecules, cells or organisms. A key aspect of all biological problems, including environmental microbiology, the evolution of infectious diseases, and the adaptation of cancer cells, is the evolvability of genomes. This is particularly important for Genomes to Life missions, which tend to focus on the prospect of engineering microorganisms to achieve desired goals in environmental remediation, climate change mitigation, and energy production. All of these will require quantitative tools for understanding the evolvability of organisms. Laboratory biodefense goals will need quantitative tools for predicting complicated host-pathogen interactions and finding counter-measures. In this project, we seek to develop methods to simulate how external and internal signals cause the genetic apparatus to adapt and organize to produce complex biochemical systems to achieve survival. This project is specifically directed toward building a computational methodology for simulating the adaptability of genomes. This project investigated the feasibility of using a novel quantitative approach to studying the adaptability of genomes and biochemical pathways. This effort was intended to be the preliminary part of a larger, long-term effort between key leaders in computational and systems biology at Harvard University and LLNL, with Dr. Bosl as the lead PI. Scientific goals for the long-term project include the development and testing of new hypotheses to explain the observed adaptability of yeast biochemical pathways when the myosin-II gene is deleted and the development of a novel data-driven evolutionary computation as a way to connect exploratory computational simulation with hypothesis

  1. Training Recurrent Neural Networks With the Levenberg-Marquardt Algorithm for Optimal Control of a Grid-Connected Converter.

    PubMed

    Fu, Xingang; Li, Shuhui; Fairbank, Michael; Wunsch, Donald C; Alonso, Eduardo

    2015-09-01

    This paper investigates how to train a recurrent neural network (RNN) using the Levenberg-Marquardt (LM) algorithm as well as how to implement optimal control of a grid-connected converter (GCC) using an RNN. To successfully and efficiently train an RNN using the LM algorithm, a new forward accumulation through time (FATT) algorithm is proposed to calculate the Jacobian matrix required by the LM algorithm. This paper explores how to incorporate FATT into the LM algorithm. The results show that the combination of the LM and FATT algorithms trains RNNs better than the conventional backpropagation through time algorithm. This paper presents an analytical study on the optimal control of GCCs, including theoretically ideal optimal and suboptimal controllers. To overcome the inapplicability of the optimal GCC controller under practical conditions, a new RNN controller with an improved input structure is proposed to approximate the ideal optimal controller. The performance of an ideal optimal controller and a well-trained RNN controller was compared in close to real-life power converter switching environments, demonstrating that the proposed RNN controller can achieve close to ideal optimal control performance even under low sampling rate conditions. The excellent performance of the proposed RNN controller under challenging and distorted system conditions further indicates the feasibility of using an RNN to approximate optimal control in practical applications. PMID:25330496
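
    The FATT construction of the Jacobian is specific to the paper, but the LM update it feeds is standard; a minimal sketch of one damped Gauss-Newton (Levenberg-Marquardt) step is given below, with the residual vector, Jacobian, and damping factor mu as generic inputs. This is illustrative only and does not reproduce the paper's RNN or GCC formulation.

```python
import numpy as np

def lm_step(residuals, jacobian, params, mu):
    """One Levenberg-Marquardt update for a least-squares training problem.

    residuals: r(params), shape (m,); jacobian: J = dr/dparams, shape (m, n);
    mu: damping factor controlling the blend of gradient descent and
    Gauss-Newton behavior."""
    J, r = jacobian, residuals
    A = J.T @ J + mu * np.eye(J.shape[1])    # damped normal equations
    delta = np.linalg.solve(A, -J.T @ r)
    return params + delta
```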

  2. Sequential Insertion Heuristic with Adaptive Bee Colony Optimisation Algorithm for Vehicle Routing Problem with Time Windows

    PubMed Central

    Jawarneh, Sana; Abdullah, Salwani

    2015-01-01

    This paper presents a bee colony optimisation (BCO) algorithm to tackle the vehicle routing problem with time window (VRPTW). The VRPTW involves recovering an ideal set of routes for a fleet of vehicles serving a defined number of customers. The BCO algorithm is a population-based algorithm that mimics the social communication patterns of honeybees in solving problems. The performance of the BCO algorithm is dependent on its parameters, so the online (self-adaptive) parameter tuning strategy is used to improve its effectiveness and robustness. Compared with the basic BCO, the adaptive BCO performs better. Diversification is crucial to the performance of the population-based algorithm, but the initial population in the BCO algorithm is generated using a greedy heuristic, which has insufficient diversification. Therefore the ways in which the sequential insertion heuristic (SIH) for the initial population drives the population toward improved solutions are examined. Experimental comparisons indicate that the proposed adaptive BCO-SIH algorithm works well across all instances and is able to obtain 11 best results in comparison with the best-known results in the literature when tested on Solomon’s 56 VRPTW 100 customer instances. Also, a statistical test shows that there is a significant difference between the results. PMID:26132158

  3. Adaptive Sampling Algorithms for Probabilistic Risk Assessment of Nuclear Simulations

    SciTech Connect

    Diego Mandelli; Dan Maljovec; Bei Wang; Valerio Pascucci; Peer-Timo Bremer

    2013-09-01

    Nuclear simulations are often computationally expensive, time-consuming, and high-dimensional with respect to the number of input parameters. Thus exploring the space of all possible simulation outcomes is infeasible using finite computing resources. During simulation-based probabilistic risk analysis, it is important to discover the relationship between a potentially large number of input parameters and the output of a simulation using as few simulation trials as possible. This is a typical context for performing adaptive sampling where a few observations are obtained from the simulation, a surrogate model is built to represent the simulation space, and new samples are selected based on the model constructed. The surrogate model is then updated based on the simulation results of the sampled points. In this way, we attempt to gain the most information possible with a small number of carefully selected sampled points, limiting the number of expensive trials needed to understand features of the simulation space. We analyze the specific use case of identifying the limit surface, i.e., the boundaries in the simulation space between system failure and system success. In this study, we explore several techniques for adaptively sampling the parameter space in order to reconstruct the limit surface. We focus on several adaptive sampling schemes. First, we seek to learn a global model of the entire simulation space using prediction models or neighborhood graphs and extract the limit surface as an iso-surface of the global model. Second, we estimate the limit surface by sampling in the neighborhood of the current estimate based on topological segmentations obtained locally. Our techniques draw inspirations from topological structure known as the Morse-Smale complex. We highlight the advantages and disadvantages of using a global prediction model versus local topological view of the simulation space, comparing several different strategies for adaptive sampling in both

  4. Generating Composite Overlapping Grids on CAD Geometries

    SciTech Connect

    Henshaw, W.D.

    2002-02-07

    We describe some algorithms and tools that have been developed to generate composite overlapping grids on geometries that have been defined with computer aided design (CAD) programs. This process consists of five main steps. Starting from a description of the surfaces defining the computational domain we (1) correct errors in the CAD representation, (2) determine topology of the patched-surface, (3) build a global triangulation of the surface, (4) construct structured surface and volume grids using hyperbolic grid generation, and (5) generate the overlapping grid by determining the holes and the interpolation points. The overlapping grid generator which is used for the final step also supports the rapid generation of grids for block-structured adaptive mesh refinement and for moving grids. These algorithms have been implemented as part of the Overture object-oriented framework.

  5. Application of an unstructured grid algorithm to artificial heart valve simulations.

    PubMed

    Hsu, A T; Yun, J X; Hwang, N H

    1999-01-01

    The time-varying flow pattern in the vicinity of mechanical heart valves (MHV) is fairly complex: it involves multiple passages and moving leaflets. The numeric simulation of unsteady flows in these multiple passages with moving boundaries presents a major challenge to computational fluid dynamics (CFD). Two major difficulties in the numeric simulation of MHV flows are 1) the generation of a body-fitted grid within the multipassage device and 2) the moving leaflets. Conventional finite difference and finite volume schemes obtained by using a structured grid have serious deficiencies in these applications. To fit the grid lines to the various angles of the moving MHV, the grid may often become too skewed for an accurate numeric solution. To overcome these deficiencies, significant effort and attention should be placed on grid generation and the moving grid scheme. We present an unstructured moving grid finite volume method for heart valve simulations. The Navier-Stokes equations are discretized on a general tetrahedral mesh by using a finite volume scheme. With this scheme, the mesh can be generated automatically with any commercial software. The method is applied to a tilting disk (Medtronic Hall 29mm, Medtronic, Inc., Minneapolis, MN) heart valve, and results are compared with those of the steady flow solutions. Significant differences between steady and unsteady flow solutions are observed. PMID:10593690

  6. Adaptive optics image deconvolution based on a modified Richardson-Lucy algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Bo; Geng, Ze-xun; Yan, Xiao-dong; Yang, Yang; Sui, Xue-lian; Zhao, Zhen-lei

    2007-12-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, the correction is often only partial, and a deconvolution is required to reach the diffraction limit. The Richardson-Lucy (R-L) algorithm is the technique most widely used for AO image deconvolution, but the standard R-L algorithm (SRLA) is often hampered by the speckling phenomenon, wraparound artifacts and noise. A modified R-L algorithm (MRLA) for AO image deconvolution is presented. This novel algorithm applies Magain's correct-sampling approach and incorporates noise statistics into the standard R-L algorithm. An alternating iterative method is applied to estimate the PSF and the object in the novel algorithm. Comparative experiments on indoor data and AO images are carried out with the SRLA and the MRLA in this paper. Experimental results show that the novel MRLA outperforms the SRLA.
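
    For reference, a minimal implementation of the standard Richardson-Lucy iteration (the SRLA baseline); the MRLA modifications described above (correct sampling, noise statistics, alternating PSF/object estimation) are not reproduced, and the Gaussian PSF and FFT-based convolution are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=30, eps=1e-12):
    """Standard R-L iteration (not the MRLA): multiplicative updates of the object."""
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / (blurred + eps)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = np.zeros((64, 64))
    truth[20:30, 35:45] = 1.0                       # synthetic object
    yy, xx = np.mgrid[-3:4, -3:4]
    psf = np.exp(-(xx ** 2 + yy ** 2) / 2.0)        # assumed Gaussian PSF
    psf /= psf.sum()
    observed = fftconvolve(truth, psf, mode="same")
    observed += 0.01 * rng.standard_normal(observed.shape)
    restored = richardson_lucy(np.clip(observed, 0, None), psf)
    print("peak of restored image:", round(float(restored.max()), 3))
```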

  7. A low order flow/acoustics interaction method for the prediction of sound propagation using 3D adaptive hybrid grids

    SciTech Connect

    Kallinderis, Yannis; Vitsas, Panagiotis A.; Menounou, Penelope

    2012-07-15

    A low-order flow/acoustics interaction method for the prediction of sound propagation and diffraction in unsteady subsonic compressible flow using adaptive 3-D hybrid grids is investigated. The total field is decomposed into the flow field described by the Euler equations, and the acoustics part described by the Nonlinear Perturbation Equations. The method is shown capable of predicting monopole sound propagation, while employment of acoustics-guided adapted grid refinement improves the accuracy of capturing the acoustic field. Interaction of sound with solid boundaries is also examined in terms of reflection, and diffraction. Sound propagation through an unsteady flow field is examined using static and dynamic flow/acoustics coupling demonstrating the importance of the latter.

  8. TRIM: A finite-volume MHD algorithm for an unstructured adaptive mesh

    SciTech Connect

    Schnack, D.D.; Lottati, I.; Mikic, Z.

    1995-07-01

    The authors describe TRIM, an MHD code which uses a finite volume discretization of the MHD equations on an unstructured adaptive grid of triangles in the poloidal plane. They apply it to problems related to modeling tokamak toroidal plasmas. The toroidal direction is treated by a pseudospectral method. Care was taken to center variables appropriately on the mesh and to construct a self-adjoint diffusion operator for cell-centered variables.

  9. Grid generation and adaptation for the Direct Simulation Monte Carlo Method. [for complex flows past wedges and cones

    NASA Technical Reports Server (NTRS)

    Olynick, David P.; Hassan, H. A.; Moss, James N.

    1988-01-01

    A grid generation and adaptation procedure based on the method of transfinite interpolation is incorporated into the Direct Simulation Monte Carlo Method of Bird. In addition, time is advanced based on a local criterion. The resulting procedure is used to calculate steady flows past wedges and cones. Five chemical species are considered. In general, the modifications result in a reduced computational effort. Moreover, preliminary results suggest that the simulation method is time step dependent if requirements on cell sizes are not met.

  10. Adaptive control and noise suppression by a variable-gain gradient algorithm

    NASA Technical Reports Server (NTRS)

    Merhav, S. J.; Mehta, R. S.

    1987-01-01

    An adaptive control system based on normalized LMS filters is investigated. The finite impulse response of the nonparametric controller is adaptively estimated using a given reference model. Specifically, the following issues are addressed: The stability of the closed loop system is analyzed and heuristically established. Next, the adaptation process is studied for piecewise constant plant parameters. It is shown that by introducing a variable gain in the gradient algorithm, a substantial reduction in the LMS adaptation rate can be achieved. Finally, process noise at the plant output generally causes a biased estimate of the controller. By introducing a noise suppression scheme, this bias can be substantially reduced and the response of the adapted system becomes very close to that of the reference model. Extensive computer simulations validate these assertions and demonstrate that the system can rapidly adapt to random jumps in plant parameters.

  11. Performance study of LMS based adaptive algorithms for unknown system identification

    SciTech Connect

    Javed, Shazia; Ahmad, Noor Atinah

    2014-07-10

    Adaptive filtering techniques have gained much popularity in the modeling of unknown system identification problems. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of LMS algorithms on their robustness and misalignment.
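
    A small numerical sketch of part of the comparison described above, limited to LMS versus NLMS on an assumed ASI setup: an unknown FIR plant, white input, and additive output noise. The filter length and step sizes are illustrative choices, not the paper's settings.

```python
import numpy as np

def identify(x, d, taps, update):
    """Run an adaptive FIR filter over (input x, desired d) with a given update rule."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]      # regressor [x[n], ..., x[n-taps+1]]
        e = d[n] - w @ u
        w = w + update(e, u)
    return w

rng = np.random.default_rng(0)
h = rng.standard_normal(8)                   # unknown FIR system
x = rng.standard_normal(5000)                # white input
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))

lms = lambda e, u: 0.01 * e * u                          # fixed step size
nlms = lambda e, u: 0.5 * e * u / (1e-8 + u @ u)         # normalized step size
for name, rule in (("LMS", lms), ("NLMS", nlms)):
    w = identify(x, d, taps=8, update=rule)
    mis = np.linalg.norm(w - h) / np.linalg.norm(h)      # relative misalignment
    print(name, "misalignment:", round(float(mis), 4))
```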

  12. Detection of Human Impacts by an Adaptive Energy-Based Anisotropic Algorithm

    PubMed Central

    Prado-Velasco, Manuel; Ortiz Marín, Rafael; del Rio Cidoncha, Gloria

    2013-01-01

    Boosted by the health consequences and cost of falls in the elderly, this work develops and tests a novel algorithm and methodology to detect human impacts that will act as triggers of a two-layer fall monitor. The two main requirements demanded by socio-healthcare providers—unobtrusiveness and reliability—defined the objectives of the research. We have demonstrated that a very agile, adaptive, and energy-based anisotropic algorithm can provide 100% sensitivity and 78% specificity in the task of detecting impacts under demanding laboratory conditions. The algorithm works together with an unsupervised real-time learning technique that provides the adaptive capability, and this is also presented. The work demonstrates the robustness and reliability of our new algorithm, which will form the basis of a smart fall monitor; the results are presented in detail to underline their relevance. PMID:24157505

  13. Performance study of LMS based adaptive algorithms for unknown system identification

    NASA Astrophysics Data System (ADS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-07-01

    Adaptive filtering techniques have gained much popularity in the modeling of unknown system identification problems. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and sensitivity to the spectral properties of the input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of LMS algorithms on their robustness and misalignment.

  14. A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms

    PubMed Central

    Kanwal, Maxinder S; Ramesh, Avinash S; Huang, Lauren A

    2013-01-01

    Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g. neural networks) and optimization techniques (e.g. genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and successfully continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima, but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the global optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates. PMID:24627784
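
    A toy sketch of the general idea of adapting the mutation rate from the change in best fitness between generations (a crude pseudoderivative); the exact operator from the paper is not reproduced, and the test surface, population size and update constants below are assumptions.

```python
import numpy as np

def fitness(p):
    """Multimodal test surface with a single global peak near (1, 1) -- an assumption."""
    x, y = p
    return np.sin(3 * x) * np.cos(3 * y) + np.exp(-(x - 1) ** 2 - (y - 1) ** 2)

rng = np.random.default_rng(2)
pop = rng.uniform(-2, 2, size=(40, 2))
sigma, prev_best = 0.3, -np.inf              # mutation scale and last best fitness

for gen in range(200):
    f = np.array([fitness(p) for p in pop])
    best = f.max()
    # Crude "pseudoderivative": if the best fitness stagnates, raise the mutation
    # scale to escape local optima; if it improves, shrink it to fine-tune.
    if best - prev_best < 1e-4:
        sigma = min(1.0, sigma * 1.5)
    else:
        sigma = max(0.01, sigma * 0.7)
    prev_best = best
    parents = pop[np.argsort(f)[-20:]]                        # truncation selection
    children = parents[rng.integers(0, 20, size=40)] + rng.normal(0, sigma, (40, 2))
    children[0] = pop[np.argmax(f)]                           # elitism
    pop = np.clip(children, -2, 2)

final = np.array([fitness(p) for p in pop])
print("best point found:", pop[np.argmax(final)].round(3))
```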

  15. Computations of two passing-by high-speed trains by a relaxation overset-grid algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Jenn-Long

    2004-04-01

    This paper presents a relaxation algorithm, which is based on the overset grid technology, an unsteady three-dimensional Navier-Stokes flow solver, and an inner- and outer-relaxation method, for simulation of the unsteady flows of moving high-speed trains. The flow solutions on the overlapped grids can be accurately updated by introducing a grid tracking technique and the inner- and outer-relaxation method. To evaluate the capability and solution accuracy of the present algorithm, the computational static pressure distribution of a single stationary TGV high-speed train inside a long tunnel is investigated numerically, and is compared with the experimental data from low-speed wind tunnel test. Further, the unsteady flows of two TGV high-speed trains passing by each other inside a long tunnel and at the tunnel entrance are simulated. A series of time histories of pressure distributions and aerodynamic loads acting on the train and tunnel surfaces are depicted for detailed discussions.

  16. Large spatial, temporal, and algorithmic adaptivity for implicit nonlinear finite element analysis

    SciTech Connect

    Engelmann, B.E.; Whirley, R.G.

    1992-07-30

    The development of effective solution strategies to solve the global nonlinear equations which arise in implicit finite element analysis has been the subject of much research in recent years. Robust algorithms are needed to handle the complex nonlinearities that arise in many implicit finite element applications such as metalforming process simulation. The authors' experience indicates that robustness can best be achieved through adaptive solution strategies. In the course of their research, this adaptivity and flexibility have been refined into a production tool through the development of a solution control language called ISLAND. This paper discusses aspects of adaptive solution strategies including iterative procedures to solve the global equations and remeshing techniques to extend the domain of Lagrangian methods. Examples using the newly developed ISLAND language are presented to illustrate the advantages of embedding temporal, algorithmic, and spatial adaptivity in a modern implicit nonlinear finite element analysis code.

  17. QoS Differential Scheduling in Cognitive-Radio-Based Smart Grid Networks: An Adaptive Dynamic Programming Approach.

    PubMed

    Yu, Rong; Zhong, Weifeng; Xie, Shengli; Zhang, Yan; Zhang, Yun

    2016-02-01

    As the next-generation power grid, smart grid will be integrated with a variety of novel communication technologies to support the explosive data traffic and the diverse requirements of quality of service (QoS). Cognitive radio (CR), which has the favorable ability to improve the spectrum utilization, provides an efficient and reliable solution for smart grid communications networks. In this paper, we study the QoS differential scheduling problem in the CR-based smart grid communications networks. The scheduler is responsible for managing the spectrum resources and arranging the data transmissions of smart grid users (SGUs). To guarantee the differential QoS, the SGUs are assigned to have different priorities according to their roles and their current situations in the smart grid. Based on the QoS-aware priority policy, the scheduler adjusts the channels allocation to minimize the transmission delay of SGUs. The entire transmission scheduling problem is formulated as a semi-Markov decision process and solved by the methodology of adaptive dynamic programming. A heuristic dynamic programming (HDP) architecture is established for the scheduling problem. By the online network training, the HDP can learn from the activities of primary users and SGUs, and adjust the scheduling decision to achieve the purpose of transmission delay minimization. Simulation results illustrate that the proposed priority policy ensures the low transmission delay of high priority SGUs. In addition, the emergency data transmission delay is also reduced to a significantly low level, guaranteeing the differential QoS in smart grid. PMID:25910254

  18. Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Ianculescu, G. D.; Klop, J. J.

    1992-01-01

    Classical and adaptive control algorithms for the solar array pointing system of the Space Station Freedom are designed using a continuous rigid body model of the solar array gimbal assembly containing both linear and nonlinear dynamics due to various friction components. The robustness of the design solution is examined by performing a series of sensitivity analysis studies. Adaptive control strategies are examined in order to compensate for the unfavorable effect of static nonlinearities, such as dead-zone uncertainties.

  19. An Adaptive Weighting Algorithm for Interpolating the Soil Potassium Content.

    PubMed

    Liu, Wei; Du, Peijun; Zhao, Zhuowen; Zhang, Lianpeng

    2016-01-01

    The concept of spatial interpolation is important in the soil sciences. However, the use of a single global interpolation model is often limited by certain conditions (e.g., terrain complexity), which leads to distorted interpolation results. Here we present a method of adaptive weighting of combined environmental variables for soil property interpolation (AW-SP) to improve accuracy. Using various environmental variables, AW-SP was used to interpolate soil potassium content in Qinghai Lake Basin. To evaluate AW-SP performance, we compared it with that of inverse distance weighting (IDW), ordinary kriging (OK), and OK combined with different environmental variables. The experimental results showed that the methods combined with environmental variables did not always improve prediction accuracy even if there was a strong correlation between the soil properties and environmental variables. However, compared with IDW, OK, and OK combined with different environmental variables, AW-SP is more stable and has lower mean absolute and root mean square errors. Furthermore, the AW-SP maps provided improved details of soil potassium content and provided clearer boundaries to its spatial distribution. In conclusion, AW-SP can not only reduce prediction errors, it also accounts for the distribution and contributions of environmental variables, making the spatial interpolation of soil potassium content more reasonable. PMID:27051998

  20. An Adaptive Weighting Algorithm for Interpolating the Soil Potassium Content

    PubMed Central

    Liu, Wei; Du, Peijun; Zhao, Zhuowen; Zhang, Lianpeng

    2016-01-01

    The concept of spatial interpolation is important in the soil sciences. However, the use of a single global interpolation model is often limited by certain conditions (e.g., terrain complexity), which leads to distorted interpolation results. Here we present a method of adaptive weighting of combined environmental variables for soil property interpolation (AW-SP) to improve accuracy. Using various environmental variables, AW-SP was used to interpolate soil potassium content in Qinghai Lake Basin. To evaluate AW-SP performance, we compared it with that of inverse distance weighting (IDW), ordinary kriging (OK), and OK combined with different environmental variables. The experimental results showed that the methods combined with environmental variables did not always improve prediction accuracy even if there was a strong correlation between the soil properties and environmental variables. However, compared with IDW, OK, and OK combined with different environmental variables, AW-SP is more stable and has lower mean absolute and root mean square errors. Furthermore, the AW-SP maps provided improved details of soil potassium content and provided clearer boundaries to its spatial distribution. In conclusion, AW-SP can not only reduce prediction errors, it also accounts for the distribution and contributions of environmental variables, making the spatial interpolation of soil potassium content more reasonable. PMID:27051998

  1. An Adaptive Weighting Algorithm for Interpolating the Soil Potassium Content

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Du, Peijun; Zhao, Zhuowen; Zhang, Lianpeng

    2016-04-01

    The concept of spatial interpolation is important in the soil sciences. However, the use of a single global interpolation model is often limited by certain conditions (e.g., terrain complexity), which leads to distorted interpolation results. Here we present a method of adaptive weighting of combined environmental variables for soil property interpolation (AW-SP) to improve accuracy. Using various environmental variables, AW-SP was used to interpolate soil potassium content in Qinghai Lake Basin. To evaluate AW-SP performance, we compared it with that of inverse distance weighting (IDW), ordinary kriging (OK), and OK combined with different environmental variables. The experimental results showed that the methods combined with environmental variables did not always improve prediction accuracy even if there was a strong correlation between the soil properties and environmental variables. However, compared with IDW, OK, and OK combined with different environmental variables, AW-SP is more stable and has lower mean absolute and root mean square errors. Furthermore, the AW-SP maps provided improved details of soil potassium content and provided clearer boundaries to its spatial distribution. In conclusion, AW-SP can not only reduce prediction errors, it also accounts for the distribution and contributions of environmental variables, making the spatial interpolation of soil potassium content more reasonable.
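
    As a point of reference, a plain implementation of inverse distance weighting, the simplest of the baseline interpolators the study compares against; the AW-SP method itself and the environmental covariates are not reproduced, and the sample data are synthetic.

```python
import numpy as np

def idw(known_xy, known_val, query_xy, power=2.0, eps=1e-12):
    """Inverse distance weighting from scattered observations to query points."""
    d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power             # eps avoids division by zero at sample points
    return (w * known_val).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
pts = rng.uniform(0, 10, size=(30, 2))                 # synthetic sample locations
vals = 20 + 2 * pts[:, 0] + rng.normal(0, 1, 30)       # synthetic potassium values
grid = np.array([(x, y) for x in range(11) for y in range(11)], dtype=float)
est = idw(pts, vals, grid)
print("interpolated range:", round(float(est.min()), 1), "to", round(float(est.max()), 1))
```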

  2. Adaptive motion artifact reducing algorithm for wrist photoplethysmography application

    NASA Astrophysics Data System (ADS)

    Zhao, Jingwei; Wang, Guijin; Shi, Chenbo

    2016-04-01

    Photoplethysmography (PPG) technology is widely used in wearable heart pulse rate monitoring. It can reveal potential risks to heart condition and cardiopulmonary function by detecting cardiac rhythms during physical exercise. However, the quality of the wrist photoelectric signal is very sensitive to motion artifact because of the thicker tissue and the smaller number of capillaries at the wrist. Therefore, motion artifact is the major factor that impedes heart rate measurement during high-intensity exercise. One accelerometer and three channels of light with different wavelengths are used in this research to analyze the coupled form of the motion artifact. A novel approach is proposed to separate the pulse signal from motion artifact by exploiting their mixing ratios in different optical paths. There are four major steps in our method: preprocessing, motion artifact estimation, adaptive filtering and heart rate calculation. Five healthy young men participated in the experiment. The treadmill speed was set to 12 km/h, and all subjects ran for 3-10 minutes, swinging their arms naturally. The final result is compared with a chest strap. The average mean square error (MSE) is less than 3 beats per minute (BPM). The proposed method performed well during intense physical exercise and shows great robustness to individuals with different running styles and postures.
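
    A generic sketch of accelerometer-referenced adaptive artifact cancellation for a single optical channel using an NLMS filter; the signal model, filter length and step size are assumptions, and the paper's multi-wavelength mixing-ratio approach is not reproduced.

```python
import numpy as np

fs = 100                                             # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)
pulse = np.sin(2 * np.pi * 1.5 * t)                  # ~90 BPM cardiac component
motion = np.sin(2 * np.pi * 2.2 * t + 0.5)           # arm-swing artifact
ppg = pulse + 2.0 * motion + 0.05 * rng.standard_normal(len(t))
accel = motion + 0.05 * rng.standard_normal(len(t))  # accelerometer reference

taps, mu = 16, 0.05                                  # illustrative filter settings
w = np.zeros(taps)
clean = np.zeros(len(t))
for n in range(taps, len(t)):
    u = accel[n - taps:n][::-1]                      # accelerometer regressor
    artifact_hat = w @ u                             # estimated motion artifact
    e = ppg[n] - artifact_hat                        # residual ~ pulse signal
    w += mu * e * u / (1e-8 + u @ u)                 # NLMS update
    clean[n] = e

corr = np.corrcoef(clean[taps:], pulse[taps:])[0, 1]
print("correlation of cleaned signal with true pulse:", round(float(corr), 3))
```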

  3. Evaluation of an adaptive filtering algorithm for CT cardiac imaging with EKG modulated tube current

    NASA Astrophysics Data System (ADS)

    Li, Jianying; Hsieh, Jiang; Mohr, Kelly; Okerlund, Darin

    2005-04-01

    We have developed an adaptive filtering algorithm for cardiac CT scans with EKG-modulated tube current to optimize resolution and noise for different cardiac phases and to provide a safety net for cases where the end-systole phase is used for coronary imaging. This algorithm has been evaluated using patient cardiac CT scans where lower tube currents are used for the systolic phases. In this paper, we present the evaluation results. The results demonstrated that with the use of the proposed algorithm, we could improve image quality for all cardiac phases, while providing greater noise and streak artifact reduction for systole phases where lower CT doses were used.

  4. An adaptive discretization of compressible flow using a multitude of moving Cartesian grids

    NASA Astrophysics Data System (ADS)

    Qiu, Linhai; Lu, Wenlong; Fedkiw, Ronald

    2016-01-01

    We present a novel method for simulating compressible flow on a multitude of Cartesian grids that can rotate and translate. Following previous work, we split the time integration into an explicit step for advection followed by an implicit solve for the pressure. A second order accurate flux based scheme is devised to handle advection on each moving Cartesian grid using an effective characteristic velocity that accounts for the grid motion. In order to avoid the stringent time step restriction imposed by very fine grids, we propose strategies that allow for a fluid velocity CFL number larger than 1. The stringent time step restriction related to the sound speed is alleviated by formulating an implicit linear system in order to find a pressure consistent with the equation of state. This implicit linear system crosses overlapping Cartesian grid boundaries by utilizing local Voronoi meshes to connect the various degrees of freedom obtaining a symmetric positive-definite system. Since a straightforward application of this technique contains an inherent central differencing which can result in spurious oscillations, we introduce a new high order diffusion term similar in spirit to ENO-LLF but solved for implicitly in order to avoid any associated time step restrictions. The method is conservative on each grid, as well as globally conservative on the background grid that contains all other grids. Moreover, a conservative interpolation operator is devised for conservatively remapping values in order to keep them consistent across different overlapping grids. Additionally, the method is extended to handle two-way solid fluid coupling in a monolithic fashion including cases (in the appendix) where solids in close proximity do not properly allow for grid based degrees of freedom in between them.

  5. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.

  6. STAR adaptation of QR algorithm. [program for solving over-determined systems of linear equations

    NASA Technical Reports Server (NTRS)

    Shah, S. N.

    1981-01-01

    The QR algorithm used on a serial computer and executed on the Control Data Corporation 6000 Computer was adapted to execute efficiently on the Control Data STAR-100 computer. How the scalar program was adapted for the STAR-100 and why these adaptations yielded an efficient STAR program is described. Program listings of the old scalar version and the vectorized SL/1 version are presented in the appendices. Execution times for the two versions applied to the same system of linear equations, are compared.

  7. An adaptive algorithm for removing the blocking artifacts in block-transform coded images

    NASA Astrophysics Data System (ADS)

    Yang, Jingzhong; Ma, Zheng

    2005-11-01

    JPEG and MPEG compression standards adopt a macroblock encoding approach, but this method can lead to annoying blocking effects: artificial rectangular discontinuities in the decoded images. Many powerful postprocessing algorithms have been developed to remove the blocking effects. However, all but the simplest algorithms can be too complex for real-time applications, such as video decoding. We propose an adaptive and easy-to-implement algorithm that removes these artificial discontinuities. The algorithm contains two steps: first, a fast linear smoothing of the block-edge pixels using an average-value replacement strategy; second, comparing the variance derived from the difference of the processed image with a reasonable threshold to determine whether the first step should stop or not. Experiments have proved that this algorithm can quickly remove the artificial discontinuities without destroying the key information of the decoded images, and that it is robust to different images and transform strategies.
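
    A rough sketch of the two steps described above, with assumed details: 8x8 blocks, boundary pixels replaced by the average of their neighbours across the block edge, and iteration stopped when the variance of the change falls below a threshold. This is not the authors' exact filter.

```python
import numpy as np

def smooth_block_edges(img, block=8):
    """Replace pixels on each side of a block boundary by cross-boundary averages."""
    out = img.astype(float).copy()
    h, w = img.shape
    for c in range(block, w, block):                 # vertical block boundaries
        out[:, c - 1] = (img[:, c - 2] + img[:, c]) / 2.0
        if c + 1 < w:
            out[:, c] = (img[:, c - 1] + img[:, c + 1]) / 2.0
    for r in range(block, h, block):                 # horizontal block boundaries
        out[r - 1, :] = (img[r - 2, :] + img[r, :]) / 2.0
        if r + 1 < h:
            out[r, :] = (img[r - 1, :] + img[r + 1, :]) / 2.0
    return out

def deblock(img, threshold=1.0, max_iters=10):
    """Repeat edge smoothing until the variance of the change drops below threshold."""
    current = img.astype(float)
    for _ in range(max_iters):
        smoothed = smooth_block_edges(current)
        if np.var(smoothed - current) < threshold:   # variance-based stopping test
            return smoothed
        current = smoothed
    return current

rng = np.random.default_rng(5)
blocky = np.kron(rng.integers(0, 255, (8, 8)), np.ones((8, 8)))   # synthetic blocky image
change = smooth_block_edges(blocky) - blocky
print("variance of first-pass change:", round(float(np.var(change)), 2))
```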

  8. An adaptive ant colony system algorithm for continuous-space optimization problems.

    PubMed

    Li, Yan-jun; Wu, Tie-jun

    2003-01-01

    Ant colony algorithms comprise a novel category of evolutionary computation methods for optimization problems, especially for sequencing-type combinatorial optimization problems. An adaptive ant colony algorithm is proposed in this paper to tackle continuous-space optimization problems, using a new objective-function-based heuristic pheromone assignment approach for pheromone update to filter solution candidates. Global optimal solutions can be reached more rapidly by self-adjusting the path searching behaviors of the ants according to objective values. The performance of the proposed algorithm is compared with a basic ant colony algorithm and a sequential quadratic programming (SQP) approach in solving two benchmark problems with multiple extremes. The results indicated that the efficiency and reliability of the proposed algorithm were greatly improved. PMID:12656341

  9. Riemannian mean and space-time adaptive processing using projection and inversion algorithms

    NASA Astrophysics Data System (ADS)

    Balaji, Bhashyam; Barbaresco, Frédéric

    2013-05-01

    The estimation of the covariance matrix from real data is required in the application of space-time adaptive processing (STAP) to an airborne ground moving target indication (GMTI) radar. A natural approach to estimation of the covariance matrix that is based on the information geometry has been proposed. In this paper, the output of the Riemannian mean is used in inversion and projection algorithms. It is found that the projection class of algorithms can yield very significant gains, even when the gains due to inversion-based algorithms are marginal over standard algorithms. The performance of the projection class of algorithms does not appear to be overly sensitive to the projected subspace dimension.

  10. New hybrid adaptive neuro-fuzzy algorithms for manipulator control with uncertainties- comparative study.

    PubMed

    Alavandar, Srinivasan; Nigam, M J

    2009-10-01

    Control of an industrial robot includes nonlinearities, uncertainties and external perturbations that should be considered in the design of control laws. In this paper, some new hybrid adaptive neuro-fuzzy control algorithms (ANFIS) are proposed for manipulator control with uncertainties. These hybrid controllers consist of adaptive neuro-fuzzy controllers and conventional controllers. The outputs of these controllers are applied to produce the final actuation signal based on current position and velocity errors. Numerical simulation using the dynamic model of a six-DOF PUMA robot arm with uncertainties shows the effectiveness of the approach in trajectory tracking problems. Performance indices of RMS error and maximum error are used for comparison. It is observed that the hybrid adaptive neuro-fuzzy controllers perform better than conventional/adaptive controllers alone, in particular the hybrid structure consisting of an adaptive neuro-fuzzy controller and a critically damped inverse dynamics controller. PMID:19523623

  11. Identification of robust adaptation gene regulatory network parameters using an improved particle swarm optimization algorithm.

    PubMed

    Huang, X N; Ren, H P

    2016-01-01

    Robust adaptation is a critical ability of a gene regulatory network (GRN) to survive in a fluctuating environment; it means the system responds to an input stimulus rapidly and then returns to its pre-stimulus steady state in a timely manner. In this paper, the GRN is modeled using the Michaelis-Menten rate equations, which are highly nonlinear differential equations containing 12 undetermined parameters. Robust adaptation is quantitatively described by two conflicting indices. Identifying the parameter sets that confer robust adaptation on the GRN is a multi-variable, multi-objective, multi-peak optimization problem, for which it is difficult to obtain satisfactory solutions, especially high-quality ones. A new best-neighbor particle swarm optimization algorithm is proposed to implement this task. The proposed algorithm employs a Latin hypercube sampling method to generate the initial population. A particle crossover operation and an elitist preservation strategy are also used in the proposed algorithm. The simulation results revealed that the proposed algorithm could identify multiple solutions in a single run. Moreover, it demonstrated superior performance compared to previous methods in the sense of detecting more high-quality solutions within an acceptable time. The proposed methodology, owing to its universality and simplicity, is useful for providing guidance in designing GRNs with superior robust adaptation. PMID:27323043

  12. Prediction and Control of Network Cascade: Example of Power Grid or Networking Adaptability from WMD Disruption and Cascading Failures

    SciTech Connect

    Chertkov, Michael

    2012-07-24

    The goal of the DTRA project is to develop a mathematical framework that will provide the fundamental understanding of network survivability, algorithms for detecting/inferring pre-cursors of abnormal network behaviors, and methods for network adaptability and self-healing from cascading failures.

  13. Fast Adapting Ensemble: A New Algorithm for Mining Data Streams with Concept Drift

    PubMed Central

    Ortíz Díaz, Agustín; Ramos-Jiménez, Gonzalo; Frías Blanco, Isvani; Caballero Mota, Yailé; Morales-Bueno, Rafael

    2015-01-01

    The treatment of large data streams in the presence of concept drifts is one of the main challenges in the field of data mining, particularly when the algorithms have to deal with concepts that disappear and then reappear. This paper presents a new algorithm, called Fast Adapting Ensemble (FAE), which adapts very quickly to both abrupt and gradual concept drifts, and has been specifically designed to deal with recurring concepts. FAE processes the learning examples in blocks of the same size, but it does not have to wait for the batch to be complete in order to adapt its base classification mechanism. FAE incorporates a drift detector to improve the handling of abrupt concept drifts and stores a set of inactive classifiers that represent old concepts, which are activated very quickly when these concepts reappear. We compare our new algorithm with various well-known learning algorithms, taking into account common benchmark datasets. The experiments show promising results from the proposed algorithm (regarding accuracy and runtime) in handling different types of concept drifts. PMID:25879051

  14. The design of a parallel adaptive paving all-quadrilateral meshing algorithm

    SciTech Connect

    Tautges, T.J.; Lober, R.R.; Vaughan, C.

    1995-08-01

    Adaptive finite element analysis demands a great deal of computational resources, and as such is most appropriately solved in a massively parallel computer environment. This analysis will require other parallel algorithms before it can fully utilize MP computers, one of which is parallel adaptive meshing. A version of the paving algorithm is being designed which operates in parallel but which also retains the robustness and other desirable features present in the serial algorithm. Adaptive paving in a production mode is demonstrated using a Babuska-Rheinboldt error estimator on a classic linearly elastic plate problem. The design of the parallel paving algorithm is described, and is based on the decomposition of a surface into "virtual" surfaces. The topology of the virtual surface boundaries is defined using mesh entities (mesh nodes and edges) so as to allow movement of these boundaries with smoothing and other operations. This arrangement allows the use of the standard paving algorithm on subdomain interiors, after the negotiation of the boundary mesh.

  15. Longest jobs first algorithm in solving job shop scheduling using adaptive genetic algorithm (GA)

    NASA Astrophysics Data System (ADS)

    Alizadeh Sahzabi, Vahid; Karimi, Iman; Alizadeh Sahzabi, Navid; Mamaani Barnaghi, Peiman

    2011-12-01

    In this paper, a genetic algorithm is used to solve job shop scheduling problems. One example of the job shop scheduling problem (JSSP) is discussed, and we describe how such problems can be solved by a genetic algorithm. The goal in the JSSP is to obtain the shortest processing time. Furthermore, we propose a method to obtain the best performance in completing all jobs in the shortest time. The method is based on a genetic algorithm (GA), and crossover between parents always follows the rule that the longest process is placed first in the job queue. In other words, chromosomes are sorted from the longest process to the shortest, i.e., "longest job first": first, find the machine with the largest total processing time over all of its jobs; that machine is the bottleneck. Second, sort the jobs belonging to that machine in descending order. Based on the results obtained, "longest jobs first" is the best-performing ordering for job shop scheduling problems. In our results, the accuracy reaches 94.7% for total processing time, and the method improves the accuracy of completing all jobs by 4% in the presented example.
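
    A minimal illustration of the "longest jobs first" ordering rule described above: find the bottleneck machine (the one with the largest total processing time) and sort its operations in descending order of processing time. The job data are made up, and the GA that would consume this ordering is not shown.

```python
from collections import defaultdict

# Illustrative (job, machine, processing_time) triples -- made-up data.
operations = [
    ("J1", "M1", 5), ("J1", "M2", 3),
    ("J2", "M1", 8), ("J2", "M3", 2),
    ("J3", "M2", 6), ("J3", "M1", 4),
]

load = defaultdict(int)
for _job, machine, p in operations:
    load[machine] += p
bottleneck = max(load, key=load.get)                 # machine with the most total work

queue = sorted((op for op in operations if op[1] == bottleneck),
               key=lambda op: op[2], reverse=True)   # longest job first on the bottleneck
print("bottleneck machine:", bottleneck)
print("initial queue (jobs, longest first):", [op[0] for op in queue])
```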

  16. Longest jobs first algorithm in solving job shop scheduling using adaptive genetic algorithm (GA)

    NASA Astrophysics Data System (ADS)

    Alizadeh Sahzabi, Vahid; Karimi, Iman; Alizadeh Sahzabi, Navid; Mamaani Barnaghi, Peiman

    2012-01-01

    In this paper, a genetic algorithm is used to solve job shop scheduling problems. One example of the job shop scheduling problem (JSSP) is discussed, and we describe how such problems can be solved by a genetic algorithm. The goal in the JSSP is to obtain the shortest processing time. Furthermore, we propose a method to obtain the best performance in completing all jobs in the shortest time. The method is based on a genetic algorithm (GA), and crossover between parents always follows the rule that the longest process is placed first in the job queue. In other words, chromosomes are sorted from the longest process to the shortest, i.e., "longest job first": first, find the machine with the largest total processing time over all of its jobs; that machine is the bottleneck. Second, sort the jobs belonging to that machine in descending order. Based on the results obtained, "longest jobs first" is the best-performing ordering for job shop scheduling problems. In our results, the accuracy reaches 94.7% for total processing time, and the method improves the accuracy of completing all jobs by 4% in the presented example.

  17. Adaptive switching detection algorithm for iterative-MIMO systems to enable power savings

    NASA Astrophysics Data System (ADS)

    Tadza, N.; Laurenson, D.; Thompson, J. S.

    2014-11-01

    This paper attempts to tackle one of the challenges faced in soft input soft output Multiple Input Multiple Output (MIMO) detection systems, which is to achieve optimal error rate performance with minimal power consumption. This is realized by proposing a new algorithm design that comprises multiple thresholds within the detector that, in real time, specify the receiver behavior according to the current channel in both slow and fast fading conditions, giving it adaptivity. This adaptivity enables energy savings within the system since the receiver chooses whether to accept or to reject the transmission, according to the success rate of detecting thresholds. The thresholds are calculated using the mutual information of the instantaneous channel conditions between the transmitting and receiving antennas of iterative-MIMO systems. In addition, the power saving technique, Dynamic Voltage and Frequency Scaling, helps to reduce the circuit power demands of the adaptive algorithm. This adaptivity has the potential to save up to 30% of the total energy when it is implemented on Xilinx®Virtex-5 simulation hardware. Results indicate the benefits of having this "intelligence" in the adaptive algorithm due to the promising performance-complexity tradeoff parameters in both software and hardware codesign simulation.

  18. Optimized adaptation algorithm for HEVC/H.265 dynamic adaptive streaming over HTTP using variable segment duration

    NASA Astrophysics Data System (ADS)

    Irondi, Iheanyi; Wang, Qi; Grecos, Christos

    2016-04-01

    Adaptive video streaming using HTTP has become popular in recent years for commercial video delivery. The recent MPEG-DASH standard allows interoperability and adaptability between servers and clients from different vendors. The delivery of the MPD (Media Presentation Description) files in DASH and the DASH client behaviours are beyond the scope of the DASH standard. However, the different adaptation algorithms employed by the clients do affect the overall performance of the system and users' QoE (Quality of Experience), hence the need for research in this field. Moreover, standard DASH delivery is based on fixed segments of the video. However, there is no standard segment duration for DASH where various fixed segment durations have been employed by different commercial solutions and researchers with their own individual merits. Most recently, the use of variable segment duration in DASH has emerged but only a few preliminary studies without practical implementation exist. In addition, such a technique requires a DASH client to be aware of segment duration variations, and this requirement and the corresponding implications on the DASH system design have not been investigated. This paper proposes a segment-duration-aware bandwidth estimation and next-segment selection adaptation strategy for DASH. Firstly, an MPD file extension scheme to support variable segment duration is proposed and implemented in a realistic hardware testbed. The scheme is tested on a DASH client, and the tests and analysis have led to an insight on the time to download next segment and the buffer behaviour when fetching and switching between segments of different playback durations. Issues like sustained buffering when switching between segments of different durations and slow response to changing network conditions are highlighted and investigated. An enhanced adaptation algorithm is then proposed to accurately estimate the bandwidth and precisely determine the time to download the next

  19. A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm

    PubMed Central

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off between petrol versus electricity for optimal driving efficiency. Comparison between calculated results and publicized data shows that the achieved efficiency of the fuzzified genetic algorithm is better by 10% than existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974

  20. A high fuel consumption efficiency management scheme for PHEVs using an adaptive genetic algorithm.

    PubMed

    Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah

    2015-01-01

    A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off between petrol versus electricity for optimal driving efficiency. Comparison between calculated results and publicized data shows that the achieved efficiency of the fuzzified genetic algorithm is better by 10% than existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974

  1. Knowledge-Aided Multichannel Adaptive SAR/GMTI Processing: Algorithm and Experimental Results

    NASA Astrophysics Data System (ADS)

    Wu, Di; Zhu, Daiyin; Zhu, Zhaoda

    2010-12-01

    The multichannel synthetic aperture radar ground moving target indication (SAR/GMTI) technique is a simplified implementation of space-time adaptive processing (STAP), which has been proved to be feasible in the past decades. However, its detection performance is degraded in heterogeneous environments due to the rapidly varying clutter characteristics. Knowledge-aided (KA) STAP provides an effective way to deal with the nonstationarity of real-world clutter environments. Based on KA STAP methods, this paper proposes a KA algorithm for adaptive SAR/GMTI processing in heterogeneous environments. It reduces the required sample support through its fast convergence properties and is robust to non-stationary clutter distributions relative to the traditional adaptive SAR/GMTI scheme. Experimental clutter suppression results are employed to verify the virtue of this algorithm.

  2. A self-adaptive genetic algorithm to estimate JA model parameters considering minor loops

    NASA Astrophysics Data System (ADS)

    Lu, Hai-liang; Wen, Xi-shan; Lan, Lei; An, Yun-zhu; Li, Xiao-ping

    2015-01-01

    A self-adaptive genetic algorithm for estimating Jiles-Atherton (JA) magnetic hysteresis model parameters is presented. The fitness function is established based on the distances between equidistant key points of normalized hysteresis loops. Linear and logarithmic functions are both adopted to encode the five parameters of the JA model. Roulette wheel selection is used, and the selection pressure is adjusted adaptively by deducting a proportional term that depends on the common value of the current generation. The crossover operator is established by combining arithmetic crossover and multipoint crossover. Nonuniform mutation is improved by adjusting the mutation ratio adaptively. The algorithm is used to estimate the parameters of the hysteresis loops of a silicon-steel sheet, and the results are in good agreement with published data.
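
    A sketch of the fitness idea described above: normalize two hysteresis-loop branches, resample each at equidistant key points, and score by the summed point-to-point distance. The loop data, the number of key points and the synthetic stand-in curves are illustrative assumptions; the JA model itself is not evaluated here.

```python
import numpy as np

def key_points(h, b, n=50):
    """Normalize a loop branch and resample it at n equidistant positions along H."""
    h, b = np.asarray(h, float), np.asarray(b, float)
    h_n = (h - h.min()) / (h.max() - h.min())
    b_n = (b - b.min()) / (b.max() - b.min())
    grid = np.linspace(0.0, 1.0, n)
    order = np.argsort(h_n)
    return np.interp(grid, h_n[order], b_n[order])

def loop_fitness(measured, simulated, n=50):
    """Higher (less negative) is better: negative summed distance between key points."""
    return -np.sum(np.abs(key_points(*measured, n=n) - key_points(*simulated, n=n)))

h = np.linspace(-1, 1, 200)
measured = (h, np.tanh(3.0 * h))        # stand-in "measured" branch
simulated = (h, np.tanh(2.5 * h))       # stand-in "simulated" branch (not a JA model)
print("fitness:", round(float(loop_fitness(measured, simulated)), 4))
```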

  3. A New Grid based Ionosphere Algorithm for GAGAN using Data Fusion Technique (ISRO GIVE Model-Multi Layer Data Fusion)

    NASA Astrophysics Data System (ADS)

    Srinivasan, Nirmala; Ganeshan, A. S.; Mishra, Saumyaketu

    2012-07-01

    (Authors: Saumyaketu Mishra, Nirmala S, A. S. Ganeshan, ISRO Satellite Centre, Bangalore; Timothy Schempp, Gregory Um, Hans Habereder, Raytheon Company.) Development of a region-specific ionosphere model is the key element in providing precision approach services for civil aviation with GAGAN (GPS Aided GEO Augmented Navigation). GAGAN is an Indian SBAS (Space Based Augmentation System) comprising three segments: a space segment (GEO and GPS), a ground segment (15 Indian reference stations (INRES), 2 master control centers and 3 ground uplink stations) and a user segment. The GAGAN system is intended to provide air navigation services for APV 1/1.5 precision approach over the Indian land mass and RNP 0.1 navigation service over the Indian Flight Information Region (FIR), conforming to the standards of GNSS ICAO-SARPS. The ionosphere, being the largest source of error, is of prime concern for an SBAS. India is a low-latitude country, posing challenges for grid-based ionosphere algorithm development: large spatial and temporal gradients, the equatorial anomaly, depletions (bubbles), scintillations, etc. To meet the required GAGAN performance, it is necessary to develop and implement the most suitable ionosphere model for the Indian region, as thin-shell models such as the planar model do not meet the requirement. The ISRO GIVE Model - Multi Layer Data Fusion (IGM-MLDF) employs an innovative approach for computing the ionosphere corrections and confidences at pre-defined grid points at a 350 km shell height. Ionosphere variations over the geomagnetic equatorial regions show peak electron density shell height variations from 200 km to 500 km, so a single thin-shell assumption at 350 km is not valid over the Indian region. Hence IGM-MLDF employs an innovative scheme of modeling at two shell heights. Through empirical analysis, shell heights of 250 km and 450 km are chosen. The ionosphere measurement

  4. A MULTIPLE GRID ALGORITHM FOR ONE-DIMENSIONAL TRANSIENT OPEN CHANNEL FLOWS. (R825200)

    EPA Science Inventory

    Numerical modeling of open channel flows with shocks using explicit finite difference schemes is constrained by the choice of time step, which is limited by the CFL stability criteria. To overcome this limitation, in this work we introduce the application of a multiple grid al...

  5. Analysis of grid performance using an optical flow algorithm for medical image processing

    NASA Astrophysics Data System (ADS)

    Moreno, Ramon A.; Cunha, Rita de Cássio Porfírio; Gutierrez, Marco A.

    2014-03-01

    The development of bigger and faster computers has not yet provided the computing power required for medical image processing nowadays. This is the result of several factors, including: i) the increasing number of qualified medical image users requiring sophisticated tools; ii) the demand for more performance and quality of results; iii) researchers addressing problems that were previously considered extremely difficult to tackle; iv) medical images being produced at higher resolution and in larger numbers. These factors lead to the need to explore computing techniques that can boost the computational power of healthcare institutions while maintaining a relatively low cost. Parallel computing is one of the approaches that can help solve this problem. Parallel computing can be achieved using multi-core processors, multiple processors, Graphical Processing Units (GPUs), clusters or grids. In order to gain the maximum benefit of parallel computing it is necessary to write specific programs for each environment or divide the data into smaller subsets. In this article we evaluate the performance of two parallel computing infrastructures when dealing with a medical image processing application. We compared the performance of the EELA-2 (E-science grid facility for Europe and Latin America) grid infrastructure with a small cluster (3 nodes x 8 cores = 24 cores) and a regular PC (Intel i3 - 2 cores). As expected, the grid had better performance for a large number of processes, the cluster for a small to medium number of processes, and the PC for few processes.

  6. A self adaptive hybrid enhanced artificial bee colony algorithm for continuous optimization problems.

    PubMed

    Shan, Hai; Yasuda, Toshiyuki; Ohkura, Kazuhiro

    2015-06-01

    The artificial bee colony (ABC) algorithm is one of the popular swarm intelligence algorithms, inspired by the foraging behavior of honeybee colonies. To improve the convergence ability and the speed of finding the best solution, and to control the balance between exploration and exploitation, we propose a self-adaptive hybrid enhanced ABC algorithm in this paper. To evaluate the performance of the standard ABC, best-so-far ABC (BsfABC), incremental ABC (IABC), and the proposed ABC algorithms, we implemented numerical optimization problems based on the IEEE Congress on Evolutionary Computation (CEC) 2014 test suite. Our experimental results show the comparative performance of the standard ABC, BsfABC, IABC, and the proposed ABC algorithms. According to the results, we conclude that the proposed ABC algorithm is competitive with state-of-the-art modified ABC algorithms such as the BsfABC and IABC algorithms on the benchmark problems defined by the CEC 2014 test suite with dimension sizes of 10, 30, and 50, respectively. PMID:25982071
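
    A compact sketch of a standard ABC loop (employed, onlooker and scout phases) on a simple benchmark; the self-adaptive hybrid enhancements described in the abstract are not reproduced, and the colony size, abandonment limit and test function are assumptions.

```python
import numpy as np

def sphere(x):
    """Benchmark objective: minimum 0 at the origin (an assumed test function)."""
    return float(np.sum(x ** 2))

rng = np.random.default_rng(6)
dim, n_sources, limit, iters = 10, 20, 50, 300     # illustrative settings
lo, hi = -5.0, 5.0
foods = rng.uniform(lo, hi, (n_sources, dim))      # food sources = candidate solutions
costs = np.array([sphere(f) for f in foods])
trials = np.zeros(n_sources, dtype=int)            # stagnation counters

def neighbour(i):
    """Perturb one random dimension of source i using another random source."""
    k = rng.integers(n_sources)
    while k == i:
        k = rng.integers(n_sources)
    j = rng.integers(dim)
    v = foods[i].copy()
    v[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
    return np.clip(v, lo, hi)

def try_improve(i):
    """Greedy selection between source i and a neighbouring candidate."""
    v = neighbour(i)
    c = sphere(v)
    if c < costs[i]:
        foods[i], costs[i], trials[i] = v, c, 0
    else:
        trials[i] += 1

for _ in range(iters):
    for i in range(n_sources):                     # employed bee phase
        try_improve(i)
    fit = 1.0 / (1.0 + costs)
    for _ in range(n_sources):                     # onlooker bee phase (roulette wheel)
        try_improve(int(rng.choice(n_sources, p=fit / fit.sum())))
    worst = int(np.argmax(trials))                 # scout bee phase
    if trials[worst] > limit:
        foods[worst] = rng.uniform(lo, hi, dim)
        costs[worst] = sphere(foods[worst])
        trials[worst] = 0

print("best cost found:", round(float(costs.min()), 6))
```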

  7. Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system

    SciTech Connect

    Fijany, A.; Milman, M.; Redding, D.

    1994-12-31

    In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, the implementation of this control algorithm poses a computationally challenging problem since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other fast Poisson solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.

  8. Transform Domain Robust Variable Step Size Griffiths' Adaptive Algorithm for Noise Cancellation in ECG

    NASA Astrophysics Data System (ADS)

    Hegde, Veena; Deekshit, Ravishankar; Satyanarayana, P. S.

    2011-12-01

    The electrocardiogram (ECG) is widely used for diagnosis of heart diseases. Good quality ECG is utilized by physicians for interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts or noise. Noise severely limits the utility of the recorded ECG and thus needs to be removed for better clinical evaluation. In the present paper a new noise cancellation technique is proposed for removal of random noise, such as muscle artifact, from the ECG signal. A transform domain robust variable step size Griffiths' LMS algorithm (TVGLMS) is proposed for noise cancellation. For the TVGLMS, the robust variable step size has been achieved by using the Griffiths' gradient, which uses the cross-correlation between the desired signal contaminated with observation or random noise and the input. The algorithm is discrete cosine transform (DCT) based and uses the symmetric property of the signal to represent the signal in the frequency domain with fewer frequency coefficients than the discrete Fourier transform (DFT). The algorithm is implemented for an adaptive line enhancer (ALE) filter which extracts the ECG signal in a noisy environment using LMS filter adaptation. The proposed algorithm is found to have better convergence error/misadjustment than the ordinary transform domain LMS (TLMS) algorithm, both in the presence of white and colored observation noise. The reduction in convergence error achieved by the new algorithm with desired-signal decomposition is found to be lower than that obtained without decomposition. The experimental results indicate that the proposed method is better than the traditional adaptive filter using the LMS algorithm in the aspect of retaining the geometrical characteristics of the ECG signal.

  9. Application of multi-objective controller to optimal tuning of PID gains for a hydraulic turbine regulating system using adaptive grid particle swarm optimization.

    PubMed

    Chen, Zhihuan; Yuan, Yanbin; Yuan, Xiaohui; Huang, Yuehua; Li, Xianshan; Li, Wenwu

    2015-05-01

    A hydraulic turbine regulating system (HTRS) is one of the most important components of a hydropower plant, playing a key role in maintaining the safety, stability, and economical operation of hydro-electrical installations. At present, the conventional PID controller is widely applied in HTRS systems for its practicability and robustness, and the primary problem with this control law is how to optimally tune the parameters, i.e., determine the PID controller gains for satisfactory performance. In this paper, a multi-objective evolutionary algorithm named adaptive grid particle swarm optimization (AGPSO) is applied to solve the PID gain tuning problem of the HTRS system. This AGPSO-based method, which differs from traditional single-objective optimization methods, is designed to take care of settling time and overshoot level simultaneously, generating a set of non-inferior alternative solutions (i.e., a Pareto set). Furthermore, a fuzzy-based membership value assignment method is employed to choose the best compromise solution from the obtained Pareto set. An illustrative example of parameter tuning for the nonlinear HTRS system is introduced to verify the feasibility and effectiveness of the proposed AGPSO-based optimization approach, compared with two other prominent multi-objective algorithms, i.e., the Non-dominated Sorting Genetic Algorithm II (NSGA-II) and the Strength Pareto Evolutionary Algorithm II (SPEA-II), in terms of the quality and diversity of the obtained Pareto solution sets. Simulation results show that the AGPSO-based approach outperforms the compared methods, with higher efficiency and better quality, whether the HTRS system works under no-load or load conditions. PMID:25481821
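    A common fuzzy-based way to pick a best compromise solution from a Pareto set (the abstract does not give the exact formula, so this is a sketch of the usual linear-membership approach, not necessarily the one used in the paper) assigns each objective of each solution a membership value and selects the solution with the largest normalized membership sum.

```python
import numpy as np

def best_compromise(pareto_objs):
    """pareto_objs: (n_solutions, n_objectives) array of minimization objectives.
    Returns the index of the best compromise solution using linear membership
    functions mu = (f_max - f) / (f_max - f_min)."""
    f = np.asarray(pareto_objs, dtype=float)
    f_min, f_max = f.min(axis=0), f.max(axis=0)
    mu = (f_max - f) / np.where(f_max > f_min, f_max - f_min, 1.0)
    score = mu.sum(axis=1) / mu.sum()     # normalized membership per solution
    return int(np.argmax(score))

# toy usage: trade-off between settling time and overshoot
pareto = [[2.1, 12.0], [2.6, 8.5], [3.4, 6.0]]
print(best_compromise(pareto))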

  10. Efficient Algorithm for Locating and Sizing Series Compensation Devices in Large Transmission Grids: Model Implementation (PART 1)

    SciTech Connect

    Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael

    2014-01-14

    We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.

  11. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    SciTech Connect

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-24

    We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.

  12. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    DOE PAGESBeta

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-24

    We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.

  13. Efficient algorithm for locating and sizing series compensation devices in large power transmission grids: I. Model implementation

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha

    2014-10-01

    We explore optimization methods for planning the placement, sizing and operations of flexible alternating current transmission system (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to series compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of linear programs (LP) that are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically sized networks that suffer congestion from a range of causes, including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically sized network.

  14. Error bounds of adaptive dynamic programming algorithms for solving undiscounted optimal control problems.

    PubMed

    Liu, Derong; Li, Hongliang; Wang, Ding

    2015-06-01

    In this paper, we establish error bounds of adaptive dynamic programming algorithms for solving undiscounted infinite-horizon optimal control problems of discrete-time deterministic nonlinear systems. We consider approximation errors in the update equations of both value function and control policy. We utilize a new assumption instead of the contraction assumption in discounted optimal control problems. We establish the error bounds for approximate value iteration based on a new error condition. Furthermore, we also establish the error bounds for approximate policy iteration and approximate optimistic policy iteration algorithms. It is shown that the iterative approximate value function can converge to a finite neighborhood of the optimal value function under some conditions. To implement the developed algorithms, critic and action neural networks are used to approximate the value function and control policy, respectively. Finally, a simulation example is given to demonstrate the effectiveness of the developed algorithms. PMID:25751878

  15. Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1981-01-01

    A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.

  16. The algorithm analysis on non-uniformity correction based on LMS adaptive filtering

    NASA Astrophysics Data System (ADS)

    Zhan, Dongjun; Wang, Qun; Wang, Chensheng; Chen, Huawang

    2010-11-01

    The traditional least mean square (LMS) algorithm adapts well to noise, but it has several disadvantages, such as a poorly defined desired value for the pixel being corrected and undetermined initial coefficients, which result in slow convergence and a long convergence period. To address these problems, the method for estimating the desired value of the pixel being corrected is improved, and the correction gain and offset coefficients obtained by two-point temperature non-uniformity correction (NUC) are used as the initial coefficients, which improves the convergence speed. Simulations with real infrared images show that the new LMS algorithm achieves a better correction effect. Finally, the algorithm is implemented on an FPGA+DSP hardware structure.
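    The following sketch (an assumption-laden illustration, not the paper's exact implementation) shows a basic scene-based LMS non-uniformity correction loop: per-pixel gain and offset initialized from a two-point calibration, with the desired value of each pixel taken as the mean of its corrected 3x3 neighbourhood; the step size mu is illustrative.

```python
import numpy as np

def lms_nuc(frames, gain0, offset0, mu=1e-3):
    """Scene-based LMS non-uniformity correction.
    frames : (T, H, W) raw detector frames
    gain0, offset0 : (H, W) initial coefficients from two-point calibration
    Each frame is corrected as y = g * x + o; g and o are then nudged so that
    every pixel approaches the local mean of its corrected neighbourhood."""
    g, o = gain0.copy(), offset0.copy()
    corrected = np.empty(frames.shape, dtype=float)
    for t, x in enumerate(frames):
        y = g * x + o
        # desired value: 3x3 neighbourhood mean of the corrected frame
        pad = np.pad(y, 1, mode="edge")
        d = sum(pad[i:i + y.shape[0], j:j + y.shape[1]]
                for i in range(3) for j in range(3)) / 9.0
        e = d - y                      # per-pixel error
        g += mu * e * x                # LMS updates of gain and offset
        o += mu * e
        corrected[t] = y
    return corrected, g, o
```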

  17. A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification

    NASA Astrophysics Data System (ADS)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.

    MRS Signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community, regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), demonstrated a good performance but not without drawbacks already discussed by the authors. On the other hand preliminary application of Genetic Algorithms (GA) has already been reported in the literature by the authors regarding the peak detection problem encountered in MRS quantification using the Voigt line shape model. This paper investigates a novel constrained genetic algorithm involving a generic and adaptively defined fitness function which extends the simple genetic algorithm methodology in case of noisy signals. The applicability of this new algorithm is scrutinized through experimentation in artificial MRS signals interleaved with noise, regarding its signal fitting capabilities. Although extensive experiments with real world MRS signals are necessary, the herein shown performance illustrates the method's potential to be established as a generic MRS metabolites quantification procedure.

  18. The design of flux-corrected transport (FCT) algorithms on structured grids

    NASA Astrophysics Data System (ADS)

    Zalesak, Steven T.

    2005-12-01

    A given flux-corrected transport (FCT) algorithm consists of three components: (1) a high order algorithm to which it reduces in smooth parts of the flow field; (2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and (3) a flux limiter which calculates the weights assigned to the high and low order algorithms, in flux form, in the various regions of the flow field. In this dissertation, we describe a set of design principles that significantly enhance the accuracy and robustness of FCT algorithms by enhancing the accuracy and robustness of each of the three components individually. These principles include the use of very high order spatial operators in the design of the high order fluxes, the use of non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy. We show via standard test problems the kind of algorithm performance one can expect if these design principles are adhered to. We give examples of applications of these design principles in several areas of physics. Finally, we compare the performance of these enhanced algorithms with that of other recent front-capturing methods.

  19. An adaptive metamodel-based global optimization algorithm for black-box type problems

    NASA Astrophysics Data System (ADS)

    Jie, Haoxiang; Wu, Yizhong; Ding, Jianwan

    2015-11-01

    In this article, an adaptive metamodel-based global optimization (AMGO) algorithm is presented to solve unconstrained black-box problems. In the AMGO algorithm, a hybrid model composed of kriging and an augmented radial basis function (RBF) is used as the surrogate model. The weight factors of the hybrid model are adaptively selected in the optimization process. To balance local and global search, a sub-optimization problem is constructed during each iteration to determine the new iterative points. As numerical experiments, six standard two-dimensional test functions are selected to show the distributions of iterative points. The AMGO algorithm is also tested on seven well-known benchmark optimization problems and contrasted with three representative metamodel-based optimization methods: efficient global optimization (EGO), Gutmann-RBF, and hybrid and adaptive metamodel (HAM). The test results demonstrate the efficiency and robustness of the proposed method. The AMGO algorithm is finally applied to the structural design of the import and export chamber of a cycloid gear pump, achieving satisfactory results.

  20. A new adaptive merging and growing algorithm for designing artificial neural networks.

    PubMed

    Islam, Md Monirul; Sattar, Md Abdus; Amin, Md Faijul; Yao, Xin; Murase, Kazuyuki

    2009-06-01

    This paper presents a new algorithm, called the adaptive merging and growing algorithm (AMGA), for designing artificial neural networks (ANNs). The algorithm merges and adds hidden neurons during the training process of ANNs. The merge operation introduced in AMGA is a kind of mixed-mode operation, equivalent to pruning two neurons and adding one neuron. Unlike most previous studies, AMGA puts emphasis on autonomous functioning in the design process of ANNs. This is the main reason why AMGA uses an adaptive rather than a predefined fixed strategy in designing ANNs. The adaptive strategy merges or adds hidden neurons based on the learning ability of hidden neurons or the training progress of ANNs. In order to reduce the amount of retraining after modifying ANN architectures, AMGA prunes hidden neurons by merging correlated hidden neurons and adds hidden neurons by splitting existing hidden neurons. The proposed AMGA has been tested on a number of benchmark problems in machine learning and ANNs, including breast cancer, Australian credit card assessment, and diabetes, gene, glass, heart, iris, and thyroid problems. The experimental results show that AMGA can design compact ANN architectures with good generalization ability compared to other algorithms. PMID:19203888

  1. MAGNETIC GRID

    DOEpatents

    Post, R.F.

    1960-08-01

    An electronic grid is designed employing magnetic forces for controlling the passage of charged particles. The grid is particularly applicable to use in gas-filled tubes such as ignitrons, thyratrons, etc., since the magnetic grid action is impartial to the polarity of the charged particles and, accordingly, the sheath effects encountered with electrostatic grids are not present. The grid comprises a conductor having sections spaced apart and extending in substantially opposite directions in the same plane, the ends of the conductor being adapted for connection to a current source.

  2. Implicit/Multigrid Algorithms for Incompressible Turbulent Flows on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Anderson, W. Kyle; Rausch, Russ D.; Bonhaus, Daryl L.

    1997-01-01

    An implicit code for computing inviscid and viscous incompressible flows on unstructured grids is described. The foundation of the code is a backward Euler time discretization for which the linear system is approximately solved at each time step with either a point implicit method or a preconditioned Generalized Minimal Residual (GMRES) technique. For the GMRES calculations, several techniques are investigated for forming the matrix-vector product. Convergence acceleration is achieved through a multigrid scheme that uses non-nested coarse grids that are generated using a technique described in the present paper. Convergence characteristics are investigated and results are compared with an exact solution for the inviscid flow over a four-element airfoil. Viscous results, which are compared with experimental data, include the turbulent flow over a NACA 4412 airfoil, a three-element airfoil for which Mach number effects are investigated, and three-dimensional flow over a wing with a partial-span flap.

  3. A Note on the Relationship Between Adaptive AMG and PCG

    SciTech Connect

    Falgout, R D

    2004-08-06

    In this note, we will show that preconditioned conjugate gradients (PCG) can be viewed as a particular adaptive algebraic multi-grid algorithm (adaptive AMG). The relationship between these two methods provides important insight into the construction of effective adaptive AMG algorithms.
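    For reference, a minimal preconditioned conjugate gradient loop looks like the sketch below (a generic textbook version, not the note's adaptive-AMG construction); the function name apply_preconditioner is an illustrative stand-in for whatever AMG-like preconditioner is being analyzed.

```python
import numpy as np

def pcg(A, b, apply_preconditioner, tol=1e-8, max_iter=200):
    """Generic preconditioned conjugate gradients for a symmetric
    positive-definite matrix A; apply_preconditioner(r) ~ M^{-1} r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_preconditioner(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = apply_preconditioner(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# toy usage with a Jacobi (diagonal) preconditioner
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, lambda r: r / np.diag(A))
```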

  4. The Design of Flux-Corrected Transport (FCT) Algorithms For Structured Grids

    NASA Astrophysics Data System (ADS)

    Zalesak, Steven T.

    A given flux-corrected transport (FCT) algorithm consists of three components: 1) a high order algorithm to which it reduces in smooth parts of the flow; 2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and 3) a flux limiter which calculates the weights assigned to the high and low order fluxes in various regions of the flow field. One way of optimizing an FCT algorithm is to optimize each of these three components individually. We present some of the ideas that have been developed over the past 30 years toward this end. These include the use of very high order spatial operators in the design of the high order fluxes, non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy.

  5. Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2008-01-01

    Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy the shift, as large as several pixels, between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images and extended-scene images. We have demonstrated previously, based on some measured images, that the ACC algorithm can determine image shifts with an accuracy as high as 0.01 pixel for shifts as large as 3 pixels, and yields similar results for both point-source spot images and extended-scene images. The shift estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended-scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for ACC-algorithm-based extended-scene Shack-Hartmann sensors.
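    A bare-bones FFT-based shift estimate between two sub-images (illustrative only; the ACC algorithm iterates and refines this kind of estimate to sub-pixel accuracy) can be obtained from the peak of the cross-correlation computed in the Fourier domain:

```python
import numpy as np

def coarse_shift(ref, img):
    """Estimate the integer-pixel shift of `img` relative to `ref`
    from the peak of their FFT-based circular cross-correlation."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(img)
    xcorr = np.fft.ifft2(G * np.conj(F)).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # wrap indices into signed shifts
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]
    return tuple(shift)

# toy usage: a scene shifted by (3, -2) pixels
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
shifted = np.roll(scene, (3, -2), axis=(0, 1))
print(coarse_shift(scene, shifted))   # expected (3, -2)
```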

  6. Dependence of adaptive cross-correlation algorithm performance on the extended scene image quality

    NASA Astrophysics Data System (ADS)

    Sidick, Erkin

    2008-08-01

    Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy the shift, as large as several pixels, between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images and extended-scene images. We have demonstrated previously, based on some measured images, that the ACC algorithm can determine image shifts with an accuracy as high as 0.01 pixel for shifts as large as 3 pixels, and yields similar results for both point-source spot images and extended-scene images. The shift estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended-scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for ACC-algorithm-based extended-scene Shack-Hartmann sensors.

  7. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  8. Adaptive vector quantization of MR images using online k-means algorithm

    NASA Astrophysics Data System (ADS)

    Shademan, Azad; Zia, Mohammad A.

    2001-12-01

    The k-means algorithm is widely used to design image codecs using vector quantization (VQ). In this paper, we focus on an adaptive approach to implement a VQ technique using the online version of the k-means algorithm, in which the size of the codebook is adapted continuously to the statistical behavior of the image. Based on a statistical analysis of the feature space, a set of thresholds is designed such that codewords corresponding to low-density clusters are removed from the codebook, resulting in higher bit-rate efficiency. Applications of this approach would be in telemedicine, where sequences of highly correlated medical images, e.g., consecutive brain slices, are transmitted over a low bit-rate channel. We have applied this algorithm to magnetic resonance (MR) images and the simulation results on a sample sequence are given. The proposed method has been compared to the standard k-means algorithm in terms of PSNR, MSE, and the elapsed time to complete the algorithm.
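    A minimal online k-means update for a VQ codebook is sketched below; the learning rate and the count-based pruning rule (standing in for the paper's threshold design, which is not spelled out in the abstract) are illustrative assumptions.

```python
import numpy as np

def online_kmeans_vq(vectors, codebook, lr=0.05, min_hits=5):
    """Online k-means adaptation of a VQ codebook.
    vectors  : (N, D) image block vectors, presented one at a time
    codebook : (K, D) initial codewords
    Codewords that attract too few vectors (< min_hits) are pruned,
    shrinking the codebook to match the image statistics."""
    cb = codebook.copy()
    hits = np.zeros(len(cb), dtype=int)
    for v in vectors:
        i = np.argmin(np.linalg.norm(cb - v, axis=1))   # nearest codeword
        cb[i] += lr * (v - cb[i])                       # move it toward v
        hits[i] += 1
    keep = hits >= min_hits                             # prune low-density codewords
    return cb[keep]

# toy usage: 4-dimensional blocks, 8 initial codewords
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 4))
init = rng.normal(size=(8, 4))
adapted = online_kmeans_vq(data, init)
```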

  9. Low Complex Forward Adaptive Loss Compression Algorithm and Its Application in Speech Coding

    NASA Astrophysics Data System (ADS)

    Nikolić, Jelena; Perić, Zoran; Antić, Dragan; Jovanović, Aleksandra; Denić, Dragan

    2011-01-01

    This paper proposes a low-complexity forward adaptive loss compression algorithm that works on a frame-by-frame basis. In particular, the proposed algorithm performs frame-by-frame analysis of the input speech signal, and estimates and quantizes the gain within the frames in order to enable quantization by the forward adaptive piecewise linear optimal compandor. In comparison to the solution designed according to the G.711 standard, our algorithm provides not only a higher average signal-to-quantization-noise ratio, but also reduces the PCM bit rate by about 1 bit/sample. Moreover, the algorithm completely satisfies the G.712 standard, since it exceeds the curve defined by the G.712 standard over the whole variance range. Accordingly, we can reasonably believe that our algorithm will find practical implementation in the high-quality coding of signals represented with fewer than 8 bits/sample which, like speech signals, follow a Laplacian distribution and have time-varying variances.

  10. Algorithm for localized adaptive diffuse optical tomography and its application in bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Naser, Mohamed A.; Patterson, Michael S.; Wong, John W.

    2014-04-01

    A reconstruction algorithm for diffuse optical tomography based on diffusion theory and finite element method is described. The algorithm reconstructs the optical properties in a permissible domain or region-of-interest to reduce the number of unknowns. The algorithm can be used to reconstruct optical properties for a segmented object (where a CT-scan or MRI is available) or a non-segmented object. For the latter, an adaptive segmentation algorithm merges contiguous regions with similar optical properties thereby reducing the number of unknowns. In calculating the Jacobian matrix the algorithm uses an efficient direct method so the required time is comparable to that needed for a single forward calculation. The reconstructed optical properties using segmented, non-segmented, and adaptively segmented 3D mouse anatomy (MOBY) are used to perform bioluminescence tomography (BLT) for two simulated internal sources. The BLT results suggest that the accuracy of reconstruction of total source power obtained without the segmentation provided by an auxiliary imaging method such as x-ray CT is comparable to that obtained when using perfect segmentation.

  11. Adaptive analog-SSOR iterative method for solving grid equations with nonselfadjoint operators

    NASA Astrophysics Data System (ADS)

    Alekseenko, Elena; Sukhinov, Alexander; Chistyakov, Alexander; Shishenya, Alexander; Roux, Bernard

    2013-04-01

    Motion models of wave processes in the coastal zone are in high demand for the design and construction of coastal surface structures and breakwaters, and also as a component of other models. The most common grid approach is currently the VOF method. A significant drawback of this method is the necessity of solving a convection equation to find the fullness (fill level) of cells. The numerical solution of this equation leads to strong grid viscosity and "smearing" of the interface. In this paper, we propose a method which is based on the idea of using cell fullness, as in the VOF method, but which does not require solving the convection equation. Thus, a mathematical model for the wave hydrodynamics problem is developed, describing wave run-up onto the shore and taking into account such physical parameters as turbulent exchange, the complexity of the domain and coastline geometry, and bottom friction. For the given mathematical model a discrete model is constructed, taking into account dynamic changes of the calculation domain. Discretization of the model is performed on a structured rectangular grid with a newly developed finite-volume technique that takes into account the fullness of the grid cells, which allows the geometry to be described more accurately. The proposed technique improves the real accuracy of the solution in the case of complex domain geometry by improving the approximation of the boundary. A software implementation and numerical experiments for the posed wave hydrodynamics problem are performed. The results of the numerical experiments show the feasibility of using discrete mathematical models that take into account the fullness of grid cells for the simulation of systems with complex boundary geometry. Numerical experiments show that with this technique sufficiently smooth solutions are obtained even on coarse grids.

  12. Grid generation strategies for turbomachinery configurations

    NASA Astrophysics Data System (ADS)

    Lee, K. D.; Henderson, T. L.

    1991-01-01

    Turbomachinery flow fields involve unique grid generation issues due to their geometrical and physical characteristics. Several strategic approaches are discussed to generate quality grids. The grid quality is further enhanced through blending and adapting. Grid blending smooths the grids locally through averaging and diffusion operators. Grid adaptation redistributes the grid points based on a grid quality assessment. These methods are demonstrated with several examples.

  13. Optimizing weather radar observations using an adaptive multiquadric surface fitting algorithm

    NASA Astrophysics Data System (ADS)

    Martens, Brecht; Cabus, Pieter; De Jongh, Inge; Verhoest, Niko

    2013-04-01

    Real-time forecasting of river flow is an essential tool in operational water management. Such real-time modelling systems require well-calibrated models which can make use of spatially distributed rainfall observations. Weather radars provide spatial data; however, since radar measurements are sensitive to a large range of error sources, a discrepancy is often observed between radar observations and ground-based measurements, which are mostly considered as ground truth. Through merging ground observations with the radar product, often referred to as data merging, one may force the radar observations to better correspond to the ground-based measurements, without losing the spatial information. In this paper, radar images and ground-based measurements of rainfall are merged based on interpolated gauge-adjustment factors (Moore et al., 1998; Cole and Moore, 2008), or scaling factors. A scaling factor C(xα) is calculated at each position xα where a gauge measurement Ig(xα) is available: C(xα) = (Ig(xα) + ε) / (Ir(xα) + ε), where Ir(xα) is the radar-based observation in the pixel overlapping the rain gauge and ε is a constant making sure the scaling factor can be calculated when Ir(xα) is zero. These scaling factors are interpolated on the radar grid, resulting in a unique scaling factor for each pixel. Multiquadric surface fitting is used as the interpolation algorithm (Hardy, 1971): C*(x0) = aᵀv + a0, where C*(x0) is the prediction at location x0, the vector a (N×1, with N the number of ground-based measurements used) and the constant a0 are parameters describing the surface, and v is an N×1 vector containing the (Euclidean) distance between each point xα used in the interpolation and the point x0. The parameters describing the surface are derived by forcing the surface to be an exact interpolator and imposing that the sum of the parameters in a be zero. However, often, the surface is allowed to pass near the observations (i
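    A compact sketch of this gauge-adjustment workflow is given below; the smoothing constant eps, the multiquadric shape parameter c, and all variable names are illustrative assumptions rather than the paper's exact choices. It computes the scaling factors at the gauges and interpolates them with a multiquadric surface whose coefficients are constrained to sum to zero.

```python
import numpy as np

def multiquadric_scaling(gauge_xy, gauge_rain, radar_at_gauges, grid_xy,
                         eps=0.01, c=1.0):
    """Gauge adjustment of radar rainfall.
    1) scaling factor at each gauge: C = (Ig + eps) / (Ir + eps)
    2) multiquadric surface fit with coefficients summing to zero
    3) evaluation of the surface on every radar grid point."""
    C = (gauge_rain + eps) / (radar_at_gauges + eps)
    n = len(C)
    # pairwise multiquadric basis phi(r) = sqrt(r^2 + c^2) between gauges
    d = np.linalg.norm(gauge_xy[:, None, :] - gauge_xy[None, :, :], axis=-1)
    Phi = np.sqrt(d**2 + c**2)
    # augmented system: exact interpolation plus the sum(a) = 0 constraint
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = Phi
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    rhs = np.concatenate([C, [0.0]])
    sol = np.linalg.solve(A, rhs)
    a, a0 = sol[:n], sol[n]
    # evaluate C*(x0) = a^T v + a0 on the radar grid points
    dg = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=-1)
    return np.sqrt(dg**2 + c**2) @ a + a0
```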

  14. Adaptive numerical methods for partial differential equations

    SciTech Connect

    Colella, P.

    1995-07-01

    This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.

  15. An Adaptive Evolutionary Algorithm for Traveling Salesman Problem with Precedence Constraints

    PubMed Central

    Sung, Jinmo; Jeong, Bongju

    2014-01-01

    The traveling salesman problem with precedence constraints is one of the most notorious problems in terms of the efficiency of its solution approaches, even though it has a very wide range of industrial applications. We propose a new evolutionary algorithm to efficiently obtain good solutions by improving the search process. Our genetic operators guarantee the feasibility of solutions over the generations of the population, which significantly improves the computational efficiency even when combined with our flexible adaptive searching strategy. The efficiency of the algorithm is investigated by computational experiments. PMID:24701158

  16. An algorithm for adapting the Kalman filter to sudden noise variations

    NASA Astrophysics Data System (ADS)

    Canciu, Vintila

    This research targets the case of Kalman filtering as applied to linear time-invariant systems having unknown process noise covariance and measurement noise covariance matrices, and addresses the problem represented by the incomplete a priori knowledge of these two filter initialization parameters. The goal of this research is to determine in real time both the process covariance matrix and the noise covariance matrix in the context of adaptive Kalman filtering. The resulting filter, called the evolutionary adaptive Kalman filter, is able to adapt to sudden noise variations and constitutes a hybrid solution for adaptive Kalman filtering based on metaheuristic algorithms. MATLAB/Simulink simulation using several processes and covariance matrices, plus comparison with other filters, was selected as the validation method. The Cramér-Rao Lower Bound (CRLB) was used as the performance criterion. The thesis begins with a description of the problem under consideration (the design of a Kalman filter that is able to adapt to sudden noise variations), followed by a typical application (an INS-GPS integrated navigation system) and by a statistical analysis of publications related to adaptive Kalman filtering. Next, the thesis presents the current architectures of adaptive Kalman filtering: the innovation adaptive estimator (IAE) and the multiple model adaptive estimator (MMAE). It briefly presents their formulation, their behavior, and the limits of their performance. The thesis continues with the architectural synthesis of the evolutionary adaptive Kalman filter. The steps involved in the solution of the problem under consideration are also presented: an analysis of Kalman filtering and sub-optimal filtering methods, a comparison of current adaptive Kalman and sub-optimal filtering methods, the emergence of the evolutionary adaptive Kalman filter as an enrichment of sub-optimal filtering with the help of biologically-inspired computational intelligence methods, and the step-by-step architectural
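    To make the innovation adaptive estimation (IAE) idea mentioned above concrete, here is a hedged sketch (window length, model matrices, and the crude positivity floor are illustrative assumptions, and this is the classical IAE scheme rather than the thesis's evolutionary filter) in which the measurement-noise covariance R is re-estimated from the recent innovation sequence:

```python
import numpy as np
from collections import deque

def iae_kalman(zs, F, H, Q, R0, x0, P0, window=30):
    """Kalman filter with innovation-based adaptive estimation of R:
    R is re-estimated as C_v - H P- H^T, where C_v is the sample
    covariance of the last `window` innovations."""
    x, P, R = x0.copy(), P0.copy(), R0.copy()
    innovations = deque(maxlen=window)
    estimates = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # innovation and its empirical covariance
        v = z - H @ x
        innovations.append(v)
        if len(innovations) == window:
            V = np.array(innovations)
            C_v = V.T @ V / window
            R = np.maximum(C_v - H @ P @ H.T, 1e-6)  # crude elementwise floor
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ v
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```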

  17. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    USGS Publications Warehouse

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.

  18. Anisotropic optical flow algorithm based on self-adaptive cellular neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Congxuan; Chen, Zhen; Li, Ming; Sun, Kaiqiong

    2013-01-01

    An anisotropic optical flow estimation method based on self-adaptive cellular neural networks (CNN) is proposed. First, a novel optical flow energy function which contains a robust data term and an anisotropic smoothing term is proposed. Next, the CNN model, which has a self-adaptive feedback operator and threshold, is presented according to the Euler-Lagrange partial differential equations of the proposed optical flow energy function. Finally, elaborate evaluation experiments indicate the significant effects of the various proposed strategies for optical flow estimation, and the comparison results with other methods show that the proposed algorithm has better performance in computational accuracy and efficiency.

  19. Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm

    PubMed Central

    Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2010-01-01

    A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved in Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155
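    A generic contracting-grid search (a sketch of the general idea under illustrative parameter choices, not the authors' FPGA/GPU implementation) repeatedly evaluates the likelihood on a coarse grid, recentres on the best point, and shrinks the grid:

```python
import numpy as np

def contracting_grid_search(loglike, center, span, n=5, shrink=0.5, n_iter=8):
    """Maximize `loglike` over a 2D domain by repeatedly evaluating it on an
    n-by-n grid, recentring on the best grid point, and contracting the grid."""
    center = np.asarray(center, dtype=float)
    span = np.asarray(span, dtype=float)
    for _ in range(n_iter):
        xs = np.linspace(center[0] - span[0] / 2, center[0] + span[0] / 2, n)
        ys = np.linspace(center[1] - span[1] / 2, center[1] + span[1] / 2, n)
        vals = np.array([[loglike(x, y) for y in ys] for x in xs])
        i, j = np.unravel_index(np.argmax(vals), vals.shape)
        center = np.array([xs[i], ys[j]])   # recentre on the best point
        span *= shrink                      # contract the search grid
    return center

# toy usage: recover the peak of a Gaussian-shaped "likelihood"
truth = (1.7, -0.4)
ll = lambda x, y: -((x - truth[0])**2 + (y - truth[1])**2)
print(contracting_grid_search(ll, center=(0, 0), span=(8, 8)))
```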

  20. Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm.

    PubMed

    Hesterman, Jacob Y; Caucci, Luca; Kupinski, Matthew A; Barrett, Harrison H; Furenlid, Lars R

    2010-06-01

    A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved in Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155

  1. A robust face recognition algorithm under varying illumination using adaptive retina modeling

    NASA Astrophysics Data System (ADS)

    Cheong, Yuen Kiat; Yap, Vooi Voon; Nisar, Humaira

    2013-10-01

    Variation in illumination has a drastic effect on the appearance of a face image. This may hinder the automatic face recognition process. This paper presents a novel approach for face recognition under varying lighting conditions. The proposed algorithm uses adaptive retina modeling based illumination normalization. In the proposed approach, retina modeling is employed along with histogram remapping following a normal distribution. Retina modeling is an approach that combines two adaptive nonlinear equations and a difference-of-Gaussians filter. Two databases, the extended Yale B database and the CMU PIE database, are used to verify the proposed algorithm. For face recognition the Gabor Kernel Fisher Analysis method is used. Experimental results show that the recognition rate for face images with different illumination conditions is improved by the proposed approach. The average recognition rate for the Extended Yale B database is 99.16%, whereas the recognition rate for the CMU PIE database is 99.64%.

  2. A Study on Adapting the Zoom FFT Algorithm to Automotive Millimetre Wave Radar

    NASA Astrophysics Data System (ADS)

    Kuroda, Hiroshi; Takano, Kazuaki

    The millimetre wave radar has been developed for automotive applications such as ACC (Adaptive Cruise Control) and CWS (Collision Warning System). The radar uses MMIC (Monolithic Microwave Integrated Circuit) devices for transmitting and receiving 76 GHz millimetre wave signals. The radar is an FSK (Frequency Shift Keying) monopulse type. It transmits two frequencies in a time-duplex manner, and measures the distance and relative speed of targets. The monopulse feature detects the azimuth angle of targets without a scanning mechanism. The Zoom FFT (Fast Fourier Transform) algorithm, which analyses the frequency domain precisely, has been adapted to the radar for discriminating multiple stationary targets. The Zoom FFT algorithm is evaluated in a test truck. The evaluation results show good performance in discriminating two stationary vehicles in the host lane and the adjacent lane.
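    The zoom FFT itself is sketched below under illustrative parameter choices; this is the textbook mix-filter-decimate version (with a crude block-averaging low-pass), not necessarily the radar's implementation. It shifts a narrow band of interest to baseband, decimates, and applies an ordinary FFT to obtain finer frequency resolution over that band.

```python
import numpy as np

def zoom_fft(x, fs, f_center, decimation=16):
    """Zoom FFT: frequency-translate the band around f_center to baseband,
    low-pass by block averaging, decimate, then FFT the shorter record.
    Frequency resolution improves by the decimation factor."""
    n = np.arange(len(x))
    baseband = x * np.exp(-2j * np.pi * f_center * n / fs)   # mix down
    # crude low-pass + decimate: average non-overlapping blocks
    trimmed = baseband[: len(baseband) // decimation * decimation]
    decimated = trimmed.reshape(-1, decimation).mean(axis=1)
    spectrum = np.fft.fftshift(np.fft.fft(decimated))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(decimated), decimation / fs))
    return freqs + f_center, np.abs(spectrum)

# toy usage: two closely spaced beat frequencies around 10 kHz
fs = 100_000.0
t = np.arange(8192) / fs
sig = np.cos(2 * np.pi * 10_050 * t) + np.cos(2 * np.pi * 10_120 * t)
freqs, mag = zoom_fft(sig, fs, f_center=10_000.0)
```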

  3. A modified Richardson-Lucy algorithm for single image with adaptive reference maps

    NASA Astrophysics Data System (ADS)

    Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2014-06-01

    In this paper, we propose a modified non-blind Richardson-Lucy algorithm using adaptive reference maps as a local constraint to reduce noise and ringing artifacts effectively. The deconvolution process can be divided into two stages. In the first deblurring stage, the reference map is estimated from the blurred image and an intermediate deblurred result is obtained. Then the adaptive reference map is updated according to both the blurred image and the deblurred result of the first stage to produce a more accurate edge description, which is very helpful for suppressing the ringing around edges. A Gaussian image prior is adopted as the regularization to improve the standard Richardson-Lucy algorithm. Experimental results show that the presented approach suppresses the negative ringing artifacts effectively as well as preserving the edge information, even if the blurred image contains rich textures.
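    For context, the core (unmodified) Richardson-Lucy iteration that such methods build on looks like the sketch below; the Gaussian PSF, iteration count, and small stabilizing constant are illustrative assumptions, and the paper's reference-map constraint and Gaussian prior are not included here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Standard non-blind Richardson-Lucy deconvolution:
    x <- x * (psf_flipped * (y / (psf * x))), with '*' denoting convolution."""
    x = np.full(blurred.shape, blurred.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        est = fftconvolve(x, psf, mode="same")
        ratio = blurred / (est + eps)           # relative blame per pixel
        x *= fftconvolve(ratio, psf_flip, mode="same")
    return x

def gaussian_psf(size=9, sigma=1.5):
    """Normalized Gaussian point spread function for toy experiments."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
    return k / k.sum()
```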

  4. A DVH-guided IMRT optimization algorithm for automatic treatment planning and adaptive radiotherapy replanning

    SciTech Connect

    Zarepisheh, Masoud; Li, Nan; Long, Troy; Romeijn, H. Edwin; Tian, Zhen; Jia, Xun; Jiang, Steve B.

    2014-06-15

    Purpose: To develop a novel algorithm that incorporates prior treatment knowledge into intensity modulated radiation therapy optimization to facilitate automatic treatment planning and adaptive radiotherapy (ART) replanning. Methods: The algorithm automatically creates a treatment plan guided by the DVH curves of a reference plan that contains information on the clinician-approved dose-volume trade-offs among different targets/organs and among different portions of a DVH curve for an organ. In ART, the reference plan is the initial plan for the same patient, while for automatic treatment planning the reference plan is selected from a library of clinically approved and delivered plans of previously treated patients with similar medical conditions and geometry. The proposed algorithm employs a voxel-based optimization model and navigates the large voxel-based Pareto surface. The voxel weights are iteratively adjusted to approach a plan that is similar to the reference plan in terms of the DVHs. If the reference plan is feasible but not Pareto optimal, the algorithm generates a Pareto optimal plan with DVHs better than the reference ones. If the reference plan is too restricting for the new geometry, the algorithm generates a Pareto plan with DVHs close to the reference ones. In both cases, the new plans have similar DVH trade-offs to the reference plans. Results: The algorithm was tested using three patient cases and found to be able to automatically adjust the voxel-weighting factors in order to generate a Pareto plan with similar DVH trade-offs to the reference plan. The algorithm has also been implemented on a GPU for high efficiency. Conclusions: A novel prior-knowledge-based optimization algorithm has been developed that automatically adjusts the voxel weights and generates a clinically optimal plan with high efficiency. It is found that the new algorithm can significantly improve the plan quality and planning efficiency in ART replanning and automatic treatment

  5. An Adaptive Displacement Estimation Algorithm for Improved Reconstruction of Thermal Strain

    PubMed Central

    Ding, Xuan; Dutta, Debaditya; Mahmoud, Ahmed M.; Tillman, Bryan; Leers, Steven A.; Kim, Kang

    2014-01-01

    Thermal strain imaging (TSI) can be used to differentiate between lipid and water-based tissues in atherosclerotic arteries. However, detecting small lipid pools in vivo requires accurate and robust displacement estimation over a wide range of displacement magnitudes. Phase-shift estimators such as Loupas’ estimator and time-shift estimators like normalized cross-correlation (NXcorr) are commonly used to track tissue displacements. However, Loupas’ estimator is limited by phase-wrapping and NXcorr performs poorly when the signal-to-noise ratio (SNR) is low. In this paper, we present an adaptive displacement estimation algorithm that combines both Loupas’ estimator and NXcorr. We evaluated this algorithm using computer simulations and an ex-vivo human tissue sample. Using 1-D simulation studies, we showed that when the displacement magnitude induced by thermal strain was >λ/8 and the electronic system SNR was >25.5 dB, the NXcorr displacement estimate was less biased than the estimate found using Loupas’ estimator. On the other hand, when the displacement magnitude was ≤λ/4 and the electronic system SNR was ≤25.5 dB, Loupas’ estimator had less variance than NXcorr. We used these findings to design an adaptive displacement estimation algorithm. Computer simulations of TSI using Field II showed that the adaptive displacement estimator was less biased than either Loupas’ estimator or NXcorr. Strain reconstructed from the adaptive displacement estimates improved the strain SNR by 43.7–350% and the spatial accuracy by 1.2–23.0% (p < 0.001). An ex-vivo human tissue study provided results that were comparable to computer simulations. The results of this study showed that a novel displacement estimation algorithm, which combines two different displacement estimators, yielded improved displacement estimation and results in improved strain reconstruction. PMID:25585398

  6. Research on Novel Algorithms for Smart Grid Reliability Assessment and Economic Dispatch

    NASA Astrophysics Data System (ADS)

    Luo, Wenjin

    In this dissertation, several studies of electric power system reliability and economy assessment methods are presented. More precisely, several algorithms for evaluating power system reliability and economy are studied. Furthermore, two novel algorithms are applied to this field and their simulation results are compared with conventional results. As the electrical power system develops towards extra-high voltage, long transmission distances, large capacity, and regional networking, a number of new technical equipment items and the electricity market system have gradually been established, and the consequences of power cuts have become more and more serious. The electrical power system therefore needs the highest possible reliability because of its complexity and security requirements. In this dissertation the Boolean logic Driven Markov Process (BDMP) method is studied and applied to evaluate power system reliability. This approach has several benefits: it allows complex dynamic models to be defined while remaining as easily readable as conventional methods. The method has been applied to evaluate the IEEE reliability test system. The simulation results obtained are close to the IEEE experimental data, which means that it could be used for future study of the system's reliability. Besides reliability, a modern power system is expected to be more economical. This dissertation presents a novel evolutionary algorithm, named the quantum evolutionary membrane algorithm (QEPS), which combines the concepts and theory of quantum-inspired evolutionary algorithms and membrane computing, to solve the economic dispatch problem in a renewable power system with onshore and offshore wind farms. A case derived from real data is used for simulation tests. Another conventional evolutionary algorithm is also used to solve the same problem for comparison. The experimental results show that the proposed method quickly and accurately obtains the optimal solution, which is the minimum cost for electricity supplied by wind

  7. A novel adaptive, real-time algorithm to detect gait events from wearable sensors.

    PubMed

    Chia Bejarano, Noelia; Ambrosini, Emilia; Pedrocchi, Alessandra; Ferrigno, Giancarlo; Monticone, Marco; Ferrante, Simona

    2015-05-01

    A real-time, adaptive algorithm based on two inertial and magnetic sensors placed on the shanks was developed for gait-event detection. For each leg, the algorithm detected the Initial Contact (IC), as the minimum of the flexion/extension angle, and the End Contact (EC) and the Mid-Swing (MS), as minimum and maximum of the angular velocity, respectively. The algorithm consisted of calibration, real-time detection, and step-by-step update. Data collected from 22 healthy subjects (21 to 85 years) walking at three self-selected speeds were used to validate the algorithm against the GaitRite system. Comparable levels of accuracy and significantly lower detection delays were achieved with respect to other published methods. The algorithm robustness was tested on ten healthy subjects performing sudden speed changes and on ten stroke subjects (43 to 89 years). For healthy subjects, F1-scores of 1 and mean detection delays lower than 14 ms were obtained. For stroke subjects, F1-scores of 0.998 and 0.944 were obtained for IC and EC, respectively, with mean detection delays always below 31 ms. The algorithm accurately detected gait events in real time from a heterogeneous dataset of gait patterns and paves the way for the design of closed-loop controllers for customized gait trainings and/or assistive devices. PMID:25069118
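    A toy version of the event-detection rule described above is sketched here; the sampling rate, minimum step time, and the use of scipy peak picking are assumptions for illustration, and the published algorithm additionally includes calibration and step-by-step adaptation.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_gait_events(flex_angle, gyro, fs=100.0, min_step_s=0.4):
    """Detect gait events from shank signals:
    IC = minima of the flexion/extension angle,
    EC = minima of the sagittal angular velocity,
    MS = maxima of the sagittal angular velocity."""
    dist = int(min_step_s * fs)              # minimum samples between events
    ic, _ = find_peaks(-np.asarray(flex_angle), distance=dist)
    ec, _ = find_peaks(-np.asarray(gyro), distance=dist)
    ms, _ = find_peaks(np.asarray(gyro), distance=dist)
    return {"IC": ic / fs, "EC": ec / fs, "MS": ms / fs}   # event times in s
```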

  8. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    SciTech Connect

    Li, Weixuan; Lin, Guang

    2015-08-01

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes' rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computational-demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capabilities of automatically finding a GM proposal with an appropriate number of modes for the specific problem under study, and obtaining a sample accurately and efficiently representing the posterior with limited number of forward simulations.

  9. Adaptive time stepping algorithm for Lagrangian transport models: Theory and idealised test cases

    NASA Astrophysics Data System (ADS)

    Shah, Syed Hyder Ali Muttaqi; Heemink, Arnold Willem; Gräwe, Ulf; Deleersnijder, Eric

    2013-08-01

    Random walk simulations have an excellent potential in marine and oceanic modelling. This is essentially due to their relative simplicity and their ability to represent advective transport without being plagued by the deficiencies of Eulerian methods. The physical and mathematical foundations of random walk modelling of turbulent diffusion have become solid over the years. Random walk models rest on the theory of stochastic differential equations. Unfortunately, the latter and the related numerical aspects have not attracted much attention in the oceanic modelling community. The main goal of this paper is to help bridge the gap by developing an efficient adaptive time stepping algorithm for random walk models. Its performance is examined on two idealised test cases of turbulent dispersion: (i) pycnocline crossing and (ii) non-flat isopycnal diffusion, which are inspired by shallow-sea dynamics and large-scale ocean transport processes, respectively. The numerical results of the adaptive time stepping algorithm are compared with the fixed-time-increment Milstein scheme, showing that the adaptive time stepping algorithm for Lagrangian random walk models is more efficient than its fixed step-size counterpart without any loss of accuracy.

  10. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    SciTech Connect

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capability to automatically find a GM proposal with an appropriate number of modes for the specific problem under study, and to obtain a sample that accurately and efficiently represents the posterior with a limited number of forward simulations.

  11. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    PubMed Central

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique that blends cuckoo search and particle swarm optimization (CS-PSO) with a modified quality measure to adaptively enhance low-contrast images. Contrast enhancement is obtained by a global transformation of the input intensities: the incomplete Beta function is employed as the transformation function, and a novel criterion for measuring image quality considers three factors, namely the threshold, the entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is used to maximize the objective fitness criterion, enhancing the contrast and detail of an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared, in terms of processing time and image quality, with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary image enhancement methods based on the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization. Experimental results demonstrate that the proposed method is robust and adaptive and performs better than the other methods considered in the paper. PMID:25784928
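
    The core of such enhancement schemes is a monotone intensity transfer curve parameterised by the incomplete Beta function; the optimizer searches over its two shape parameters. The sketch below shows only the transform, with fixed placeholder parameters standing in for the values CS-PSO would select.

        # Illustrative sketch: global contrast enhancement via the regularised
        # incomplete Beta function (shape parameters would normally come from CS-PSO).
        import numpy as np
        from scipy.special import betainc

        def beta_transform(image, a=2.0, b=3.0):
            x = np.asarray(image, dtype=np.float64) / 255.0   # assume 8-bit grayscale input
            y = betainc(a, b, x)                              # monotone transfer curve on [0, 1]
            return np.clip(255.0 * y, 0, 255).astype(np.uint8)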

  12. An adaptive importance sampling algorithm for Bayesian inversion with multimodal distributions

    DOE PAGESBeta

    Li, Weixuan; Lin, Guang

    2015-03-21

    Parametric uncertainties are encountered in the simulations of many physical systems, and may be reduced by an inverse modeling procedure that calibrates the simulation results to observations on the real system being simulated. Following Bayes’ rule, a general approach for inverse modeling problems is to sample from the posterior distribution of the uncertain model parameters given the observations. However, the large number of repetitive forward simulations required in the sampling process could pose a prohibitive computational burden. This difficulty is particularly challenging when the posterior is multimodal. We present in this paper an adaptive importance sampling algorithm to tackle these challenges. Two essential ingredients of the algorithm are: 1) a Gaussian mixture (GM) model adaptively constructed as the proposal distribution to approximate the possibly multimodal target posterior, and 2) a mixture of polynomial chaos (PC) expansions, built according to the GM proposal, as a surrogate model to alleviate the computational burden caused by computationally demanding forward model evaluations. In three illustrative examples, the proposed adaptive importance sampling algorithm demonstrates its capability to automatically find a GM proposal with an appropriate number of modes for the specific problem under study, and to obtain a sample that accurately and efficiently represents the posterior with a limited number of forward simulations.

  13. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique that blends cuckoo search and particle swarm optimization (CS-PSO) with a modified quality measure to adaptively enhance low-contrast images. Contrast enhancement is obtained by a global transformation of the input intensities: the incomplete Beta function is employed as the transformation function, and a novel criterion for measuring image quality considers three factors, namely the threshold, the entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is used to maximize the objective fitness criterion, enhancing the contrast and detail of an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared, in terms of processing time and image quality, with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary image enhancement methods based on the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization. Experimental results demonstrate that the proposed method is robust and adaptive and performs better than the other methods considered in the paper. PMID:25784928

  14. A time self-adaptive multilevel algorithm for large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Terracol, M.; Sagaut, P.; Basdevant, C.

    2003-01-01

    An extension of the multilevel LES method proposed in Terracol et al. [J. Comput. Phys. 167 (2001) 439] is introduced here to reduce the CPU time of unsteady simulations of turbulent flows. Flow variables are decomposed into several wavenumber bands, each band being associated with a computational grid in physical space. The general framework associated with such a decomposition is presented, and a new adapted closure is proposed for the subgrid terms that appear at each filtering level, while the closure at the finest level is performed with a classical LES model. CPU time savings are obtained by the use of V-cycles, as in multigrid terminology. The main part of the simulation is thus performed on the coarse levels, while the smallest resolved scales are kept frozen (quasi-static approximation [Comput. Methods Appl. Mech. Engrg. 159 (1998) 123]). This significantly reduces the CPU time compared with classical LES, while the accuracy of the simulation is preserved by the use of a fine discretization level. To ensure the validity of the quasi-static approximation, the time during which it remains valid is evaluated dynamically at each level through an a priori error estimation of the small-scale time variation. This leads to a fully self-adaptive method in which both the number of levels and the integration times on each grid level are evaluated dynamically. The method is assessed on a fully unsteady time-developing compressible mixing layer at a low Reynolds number, for which a DNS has also been performed, and in the inviscid case. Finally, a plane channel flow configuration has been considered. In all cases, the results obtained are in good agreement with classical LES performed on a fine grid, with CPU time reduction factors of up to five.

  15. A Biomimetic Adaptive Algorithm and Low-Power Architecture for Implantable Neural Decoders

    PubMed Central

    Rapoport, Benjamin I.; Wattanapanitch, Woradorn; Penagos, Hector L.; Musallam, Sam; Andersen, Richard A.; Sarpeshkar, Rahul

    2010-01-01

    Algorithmically and energetically efficient computational architectures that operate in real time are essential for clinically useful neural prosthetic devices. Such devices decode raw neural data to obtain direct control signals for external devices. They can also perform data compression and vastly reduce the bandwidth and consequently power expended in wireless transmission of raw data from implantable brain-machine interfaces. We describe a biomimetic algorithm and micropower analog circuit architecture for decoding neural cell ensemble signals. The decoding algorithm implements a continuous-time artificial neural network, using a bank of adaptive linear filters with kernels that emulate synaptic dynamics. The filters transform neural signal inputs into control-parameter outputs, and can be tuned automatically in an on-line learning process. We provide experimental validation of our system using neural data from thalamic head-direction cells in an awake behaving rat. PMID:19964345
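
    A software analogue of the decoding idea (not the micropower analog circuit described in the paper) is sketched below: each neural channel is passed through an adaptive FIR kernel, the filter outputs are summed into a control parameter, and the kernels are tuned on-line with a least-mean-squares rule. Dimensions and the learning rate are assumptions.

        # Illustrative sketch: bank of adaptive linear filters trained on-line with LMS.
        import numpy as np

        class LinearFilterBankDecoder:
            def __init__(self, n_channels, kernel_len, lr=1e-3):
                self.w = np.zeros((n_channels, kernel_len))   # one FIR kernel per channel
                self.lr = lr

            def decode(self, history):
                """history: (n_channels, kernel_len) most recent samples of each channel."""
                return float(np.sum(self.w * history))

            def update(self, history, target):
                """One on-line LMS step toward a known control-parameter target."""
                err = target - self.decode(history)
                self.w += self.lr * err * history
                return err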

  16. A self-adaptive parameter optimization algorithm in a real-time parallel image processing system.

    PubMed

    Li, Ge; Zhang, Xuehe; Zhao, Jie; Zhang, Hongli; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    To address the trade-off in which precision, speed, robustness, and other parameters constrain one another in a parallel-processed vision servo system, this paper proposes an adaptive load-capacity-balance strategy for the servo-parameter optimization algorithm (ALBPO) that improves computing precision and achieves a high detection ratio without lengthening the servo cycle. Load capacity (LC) functions are used to estimate the load on each processor, and the system continuously self-adapts toward a balanced state based on the fluctuating LC results; meanwhile, a suitable set of target detection and location parameters is selected according to the LC results. Unlike current load balance algorithms, the algorithm proposed in this paper operates without prior knowledge of the maximum or current load of the processors, which gives it good extensibility. Simulation results showed that the ALBPO algorithm performs well in load balancing, optimizing the QoS of each processor and satisfying the servo-cycle, precision, and robustness requirements of the parallel-processed vision servo system. PMID:24174920
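
    The sketch below illustrates the general load-capacity-balancing idea in its simplest form: a per-worker load score is estimated from queue length and recent processing time, and work is shifted from the most to the least loaded worker. The load function and the imbalance threshold are assumptions, not the ALBPO formulation.

        # Illustrative sketch: one rebalancing pass driven by a crude load-capacity estimate.
        import numpy as np

        def rebalance(queues, recent_times, imbalance=1.5):
            """queues: list of per-worker task lists; recent_times: per-worker mean task time (s)."""
            lc = np.array([t * len(q) for t, q in zip(recent_times, queues)])
            hi, lo = int(np.argmax(lc)), int(np.argmin(lc))
            if queues[hi] and lc[hi] > imbalance * lc[lo]:
                queues[lo].append(queues[hi].pop())   # move one task to the least loaded worker
            return queues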

  17. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to a family of adaptive filter algorithms. The method uses prior knowledge of the statistics of the channel impulse response; accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU algorithms the filter coefficients are only partially updated, which reduces the computational complexity. In the VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms offer good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
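
    For reference, a standard affine projection update with a scalar step size is sketched below; the variable step-size algorithms in the paper replace the scalar mu with a step-size vector derived from the channel impulse response statistics, which is not reproduced here.

        # Illustrative sketch: standard affine projection algorithm (APA) update.
        import numpy as np

        def apa_update(w, X, d, mu=0.5, delta=1e-4):
            """w: (N,) filter; X: (N, K) last K input regressors as columns; d: (K,) desired outputs."""
            e = d - X.T @ w                                        # a priori error vector
            K = X.shape[1]
            w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(K), e)
            return w, e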

  18. An adaptive algorithm for simulation of stochastic reaction-diffusion processes

    SciTech Connect

    Ferm, Lars; Hellander, Andreas; Loetstedt, Per

    2010-01-20

    We propose an adaptive hybrid method suitable for stochastic simulation of diffusion-dominated reaction-diffusion processes. For such systems, simulation of the diffusion accounts for the predominant part of the computing time. In order to reduce the computational work, the diffusion in parts of the domain is treated macroscopically, in other parts with the tau-leap method, and in the remaining parts with Gillespie's stochastic simulation algorithm (SSA) as implemented in the next subvolume method (NSM). The chemical reactions are handled by SSA everywhere in the computational domain. A trajectory of the process is advanced in time by an operator splitting technique and the timesteps are chosen adaptively. The spatial adaptation is based on estimates of the errors in the tau-leap method and the macroscopic diffusion. The accuracy and efficiency of the method are demonstrated in examples from molecular biology where the domain is discretized by unstructured meshes.
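
    One ingredient of such hybrid schemes, the tau-leap step used in the intermediate regions, is sketched below for a single well-mixed subvolume; the operator splitting, the macroscopic diffusion solver, and the error-based region classification are not shown, and the stoichiometry/propensity inputs are placeholders.

        # Illustrative sketch: a single tau-leap step (each reaction channel fires a
        # Poisson number of times over the leap interval tau).
        import numpy as np

        def tau_leap_step(x, stoich, propensities, tau, seed=None):
            """x: (n_species,) copy numbers; stoich: (n_reactions, n_species) state-change vectors;
            propensities: callable mapping x to (n_reactions,) reaction rates."""
            rng = np.random.default_rng(seed)
            k = rng.poisson(propensities(x) * tau)   # firings per reaction channel
            return np.maximum(x + k @ stoich, 0)     # crude guard against negative copy numbers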

  19. Cluster-based spike detection algorithm adapts to interpatient and intrapatient variation in spike morphology.

    PubMed

    Nonclercq, Antoine; Foulon, Martine; Verheulpen, Denis; De Cock, Cathy; Buzatu, Marga; Mathys, Pierre; Van Bogaert, Patrick

    2012-09-30

    Visual quantification of interictal epileptiform activity is time consuming and requires a high level of expert vigilance. This is especially true for overnight recordings of patients suffering from epileptic encephalopathy with continuous spike and waves during slow-wave sleep (CSWS), as these can show tens of thousands of spikes. Automatic spike detection would be attractive for this condition, but available algorithms have methodological limitations related to variation in spike morphology both between patients and within a single recording. We propose a fully automated method of interictal spike detection that adapts to interpatient and intrapatient variation in spike morphology. The algorithm works in five steps. (1) Spikes are detected using parameters suitable for highly sensitive detection. (2) Detected spikes are separated into clusters. (3) The number of clusters is automatically adjusted. (4) Centroids are used as templates for more specific spike detection, thereby adapting to the types of spike morphology present. (5) Detected spikes are summed. The algorithm was evaluated on EEG samples from 20 children suffering from epilepsy with CSWS. When compared to the manual scoring of 3 EEG experts (3 records), the algorithm demonstrated similar performance, with sensitivity and selectivity 0.3% higher and 0.4% lower, respectively. The algorithm showed little difference from the manual scoring of another expert for the spike-and-wave index evaluation in 17 additional records (the mean absolute difference was 3.8%). This algorithm is therefore efficient for counting interictal spikes and determining a spike-and-wave index. PMID:22850558
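
    The adaptive-template idea in steps (1)-(4) can be illustrated in a few lines: candidate spikes found with a permissive threshold are clustered, and the cluster centroids are reused as templates for a more specific second pass. The sketch below is a simplified single-channel illustration; thresholds, window lengths, and the fixed number of clusters are assumptions rather than the published parameters.

        # Illustrative sketch: sensitive detection -> clustering -> template-based re-detection.
        import numpy as np
        from scipy.signal import find_peaks
        from sklearn.cluster import KMeans

        def adaptive_spike_detection(eeg, fs, win_ms=200, k=3, z_thresh=3.0, corr_thresh=0.8):
            half = int(win_ms / 2000 * fs)                 # half detection window in samples
            z = (eeg - eeg.mean()) / eeg.std()
            cand, _ = find_peaks(np.abs(z), height=z_thresh, distance=half)   # sensitive pass
            cand = cand[(cand > half) & (cand < len(eeg) - half)]
            if len(cand) < k:
                return cand
            segs = np.array([eeg[i - half:i + half] for i in cand])
            templates = KMeans(n_clusters=k, n_init=10).fit(segs).cluster_centers_
            keep = [i for i, s in zip(cand, segs)
                    if max(np.corrcoef(s, t)[0, 1] for t in templates) >= corr_thresh]
            return np.array(keep)                          # specific, template-matched detections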

  20. Reciprocal Grids: A Hierarchical Algorithm for Computing Solution X-ray Scattering Curves from Supramolecular Complexes at High Resolution.

    PubMed

    Ginsburg, Avi; Ben-Nun, Tal; Asor, Roi; Shemesh, Asaf; Ringel, Israel; Raviv, Uri

    2016-08-22

    In many biochemical processes, large biomolecular assemblies play important roles. X-ray scattering is a label-free bulk method that can probe the structure of large self-assembled complexes in solution. As we demonstrate in this paper, solution X-ray scattering can measure complex supramolecular assemblies at high sensitivity and resolution. At high resolution, however, data analysis of larger complexes is computationally demanding. We present an efficient method to compute the scattering curves from complex structures over a wide range of scattering angles. In our computational method, structures are defined as hierarchical trees in which repeating subunits are docked into their assembly symmetries, describing the manner in which subunits repeat in the structure (in other words, the locations and orientations of the repeating subunits). The amplitude of the assembly is calculated by computing the amplitudes of the basic subunits on 3D reciprocal-space grids, moving up in the hierarchy, calculating the grids of larger structures, and repeating this process for all the leaves and nodes of the tree. For very large structures, we developed a hybrid method that sums grids of smaller subunits in order to avoid numerical artifacts. We developed protocols for obtaining high-resolution solution X-ray scattering data from taxol-free microtubules over a wide range of scattering angles. We then validated our method by adequately modeling these high-resolution data. The higher speed and accuracy of our method, compared with existing methods, is demonstrated for smaller structures: a short microtubule and tobacco mosaic virus. Our algorithm may be integrated into various structure-prediction computational tools, simulations, and theoretical models, and provides a means of testing a predicted structural model by calculating its expected X-ray scattering curve and comparing it with experimental data. PMID:27410762
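
    The hierarchical reuse of subunit amplitudes rests on the reciprocal-space shift theorem: the amplitude of a translated copy equals the subunit amplitude multiplied by a phase factor exp(i q·r). The sketch below shows only this assembly step on a single reciprocal-space grid; subunit rotations, grid interpolation, and orientation averaging are omitted, and all inputs are placeholders.

        # Illustrative sketch: assemble the scattering amplitude of repeated subunits
        # on a reciprocal-space grid using translation phase factors.
        import numpy as np

        def assemble_amplitude(subunit_amp, q_grid, translations):
            """subunit_amp: (N,) complex amplitude on q_grid (N, 3); translations: (M, 3) copies."""
            total = np.zeros_like(subunit_amp, dtype=complex)
            for r in translations:
                total += subunit_amp * np.exp(1j * (q_grid @ r))   # shift theorem
            return total

        def intensity(total_amp):
            return np.abs(total_amp) ** 2    # orientation averaging not included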