Science.gov

Sample records for adaptively deformed mesh

  1. Adaptive radial basis function mesh deformation using data reduction

    NASA Astrophysics Data System (ADS)

    Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.

    2016-09-01

    Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time and 2) how to use/automate the explicit boundary correction while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method which ensures that the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting when needed. Compared to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient for the cases considered. Secondly, the analysis of a single high aspect ratio cell is used to formulate an equation for the required correction radius, depending on the characteristics of the correction function used, the maximum aspect ratio, the minimum first cell height and the boundary error. Based on this analysis, two new radial basis correction functions are derived and proposed. The proposed automated procedure is verified while varying the correction function, the Reynolds number (and thus the first cell height and aspect ratio) and the boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement, for both the CPU and the memory formulation, with a 2D oscillating and translating airfoil with an oscillating flap, a 3D flexible locally deforming tube and a deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBFs), but the parallel efficiency reduces due to the limited
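
    The record above describes the general recipe of RBF mesh deformation with greedy data reduction: interpolate the known boundary displacements with radial basis functions centred on a reduced set of control points, grow that set until the boundary error is small enough, and then evaluate the interpolant at the volume nodes. The sketch below is a minimal illustration of that generic recipe only, not the authors' adaptive re-selection algorithm or their correction functions; the Wendland C2 basis, the support radius, the tolerance and all names are assumptions made for the example.

    ```python
    # Minimal RBF mesh deformation with greedy control-point selection
    # (illustrative sketch; not the adaptive method of the paper).
    import numpy as np

    def rbf(r, radius):
        """Wendland C2 compactly supported radial basis function."""
        xi = np.clip(r / radius, 0.0, 1.0)
        return (1.0 - xi) ** 4 * (4.0 * xi + 1.0)

    def fit_rbf(ctrl_xyz, ctrl_disp, radius):
        """Solve Phi w = d for the RBF weights (one column per coordinate)."""
        r = np.linalg.norm(ctrl_xyz[:, None, :] - ctrl_xyz[None, :, :], axis=-1)
        return np.linalg.solve(rbf(r, radius), ctrl_disp)

    def eval_rbf(ctrl_xyz, weights, query_xyz, radius):
        """Evaluate the interpolated displacement at arbitrary points."""
        r = np.linalg.norm(query_xyz[:, None, :] - ctrl_xyz[None, :, :], axis=-1)
        return rbf(r, radius) @ weights

    def greedy_select(bnd_xyz, bnd_disp, radius, tol=1e-3, max_pts=200):
        """Grow the control set until the boundary error drops below tol."""
        sel = [int(np.argmax(np.linalg.norm(bnd_disp, axis=1)))]  # seed point
        while True:
            w = fit_rbf(bnd_xyz[sel], bnd_disp[sel], radius)
            err = np.linalg.norm(
                eval_rbf(bnd_xyz[sel], w, bnd_xyz, radius) - bnd_disp, axis=1)
            worst = int(np.argmax(err))
            if err[worst] < tol or len(sel) >= max_pts:
                return sel, w
            sel.append(worst)

    # Toy usage: move interior nodes given prescribed boundary displacements.
    rng = np.random.default_rng(0)
    bnd = rng.uniform(-1.0, 1.0, (400, 3))                     # boundary nodes
    disp = 0.1 * np.sin(np.pi * bnd[:, :1]) * np.ones((1, 3))  # their motion
    vol = rng.uniform(-1.0, 1.0, (5000, 3))                    # volume nodes
    sel, w = greedy_select(bnd, disp, radius=2.0)
    vol_new = vol + eval_rbf(bnd[sel], w, vol, radius=2.0)
    ```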

  2. Adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients

    PubMed Central

    Xia, Kelin; Zhan, Meng; Wan, Decheng; Wei, Guo-Wei

    2011-01-01

    Mesh deformation methods are a versatile strategy for solving partial differential equations (PDEs) with a vast variety of practical applications. However, these methods break down for elliptic PDEs with discontinuous coefficients, namely, elliptic interface problems. For this class of problems, additional interface jump conditions are required to maintain the well-posedness of the governing equation. Consequently, in order to achieve high accuracy and high order convergence, additional numerical algorithms are required to enforce the interface jump conditions in solving elliptic interface problems. The present work introduces an interface-technique-based adaptively deformed mesh strategy for resolving elliptic interface problems. We take advantage of the high accuracy, flexibility and robustness of the matched interface and boundary (MIB) method to construct an adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients. The proposed method generates deformed meshes in the physical domain and solves the transformed governing equations in the computational domain, which maintains regular Cartesian meshes. The mesh deformation is realized by a mesh transformation PDE, which controls the mesh redistribution by a source term. The source term consists of a monitor function, which builds in mesh contraction rules. Both interface geometry based deformed meshes and solution gradient based deformed meshes are constructed to reduce the L∞ and L2 errors in solving elliptic interface problems. The proposed adaptively deformed mesh based interface method is extensively validated by many numerical experiments. Numerical results indicate that the adaptively deformed mesh based interface method outperforms the original MIB method for dealing with elliptic interface problems. PMID:22586356
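
    The mesh-transformation idea above, in which a monitor function drives mesh contraction toward interfaces or steep solution gradients, can be illustrated in one dimension by simple equidistribution of a monitor function. The sketch below shows only that generic redistribution step, not the MIB-based interface treatment of the paper; the interface profile and the monitor definition are assumptions chosen for the example.

    ```python
    # 1D monitor-function equidistribution: redistribute mesh nodes so the
    # integral of the monitor is equal between neighbouring nodes, which
    # contracts the mesh where the monitor is large (generic sketch only).
    import numpy as np

    def equidistribute(xi_uniform, monitor, n_quad=2001):
        """Map a uniform computational mesh xi in [0,1] to a physical mesh."""
        xf = np.linspace(0.0, 1.0, n_quad)            # fine quadrature grid
        m = monitor(xf)
        mass = np.concatenate(
            ([0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(xf))))
        mass /= mass[-1]                              # normalised "mass" coordinate
        return np.interp(xi_uniform, mass, xf)        # invert xi = mass(x)

    # Example: concentrate nodes near a sharp interface at x = 0.5, using a
    # monitor that is large where a tanh-like profile varies rapidly.
    monitor = lambda x: np.sqrt(1.0 + 200.0 / np.cosh(40.0 * (x - 0.5)) ** 2)
    xi = np.linspace(0.0, 1.0, 41)                    # uniform computational mesh
    x_phys = equidistribute(xi, monitor)              # deformed physical mesh
    print(np.diff(x_phys).min(), np.diff(x_phys).max())
    ```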

  3. Output-based mesh adaptation for high order Navier-Stokes simulations on deformable domains

    NASA Astrophysics Data System (ADS)

    Kast, Steven M.; Fidkowski, Krzysztof J.

    2013-11-01

    We present an output-based mesh adaptation strategy for Navier-Stokes simulations on deforming domains. The equations are solved with an arbitrary Lagrangian-Eulerian (ALE) approach, using a discontinuous Galerkin finite-element discretization in both space and time. Discrete unsteady adjoint solutions, derived for both the state and the geometric conservation law, provide output error estimates and drive adaptation of the space-time mesh. Spatial adaptation consists of dynamic order increment or decrement on a fixed tessellation of the domain, while a combination of coarsening and refinement is used to provide an efficient time step distribution. Results from compressible Navier-Stokes simulations in both two and three dimensions demonstrate the accuracy and efficiency of the proposed approach. In particular, the method is shown to outperform other common adaptation strategies, which, while sometimes adequate for static problems, struggle in the presence of mesh motion.

  4. Unstructured mesh generation and adaptivity

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1995-01-01

    An overview of current unstructured mesh generation and adaptivity techniques is given. Basic building blocks taken from the field of computational geometry are first described. Various practical mesh generation techniques based on these algorithms are then constructed and illustrated with examples. Issues of adaptive meshing and stretched mesh generation for anisotropic problems are treated in subsequent sections. The presentation is organized in an educational manner, for readers familiar with computational fluid dynamics who wish to learn more about current unstructured mesh techniques.

  5. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; Wissink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the

  6. Mesh deformation based on artificial neural networks

    NASA Astrophysics Data System (ADS)

    Stadler, Domen; Kosel, Franc; Čelič, Damjan; Lipej, Andrej

    2011-09-01

    In this article a new mesh deformation algorithm based on artificial neural networks is introduced. This method is a point-to-point method, meaning that it does not use connectivity information for the calculation of the mesh deformation. Two already known point-to-point methods, based on interpolation techniques, are also presented. In contrast to the two known interpolation methods, the new method does not require a summation over all boundary nodes for each displacement calculation. The consequence of this fact is a shorter computational time for the mesh deformation, which is demonstrated by different deformation tests. The quality of the meshes deformed with all three methods was also compared. Finally, the generated and the deformed three-dimensional meshes were used in a computational fluid dynamics analysis of a Francis water turbine. A comparison of the analysis results was made to prove the applicability of the new method in everyday computation.
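
    As a toy illustration of the point-to-point idea, in which only node coordinates and known boundary displacements are used and no mesh connectivity is needed, the sketch below fits a small multilayer perceptron (scikit-learn's MLPRegressor, standing in for the network of the paper) that maps coordinates to displacements on the moving and fixed boundaries, and then evaluates it at the interior nodes. The geometry, network size and training settings are assumptions made for the example.

    ```python
    # Point-to-point mesh deformation sketch: learn coordinates -> displacement
    # from boundary nodes only, then apply it to interior nodes (no connectivity).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)

    # Moving inner boundary (unit circle) and fixed outer boundary (radius 3)
    t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
    inner = np.c_[np.cos(t), np.sin(t)]
    outer = 3.0 * np.c_[np.cos(t), np.sin(t)]
    inner_disp = 0.1 * np.c_[np.cos(2.0 * t), np.sin(2.0 * t)]  # prescribed motion
    outer_disp = np.zeros_like(outer)                           # far field fixed

    # Interior nodes scattered in the annulus between the two boundaries
    r = rng.uniform(1.0, 3.0, 3000)
    p = rng.uniform(0.0, 2.0 * np.pi, 3000)
    interior = np.c_[r * np.cos(p), r * np.sin(p)]

    # Fit the displacement field on the boundaries, predict it everywhere else
    net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                       max_iter=5000, random_state=0)
    net.fit(np.vstack([inner, outer]), np.vstack([inner_disp, outer_disp]))
    interior_new = interior + net.predict(interior)
    ```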

  7. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
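
    A stripped-down picture of the block hierarchy that a package like PARAMESH manages might look like the 2D sketch below: every block carries its own small logically Cartesian mesh, and refining a leaf block creates four children one level finer (a quadtree; an oct-tree in 3D). This is a hypothetical, simplified structure for illustration only, not the PARAMESH Fortran 90 interface.

    ```python
    # Simplified quadtree of mesh blocks, each owning a small Cartesian grid.
    import numpy as np

    class Block:
        def __init__(self, xlo, xhi, ylo, yhi, level=0, nx=8, ny=8):
            self.bounds = (xlo, xhi, ylo, yhi)
            self.level = level
            self.children = []                     # empty list -> leaf block
            self.data = np.zeros((nx, ny))         # per-block Cartesian mesh

        def refine(self):
            """Split a leaf block into 2x2 child blocks, one level finer."""
            xlo, xhi, ylo, yhi = self.bounds
            xm, ym = 0.5 * (xlo + xhi), 0.5 * (ylo + yhi)
            self.children = [Block(xlo, xm, ylo, ym, self.level + 1),
                             Block(xm, xhi, ylo, ym, self.level + 1),
                             Block(xlo, xm, ym, yhi, self.level + 1),
                             Block(xm, xhi, ym, yhi, self.level + 1)]

        def leaves(self):
            """Yield the leaf blocks, i.e. the active computational mesh."""
            if not self.children:
                yield self
            else:
                for child in self.children:
                    yield from child.leaves()

    # Usage: refine wherever some (toy) indicator flags a block.
    root = Block(0.0, 1.0, 0.0, 1.0)
    root.refine()
    for blk in list(root.leaves()):
        if blk.bounds[0] < 0.5 and blk.bounds[2] < 0.5:   # toy criterion
            blk.refine()
    print(sorted(b.level for b in root.leaves()))
    ```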

  8. Adaptive Mesh Refinement in CTH

    SciTech Connect

    Crawford, David

    1999-05-04

    This paper reports progress on implementing a new adaptive mesh refinement capability in the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three improvement in memory and performance over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.

  9. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

    Numerical solution of partial differential equations requires appropriate meshes, efficient solvers and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to efficiently build simplicial meshes. First, the advancing front technique (AFT) is combined with the incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an unattainable CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology. It significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize the computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge

  10. An Adaptive Mesh Algorithm: Mesh Structure and Generation

    SciTech Connect

    Scannapieco, Anthony J.

    2016-06-21

    The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. An additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists, at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is modally

  11. Adaptive mesh refinement in titanium

    SciTech Connect

    Colella, Phillip; Wen, Tong

    2005-01-21

    In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a production-level software package that applies the Adaptive Mesh Refinement methodology to numerical partial differential equations. In Chombo, a library approach to parallel programming is used (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation on two grid configurations from a real application. Also provided are the counts of lines of code from both sides.

  12. Model-Based Nonrigid Motion Analysis Using Natural Feature Adaptive Mesh

    SciTech Connect

    Zhang, Y.; Goldgof, D.B.; Sarkar, S.; Tsap, L.V.

    2000-04-25

    The success of nonrigid motion analysis using a physical finite element model depends on the mesh that characterizes the object's geometric structure. We suggest a deformable mesh adapted to the natural features of images. The adaptive mesh requires far fewer nodes than the fixed mesh used in our previous work. We demonstrate the higher efficiency of the adaptive mesh in the context of estimating burn scar elasticity relative to normal skin elasticity using an observed 2D image sequence. Our results show that the scar assessment method based on the physical model using a natural feature adaptive mesh can be applied to images which do not have artificial markers.

  13. Floating shock fitting via Lagrangian adaptive meshes

    NASA Technical Reports Server (NTRS)

    Van Rosendale, John

    1995-01-01

    In recent work we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM), is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence.

  14. Hybrid Surface Mesh Adaptation for Climate Modeling

    SciTech Connect

    Ahmed Khamayseh; Valmor de Almeida; Glen Hansen

    2008-10-01

    Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called “mesh motion” (r adaptation): the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.

  15. Adaptive Mesh Refinement for Microelectronic Device Design

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Lou, John; Norton, Charles

    1999-01-01

    Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchical elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smooth out and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of
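
    The selective h-refinement loop described in this abstract, estimating a per-element error and refining only where that estimate is large, can be written down in one dimension as follows. The indicator used here (jumps of the discrete gradient) and the refinement fraction are placeholder assumptions; as the abstract notes, production codes rely on rigorous a posteriori error estimates.

    ```python
    # Minimal 1D h-refinement driver: flag and bisect elements whose error
    # indicator exceeds a fraction of the largest indicator (sketch only).
    import numpy as np

    def solve(nodes):
        """Stand-in for a field solve: sample a function with a sharp layer."""
        return np.tanh(50.0 * (nodes - 0.6))

    def indicator(nodes, u):
        """Per-element indicator from jumps of the discrete gradient."""
        grad = np.diff(u) / np.diff(nodes)            # one value per element
        jump = np.abs(np.diff(grad))                  # one value per interior node
        eta = np.zeros(len(nodes) - 1)
        eta[:-1] += jump                              # add jump to left element
        eta[1:] += jump                               # ... and to right element
        return eta

    nodes = np.linspace(0.0, 1.0, 11)
    for _ in range(6):                                # a few adaptation cycles
        u = solve(nodes)
        eta = indicator(nodes, u)
        flagged = eta > 0.3 * eta.max()               # refine the worst elements
        mids = 0.5 * (nodes[:-1] + nodes[1:])[flagged]
        nodes = np.sort(np.concatenate([nodes, mids]))
    print(len(nodes), np.diff(nodes).min())
    ```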

  16. Discrete Surface Evolution and Mesh Deformation for Aircraft Icing Applications

    NASA Technical Reports Server (NTRS)

    Thompson, David; Tong, Xiaoling; Arnoldus, Qiuhan; Collins, Eric; McLaurin, David; Luke, Edward; Bidwell, Colin S.

    2013-01-01

    Robust, automated mesh generation for problems with deforming geometries, such as ice accreting on aerodynamic surfaces, remains a challenging problem. Here we describe a technique to deform a discrete surface as it evolves due to the accretion of ice. The surface evolution algorithm is based on a smoothed, face-offsetting approach. We also describe a fast algebraic technique to propagate the computed surface deformations into the surrounding volume mesh while maintaining geometric mesh quality. Preliminary results presented here demonstrate the efficacy of the approach for a sphere with a prescribed accretion rate, a rime ice accretion, and a more complex glaze ice accretion.

  17. Grid adaption using Chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.

  18. Grid adaptation using chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1994-01-01

    The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.

  19. Grid adaptation using Chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.

  20. Automated hexahedral meshing of anatomic structures using deformable registration.

    PubMed

    Grosland, Nicole M; Bafna, Ritesh; Magnotta, Vincent A

    2009-02-01

    This work introduces a novel method of automating the process of patient-specific finite element (FE) model development using a mapped mesh technique. The objective is to map a predefined mesh (template) of high quality directly onto a new bony surface (target) definition, thereby yielding a similar mesh with minimal user interaction. To bring the template mesh into correspondence with the target surface, a deformable registration technique based on the FE method has been adopted. The procedure has been made hierarchical, allowing several levels of mesh refinement to be used and thus reducing the time required to achieve a solution. Our initial efforts have focused on the phalanx bones of the human hand. Mesh quality metrics, such as element volume and distortion, were evaluated. Furthermore, the distance between the target surface and the final mapped mesh was measured. The results satisfactorily demonstrate the applicability of the proposed method.

  1. A mesh density study for application to large deformation rolling process evaluations

    SciTech Connect

    Martin, J.A.

    1997-12-01

    When addressing large deformation through an elastic-plastic analysis, the mesh density is paramount in determining the accuracy of the solution. However, given the nonlinear nature of the problem, a highly refined mesh will generally require a prohibitive amount of computer resources. This paper addresses finite element mesh optimization studies, considering accuracy of results and computer resource needs, as applied to large deformation rolling processes. In particular, the simulation of the thread rolling manufacturing process is considered using the MARC software package and a Cray C90 supercomputer. The effects of both mesh density and adaptive meshing on the final results are evaluated for both indentation of a rigid body to a specified depth and contact rolling along a predetermined length.

  2. An Adaptive Mesh Algorithm: Mapping the Mesh Variables

    SciTech Connect

    Scannapieco, Anthony J.

    2016-07-25

    Both thermodynamic and kinematic variables must be mapped. The kinematic variables are defined on a separate kinematic mesh; it is the dual mesh to the thermodynamic mesh. The map of the kinematic variables is done by calculating the contributions of kinematic variables on the old thermodynamic mesh, mapping the kinematic variable contributions onto the new thermodynamic mesh and then synthesizing the mapped kinematic variables on the new kinematic mesh. In this document the map of the thermodynamic variables will be described.

  3. Mesh saliency with adaptive local patches

    NASA Astrophysics Data System (ADS)

    Nouri, Anass; Charrier, Christophe; Lézoray, Olivier

    2015-03-01

    3D object shapes (represented by meshes) include both areas that attract the visual attention of human observers and areas that are less attractive or not attractive at all. This visual attention depends on the degree of saliency exhibited by these areas. In this paper, we propose a technique for detecting salient regions in meshes. To do so, we define a local surface descriptor based on local patches of adaptive size filled with a local height field. The saliency of a mesh vertex is then defined as its degree measure, with edge weights computed from adaptive patch similarities. Our approach is compared to the state-of-the-art and presents competitive results. A study evaluating the influence of the parameters of this approach is also carried out. The strength and the stability of our approach with respect to noise and simplification are also studied.
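
    Taking the abstract's definition literally, saliency as a vertex degree measure with edge weights derived from patch similarities, a generic sketch looks like the following. The per-vertex descriptor here is a random placeholder rather than the adaptive local height-field patches of the paper, and the Gaussian similarity kernel is an assumption.

    ```python
    # Degree-measure saliency sketch: each vertex accumulates the similarity
    # weights of its incident edges (placeholder descriptors, generic kernel).
    import numpy as np

    def vertex_saliency(descriptors, edges, sigma=1.0):
        """descriptors: (n_vertices, d) array; edges: iterable of (i, j) pairs."""
        saliency = np.zeros(descriptors.shape[0])
        for i, j in edges:
            d2 = np.sum((descriptors[i] - descriptors[j]) ** 2)
            w = np.exp(-d2 / (2.0 * sigma ** 2))      # patch-similarity weight
            saliency[i] += w
            saliency[j] += w
        return saliency

    # Toy usage on a small ring-shaped mesh graph with random descriptors.
    rng = np.random.default_rng(2)
    desc = rng.normal(size=(10, 16))                  # placeholder descriptors
    ring_edges = [(k, (k + 1) % 10) for k in range(10)]
    print(vertex_saliency(desc, ring_edges))
    ```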

  4. A Simplified Mesh Deformation Method Using Commercial Structural Analysis Software

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Chang, Chau-Lyan; Samareh, Jamshid

    2004-01-01

    Mesh deformation in response to redefined or moving aerodynamic surface geometries is a frequently encountered task in many applications. Most existing methods are either mathematically too complex or computationally too expensive for use in practical design and optimization. We propose a simplified mesh deformation method based on linear elastic finite element analyses that can be easily implemented using commercially available structural analysis software. Using a prescribed displacement at the mesh boundaries, a simple structural analysis is constructed based on a spatially varying Young's modulus to move the entire mesh in accordance with the surface geometry redefinitions. A variety of surface movements, such as translation, rotation, or the incremental surface reshaping that often takes place in an optimization procedure, may be handled by the present method. We describe the numerical formulation and implementation using the NASTRAN software in this paper. The use of commercial software bypasses tedious reimplementation and takes advantage of the computational efficiency offered by the vendor. A two-dimensional airfoil mesh and a three-dimensional aircraft mesh were used as test cases to demonstrate the effectiveness of the proposed method. Euler and Navier-Stokes calculations were performed for the deformed two-dimensional meshes.
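
    The core idea, a linear-elasticity solve with a spatially varying Young's modulus so that cells next to the moving surface deform less, reduces in one dimension to a chain of springs whose stiffness grows toward the wall. The sketch below is that 1D analogue only, with made-up stiffness and displacement values; it is not the NASTRAN-based procedure of the paper.

    ```python
    # 1D variable-stiffness (pseudo-elastic) mesh deformation: stiff elements
    # near the moving wall preserve the near-wall spacing, while softer
    # elements farther away absorb most of the motion (illustrative sketch).
    import numpy as np

    x = np.linspace(0.0, 1.0, 21)        # 1D mesh; node 0 is the moving wall
    h = np.diff(x)                       # element sizes
    E = 1.0 / (x[:-1] + 0.05)            # "Young's modulus": stiffer near wall
    k = E / h                            # element stiffnesses
    d = 0.15                             # prescribed wall displacement, u(1) = 0

    # Assemble the interior system K u = f (element e connects nodes e, e+1).
    n_int = len(x) - 2
    K = np.zeros((n_int, n_int))
    f = np.zeros(n_int)
    for e in range(len(h)):
        for a, b in [(e, e), (e, e + 1), (e + 1, e), (e + 1, e + 1)]:
            if 1 <= a <= n_int and 1 <= b <= n_int:
                K[a - 1, b - 1] += k[e] if a == b else -k[e]
        if e == 0:                       # known wall displacement goes to the RHS
            f[0] += k[0] * d

    u = np.zeros(len(x))
    u[0] = d
    u[1:-1] = np.linalg.solve(K, f)
    x_new = x + u
    print(np.diff(x_new)[:3], np.diff(x_new)[-3:])   # near-wall spacing preserved
    ```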

  5. Unstructured Adaptive Meshes: Bad for Your Memory?

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob

    2003-01-01

    This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error sources benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.

  6. Multigrid solution strategies for adaptive meshing problems

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1995-01-01

    This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.

  7. Cartesian-cell based grid generation and adaptive mesh refinement

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1993-01-01

    Viewgraphs on Cartesian-cell based grid generation and adaptive mesh refinement are presented. Topics covered include: grid generation; cell cutting; data structures; flow solver formulation; adaptive mesh refinement; and viscous flow.

  8. GRChombo: Numerical relativity with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran

    2015-12-01

    In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block structured Berger-Rigoutsos grid generation. The code supports non-trivial ‘many-boxes-in-many-boxes’ mesh hierarchies and massive parallelism through the message passing interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.

  9. Floating shock fitting via Lagrangian adaptive meshes

    NASA Technical Reports Server (NTRS)

    Van Rosendale, John

    1994-01-01

    In recent work we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM), is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence. Shock-capturing algorithms like this, which warp the mesh to yield shock-fitted accuracy, are new and relatively untried. However, their potential is clear. In the context of sonic booms, accurate calculation of near-field sonic boom signatures is critical to the design of the High Speed Civil Transport (HSCT). SLAM should allow computation of accurate N-wave pressure signatures on comparatively coarse meshes, significantly enhancing our ability to design low-boom configurations for high-speed aircraft.

  10. Details of tetrahedral anisotropic mesh adaptation

    NASA Astrophysics Data System (ADS)

    Jensen, Kristian Ejlebjerg; Gorman, Gerard

    2016-04-01

    We have implemented tetrahedral anisotropic mesh adaptation using the local operations of coarsening, swapping, refinement and smoothing in MATLAB without the use of any for-loops, i.e. the script is fully vectorised. In the process of doing so, we have made three observations related to details of the implementation: 1. restricting refinement to a single edge split per element not only simplifies the code, it also improves mesh quality; 2. face-to-edge swapping is unnecessary; and 3. optimising for the Vassilevski functional tends to give a slightly higher value for the mean condition number functional than optimising for the condition number functional directly. These observations have been made for a uniform and a radial shock metric field, both starting from a structured mesh in a cube. Finally, we compare two coarsening techniques and demonstrate the importance of applying smoothing in the mesh adaptation loop. The results pertain to a unit cube geometry, but we also show the effect of corners and edges by applying the implementation in a spherical geometry.

  11. Full Core Multiphysics Simulation with Offline Mesh Deformation

    SciTech Connect

    Merzari, E.; Shemon, E. R.; Yu, Y.; Thomas, J. W.; Obabko, A.; Jain, Rajeev; Mahadevan, Vijay; Solberg, Jerome; Ferencz, R.; Whitesides, R.

    2015-12-21

    In this report, building on previous reports issued in FY13, we describe our continued efforts to integrate thermal/hydraulics, neutronics, and structural mechanics modeling codes to perform coupled analysis of a representative sodium-cooled fast reactor core. The focus of the present report is a full core simulation with off-line mesh deformation.

  12. LES on unstructured deforming meshes: Towards reciprocating IC engines

    NASA Technical Reports Server (NTRS)

    Haworth, D. C.; Jansen, K.

    1996-01-01

    A variable explicit/implicit characteristics-based advection scheme that is second-order accurate in space and time has been developed recently for unstructured deforming meshes (O'Rourke & Sahota 1996a). To explore the suitability of this methodology for Large-Eddy Simulation (LES), three subgrid-scale turbulence models have been implemented in the CHAD CFD code (O'Rourke & Sahota 1996b): a constant-coefficient Smagorinsky model, a dynamic Smagorinsky model for flows having one or more directions of statistical homogeneity, and a Lagrangian dynamic Smagorinsky model for flows having no spatial or temporal homogeneity (Meneveau et al. 1996). Computations have been made for three canonical flows, progressing towards the intended application of in-cylinder flow in a reciprocating engine. Grid sizes were selected to be comparable to the coarsest meshes used in earlier spectral LES studies. Quantitative results are reported for decaying homogeneous isotropic turbulence, and for a planar channel flow. Computations are compared to experimental measurements, to Direct-Numerical Simulation (DNS) data, and to Rapid-Distortion Theory (RDT) where appropriate. Generally satisfactory evolution of first and second moments is found on these coarse meshes; deviations are attributed to insufficient mesh resolution. Issues include mesh resolution and computational requirements for a specified level of accuracy, analytic characterization of the filtering implied by the numerical method, wall treatment, and inflow boundary conditions. To resolve these issues, finer-mesh simulations and computations of a simplified axisymmetric reciprocating piston-cylinder assembly are in progress.

  13. On Adaptive Mesh Generation in Two-Dimensions

    SciTech Connect

    D'Azevedo, E.

    1999-10-11

    This work considers the effectiveness of using an anisotropic coordinate transformation in adaptive mesh generation. The anisotropic coordinate transformation is derived by interpreting the Hessian matrix of the data function as a metric tensor that measures the local approximation error. The Hessian matrix contains information about the local curvature of the surface and gives guidance on the aspect ratio and orientation for mesh generation. Since, theoretically, an asymptotically optimally efficient mesh can be produced by transforming a regular mesh of optimally shaped elements, it would be interesting to compare this approach with existing techniques for solution-adaptive meshes. PLTMG, a general elliptic solver, is used to generate solution-adapted triangular meshes for comparison. The solver has the capability of performing a posteriori error estimation, longest-edge refinement, vertex unrefinement and mesh smoothing. Numerical experiments on three simple problems suggest the methodology employed in PLTMG is effective in generating near-optimally efficient meshes.
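
    The central construction, reading the (symmetrised, absolute-valued) Hessian as a metric tensor that prescribes local element size, aspect ratio and orientation, can be sketched as follows. The error scaling and the eigenvalue floor are assumptions; adaptive codes normalise the metric in different ways.

    ```python
    # Build an anisotropic mesh metric from a local Hessian and use it to
    # measure edge lengths and read off desired spacings (sketch only).
    import numpy as np

    def hessian_metric(H, eps=1e-6, err=0.01):
        """Metric from |eigenvalues| of the symmetrised Hessian, scaled by a
        target error; the scaling convention varies between codes."""
        lam, R = np.linalg.eigh(0.5 * (H + H.T))
        lam = np.maximum(np.abs(lam), eps)        # positive definite, bounded
        return R @ np.diag(lam / err) @ R.T

    def metric_length(M, edge):
        """Length of an edge vector measured in the metric M."""
        return float(np.sqrt(edge @ M @ edge))

    # Example: a function curved strongly in x and weakly in y.
    H = np.array([[200.0, 0.0],
                  [0.0,     2.0]])
    M = hessian_metric(H)
    lam = np.linalg.eigvalsh(M)
    print("desired spacings 1/sqrt(lambda):", 1.0 / np.sqrt(lam))
    print("metric length of edge (0.1, 0.1):", metric_length(M, np.array([0.1, 0.1])))
    ```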

  14. An Efficient Radial Basis Function Mesh Deformation Scheme within an Adjoint-Based Aerodynamic Optimization Framework

    NASA Astrophysics Data System (ADS)

    Poirier, Vincent

    Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method; however, this comes at the cost of introducing errors into the parameterization by not recovering the exact displacement of all surface points. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining the lift of a wing-body configured Boeing-747 and an Onera-M6 wing. As well, an inverse pressure design is executed on the Onera-M6 wing and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.

  15. Adaptive Mesh Refinement for ICF Calculations

    NASA Astrophysics Data System (ADS)

    Fyfe, David

    2005-10-01

    This paper describes our use of the package PARAMESH to create an Adaptive Mesh Refinement (AMR) version of NRL's FASTRAD3D code. PARAMESH was designed to create an MPI-based AMR code from a block-structured serial code such as FASTRAD3D. FASTRAD3D is a compressible hydrodynamics code containing the physical effects relevant to the simulation of high-temperature plasmas, including inertial confinement fusion (ICF) Rayleigh-Taylor unstable direct drive laser targets. These effects include inverse bremsstrahlung laser energy absorption, classical flux-limited Spitzer thermal conduction, real (table look-up) equation-of-state with either separate or identical electron and ion temperatures, multi-group variable Eddington radiation transport, and multi-group alpha particle transport and thermonuclear burn. Numerically, this physics requires an elliptic solver and a ray tracing approach on the AMR grid, which is the main subject of this paper. A sample ICF calculation will be presented. MacNeice et al., "PARAMESH: A parallel adaptive mesh refinement community tool," Computer Physics Communications, 126 (2000), pp. 330-354.

  16. Current sheets, reconnection and adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Marliani, Christiane

    1998-11-01

    Adaptive structured mesh refinement methods have proved to be an appropriate tool for the numerical study of a variety of problems where largely separated length scales are involved, e.g. [R. Grauer, C. Marliani, K. Germaschewski, PRL, 80, 4177, (1998)]. A typical example in plasma physics are the current sheets in magnetohydrodynamic flows. Their dynamics is investigated in the framework of incompressible MHD. We present simulations of the ideal and inviscid dynamics in two and three dimensions. In addition, we show numerical simulations for the resistive case in two dimensions. Specifically, we show simulations for the case of the doubly periodic coalescence instability. At the onset of the reconnection process the kinetic energy rises and drops rapidly and afterwards settles into an oscillatory phase. The timescale of the magnetic reconnection process is not affected by these fast events but consistent with the Sweet-Parker model of stationary reconnection. Taking into account the electron inertia terms in the generalized Ohm's law the electron skin depth is introduced as an additional parameter. The modified equations allow for magnetic reconnection in the collisionless regime. Current density and vorticity concentrate in extremely long and thin sheets. Their dynamics becomes numerically accessible by means of adaptive mesh refinement.

  17. Adaptive Meshing Techniques for Viscous Flow Calculations on Mixed Element Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1997-01-01

    An adaptive refinement strategy based on hierarchical element subdivision is formulated and implemented for meshes containing arbitrary mixtures of tetrahedra, hexahedra, prisms and pyramids. Special attention is given to keeping memory overheads as low as possible. This procedure is coupled with an algebraic multigrid flow solver which operates on mixed-element meshes. Inviscid as well as viscous flows are computed on adaptively refined tetrahedral, hexahedral, and hybrid meshes. The efficiency of the method is demonstrated by generating an adapted hexahedral mesh containing 3 million vertices on a relatively inexpensive workstation.

  18. Visualization of Scalar Adaptive Mesh Refinement Data

    SciTech Connect

    VACET; Weber, Gunther H.; Beckner, Vince E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes

    2007-12-06

    Adaptive Mesh Refinement (AMR) is a highly effective computation method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations, which must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR grids as a first class data type and AMR code teams use custom built applications for AMR visualization. The Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, which is an open source visualization tool that accommodates AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR scalar data visualization research.

  19. Elliptic Solvers for Adaptive Mesh Refinement Grids

    SciTech Connect

    Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.

    1999-06-03

    We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and other more AMR specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.

  20. Tetrahedral and Hexahedral Mesh Adaptation for CFD Problems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents two unstructured mesh adaptation schemes for problems in computational fluid dynamics. The procedures allow localized grid refinement and coarsening to efficiently capture aerodynamic flow features of interest. The first procedure is for purely tetrahedral grids; unfortunately, repeated anisotropic adaptation may significantly deteriorate the quality of the mesh. Hexahedral elements, on the other hand, can be subdivided anisotropically without mesh quality problems. Furthermore, hexahedral meshes yield more accurate solutions than their tetrahedral counterparts for the same number of edges. Both the tetrahedral and hexahedral mesh adaptation procedures use edge-based data structures that facilitate efficient subdivision by allowing individual edges to be marked for refinement or coarsening. However, for hexahedral adaptation, pyramids, prisms, and tetrahedra are used as buffer elements between refined and unrefined regions to eliminate hanging vertices. Computational results indicate that the hexahedral adaptation procedure is a viable alternative to adaptive tetrahedral schemes.

  1. Carpet: Adaptive Mesh Refinement for the Cactus Framework

    NASA Astrophysics Data System (ADS)

    Schnetter, Erik; Hawley, Scott; Hawke, Ian

    2016-11-01

    Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as a driver layer providing adaptive mesh refinement, multi-patch capability, parallelization, and efficient I/O.

  2. A novel three-dimensional mesh deformation method based on sphere relaxation

    SciTech Connect

    Zhou, Xuan; Li, Shuixiang

    2015-10-01

    In our previous work (2013) [19], we developed a disk-relaxation-based method for two-dimensional mesh deformation. In this paper, the idea of the disk relaxation is extended to the sphere relaxation for three-dimensional meshes with large deformations. We develop a node-based pre-displacement procedure to apply initial movements to nodes according to their layer indices. Afterwards, the nodes are moved locally by the improved sphere relaxation algorithm to transfer boundary deformations and increase the mesh quality. A three-dimensional mesh smoothing method is also adopted to prevent the occurrence of negative element volumes and further improve the mesh quality. Numerical applications in three dimensions, including wing rotation, a bending beam and a morphing aircraft, are carried out. The results demonstrate that the sphere relaxation based approach generates deformed meshes with high quality, especially regarding complex boundaries and large deformations.

  3. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  4. SU-E-J-87: Lung Deformable Image Registration Using Surface Mesh Deformation for Dose Distribution Combination

    SciTech Connect

    Labine, A; Carrier, J; Bedwani, S; Chav, R; DeGuise, J

    2014-06-01

    Purpose: To allow a reliable deformable image registration (DIR) method for dose calculation in radiation therapy. This work proposes a performance assessment of a morphological segmentation algorithm that generates a deformation field from lung surface displacements with 4DCT datasets. Methods: From the 4DCT scans of 15 selected patients, the deep exhale phase of the breathing cycle is identified as the reference scan. The Varian Eclipse™ TPS is used to draw lung contours, which are given as input to the morphological segmentation algorithm. Voxelized contours are smoothed by a Gaussian filter and then transformed into a surface mesh representation. This mesh is adapted by rigid and elastic deformations to match each subsequent lung volume. The segmentation efficiency is assessed by comparing the segmented lung contour and the TPS contour considering two volume metrics, defined as Volumetric Overlap Error (VOE) [%] and Relative Volume Difference (RVD) [%], and three surface metrics, defined as Average Symmetric Surface Distance (ASSD) [mm], Root Mean Square Symmetric Surface Distance (RMSSD) [mm] and Maximum Symmetric Surface Distance (MSSD) [mm]. Then, the surface deformation between two breathing phases is determined by the displacement of corresponding vertices in each deformed surface. The lung surface deformation is linearly propagated in the lung volume to generate 3D deformation fields for each breathing phase. Results: The metrics were averaged over the 15 patients and calculated with the same segmentation parameters. The volume metrics obtained are a VOE of 5.2% and a RVD of 2.6%. The surface metrics computed are an ASSD of 0.5 mm, a RMSSD of 0.8 mm and a MSSD of 6.9 mm. Conclusion: This study shows that the morphological segmentation algorithm can provide an automatic method to capture organ motion from 4DCT scans and translate it into a volume deformation grid needed by the DIR method for dose distribution combination.

  5. Adaptive mesh fluid simulations on GPU

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Abel, Tom; Kaehler, Ralf

    2010-10-01

    We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally onto this architecture. Using the method of lines approach with the second order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve an overall speedup of approximately 10 times on one graphics card as compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. This is shown directly by an implementation of a magneto-hydrodynamic solver and a comparison of its performance to the pure hydrodynamic case. Finally, we also combined our CUDA parallel scheme with MPI to make the code run on GPU clusters. Close to ideal speedup is observed on up to four GPUs.
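
    The numerical ingredients named above (method of lines, second-order TVD Runge-Kutta time stepping, piecewise linear reconstruction and an HLL-type Riemann solver) can be assembled on a CPU for the 1D Burgers equation as in the sketch below. This is only a serial illustration of those building blocks, not the CUDA/AMR hydrodynamics solver of the paper; the model equation, grid size and CFL number are assumptions.

    ```python
    # 1D finite-volume Burgers solver: minmod-limited linear reconstruction,
    # HLL flux, SSP-RK2 time stepping, periodic boundaries (sketch only).
    import numpy as np

    def minmod(a, b):
        return np.where(a * b > 0.0,
                        np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def hll_flux(uL, uR):
        """HLL flux for f(u) = u^2/2 with simple wave-speed bounds."""
        fL, fR = 0.5 * uL ** 2, 0.5 * uR ** 2
        sL, sR = np.minimum(uL, uR), np.maximum(uL, uR)
        den = np.where(sR - sL == 0.0, 1.0, sR - sL)
        fhll = (sR * fL - sL * fR + sL * sR * (uR - uL)) / den
        return np.where(sL >= 0.0, fL, np.where(sR <= 0.0, fR, fhll))

    def rhs(u, dx):
        """Semi-discrete residual (method of lines) on a periodic grid."""
        up = np.pad(u, 2, mode="wrap")                  # two ghost cells per side
        slope = minmod(up[2:] - up[1:-1], up[1:-1] - up[:-2])
        uL = (up[1:-1] + 0.5 * slope)[:-1]              # left state at interfaces
        uR = (up[1:-1] - 0.5 * slope)[1:]               # right state at interfaces
        F = hll_flux(uL, uR)
        return -(F[1:] - F[:-1]) / dx

    N = 400
    dx = 1.0 / N
    x = (np.arange(N) + 0.5) * dx
    u = 1.5 + np.sin(2.0 * np.pi * x)                   # smooth data that steepens
    t, cfl = 0.0, 0.4
    while t < 0.3:
        dt = cfl * dx / np.max(np.abs(u))
        u1 = u + dt * rhs(u, dx)                        # SSP-RK2 stage 1
        u = 0.5 * (u + u1 + dt * rhs(u1, dx))           # SSP-RK2 stage 2
        t += dt
    print(u.min(), u.max())
    ```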

  6. Procedure for Adapting Direct Simulation Monte Carlo Meshes

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.

    1992-01-01

    A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.

  7. A conforming to interface structured adaptive mesh refinement technique for modeling fracture problems

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Xiao, Fei; Nagarajan, Anand

    2016-12-01

    A Conforming to Interface Structured Adaptive Mesh Refinement (CISAMR) technique is introduced for the automated transformation of a structured grid into a conforming mesh with appropriate element aspect ratios. The CISAMR algorithm is composed of three main phases: (i) Structured Adaptive Mesh Refinement (SAMR) of the background grid; (ii) r-adaptivity of the nodes of elements cut by the crack; (iii) sub-triangulation of the elements deformed during the r-adaptivity process and those with hanging nodes generated during the SAMR process. The required considerations for the treatment of crack tips and branching cracks are also discussed in this manuscript. Regardless of the complexity of the problem geometry and without using iterative smoothing or optimization techniques, CISAMR ensures that aspect ratios of conforming elements are lower than three. Multiple numerical examples are presented to demonstrate the application of CISAMR for modeling linear elastic fracture problems with intricate morphologies.

  8. A conforming to interface structured adaptive mesh refinement technique for modeling fracture problems

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Xiao, Fei; Nagarajan, Anand

    2017-04-01

    A Conforming to Interface Structured Adaptive Mesh Refinement (CISAMR) technique is introduced for the automated transformation of a structured grid into a conforming mesh with appropriate element aspect ratios. The CISAMR algorithm is composed of three main phases: (i) Structured Adaptive Mesh Refinement (SAMR) of the background grid; (ii) r-adaptivity of the nodes of elements cut by the crack; (iii) sub-triangulation of the elements deformed during the r-adaptivity process and those with hanging nodes generated during the SAMR process. The required considerations for the treatment of crack tips and branching cracks are also discussed in this manuscript. Regardless of the complexity of the problem geometry and without using iterative smoothing or optimization techniques, CISAMR ensures that aspect ratios of conforming elements are lower than three. Multiple numerical examples are presented to demonstrate the application of CISAMR for modeling linear elastic fracture problems with intricate morphologies.

  9. A mesh deformation technique based on two-step solution of the elasticity equations

    NASA Astrophysics Data System (ADS)

    Huang, Guo; Huang, Haiming; Guo, Jin

    2016-12-01

    In the computation of fluid mechanics problems with moving boundaries, including fluid-structure interaction, fluid mesh deformation is a common problem to be solved. An automatic mesh deformation technique for large deformations of the fluid mesh is presented on the basis of a pseudo-solid method in which the fluid mesh motion is governed by the equations of elasticity. A two-dimensional mathematical model of a linear elastic body is built by using the finite element method. The numerical results show that the proposed method performs better at moving the fluid mesh without producing distorted elements than classic one-step methods.

  10. A mesh deformation technique based on two-step solution of the elasticity equations

    NASA Astrophysics Data System (ADS)

    Huang, Guo; Huang, Haiming; Guo, Jin

    2017-04-01

    In the computation of fluid mechanics problems with moving boundaries, including fluid-structure interaction, fluid mesh deformation is a common problem to be solved. An automatic mesh deformation technique for large deformations of the fluid mesh is presented on the basis of a pseudo-solid method in which the fluid mesh motion is governed by the equations of elasticity. A two-dimensional mathematical model of a linear elastic body is built by using the finite element method. The numerical results show that the proposed method performs better at moving the fluid mesh without producing distorted elements than classic one-step methods.

  11. Adaptive mesh refinement for stochastic reaction-diffusion processes

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2011-01-01

    We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.
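
    A common way to obtain the diffusion transition rates on a locally refined mesh (shown here as a hedged 1-D sketch, not the paper's derivation) is to read them off the finite-volume discretization of the Laplacian, which gives per-molecule jump propensities of D/(h_i d_ij) between neighbouring cells and reduces to D/h^2 on a uniform mesh:

      import numpy as np

      def diffusion_jump_rates(h, D):
          # h : (n,) cell sizes of a 1-D locally refined mesh; D : diffusion coefficient
          # returns per-molecule jump propensities to the left and right neighbour of each cell
          centers = np.cumsum(h) - 0.5 * h
          n = len(h)
          right, left = np.zeros(n), np.zeros(n)
          for i in range(n - 1):
              d = centers[i + 1] - centers[i]        # distance between neighbouring cell centres
              right[i] = D / (h[i] * d)              # jump i -> i+1
              left[i + 1] = D / (h[i + 1] * d)       # jump i+1 -> i
          return left, right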

  12. GAMER: GPU-accelerated Adaptive MEsh Refinement code

    NASA Astrophysics Data System (ADS)

    Schive, Hsi-Yu; Tsai, Yu-Chih; Chiueh, Tzihong

    2016-12-01

    GAMER (GPU-accelerated Adaptive MEsh Refinement) is a general-purpose AMR + GPU framework that solves hydrodynamics with self-gravity. The code supports adaptive mesh refinement (AMR), a variety of GPU-accelerated hydrodynamic and Poisson solvers, hybrid OpenMP/MPI/GPU parallelization, concurrent CPU/GPU execution for performance optimization, and a Hilbert space-filling curve for load balance. Although the code is designed for simulating galaxy formation, it can be easily modified to solve a variety of applications with different governing equations. All optimization strategies implemented in the code can be inherited straightforwardly.

  13. Serial and parallel dynamic adaptation of general hybrid meshes

    NASA Astrophysics Data System (ADS)

    Kavouklis, Christos

    The Navier-Stokes equations are a standard mathematical representation of viscous fluid flow. Their numerical solution in three dimensions remains a computationally intensive and challenging task, despite recent advances in computer speed and memory. A strategy to increase the accuracy of Navier-Stokes simulations, while keeping computing resources to a minimum, is local refinement of the associated computational mesh in regions of large solution gradients and coarsening in regions where the solution does not vary appreciably. In this work we consider adaptation of general hybrid meshes for Computational Fluid Dynamics (CFD) applications. Hybrid meshes are composed of four types of elements: hexahedra, prisms, pyramids and tetrahedra, and have proven to be a promising technology for accurately resolving fluid flow in complex geometries. The first part of this dissertation is concerned with the design and implementation of a serial scheme for the adaptation of general three dimensional hybrid meshes. We have defined 29 refinement types, for all four kinds of elements. The core of the present adaptation scheme is an iterative algorithm that flags mesh edges for refinement, so that the adapted mesh is conformal. Of primary importance is the design of a suitable dynamic data structure that facilitates refinement and coarsening operations and furthermore minimizes memory requirements. A special dynamic list is defined for mesh elements, in contrast with the usual tree structures. It contains only elements of the current adaptation step and minimal information that is utilized to reconstruct parent elements when the mesh is coarsened. In the second part of this work, a new parallel dynamic mesh adaptation and load balancing algorithm for general hybrid meshes is presented. Partitioning of a hybrid mesh reduces to partitioning of the corresponding dual graph. Communication among processors is based on the faces of the interpartition boundary. The distributed
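
    The iterative edge-flagging idea can be illustrated with a much simpler, triangle-only closure rule (a hedged stand-in for the 29 hybrid-element refinement templates described above): keep flagging edges until every element's set of flagged edges corresponds to an allowed pattern.

      def close_refinement_flags(tri_edges, flagged):
          # tri_edges : list of 3-tuples of edge ids per triangle
          # flagged   : set of edge ids initially marked for refinement
          # closure rule used here: if a triangle has exactly two flagged edges,
          # promote it to full (three-edge) refinement so the adapted mesh is conformal
          changed = True
          while changed:
              changed = False
              for edges in tri_edges:
                  marked = [e for e in edges if e in flagged]
                  if len(marked) == 2:
                      flagged.update(edges)
                      changed = True
          return flagged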

  14. White Dwarf Mergers on Adaptive Meshes

    NASA Astrophysics Data System (ADS)

    Katz, Maximilian Peter

    The mergers of binary white dwarf systems are potential progenitors of astrophysical explosions such as Type Ia supernovae. These white dwarfs can merge either by orbital decay through the emission of gravitational waves or by direct collisions as a result of orbital perturbations. The coalescence of the stars may ignite nuclear fusion, resulting in the destruction of both stars through a thermonuclear runaway and ensuing detonation. The goal of this dissertation is to simulate binary white dwarf systems using the techniques of computational fluid dynamics and therefore to understand what numerical techniques are necessary to obtain accurate dynamical evolution of the system, as well as to learn what conditions are necessary to enable a realistic detonation. For this purpose I have used software that solves the relevant fluid equations, the Poisson equation for self-gravity, and the systems governing nuclear reactions between atomic species. These equations are modeled on a computational domain that uses the technique of adaptive mesh refinement to have the highest spatial resolution in the areas of the domain that are most sensitive to the need for accurate numerical evolution. I have identified that the most important obstacles to accurate evolution are the numerical violation of conservation of energy and angular momentum in the system, and the development of numerically seeded thermonuclear detonations that do not bear resemblance to physically correct detonations. I then developed methods for ameliorating these problems, and determined what metrics can be used for judging whether a given white dwarf merger simulation is trustworthy. This involved the development of a number of algorithmic improvements to the simulation software, which I describe. Finally, I performed high-resolution simulations of typical cases of white dwarf mergers and head-on collisions to demonstrate the impacts of these choices. The results of these simulations and the corresponding

  15. A two-dimensional adaptive mesh generation method

    NASA Astrophysics Data System (ADS)

    Altas, Irfan; Stephenson, John W.

    1991-05-01

    The present two-dimensional adaptive mesh-generation method allows selective modification of a small portion of the mesh without affecting large areas of adjacent mesh points, and is applicable with or without boundary-fitted coordinate-generation procedures. Discretization of the differential equations with either classical difference formulas designed for uniform meshes or the present difference formulas is illustrated by applying the method to Hiemenz flow, for which an exact solution of the Navier-Stokes equations is known, and to a two-dimensional viscous internal flow problem.

  16. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.

  17. Adaptive Mesh and Algorithm Refinement Using Direct Simulation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Garcia, Alejandro L.; Bell, John B.; Crutchfield, William Y.; Alder, Berni J.

    1999-09-01

    Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.

  18. Parallel adaptive mesh refinement for electronic structure calculations

    SciTech Connect

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1996-12-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradients with multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.

  19. WE-D-9A-01: A Novel Mesh-Based Deformable Surface-Contour Registration

    SciTech Connect

    Zhong, Z; Cai, Y; Guo, X; Jia, X; Chiu, T; Kearney, V; Liu, H; Jiang, L; Chen, S; Yordy, J; Nedzi, L; Mao, W

    2014-06-15

    Purpose: An initial guess is vital for 3D-2D deformable image registration (DIR) when dealing with large deformations for adaptive radiation therapy. A fast procedure has been developed to deform the body surface to match the 2D body contour on projections. This surface-contour DIR provides an initial deformation for a subsequent complete 3D DIR or image reconstruction. Methods: Both planning CT images and cone-beam CT (CBCT) projections are preprocessed to create 0–1 binary masks. The body surface and CBCT projection body contours are then extracted by a Canny edge detector. A finite element modeling system was developed to automatically generate adaptive meshes based on the image surface. After that, the projections of the CT surface voxels are computed and compared with corresponding 2D projection contours from CBCT scans. The displacement vector field (DVF) on mesh vertices around the surface is optimized iteratively until the shortest Euclidean distance between the pixels on the projections of the deformed CT surface and the corresponding CBCT projection contour is minimized. With the help of the tetrahedral meshes, the deformation is smoothly diffused from the surface into the interior of the volume. Finally, the deformed CT images are obtained by applying the optimal DVF to the original planning CT images. Results: The accuracy of the surface-contour registration is evaluated by 3D normalized cross correlation, which increased from 0.9176 to 0.9957 (sphere-ellipsoid phantom) and from 0.7627 to 0.7919 (H and N cancer patient data). Under the GPU-based implementation, our surface-contour-guided method on H and N cancer patient data takes 8 seconds/iteration, about 7.5 times faster than the direct 3D method (60 seconds/iteration), and it needs fewer optimization iterations (30 iterations vs 50 iterations). Conclusion: The proposed surface-contour DIR method can substantially improve both the accuracy and the speed of reconstructing volumetric images, which is helpful
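
    The quantity being minimized can be sketched as follows (a simplified illustration, assuming a known 3x4 projection matrix per CBCT view and omitting regularization and the GPU implementation):

      import numpy as np
      from scipy.spatial import cKDTree

      def contour_distance(dvf, verts, P, contour_xy):
          # dvf        : (N, 3) displacement vector field on the surface vertices
          # verts      : (N, 3) CT surface vertex coordinates
          # P          : (3, 4) projection matrix of one CBCT projection view
          # contour_xy : (M, 2) pixel coordinates of the body contour on that projection
          # returns the mean shortest Euclidean distance between the projected,
          # deformed surface vertices and the contour pixels
          x = verts + dvf
          xh = np.hstack([x, np.ones((len(x), 1))])      # homogeneous coordinates
          uvw = xh @ P.T
          uv = uvw[:, :2] / uvw[:, 2:3]                  # perspective divide
          d, _ = cKDTree(contour_xy).query(uv)
          return d.mean()

    In practice this objective would be summed over the available projection angles and handed to an iterative optimizer of the per-vertex DVF, together with a smoothness term.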

  20. Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes

    SciTech Connect

    Parsons, I D; Solberg, J M

    2006-02-03

    This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.

  1. Exploiting Adaptive Optics with Deformable Secondary Mirrors

    DTIC Science & Technology

    2007-03-08

    progress in tomographic wavefront sensing and altitude conjugated adaptive correction, and is a critical step forward for adaptive optics for future large...geostationary satellites, captured at the 6.5 m MMT telescope, using the deformable secondary adaptive optics system....new technology to the unique development of deformable secondary mirrors pioneered at the University of Arizona’s Center for Astronomical Adaptive

  2. Adaptive upscaling with the dual mesh method

    SciTech Connect

    Guerillot, D.; Verdiere, S.

    1997-08-01

    The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.
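
    The direction dependence of upscaled properties can be illustrated with a textbook example (absolute permeability and layered-flow averaging); this is only a stand-in for the dynamic, saturation-dependent relative-permeability upscaling performed by the Dual Mesh Method:

      import numpy as np

      def upscale_permeability(k):
          # k: (ny, nx) fine-grid permeability values inside one coarse cell
          # x-direction: arithmetic mean over rows of per-row harmonic means;
          # y-direction: the transposed analogue; the two results generally differ,
          # which is one reason a single a priori upscaled value is not adequate
          kx = np.mean(1.0 / np.mean(1.0 / k, axis=1))
          ky = np.mean(1.0 / np.mean(1.0 / k, axis=0))
          return kx, ky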

  3. A hierarchical structure for automatic meshing and adaptive FEM analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Saxena, Mukul; Perucchio, Renato

    1987-01-01

    A new algorithm for generating automatically, from solid models of mechanical parts, finite element meshes that are organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work) is discussed. Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized and some results from an experimental closed loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively are presented. The implementation of 3-D work is briefly discussed.
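
    A minimal sketch of the spatially addressable quaternary-tree idea (2-D case; the 3-D octree is analogous), with the usual point-location walk that makes local remeshing and reanalysis cheap; class and method names are illustrative only:

      class QuadNode:
          # minimal spatially addressable quadtree cell
          def __init__(self, x, y, size, depth=0):
              self.x, self.y, self.size, self.depth = x, y, size, depth
              self.children = None                 # None for a leaf

          def subdivide(self):
              # split the cell into four quadrants (local remeshing touches only this subtree)
              h = self.size / 2.0
              self.children = [QuadNode(self.x + dx * h, self.y + dy * h, h, self.depth + 1)
                               for dy in (0, 1) for dx in (0, 1)]

          def locate(self, px, py):
              # spatial addressing: walk down to the leaf containing point (px, py)
              if self.children is None:
                  return self
              h = self.size / 2.0
              ix, iy = int(px >= self.x + h), int(py >= self.y + h)
              return self.children[iy * 2 + ix].locate(px, py)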

  4. Parallel tetrahedral mesh adaptation with dynamic load balancing

    SciTech Connect

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    2000-06-28

    The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D-TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D-TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  5. Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    1999-01-01

    The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  6. Parallel adaptation of general three-dimensional hybrid meshes

    SciTech Connect

    Kavouklis, Christos Kallinderis, Yannis

    2010-05-01

    A new parallel dynamic mesh adaptation and load balancing algorithm for general hybrid grids has been developed. The meshes considered in this work are composed of four kinds of elements; tetrahedra, prisms, hexahedra and pyramids, which poses a challenge to parallel mesh adaptation. Additional complexity imposed by the presence of multiple types of elements affects especially data migration, updates of local data structures and interpartition data structures. Efficient partition of hybrid meshes has been accomplished by transforming them to suitable graphs and using serial graph partitioning algorithms. Communication among processors is based on the faces of the interpartition boundary and the termination detection algorithm of Dijkstra is employed to ensure proper flagging of edges for refinement. An inexpensive dynamic load balancing strategy is introduced to redistribute work load among processors after adaptation. In particular, only the initial coarse mesh, with proper weighting, is balanced which yields savings in computation time and relatively simple implementation of mesh quality preservation rules, while facilitating coarsening of refined elements. Special algorithms are employed for (i) data migration and dynamic updates of the local data structures, (ii) determination of the resulting interpartition boundary and (iii) identification of the communication pattern of processors. Several representative applications are included to evaluate the method.

  7. A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment

    NASA Technical Reports Server (NTRS)

    Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott

    1995-01-01

    The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaption of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Embedded grid singular points are permitted, but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of and alignment with the solution. A mesh quality assessment criterion is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criterion is used to evaluate the grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criterion shows promise as a means of evaluating resolution.

  8. Multigrid solution of internal flows using unstructured solution adaptive meshes

    NASA Technical Reports Server (NTRS)

    Smith, Wayne A.; Blake, Kenneth R.

    1992-01-01

    This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.

  9. Adaptive mesh refinement for shocks and material interfaces

    SciTech Connect

    Dai, William Wenlong

    2010-01-01

    There are three kinds of adaptive mesh refinement (AMR) in structured meshes. Block-based AMR sometimes over-refines meshes. Cell-based AMR treats the mesh cell by cell and thus loses the advantages inherent in structured meshes. Patch-based AMR is intended to combine the advantages of block- and cell-based AMR, i.e., the nature of structured meshes and sharp regions of refinement. However, patch-based AMR has its own difficulties. For example, patch-based AMR typically cannot preserve symmetries of physics problems. In this paper, we present an approach for a patch-based AMR for hydrodynamics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, management of patches, and load balance. The special features of this patch-based AMR include symmetry preserving, efficiency of refinement across shock fronts and material interfaces, special implementation of flux correction, and patch management in parallel computing environments. To demonstrate the capability of the AMR framework, we show both two- and three-dimensional hydrodynamics simulations with many levels of refinement.

  10. Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.

    2015-12-01

    Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume - finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.
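
    For a piecewise-constant, CV-wise field the conservative Galerkin (L2) projection reduces to overlap-weighted averaging; the 1-D sketch below illustrates the idea only and is not the CVFE implementation described above:

      import numpy as np

      def galerkin_project_pw_constant(x_old, q_old, x_new):
          # x_old : (N+1,) old cell edges;  q_old : (N,) old cell values
          # x_new : (M+1,) new cell edges
          # for piecewise-constant basis functions the Galerkin projection reduces to
          # overlap-weighted averaging, which conserves the integral of q exactly
          q_new = np.zeros(len(x_new) - 1)
          for j in range(len(x_new) - 1):
              a, b = x_new[j], x_new[j + 1]
              total = 0.0
              for i in range(len(x_old) - 1):
                  lo, hi = max(a, x_old[i]), min(b, x_old[i + 1])
                  if hi > lo:
                      total += q_old[i] * (hi - lo)
              q_new[j] = total / (b - a)
          return q_new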

  11. PLUM: Parallel Load Balancing for Adaptive Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Saini, Subhash (Technical Monitor)

    1998-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method called PLUM to dynamically balance the processor workloads with a global view. This paper presents the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. A data redistribution model is also presented that predicts the remapping cost on the SP2. This model is required to determine whether the gain from a balanced workload distribution offsets the cost of data movement. Results presented in this paper demonstrate that PLUM is an effective dynamic load balancing strategy which remains viable on a large number of processors.

  12. Adjoint Methods for Guiding Adaptive Mesh Refinement in Tsunami Modeling

    NASA Astrophysics Data System (ADS)

    Davis, B. N.; LeVeque, R. J.

    2016-12-01

    One difficulty in developing numerical methods for tsunami modeling is the fact that solutions contain time-varying regions where much higher resolution is required than elsewhere in the domain, particularly when tracking a tsunami propagating across the ocean. The open source GeoClaw software deals with this issue by using block-structured adaptive mesh refinement to selectively refine around propagating waves. For problems where only a target area of the total solution is of interest (e.g., one coastal community), a method that allows identifying and refining the grid only in regions that influence this target area would significantly reduce the computational cost of finding a solution. In this work, we show that solving the time-dependent adjoint equation and using a suitable inner product with the forward solution allows more precise refinement of the relevant waves. We present the adjoint methodology first in one space dimension for illustration and in a broad context since it could also be used in other adaptive software, and potentially for other tsunami applications beyond adaptive refinement. We then show how this adjoint method has been integrated into the adaptive mesh refinement strategy of the open source GeoClaw software and present tsunami modeling results showing that the accuracy of the solution is maintained and the computational time required is significantly reduced through the integration of the adjoint method into adaptive mesh refinement.
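
    The refinement criterion can be sketched very simply: cells are flagged where a pointwise product of the forward and adjoint solutions exceeds a tolerance, so that waves which will never influence the target area stay on coarse grids. This is a hedged simplification of the inner product described above, not GeoClaw code:

      import numpy as np

      def flag_cells(forward, adjoint, tol):
          # forward, adjoint : per-cell solution magnitudes at the current time
          # tol              : refinement threshold
          indicator = np.abs(forward * adjoint)
          return indicator > tol          # boolean refinement flags per cell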

  13. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.

  14. Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units

    SciTech Connect

    Beckingsale, D. A.; Gaudin, W. P.; Hornung, R. D.; Gunney, B. T.; Gamblin, T.; Herdman, J. A.; Jarvis, S. A.

    2014-11-17

    Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.

  15. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation is demonstrated by the reduction in the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel

  16. Mesh Deformation Based on Fully Stressed Design: The Method and Two-Dimensional Examples

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Chang, Chau-Lyan

    2007-01-01

    Mesh deformation in response to redefined boundary geometry is a frequently encountered task in shape optimization and analysis of fluid-structure interaction. We propose a simple and concise method for deforming meshes defined with three-node triangular or four-node tetrahedral elements. The mesh deformation method is suitable for large boundary movement. The approach requires two consecutive linear elastic finite-element analyses of an isotropic continuum using a prescribed displacement at the mesh boundaries. The first analysis is performed with homogeneous elastic property and the second with inhomogeneous elastic property. The fully stressed design is employed with a vanishing Poisson's ratio and a proposed form of equivalent strain (modified Tresca equivalent strain) to calculate, from the strain result of the first analysis, the element-specific Young's modulus for the second analysis. The theoretical aspect of the proposed method, its convenient numerical implementation using a typical linear elastic finite-element code in conjunction with very minor extra coding for data processing, and results for examples of large deformation of two-dimensional meshes are presented in this paper. KEY WORDS: Mesh deformation, shape optimization, fluid-structure interaction, fully stressed design, finite-element analysis, linear elasticity, strain failure, equivalent strain, Tresca failure criterion
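
    One plausible reading of the second-pass stiffness update, shown purely as a hedged sketch (the paper's exact "modified Tresca equivalent strain" is not reproduced here), assigns each element a Young's modulus proportional to a Tresca-like equivalent strain from the first, homogeneous analysis, so that the most distorted elements are stiffened and the boundary motion is spread more evenly:

      import numpy as np

      def element_moduli_from_first_pass(principal_strains, E0):
          # principal_strains : (n_elem, 2) principal strains from the first, homogeneous pass
          # E0                : Young's modulus used in the first pass
          e1 = principal_strains.max(axis=1)
          e2 = principal_strains.min(axis=1)
          # Tresca-like equivalent strain: largest principal-strain difference,
          # including zero as the out-of-plane value (an assumption, not the paper's definition)
          eq = np.maximum.reduce([np.abs(e1 - e2), np.abs(e1), np.abs(e2)])
          eq = np.maximum(eq, 1e-12)                 # avoid zero stiffness
          return E0 * eq / eq.mean()                 # normalized element-wise moduli for the second pass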

  17. Anisotropic norm-oriented mesh adaptation for a Poisson problem

    NASA Astrophysics Data System (ADS)

    Brèthes, Gautier; Dervieux, Alain

    2016-10-01

    We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem under the form of the optimization, in the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, the minimum of a norm of the approximation error. The norm is prescribed by the user and the method allows addressing the case of multi-objective adaptation like, for example in aerodynamics, adapting the mesh for drag, lift and moment in one shot. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.

  18. AN ADAPTIVE PARTICLE-MESH GRAVITY SOLVER FOR ENZO

    SciTech Connect

    Passy, Jean-Claude; Bryan, Greg L.

    2014-11-01

    We describe and implement an adaptive particle-mesh algorithm to solve the Poisson equation for grid-based hydrodynamics codes with nested grids. The algorithm is implemented and extensively tested within the astrophysical code Enzo against the multigrid solver available by default. We find that while both algorithms show similar accuracy for smooth mass distributions, the adaptive particle-mesh algorithm is more accurate for the case of point masses, and is generally less noisy. We also demonstrate that the two-body problem can be solved accurately in a configuration with nested grids. In addition, we discuss the effect of subcycling, and demonstrate that evolving all the levels with the same timestep yields even greater precision.
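
    The deposition step that feeds each level's Poisson solve can be illustrated with a standard cloud-in-cell kernel (2-D, single periodic level; a generic sketch, not Enzo's implementation):

      import numpy as np

      def cic_deposit(positions, masses, n, box):
          # positions : (N, 2) particle coordinates in [0, box); masses : (N,) particle masses
          # n : cells per side; box : domain size; returns the density field for the Poisson solve
          h = box / n
          rho = np.zeros((n, n))
          x = positions / h - 0.5                  # coordinates relative to cell centres
          i0 = np.floor(x).astype(int)
          w = x - i0                               # linear (CIC) weights
          for di in (0, 1):
              for dj in (0, 1):
                  wx = (1 - w[:, 0]) if di == 0 else w[:, 0]
                  wy = (1 - w[:, 1]) if dj == 0 else w[:, 1]
                  np.add.at(rho, ((i0[:, 0] + di) % n, (i0[:, 1] + dj) % n), masses * wx * wy)
          return rho / h ** 2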

  19. Boltzmann Solver with Adaptive Mesh in Velocity Space

    SciTech Connect

    Kolobov, Vladimir I.; Arslanbekov, Robert R.; Frolova, Anna A.

    2011-05-20

    We describe the implementation of direct Boltzmann solver with Adaptive Mesh in Velocity Space (AMVS) using quad/octree data structure. The benefits of the AMVS technique are demonstrated for the charged particle transport in weakly ionized plasmas where the collision integral is linear. We also describe the implementation of AMVS for the nonlinear Boltzmann collision integral. Test computations demonstrate both advantages and deficiencies of the current method for calculations of narrow-kernel distributions.

  20. AMR++: Object-Oriented Parallel Adaptive Mesh Refinement

    SciTech Connect

    Quinlan, D.; Philip, B.

    2000-02-02

    Adaptive mesh refinement (AMR) computations are complicated by their dynamic nature. The development of solvers for realistic applications is complicated by both the complexity of the AMR and the geometry of realistic problem domains. The additional complexity of distributed memory parallelism within such AMR applications most commonly exceeds the level of complexity that can be reasonably maintained with traditional approaches toward software development. This paper will present the details of our object-oriented work on the simplification of the use of adaptive mesh refinement on applications with complex geometries for both serial and distributed memory parallel computation. We will present an independent set of object-oriented abstractions (C++ libraries) well suited to the development of such seemingly intractable scientific computations. As an example of the use of this object-oriented approach we will present recent results of an application modeling fluid flow in the eye. Within this example, the geometry is too complicated for a single curvilinear coordinate grid and so a set of overlapping curvilinear coordinate grids is used. Adaptive mesh refinement and the required grid generation work to support the refinement process are coupled together in the solution of essentially elliptic equations within this domain. This paper will focus on the management of complexity within development of the AMR++ library which forms a part of the Overture object-oriented framework for the solution of partial differential equations within scientific computing.

  1. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  2. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    SciTech Connect

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  3. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    PubMed Central

    Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes. PMID:27019849
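
    The feature-based sizing idea can be sketched as follows (a 2-D illustration with Sobel edges and a Gaussian blur standing in for the paper's feature-edge extraction and density field; the length bounds and smoothing width are assumptions):

      import numpy as np
      from scipy import ndimage

      def target_edge_length(ct_slice, h_min=2.0, h_max=12.0, sigma=4.0):
          # ct_slice : one 2-D CT slice; returns a per-pixel target element edge length [mm],
          # small where feature edges are dense and large in featureless regions
          gx = ndimage.sobel(ct_slice.astype(float), axis=0)
          gy = ndimage.sobel(ct_slice.astype(float), axis=1)
          edges = np.hypot(gx, gy)
          density = ndimage.gaussian_filter(edges, sigma)
          density /= density.max() + 1e-12             # normalize to [0, 1]
          return h_max - (h_max - h_min) * density     # dense features -> small elements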

  4. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    PubMed

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.

  5. Block-structured adaptive mesh refinement - theory, implementation and application

    SciTech Connect

    Deiterding, Ralf

    2011-01-01

    Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.

  6. Elliptic Solvers with Adaptive Mesh Refinement on Complex Geometries

    SciTech Connect

    Phillip, B.

    2000-07-24

    Adaptive Mesh Refinement (AMR) is a numerical technique for locally tailoring the resolution of computational grids. Multilevel algorithms for solving elliptic problems on adaptive grids include the Fast Adaptive Composite grid method (FAC) and its parallel variants (AFAC and AFACx). Theory that confirms the independence of the convergence rates of FAC and AFAC on the number of refinement levels exists under certain ellipticity and approximation property conditions. Similar theory needs to be developed for AFACx. The effectiveness of multigrid-based elliptic solvers such as FAC, AFAC, and AFACx on adaptively refined overlapping grids is not clearly understood. Finally, a non-trivial eye model problem will be solved by combining the power of using overlapping grids for complex moving geometries, AMR, and multilevel elliptic solvers.

  7. Adaptive Shape Functions and Internal Mesh Adaptation for Modelling Progressive Failure in Adhesively Bonded Joints

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott; Gries, Thomas; Waas, Anthony M.; Pineda, Evan J.

    2014-01-01

    Enhanced finite elements are elements with an embedded analytical solution that can capture detailed local fields, enabling more efficient, mesh independent finite element analysis. The shape functions are determined based on the analytical model rather than prescribed. This method was applied to adhesively bonded joints to model joint behavior with one element through the thickness. This study demonstrates two methods of maintaining the fidelity of such elements during adhesive non-linearity and cracking without increasing the mesh needed for an accurate solution. The first method uses adaptive shape functions, where the shape functions are recalculated at each load step based on the softening of the adhesive. The second method is internal mesh adaption, where cracking of the adhesive within an element is captured by further discretizing the element internally to represent the partially cracked geometry. By keeping mesh adaptations within an element, a finer mesh can be used during the analysis without affecting the global finite element model mesh. Examples are shown which highlight when each method is most effective in reducing the number of elements needed to capture adhesive nonlinearity and cracking. These methods are validated against analogous finite element models utilizing cohesive zone elements.

  8. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers where access to neighboring elements residing on different processors may incur significant overhead. In addition such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The other two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness, and they are used to drive adaptive mesh refinement, which we demonstrate for two-dimensional plane stress problems. The parallelization of substructuring and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.
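
    One way to realize such an element-level, element-independent indicator, shown only as a hedged sketch (not the dissertation's formulation), is to project the element deformation onto the eigenmodes of the element stiffness matrix and report the strain energy carried by the higher deformation modes:

      import numpy as np

      def element_error_indicator(Ke, ue, n_basic=3):
          # Ke      : (n, n) element stiffness matrix (a "visible" quantity)
          # ue      : (n,) element nodal displacements
          # n_basic : number of rigid-body / constant-strain modes to exclude (problem dependent)
          w, V = np.linalg.eigh(Ke)          # intrinsic displacement modes, ascending stiffness
          a = V.T @ ue                       # modal amplitudes of the element deformation
          return 0.5 * np.sum(w[n_basic:] * a[n_basic:] ** 2)   # energy in the higher modes

    Only element-local data enters the computation, so no neighbour communication is needed on a parallel machine.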

  9. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New techniques, adaptive mesh refinement (AMR), allow local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement the AMR techniques into 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements both for isoviscous cases (i.e. 7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (i.e. 14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method into the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with far fewer elements and tracers compared with uniform-mesh models (i.e. 7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can be easily refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace the evolution of the boundaries. Our AMR code is thus well suited to thermal-chemical convection problems which need high resolution to resolve the evolution of chemical boundaries, such as the entrainment problems [Sleep, 1988].

  10. Simulation of nonpoint source contamination based on adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Kourakos, G.; Harter, T.

    2014-12-01

    Contamination of groundwater aquifers from nonpoint sources is a worldwide problem. Typical agricultural groundwater basins receive contamination from a large array (on the order of ~10^5-6) of spatially and temporally heterogeneous sources such as fields, crops and dairies, while the received contaminants emerge at significantly uncertain time lags at a large array of discharge surfaces such as public supply, domestic and irrigation wells and streams. To support decision making in such complex regimes several approaches have been developed, which can be grouped into three categories: i) index methods, ii) regression methods and iii) physically based methods. Among the three, physically based methods are considered more accurate, but at the cost of computational demand. In this work we present a physically based simulation framework which exploits the latest hardware and software developments to simulate large (>>1,000 km2) groundwater basins. First we simulate groundwater flow using a sufficiently detailed mesh to capture the spatial heterogeneity. To achieve optimal mesh quality we combine adaptive mesh refinement with the nonlinear solution for unconfined flow. Starting from a coarse grid, the mesh is refined iteratively in the parts of the domain where the flow heterogeneity is higher, resulting in an optimal grid. Secondly we simulate the nonpoint source pollution based on the detailed velocity field computed in the previous step. In our approach we use the streamline model, where the 3D transport problem is decomposed into multiple 1D transport problems. The proposed framework is applied to simulate nonpoint source pollution in the Central Valley aquifer system, California.

  11. Nonhydrostatic adaptive mesh dynamics for multiscale climate models (Invited)

    NASA Astrophysics Data System (ADS)

    Collins, W.; Johansen, H.; McCorquodale, P.; Colella, P.; Ullrich, P. A.

    2013-12-01

    Many of the atmospheric phenomena with the greatest potential impact in future warmer climates are inherently multiscale. Such meteorological systems include hurricanes and tropical cyclones, atmospheric rivers, and other types of hydrometeorological extremes. These phenomena are challenging to simulate in conventional climate models because their uniform resolutions are coarse relative to the native nonhydrostatic scales of the phenomenological dynamics. To enable studies of these systems with sufficient local resolution for the multiscale dynamics yet with sufficient speed for climate-change studies, we have adapted existing adaptive mesh dynamics for the DOE-NSF Community Atmosphere Model (CAM). In this talk, we present an adaptive, conservative finite volume approach for moist non-hydrostatic atmospheric dynamics. The approach is based on the compressible Euler equations on 3D thin spherical shells, where the radial direction is treated implicitly (using a fourth-order Runge-Kutta IMEX scheme) to eliminate time step constraints from vertical acoustic waves. Refinement is performed only in the horizontal directions. The spatial discretization is the equiangular cubed-sphere mapping, with a fourth-order accurate discretization to compute flux averages on faces. By using both space- and time-adaptive mesh refinement, the solver allocates computational effort only where greater accuracy is needed. The resulting method is demonstrated to be fourth-order accurate for model problems, robust at solution discontinuities, and stable for large aspect ratios. We present comparisons using a simplified physics package for dynamical-core comparisons of moist physics, including a Hadley cell that lifts an advected tracer into the upper atmosphere with horizontal adaptivity.

  12. Adaptive Mesh Expansion Model (AMEM) for Liver Segmentation from CT Image

    PubMed Central

    Wang, Xuehu; Yang, Jian; Ai, Danni; Zheng, Yongchang; Tang, Songyuan; Wang, Yongtian

    2015-01-01

    This study proposes a novel adaptive mesh expansion model (AMEM) for liver segmentation from computed tomography images. The virtual deformable simplex model (DSM) is introduced to represent the mesh, in which the motion of each vertex can be easily manipulated. The balloon, edge, and gradient forces are combined with the binary image to construct the external force of the deformable model, which can rapidly drive the DSM to approach the target liver boundaries. Moreover, tangential and normal forces are combined with the gradient image to control the internal force, such that the DSM degree of smoothness can be precisely controlled. The triangular facet of the DSM is adaptively decomposed into smaller triangular components, which can significantly improve the segmentation accuracy of the irregularly sharp corners of the liver. The proposed method is evaluated on the basis of different criteria applied to 10 clinical data sets. Experiments demonstrate that the proposed AMEM algorithm is effective and robust and thus outperforms six other up-to-date algorithms. Moreover, AMEM can achieve a mean overlap error of 6.8% and a mean volume difference of 2.7%, whereas the average symmetric surface distance and the root mean square symmetric surface distance can reach 1.3 mm and 2.7 mm, respectively. PMID:25769030

  13. Dynamic Load Balancing for Adaptive Meshes using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Saini, Subhash (Technical Monitor)

    1998-01-01

    Many scientific applications involve grids that lack a uniform underlying structure. These applications are often dynamic in the sense that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing inter-processor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view across processors. In this paper, we compare a novel load balancer that utilizes symmetric broadcast networks (SBN) to a successful global load balancing environment (PLUM) created to handle adaptive unstructured applications. Our experimental results on the IBM SP2 demonstrate that the performance of the proposed SBN load balancer is comparable to the results achieved under PLUM.

  14. Parallel Processing of Adaptive Meshes with Load Balancing

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than those under PLUM by overlapping processing and data migration.

  15. 3D High Resolution Mesh Deformation Based on Multi Library Wavelet Neural Network Architecture

    NASA Astrophysics Data System (ADS)

    Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Amar, Chokri Ben

    2016-12-01

    This paper deals with a novel technique for large Laplacian boundary deformations using estimated rotations. The proposed method is based on a Multi Library Wavelet Neural Network structure founded on several mother wavelet families (MLWNN). The objective is to align mesh features and minimize distortion with a fixed feature set that minimizes the sum of the distances between all corresponding vertices. The new mesh deformation method operates on a region of interest (ROI): our approach computes the deformed ROI, then updates and optimizes it to align mesh features based on the MLWNN and a spherical parameterization configuration. This structure has the advantage of constructing the network from several mother wavelets, which addresses high-dimensional problems by selecting the mother wavelet that best models the signal. Simulation tests confirm the robustness and speed of the method: the mean-square error and the deformation ratio are low compared with other state-of-the-art works. Our approach minimizes distortion with fixed features, yielding a well-reconstructed object.

  16. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to qualify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
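
    The Richardson-extrapolation error indicator used for refinement above can be sketched in one dimension as follows: advance the same data twice on the working grid and once on a coarsened grid, then flag cells where the scaled difference exceeds a tolerance. The `step` operator, scaling, and tolerance below are assumptions for illustration, not the paper's exact estimator.

```python
import numpy as np

def refinement_flags(u, step, dx, dt, tol, order=2):
    """Flag cells whose Richardson-style local error estimate exceeds tol.

    `step(u, dx, dt)` is any explicit update operator of the given order
    (e.g., a MacCormack sweep). Two fine steps are compared against one
    coarse step taken on every other cell.
    """
    fine = step(step(u, dx, dt), dx, dt)           # two steps on the fine grid
    coarse = step(u[::2], 2.0 * dx, 2.0 * dt)      # one step on a coarsened grid
    x_fine = np.arange(u.size) * dx
    x_coarse = np.arange(u[::2].size) * 2.0 * dx
    coarse_on_fine = np.interp(x_fine, x_coarse, coarse)
    est = np.abs(fine - coarse_on_fine) / (2.0 ** (order + 1) - 2.0)
    return est > tol
```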

  17. Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.

    2012-09-01

    Computer simulations are important in current cosmological research. These simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared with grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs dedicated software to be visualized, as generic visualization tools work on Cartesian grid data types. This is why our team has also developed the PYMSES software. It relies on the Python scripting language to provide modular and easy access for exploring these specific data. To take advantage of the high-performance computer which runs the RAMSES simulation, it also uses MPI and multiprocessing to run parallel code. We present our PYMSES software in more detail, with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR. The first is a splatting technique, and the second is a custom ray-tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques: the Python multiprocessing library versus the use of MPI runs. The load balancing strategy has to be carefully defined in order to achieve a good speedup in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.

  18. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times

  19. Production-quality Tools for Adaptive Mesh Refinement Visualization

    SciTech Connect

    Weber, Gunther H.; Childs, Hank; Bonnell, Kathleen; Meredith,Jeremy; Miller, Mark; Whitlock, Brad; Bethel, E. Wes

    2007-10-25

    Adaptive Mesh Refinement (AMR) is a highly effective simulation method for spanning a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is extending and deploying VisIt, an open source visualization tool that accommodates AMR as a first-class data type, for use as production-quality, parallel-capable AMR visual data analysis infrastructure. This effort will help science teams that use AMR-based simulations and who develop their own AMR visual data analysis software to realize cost and labor savings.

  20. Efficient Plasma Ion Source Modeling With Adaptive Mesh Refinement (Abstract)

    SciTech Connect

    Kim, J.S.; Vay, J.L.; Friedman, A.; Grote, D.P.

    2005-03-15

    Ion beam drivers for high energy density physics and inertial fusion energy research require high brightness beams, so there is little margin of error allowed for aberration at the emitter. Thus, accurate plasma ion source computer modeling is required to model the plasma sheath region and time-dependent effects correctly. A computer plasma source simulation module that can be used with a powerful heavy ion fusion code, WARP, or as a standalone code, is being developed. In order to treat the plasma sheath region accurately and efficiently, the module will have the capability of handling multiple spatial scale problems by using Adaptive Mesh Refinement (AMR). We will report on our progress on the project.

  1. Novel adaptation of the demodulation technology for gear damage detection to variable amplitudes of mesh harmonics

    NASA Astrophysics Data System (ADS)

    Combet, F.; Gelman, L.

    2011-04-01

    In this paper, a novel adaptive demodulation technique including a new diagnostic feature is proposed for gear diagnosis in conditions of variable amplitudes of the mesh harmonics. This vibration technique employs the time synchronous average (TSA) of vibration signals. The new adaptive diagnostic feature is defined as the ratio of the sum of the sideband components of the envelope spectrum of a mesh harmonic to the measured power of the mesh harmonic. The proposed adaptation of the technique is justified theoretically and experimentally by the high level of the positive covariance between amplitudes of the mesh harmonics and the sidebands in conditions of variable amplitudes of the mesh harmonics. It is shown that the adaptive demodulation technique preserves effectiveness of local fault detection of gears operating in conditions of variable mesh amplitudes.
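
    Loosely, the diagnostic feature defined above can be computed from a TSA record by band-passing around the chosen mesh harmonic, taking the Hilbert envelope, and summing envelope-spectrum components at multiples of the shaft frequency. The sketch below is one illustrative reading of that definition; the filter design, bandwidth, and number of sidebands are assumptions, not the authors' exact processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def sideband_feature(tsa, fs, f_mesh, f_shaft, n_sidebands=3):
    """Sum of sideband components of the envelope spectrum of one mesh harmonic,
    divided by the measured power of that harmonic (rough sketch)."""
    bw = (n_sidebands + 0.5) * f_shaft
    b, a = butter(4, [(f_mesh - bw) / (fs / 2.0), (f_mesh + bw) / (fs / 2.0)],
                  btype="bandpass")
    band = filtfilt(b, a, tsa)                  # isolate the mesh-harmonic band
    harmonic_power = np.mean(band ** 2)         # measured power of the harmonic
    env = np.abs(hilbert(band))                 # amplitude envelope
    env -= env.mean()
    spec = 2.0 * np.abs(np.fft.rfft(env)) / env.size
    freqs = np.fft.rfftfreq(env.size, d=1.0 / fs)
    # Sum envelope-spectrum components at multiples of the shaft frequency.
    sidebands = sum(spec[np.argmin(np.abs(freqs - k * f_shaft))]
                    for k in range(1, n_sidebands + 1))
    return sidebands / harmonic_power
```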

  2. Unstructured and adaptive mesh generation for high Reynolds number viscous flows

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1991-01-01

    A method for generating and adaptively refining a highly stretched unstructured mesh suitable for the computation of high-Reynolds-number viscous flows about arbitrary two-dimensional geometries was developed. The method is based on the Delaunay triangulation of a predetermined set of points and employs a local mapping in order to achieve the high stretching rates required in the boundary-layer and wake regions. The initial mesh-point distribution is determined in a geometry-adaptive manner which clusters points in regions of high curvature and sharp corners. Adaptive mesh refinement is achieved by adding new points in regions of large flow gradients, and locally retriangulating; thus, obviating the need for global mesh regeneration. Initial and adapted meshes about complex multi-element airfoil geometries are shown and compressible flow solutions are computed on these meshes.

  3. Adaptive FEM with coarse initial mesh guarantees optimal convergence rates for compactly perturbed elliptic problems

    NASA Astrophysics Data System (ADS)

    Bespalov, Alex; Haberl, Alexander; Praetorius, Dirk

    2017-04-01

    We prove that for compactly perturbed elliptic problems, where the corresponding bilinear form satisfies a Gårding inequality, adaptive mesh-refinement is capable of overcoming the preasymptotic behavior and eventually leads to convergence with optimal algebraic rates. As an important consequence of our analysis, one does not have to deal with the a priori assumption that the underlying meshes are sufficiently fine. Hence, the overall conclusion of our results is that adaptivity has stabilizing effects and can overcome possibly pessimistic restrictions on the meshes. In particular, our analysis covers adaptive mesh-refinement for the finite element discretization of the Helmholtz equation, from which our interest originated.

  4. Real-time GPU surface curvature estimation on deforming meshes and volumetric data sets.

    PubMed

    Griffin, Wesley; Wang, Yu; Berrios, David; Olano, Marc

    2012-10-01

    Surface curvature is used in a number of areas in computer graphics, including texture synthesis and shape representation, mesh simplification, surface modeling, and nonphotorealistic line drawing. Most real-time applications must estimate curvature on a triangular mesh. This estimation has been limited to CPU algorithms, forcing object geometry to reside in main memory. However, as more computational work is done directly on the GPU, it is increasingly common for object geometry to exist only in GPU memory. Examples include vertex-skinned animations and isosurfaces from GPU-based surface reconstruction algorithms. For static models, curvature can be precomputed and CPU algorithms are a reasonable choice. For deforming models where the geometry only resides on the GPU, transferring the deformed mesh back to the CPU limits performance. We introduce a GPU algorithm for estimating curvature in real time on arbitrary triangular meshes. We demonstrate our algorithm with curvature-based NPR feature lines and a curvature-based approximation for ambient occlusion. We show curvature computation on volumetric data sets with a GPU isosurface extraction algorithm and vertex-skinned animations. We present a graphics pipeline and CUDA implementation. Our curvature estimation is up to ~18x faster than a multithreaded CPU benchmark.
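
    The paper targets GPU execution, but the kind of per-vertex quantity being estimated can be illustrated on the CPU with the classic angle-deficit approximation of Gaussian curvature. This is a generic sketch of that approximation, not the paper's algorithm or implementation.

```python
import numpy as np

def angle_deficit_curvature(vertices, faces):
    """Per-vertex Gaussian-curvature estimate via the angle-deficit formula:
    K_i = (2*pi - sum of incident triangle angles at vertex i) / (mixed area),
    with the mixed area approximated as one third of the incident triangle area."""
    V, F = np.asarray(vertices, float), np.asarray(faces, int)
    angle_sum = np.zeros(len(V))
    area = np.zeros(len(V))
    for tri in F:
        p = V[tri]
        for k in range(3):
            e1 = p[(k + 1) % 3] - p[k]
            e2 = p[(k + 2) % 3] - p[k]
            cosang = np.dot(e1, e2) / (np.linalg.norm(e1) * np.linalg.norm(e2))
            angle_sum[tri[k]] += np.arccos(np.clip(cosang, -1.0, 1.0))
        tri_area = 0.5 * np.linalg.norm(np.cross(p[1] - p[0], p[2] - p[0]))
        area[tri] += tri_area / 3.0
    return (2.0 * np.pi - angle_sum) / np.maximum(area, 1e-12)
```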

  5. Adaptive Mesh Refinement in Reactive Transport Modeling of Subsurface Environments

    NASA Astrophysics Data System (ADS)

    Molins, S.; Day, M.; Trebotich, D.; Graves, D. T.

    2015-12-01

    Adaptive mesh refinement (AMR) is a numerical technique for locally adjusting the resolution of computational grids. AMR makes it possible to superimpose levels of finer grids on the global computational grid in an adaptive manner, allowing for more accurate calculations locally. AMR codes rely on the fundamental concept that the solution can be computed in different regions of the domain with different spatial resolutions. AMR codes have been applied to a wide range of problems, including (but not limited to): fully compressible hydrodynamics, astrophysical flows, cosmological applications, combustion, blood flow, heat transfer in nuclear reactors, and land ice and atmospheric models for climate. In subsurface applications, and in particular reactive transport modeling, AMR may be especially useful in accurately capturing concentration gradients (and hence reaction rates) that develop in localized areas of the simulation domain. Accurate evaluation of reaction rates is critical in many subsurface applications. In this contribution, we will discuss recent applications that bring to bear AMR capabilities on reactive transport problems from the pore scale to the flood plain scale.

  6. A mechanical model for deformable and mesh pattern wheel of lunar roving vehicle

    NASA Astrophysics Data System (ADS)

    Liang, Zhongchao; Wang, Yongfu; Chen, Gang (Sheng); Gao, Haibo

    2015-12-01

    As an indispensable tool for astronauts on the lunar surface, the lunar roving vehicle (LRV) is of great significance for manned lunar exploration. An LRV moves on loose and soft lunar soil, so the mechanical properties of its wheels directly affect the mobility performance. The wheels used on an LRV are deformable and have a mesh pattern; therefore, the existing mechanical theory for vehicle wheels cannot be used directly to analyze their properties. In this paper, a new mechanical model for the LRV wheel is proposed. First, a mechanical model for a rigid normal wheel is presented, which involves multiple conventional parameters such as vertical load, tangential traction force, lateral force, and slip ratio. Second, six equivalent coefficients are introduced to adapt the rigid normal wheel model to the deformable, mesh-pattern wheels used on LRVs. Third, the values of the six equivalent coefficients are identified using experimental data obtained from single-wheel tests of an LRV wheel. Finally, the identified mechanical model for the deformable, mesh-pattern LRV wheel is further verified and validated using additional experimental results.

  7. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  8. An adaptive mesh magneto-hydrodynamic analysis of interstellar clouds

    NASA Astrophysics Data System (ADS)

    Kominsky, Paul J.

    Interstellar clouds play a key role in many astrophysical events. The interactions of dense interstellar clouds with shock waves and the interstellar wind were investigated using an adaptive three-dimensional Cartesian mesh approach to the magneto-hydrodynamic equations. The mixing of the cloud material with the post-shock material results in complex layers of current density. In both the shock and wind interactions, a tail develops similar to the tails of comets produced by the solar wind. The orientation of this tail structure changes with the direction of the magnetic field and may be useful for observationally determining the orientation of magnetic fields in the interstellar medium. The octree data structure was analyzed with regard to parallel work units. Larger block sizes have a higher volume-to-surface ratio and support a higher ratio of computational to non-computational cells, but require more cells at the finest grid resolution. Keeping the minimum resolution of the grid fixed, and averaging over all possible grids, the analysis confirms experience that block sizes larger than 8 × 8 × 8 cells do not improve storage efficiency. A novel algorithm was developed to implement rotationally periodic boundary conditions on quadtree and octree data structures. Astrophysical flows with symmetric circulation, such as accretion disks, or periodic instabilities, such as supernova remnants, may be able to take advantage of such boundary conditions while maintaining the other benefits of a Cartesian grid.
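
    The volume-to-surface trade-off described above can be made concrete with a back-of-the-envelope calculation: for a block of n^3 computational cells padded with g ghost-cell layers on each side, the stored fraction of computational cells is n^3 / (n + 2g)^3. The ghost width of two layers below is an assumption for illustration, and this captures only the storage side of the trade-off, not the loss of refinement granularity that caps the useful block size.

```python
def computational_fraction(n, ghost=2):
    """Fraction of stored cells in an n^3 block (plus ghost layers) that are
    computational rather than ghost cells."""
    return n ** 3 / float(n + 2 * ghost) ** 3

for n in (4, 8, 16, 32):
    print(f"{n:2d}^3 block: {100.0 * computational_fraction(n):5.1f}% computational cells")
```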

  9. Parallel adaptive mesh refinement techniques for plasticity problems

    SciTech Connect

    Barry, W.J.; Jones, M.T. |; Plassmann, P.E.

    1997-12-31

    The accurate modeling of the nonlinear properties of materials can be computationally expensive. Parallel computing offers an attractive way for solving such problems; however, the efficient use of these systems requires the vertical integration of a number of very different software components. In this paper we explore the solution of two- and three-dimensional, small-strain plasticity problems. We consider a finite-element formulation of the problem with adaptive refinement of an unstructured mesh to accurately model plastic transition zones. We present a framework for the parallel implementation of such complex algorithms. This framework, using libraries from the SUMAA3d project, allows a user to build a parallel finite-element application without writing any parallel code. To demonstrate the effectiveness of this approach on widely varying parallel architectures, we present experimental results from an IBM SP parallel computer and an ATM-connected network of Sun UltraSparc workstations. The results detail the parallel performance of the computational phases of the application as the material is incrementally loaded.

  10. Parallel adaptive mesh refinement techniques for plasticity problems

    NASA Technical Reports Server (NTRS)

    Barry, W. J.; Jones, M. T.; Plassmann, P. E.

    1997-01-01

    The accurate modeling of the nonlinear properties of materials can be computationally expensive. Parallel computing offers an attractive way for solving such problems; however, the efficient use of these systems requires the vertical integration of a number of very different software components. In this paper we explore the solution of two- and three-dimensional, small-strain plasticity problems. We consider a finite-element formulation of the problem with adaptive refinement of an unstructured mesh to accurately model plastic transition zones. We present a framework for the parallel implementation of such complex algorithms. This framework, using libraries from the SUMAA3d project, allows a user to build a parallel finite-element application without writing any parallel code. To demonstrate the effectiveness of this approach on widely varying parallel architectures, we present experimental results from an IBM SP parallel computer and an ATM-connected network of Sun UltraSparc workstations. The results detail the parallel performance of the computational phases of the application as the material is incrementally loaded.

  11. CONSTRAINED-TRANSPORT MAGNETOHYDRODYNAMICS WITH ADAPTIVE MESH REFINEMENT IN CHARM

    SciTech Connect

    Miniati, Francesco; Martin, Daniel F. E-mail: DFMartin@lbl.gov

    2011-07-01

    We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.

  12. Numerical study of Taylor bubbles with adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Pavlidis, Dimitrios; Percival, James; Pain, Chris; Matar, Omar; Hasan, Abbas; Azzopardi, Barry

    2014-11-01

    The Taylor bubble is a single long bubble which nearly fills the entire cross section of a liquid-filled circular tube. This type of bubble flow regime often occurs in gas-liquid slug flows in many industrial applications, including oil-and-gas production, chemical and nuclear reactors, and heat exchangers. The objective of this study is to investigate the fluid dynamics of Taylor bubbles rising in a vertical pipe filled with oils of extremely high viscosity (mimicking the ``heavy oils'' found in the oil-and-gas industry). A modelling and simulation framework is presented here which can modify and adapt anisotropic unstructured meshes to better represent the underlying physics of bubble rise and reduce the computational effort without sacrificing accuracy. The numerical framework consists of a mixed control-volume and finite-element formulation, a ``volume of fluid''-type method for the interface capturing based on a compressive control volume advection method, and a force-balanced algorithm for the surface tension implementation. Numerical examples of some benchmark tests and the dynamics of Taylor bubbles are presented to show the capability of this method. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.

  13. Content-Adaptive Finite Element Mesh Generation of 3-D Complex MR Volumes for Bioelectromagnetic Problems.

    PubMed

    Lee, W; Kim, T-S; Cho, M; Lee, S

    2005-01-01

    In studying bioelectromagnetic problems, the finite element method offers several advantages over other conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropy. Mesh generation is the first requirement in finite element analysis, and there are many different approaches to it. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes, resulting in numerous elements in the smaller volume regions and thereby increasing computational load and demand. In this work, we present an improved content-adaptive mesh generation scheme that is efficient and fast, along with options to change the contents of the meshes. For demonstration, mesh models of the head from a volume MRI are presented in 2-D and 3-D.

  14. SU-D-207-04: GPU-Based 4D Cone-Beam CT Reconstruction Using Adaptive Meshing Method

    SciTech Connect

    Zhong, Z; Gu, X; Iyengar, P; Mao, W; Wang, J; Guo, X

    2015-06-15

    Purpose: Due to the limited number of projections at each phase, the image quality of a four-dimensional cone-beam CT (4D-CBCT) is often degraded, which decreases the accuracy of subsequent motion modeling. One of the promising methods is the simultaneous motion estimation and image reconstruction (SMEIR) approach. The objective of this work is to enhance the computational speed of the SMEIR algorithm using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the tetrahedral mesh based on the features of a reference phase of the 4D-CBCT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. After the mesh generation, the updated motion model and other phases of the 4D-CBCT can be obtained by matching the 4D-CBCT projection images at each phase with the corresponding forward projections of the deformed reference phase. The entire process of this 4D-CBCT reconstruction method is implemented on the GPU, significantly increasing the computational efficiency due to its tremendous parallel computing ability. Results: A 4D XCAT digital phantom was used to test the proposed mesh-based image reconstruction algorithm. The resulting images show that both bone structures and the inside of the lung are well preserved and that the tumor position is well captured. Compared to the previous voxel-based CPU implementation of SMEIR, the proposed method is about 157 times faster for reconstructing a 10-phase 4D-CBCT with dimensions 256×256×150. Conclusion: The GPU-based parallel 4D-CBCT reconstruction method uses the feature-based mesh for estimating the motion model and produces images equivalent to the previous voxel-based SMEIR approach, with significantly improved computational speed.

  15. Adaptive superposition of finite element meshes in linear and nonlinear dynamic analysis

    NASA Astrophysics Data System (ADS)

    Yue, Zhihua

    2005-11-01

    The numerical analysis of transient phenomena in solids, for instance, wave propagation and structural dynamics, is a very important and active area of study in engineering. Despite the current evolutionary state of modern computer hardware, practical analysis of large scale, nonlinear transient problems requires the use of adaptive methods where computational resources are locally allocated according to the interpolation requirements of the solution form. Adaptive analysis of transient problems involves obtaining solutions at many different time steps, each of which requires a sequence of adaptive meshes. Therefore, the execution speed of the adaptive algorithm is of paramount importance. In addition, transient problems require that the solution must be passed from one adaptive mesh to the next adaptive mesh with a bare minimum of solution-transfer error since this form of error compromises the initial conditions used for the next time step. A new adaptive finite element procedure (s-adaptive) is developed in this study for modeling transient phenomena in both linear elastic solids and nonlinear elastic solids caused by progressive damage. The adaptive procedure automatically updates the time step size and the spatial mesh discretization in transient analysis, achieving the accuracy and the efficiency requirements simultaneously. The novel feature of the s-adaptive procedure is the original use of finite element mesh superposition to produce spatial refinement in transient problems. The use of mesh superposition enables the s-adaptive procedure to completely avoid the need for cumbersome multipoint constraint algorithms and mesh generators, which makes the s-adaptive procedure extremely fast. Moreover, the use of mesh superposition enables the s-adaptive procedure to minimize the solution-transfer error. In a series of different solid mechanics problem types including 2-D and 3-D linear elastic quasi-static problems, 2-D material nonlinear quasi-static problems

  16. Adaptive Mesh Refinement for Hyperbolic Partial Differential Equations

    DTIC Science & Technology

    1983-03-01

    grids. We use either the Coarse Mesh Approximation Method (Ciment, [1971]) or interpolation from a coarser grid to get the boundary values. In Berger...Problems, Math. Comp. 31 (1977), 333-390. M. Ciment, Stable Difference Schemes with Uneven Mesh Spacings, Math. Comp. 25 (1971), 219-227. H. Cramr

  17. A Mass Conservation Algorithm for Adaptive Unrefinement Meshes Used by Finite Element Methods

    DTIC Science & Technology

    2012-01-01

    dimensional mesh generation. In: Proc. 4th ACM-SIAM Symp. on Disc. Algorithms. (1993) 83–92 [9] Weatherill, N., Hassan, O., Marcum, D., Marchant, M.: Grid ...Conference on Computational Science, ICCS 2012 A Mass Conservation Algorithm For Adaptive Unrefinement Meshes Used By Finite Element Methods Hung V. Nguyen...velocity fields, and chemical distribution, as well as conserve mass, especially for water quality applications. Solution accuracy depends highly on mesh

  18. Finite Macro-Element Mesh Deformation in a Structured Multi-Block Navier-Stokes Code

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.

    2005-01-01

    A mesh deformation scheme consisting of two steps is developed for a structured multi-block Navier-Stokes code. The first step is a finite element solution of either user-defined or automatically generated macro-elements. Macro-elements are hexahedral finite elements created from a subset of points from the full mesh. When assembled, the finite element system spans the complete flow domain. Macro-element moduli vary according to the distance to the nearest surface, resulting in extremely stiff elements near a moving surface and very pliable elements away from boundaries. Solution of the finite element system for the imposed boundary deflections generally produces smoothly varying nodal deflections. The manner in which distance to the nearest surface is computed has been found to critically influence the quality of the element deformation. The second step is a transfinite interpolation which distributes the macro-element nodal deflections to the remaining fluid mesh points. The scheme is demonstrated for several two-dimensional applications.

  19. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.

  20. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaptation on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalances among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35 percent of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives an almost sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3 percent off the optimal solutions, but requires only 1 percent of the computational time.

  1. A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.

    1999-01-01

    The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.

  2. Development and Verification of Unstructured Adaptive Mesh Technique with Edge Compatibility

    NASA Astrophysics Data System (ADS)

    Ito, Kei; Kunugi, Tomoaki; Ohshima, Hiroyuki

    In the design study of the large-sized sodium-cooled fast reactor (JSFR), one key issue is suppression of gas entrainment (GE) phenomena at the gas-liquid interface. Therefore, the authors have developed a high-precision CFD algorithm to evaluate the GE phenomena accurately. The CFD algorithm has been developed on unstructured meshes to establish accurate modeling of the JSFR system. For two-phase interfacial flow simulations, a high-precision volume-of-fluid algorithm is employed. It was confirmed that the developed CFD algorithm could reproduce the GE phenomena in a simple GE experiment. Recently, the authors have developed an important technique for the simulation of the GE phenomena in the JSFR: an unstructured adaptive mesh technique which can apply fine cells dynamically to the region where the GE occurs. In this paper, as a part of that development, a two-dimensional unstructured adaptive mesh technique is discussed. In the two-dimensional adaptive mesh technique, each cell is refined isotropically to reduce distortions of the mesh. In addition, connection cells are formed to eliminate the edge incompatibility between refined and non-refined cells. The two-dimensional unstructured adaptive mesh technique is verified by solving the well-known lid-driven cavity flow problem. As a result, the technique succeeds in providing a high-precision solution, even though a poor-quality, distorted initial mesh is employed. In addition, the simulation error on the two-dimensional unstructured adaptive mesh is much smaller than the error on a structured mesh with a larger number of cells.

  3. Adaptive Meshing of Ship Air-Wake Flowfields

    DTIC Science & Technology

    2014-03-03

    this code are currently generated using Pointwise.[2] This code also uses a second order spatial finite-volume scheme with first order explicit...simulated with the two codes and is shown below. The surface mesh from the 3D mesh generated by Pointwise serves as the geometry for the OctFlow code. A...Geometries", AIAA-2000-1006, 2000. 2. "Pointwise." Pointwise, Inc., http://www.pointwise.com. 3. O'Connell, M., and Karman, S., "Mesh Rupturing: A

  4. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulation is of great interest to coastal communities that need to plan for the rise in sea level during these storms. Unfortunately, these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or forecasting with ensembles of probable storms. One solution to the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR works by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows computational resolution to be placed independently of user interaction and expectations about the dynamics of the flow, as well as in particular regions of interest such as harbors. Many simulations have only been made possible by AMR-type algorithms, which have allowed otherwise impractical computations to be performed at much lower computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues concerning refinement criteria, optimal resolution and refinement ratios, and inundation.

  5. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
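
    The third-order TVD Runge-Kutta integrator mentioned above is commonly written in the Shu-Osher convex-combination form; a minimal sketch of one step for a generic semi-discrete right-hand side L(u) (not RAM's implementation) is:

```python
def tvd_rk3_step(u, L, dt):
    """One third-order TVD (strong-stability-preserving) Runge-Kutta step,
    Shu-Osher form, for du/dt = L(u); u may be a NumPy array of conserved
    variables and L any spatial operator (e.g., a WENO flux divergence)."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))
```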

  6. Discontinuous deformation analysis with second-order finite element meshed block

    NASA Astrophysics Data System (ADS)

    Grayeli, Roozbeh; Mortazavi, Ali

    2006-12-01

    The discontinuous deformation analysis (DDA) with second-order displacement functions was derived based on a six-node triangular mesh in order to satisfy the requirement for accurate calculations in practical applications. The matrices of the equilibrium equations for the second-order DDA were given in detail for program coding. By close comparison with the widely used finite element method and closed-form solutions, the advantages of the modified DDA were illustrated. The program coding was carried out in a C++ environment and the new code applied to three examples with known analytical solutions. Very good agreement was achieved between the analytical results and the numerical results produced by the modified DDA code.

  7. A Parallel Implementation of Multilevel Recursive Spectral Bisection for Application to Adaptive Unstructured Meshes. Chapter 1

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Simon, Horst; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    The design of a parallel implementation of multilevel recursive spectral bisection is described. The goal is to implement a code that is fast enough to enable dynamic repartitioning of adaptive meshes.
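
    Recursive spectral bisection splits a mesh's adjacency graph by the median of the Fiedler vector — the eigenvector for the second-smallest eigenvalue of the graph Laplacian — and recurses on each half. The serial sketch below uses a dense eigensolver for clarity and omits the multilevel coarsening and parallelism that make the implementation described above fast; it is illustrative, not the Barnard-Simon code.

```python
import numpy as np
from scipy.linalg import eigh

def spectral_bisect(adj, nodes):
    """Split `nodes` (an integer NumPy array) into two halves using the Fiedler
    vector of the subgraph Laplacian. `adj` is a symmetric SciPy sparse
    adjacency matrix of the whole mesh graph."""
    sub = adj[nodes][:, nodes].toarray()
    lap = np.diag(sub.sum(axis=1)) - sub         # graph Laplacian of the subgraph
    _, vecs = eigh(lap)                          # dense solve, for clarity only
    fiedler = vecs[:, 1]                         # second-smallest eigenpair
    cut = np.median(fiedler)
    return nodes[fiedler < cut], nodes[fiedler >= cut]

def recursive_bisection(adj, nodes, nparts):
    """Produce `nparts` partitions (a power of two) by recursive bisection."""
    if nparts == 1:
        return [nodes]
    left, right = spectral_bisect(adj, nodes)
    return (recursive_bisection(adj, left, nparts // 2) +
            recursive_bisection(adj, right, nparts // 2))
```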

  8. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm, demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  9. Calibrating nonlinear volcano deformation source parameters in FEMs: The pinned mesh perturbation method. (Invited)

    NASA Astrophysics Data System (ADS)

    Masterlark, T.; Stone, J.; Feigl, K.

    2010-12-01

    The internal structure, loading processes, and effective boundary conditions of a volcano control the deformation that we observe at the Earth’s surface. Forward models of these internal structures and processes allow us to predict the surface deformation. In practice, we are faced with the inverse situation of using surface observations (e.g., InSAR and GPS) to characterize the inaccessible internal structures and processes. Distortions of these characteristics are tied to our ability to: 1) identify and resolve the internal structure; 2) simulate the internal processes over a problem domain having this internal structure; and 3) calibrate parameters that describe these internal processes to the observed deformation. Relatively simple analytical solutions for deformation sources (such as a pressurized magma chamber) embedded in a homogeneous, elastic half-space are commonly used to simulate observed volcano deformation, because they are computationally inexpensive, and thus easily integrated into inverse analyses that seek to characterize the source position and magnitude. However, the half-space models generally do not adequately represent internal distributions of material properties and complex geometric configurations, such as topography, of volcano deformational systems. These incompatibilities are known to severely bias both source parameter estimations and forward model calculations of deformation and stress. Alternatively, a Finite Element Model (FEM) can simulate the elastic response to a pressurized magma chamber over a domain having arbitrary geometry and distribution of material properties. However, the ability to impose perturbations of the source position parameters and automatically reconstruct an acceptable mesh has been an obstacle to implementing FEM-based nonlinear inverse methods to estimate the position of a deformation source. Using InSAR-observed deflation of Okmok volcano, Alaska, during its 1997 eruption as an example, we present the

  10. Methods and evaluations of MRI content-adaptive finite element mesh generation for bioelectromagnetic problems.

    PubMed

    Lee, W H; Kim, T-S; Cho, M H; Ahn, Y B; Lee, S Y

    2006-12-07

    In studying bioelectromagnetic problems, finite element analysis (FEA) offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropic conductivity. For FEA, mesh generation is the first critical requirement and there exist many different approaches. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes (cMeshes), resulting in numerous nodes and elements in modelling the conducting domain, and thereby increasing computational load and demand. In this work, we present efficient content-adaptive mesh generation schemes for complex biological volumes of MR images. The presented methodology is fully automatic and generates FE meshes that are adaptive to the geometrical contents of MR images, allowing optimal representation of conducting domain for FEA. We have also evaluated the effect of cMeshes on FEA in three dimensions by comparing the forward solutions from various cMesh head models to the solutions from the reference FE head model in which fine and equidistant FEs constitute the model. The results show that there is a significant gain in computation time with minor loss in numerical accuracy. We believe that cMeshes should be useful in the FEA of bioelectromagnetic problems.

  11. Methods and evaluations of MRI content-adaptive finite element mesh generation for bioelectromagnetic problems

    NASA Astrophysics Data System (ADS)

    Lee, W. H.; Kim, T.-S.; Cho, M. H.; Ahn, Y. B.; Lee, S. Y.

    2006-12-01

    In studying bioelectromagnetic problems, finite element analysis (FEA) offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropic conductivity. For FEA, mesh generation is the first critical requirement and there exist many different approaches. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes (cMeshes), resulting in numerous nodes and elements in modelling the conducting domain, and thereby increasing computational load and demand. In this work, we present efficient content-adaptive mesh generation schemes for complex biological volumes of MR images. The presented methodology is fully automatic and generates FE meshes that are adaptive to the geometrical contents of MR images, allowing optimal representation of conducting domain for FEA. We have also evaluated the effect of cMeshes on FEA in three dimensions by comparing the forward solutions from various cMesh head models to the solutions from the reference FE head model in which fine and equidistant FEs constitute the model. The results show that there is a significant gain in computation time with minor loss in numerical accuracy. We believe that cMeshes should be useful in the FEA of bioelectromagnetic problems.

  12. Dynamic Mesh Adaptation for Front Evolution Using Discontinuous Galerkin Based Weighted Condition Number Mesh Relaxation

    SciTech Connect

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2016-06-21

    A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.
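
    A minimal sketch (not from the paper) of the general idea of deriving a node-clustering weight from a level-set field: the weight is large near the zero level set and decays away from it. The function name, the exponential form, and the parameter values are illustrative assumptions only.

        # Hypothetical sketch: a level-set-based weight for interface-clustered mesh relaxation.
        import numpy as np

        def interface_weight(phi, amplitude=10.0, width=0.05):
            """Weight that is large near the interface (phi = 0) and ~1 far away.

            phi       : array of level-set (signed-distance) values at mesh nodes
            amplitude : how strongly cells are attracted toward the interface
            width     : decay length of the clustering effect
            """
            return 1.0 + amplitude * np.exp(-np.abs(phi) / width)

        # Example: nodes of a 1D mesh with an interface at x = 0.5
        x = np.linspace(0.0, 1.0, 11)
        phi = x - 0.5                 # low-order level set from a known interface location
        w = interface_weight(phi)
        print(w)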

  13. Composite-Grid Techniques and Adaptive Mesh Refinement in Computational Fluid Dynamics

    DTIC Science & Technology

    1990-01-01

    the equations governing the flow. The patched adaptive mesh refinement technique, devised at Stanford by Oliger et al., copes with these sources of...patched adaptive mesh refinement technique, devised at Stanford by Oliger et al. [OLI84], copes with these sources of error efficiently by refining...differential equation, as in the numerical grid generation methods proposed by Thompson et al. [THO85], or simply a list of pairs of points in

  14. Software abstractions and computational issues in parallel structure adaptive mesh methods for electronic structure calculations

    SciTech Connect

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1997-05-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradient with FAC multigrid preconditioning. We have parallelized our solver using an object- oriented adaptive mesh refinement framework.

  15. Adaptive meshing technique applied to an orthopaedic finite element contact problem.

    PubMed

    Roarty, Colleen M; Grosland, Nicole M

    2004-01-01

    Finite element methods have been applied extensively and with much success in the analysis of orthopaedic implants. Recently, a growing interest has developed in the orthopaedic biomechanics community in how numerical models can be constructed for the optimal solution of problems in contact mechanics. New developments in this area are of paramount importance in the design of improved implants for orthopaedic surgery. Finite element and other computational techniques are widely applied in the analysis and design of hip and knee implants, with additional joints (ankle, shoulder, wrist) attracting increased attention. The objective of this investigation was to develop a simplified adaptive meshing scheme to facilitate the finite element analysis of a dual-curvature total wrist implant. Using currently available software, the analyst has great flexibility in mesh generation, but must prescribe element sizes and refinement schemes throughout the domain of interest. Unfortunately, it is often difficult to predict in advance a mesh spacing that will give acceptable results. Adaptive finite-element mesh capabilities operate to continuously refine the mesh to improve accuracy where it is required, with minimal intervention by the analyst. Such mesh adaptation generally means that in certain areas of the analysis domain, the size of the elements is decreased (or increased) and/or the order of the elements may be increased (or decreased). In concept, mesh adaptation is very appealing. Although there have been several previous applications of adaptive meshing for in-house FE codes, we have coupled an adaptive mesh formulation with the pre-existing commercial programs PATRAN (MacNeal-Schwendler Corp., USA) and ABAQUS (Hibbitt, Karlsson & Sorensen, Pawtucket, RI). In doing so, we have retained several attributes of the commercial software, which are very attractive for orthopaedic implant applications.

  16. A User's Guide to AMR1D: An Instructional Adaptive Mesh Refinement Code for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda

    1996-01-01

    This report documents the code AMR1D, which is currently posted on the World Wide Web (http://sdcd.gsfc.nasa.gov/ESS/exchange/contrib/de-fainchtein/adaptive_mesh_refinement.html). AMR1D is a one-dimensional finite element fluid-dynamics solver, capable of adaptive mesh refinement (AMR). It was written as an instructional tool for AMR on unstructured mesh codes. It is meant to illustrate the minimum requirements for AMR on more than one dimension. For that purpose, it uses the same type of data structure that would be necessary on a two-dimensional AMR code (loosely following the algorithm described by Lohner).
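
    As a rough illustration of the kind of parent/child element bookkeeping an instructional unstructured AMR code needs, the following sketch is offered; it is not taken from AMR1D itself, and the class and method names are invented.

        # Hypothetical sketch of an unstructured 1D AMR element data structure
        # (parent/child links of the kind multi-dimensional AMR codes also need).
        class Element:
            def __init__(self, x_left, x_right, level=0, parent=None):
                self.x_left, self.x_right = x_left, x_right
                self.level = level
                self.parent = parent
                self.children = []

            def refine(self):
                """Split this element into two children at its midpoint."""
                xm = 0.5 * (self.x_left + self.x_right)
                self.children = [Element(self.x_left, xm, self.level + 1, self),
                                 Element(xm, self.x_right, self.level + 1, self)]
                return self.children

            def leaves(self):
                """Return the active (unrefined) elements below this one."""
                if not self.children:
                    return [self]
                return [leaf for child in self.children for leaf in child.leaves()]

        root = Element(0.0, 1.0)
        root.refine()[0].refine()          # refine the left child once more
        print([(e.x_left, e.x_right, e.level) for e in root.leaves()])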

  17. Analysis of Adaptive Mesh Refinement for IMEX Discontinuous Galerkin Solutions of the Compressible Euler Equations with Application to Atmospheric Simulations

    DTIC Science & Technology

    2013-01-01

    Analysis of Adaptive Mesh Refinement for IMEX Discontinuous Galerkin Solutions of the Compressible Euler Equations with Application to Atmospheric...order discontinuous Galerkin method on quadrilateral grids with non-conforming elements. We perform a detailed analysis of the cost of AMR by comparing...adaptive mesh refinement, discontinuous Galerkin method, non-conforming mesh, IMEX, compressible Euler equations, atmospheric simulations 1. Introduction

  18. The whole mesh deformation model: a fast image segmentation method suitable for effective parallelization

    NASA Astrophysics Data System (ADS)

    Lenkiewicz, Przemyslaw; Pereira, Manuela; Freire, Mário M.; Fernandes, José

    2013-12-01

    In this article, we propose a novel image segmentation method called the whole mesh deformation (WMD) model, which aims at addressing the problems of modern medical imaging. Such problems have arisen from the combination of several factors: (1) significant growth in medical image volume sizes due to increasing capabilities of medical acquisition devices; (2) the desire to increase the complexity of image processing algorithms in order to explore new functionality; (3) the shift in processor development towards multiple processing units rather than higher bus speeds and more operations per second from a single processing unit. Our solution is based on the concept of deformable models and is characterized by a very effective and precise segmentation capability. The proposed WMD model uses a volumetric mesh instead of a contour or a surface to represent the segmented shapes of interest, which allows exploiting more information in the image and obtaining results in shorter times, independently of image contents. The model also handles topology changes well and allows effective parallelization of the workflow, which makes it a very good choice for large datasets. We present a precise model description, followed by experiments on artificial images and real medical data.

  19. Aeroelastic analysis of wings using the Euler equations with a deforming mesh

    NASA Technical Reports Server (NTRS)

    Robinson, Brian A.; Batina, John T.; Yang, Henry T. Y.

    1990-01-01

    Modifications to the CFL3D three dimensional unsteady Euler/Navier-Stokes code for the aeroelastic analysis of wings are described. The modifications involve including a deforming mesh capability which can move the mesh to continuously conform to the instantaneous shape of the aeroelastically deforming wing, and including the structural equations of motion for their simultaneous time-integration with the governing flow equations. Calculations were performed using the Euler equations to verify the modifications to the code and as a first step toward aeroelastic analysis using the Navier-Stokes equations. Results are presented for the NACA 0012 airfoil and a 45 deg sweptback wing to demonstrate applications of CFL3D for generalized force computations and aeroelastic analysis. Comparisons are made with published Euler results for the NACA 0012 airfoil and with experimental flutter data for the 45 deg sweptback wing to assess the accuracy of the present capability. These comparisons show good agreement and, thus, the CFL3D code may be used with confidence for aeroelastic analysis of wings.

  20. Aeroelastic analysis of wings using the Euler equations with a deforming mesh

    NASA Technical Reports Server (NTRS)

    Robinson, Brian A.; Batina, John T.; Yang, Henry T. Y.

    1990-01-01

    Modifications to the CFL3D three-dimensional unsteady Euler/Navier-Stokes code for the aeroelastic analysis of wings are described. The modifications involve including a deforming mesh capability which can move the mesh to continuously conform to the instantaneous shape of the aeroelastically deforming wing, and including the structural equations of motion for their simultaneous time-integration with the governing flow equations. Calculations were performed using the Euler equations to verify the modifications to the code and as a first step toward aeroelastic analysis using the Navier-Stokes equations. Results are presented for the NACA 0012 airfoil and a 45 deg sweptback wing to demonstrate applications of CFL3D for generalized force computations and aeroelastic analysis. Comparisons are made with published Euler results for the NACA 0012 airfoil and with experimental flutter data for the 45 deg sweptback wing to assess the accuracy of the present capability. These comparisons show good agreement and, thus, the CFL3D code may be used with confidence for aeroelastic analysis of wings. The paper describes the modifications that were made to the code and presents results and comparisons which assess the capability.

  1. Zonal multigrid solution of compressible flow problems on unstructured and adaptive meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1989-01-01

    The simultaneous use of adaptive meshing techniques with a multigrid strategy for solving the 2-D Euler equations in the context of unstructured meshes is studied. To obtain optimal efficiency, methods capable of computing locally improved solutions without recourse to global recalculations are pursued. A method for locally refining an existing unstructured mesh, without regenerating a new global mesh is employed, and the domain is automatically partitioned into refined and unrefined regions. Two multigrid strategies are developed. In the first, time-stepping is performed on a global fine mesh covering the entire domain, and convergence acceleration is achieved through the use of zonal coarse grid accelerator meshes, which lie under the adaptively refined regions of the global fine mesh. Both schemes are shown to produce similar convergence rates to each other, and also with respect to a previously developed global multigrid algorithm, which performs time-stepping throughout the entire domain, on each mesh level. However, the present schemes exhibit higher computational efficiency due to the smaller number of operations on each level.

  2. Standard and goal-oriented adaptive mesh refinement applied to radiation transport on 2D unstructured triangular meshes

    SciTech Connect

    Wang Yaqi; Ragusa, Jean C.

    2011-02-01

    Standard and goal-oriented adaptive mesh refinement (AMR) techniques are presented for the linear Boltzmann transport equation. A posteriori error estimates are employed to drive the AMR process and are based on angular-moment information rather than on directional information, leading to direction-independent adapted meshes. An error estimate based on a two-mesh approach and a jump-based error indicator are compared for various test problems. In addition to the standard AMR approach, where the global error in the solution is diminished, a goal-oriented AMR procedure is devised and aims at reducing the error in user-specified quantities of interest. The quantities of interest are functionals of the solution and may include, for instance, point-wise flux values or average reaction rates in a subdomain. A high-order (up to order 4) Discontinuous Galerkin technique with standard upwinding is employed for the spatial discretization; the discrete ordinates method is used to treat the angular variable.
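
    For readers unfamiliar with jump-based indicators, the following toy 1D sketch flags cells whose inter-cell solution jumps are largest. It is not the authors' estimator; the function name and the flagging threshold are illustrative assumptions.

        # Hypothetical sketch of a jump-based refinement indicator for a cell-wise
        # (discontinuous) solution: flag cells whose inter-cell jumps are largest.
        import numpy as np

        def jump_indicator(u):
            """u: cell-averaged solution values on a 1D mesh; returns one indicator per cell."""
            jumps = np.abs(np.diff(u))                    # |u_{i+1} - u_i| at interior faces
            eta = np.zeros_like(u)
            eta[:-1] += jumps                             # contribution from the right face
            eta[1:]  += jumps                             # contribution from the left face
            return eta

        u = np.array([1.0, 1.01, 1.02, 5.0, 5.01, 5.0])   # a sharp front between cells 2 and 3
        eta = jump_indicator(u)
        flag = eta > 0.5 * eta.max()                      # refine the most "jumpy" cells
        print(eta, flag)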

  3. Adaptive moving mesh methods for simulating one-dimensional groundwater problems with sharp moving fronts

    USGS Publications Warehouse

    Huang, W.; Zheng, Lingyun; Zhan, X.

    2002-01-01

    Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost, when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
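
    The equidistribution idea behind such monitor-function-driven mesh movement can be illustrated with a static 1D sketch. This is a simplification of, not a substitute for, the moving mesh PDE approach described above; the arc-length monitor and all names are assumptions.

        # Hypothetical sketch: redistribute 1D mesh nodes by equidistributing an
        # arc-length monitor function M = sqrt(1 + u_x^2).
        import numpy as np

        def equidistribute(x, u, n_new=None):
            """Return new node locations so each cell carries roughly equal monitor 'mass'."""
            n_new = n_new or len(x)
            ux = np.gradient(u, x)
            M = np.sqrt(1.0 + ux**2)                        # monitor function at nodes
            cell_M = 0.5 * (M[:-1] + M[1:]) * np.diff(x)    # monitor mass per cell
            s = np.concatenate([[0.0], np.cumsum(cell_M)])  # cumulative monitor
            targets = np.linspace(0.0, s[-1], n_new)
            return np.interp(targets, s, x)                 # invert the cumulative monitor

        x = np.linspace(0.0, 1.0, 21)
        u = np.tanh((x - 0.5) / 0.02)                       # sharp front at x = 0.5
        x_new = equidistribute(x, u)
        print(np.round(x_new, 3))                           # nodes cluster near the front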

  4. Adaptive, Tactical Mesh Networking: Control Base MANET Model

    DTIC Science & Technology

    2010-09-01

    (Search snippet: reference-list and list-of-figures fragments only, including a citation of N. Sidiropoulos, “Multiuser Transmit Beamforming” (IEEE Xplore, http://ieeexplore.ieee.org), and figure titles such as “Mobile Mesh Segments of TNT Testbed,” “Infrastructure and Ad Hoc Mode of IEEE 802.11,” “The Power Spectral Density of OFDM,” and “A Typical IEEE 802.16 Network.”)

  5. Experiences with an adaptive mesh refinement algorithm in numerical relativity.

    NASA Astrophysics Data System (ADS)

    Choptuik, M. W.

    An implementation of the Berger/Oliger mesh refinement algorithm for a model problem in numerical relativity is described. The principles of operation of the method are reviewed and its use in conjunction with leap-frog schemes is considered. The performance of the algorithm is illustrated with results from a study of the Einstein/massless scalar field equations in spherical symmetry.

  6. Parallel Adaptive Mesh Refinement for High-Order Finite-Volume Schemes in Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Schwing, Alan Michael

    For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source for error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable for unsteady simulations and refinement and coarsening of the grid does not impact the conservatism of the underlying numerics. The effect on high-order numerical fluxes of fourth- and sixth-order are explored. Provided the criteria for refinement is appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depend on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable

  7. Energy dependent mesh adaptivity of discontinuous isogeometric discrete ordinate methods with dual weighted residual error estimators

    NASA Astrophysics Data System (ADS)

    Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.

    2017-04-01

    In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the “exact” adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is approximately 100× more accurate than uniform refinement for the same amount of computational effort for a 67 group deep penetration shielding problem.

  8. A Numerical Study of Mesh Adaptivity in Multiphase Flows with Non-Newtonian Fluids

    NASA Astrophysics Data System (ADS)

    Percival, James; Pavlidis, Dimitrios; Xie, Zhihua; Alberini, Federico; Simmons, Mark; Pain, Christopher; Matar, Omar

    2014-11-01

    We present an investigation into the computational efficiency benefits of dynamic mesh adaptivity in the numerical simulation of transient multiphase fluid flow problems involving Non-Newtonian fluids. Such fluids appear in a range of industrial applications, from printing inks to toothpastes and introduce new challenges for mesh adaptivity due to the additional “memory” of viscoelastic fluids. Nevertheless, the multiscale nature of these flows implies huge potential benefits for a successful implementation. The study is performed using the open source package Fluidity, which couples an unstructured mesh control volume finite element solver for the multiphase Navier-Stokes equations to a dynamic anisotropic mesh adaptivity algorithm, based on estimated solution interpolation error criteria, and conservative mesh-to-mesh interpolation routine. The code is applied to problems involving rheologies ranging from simple Newtonian to shear-thinning to viscoelastic materials and verified against experimental data for various industrial and microfluidic flows. This work was undertaken as part of the EPSRC MEMPHIS programme grant EP/K003976/1.

  9. Automatic off-body overset adaptive Cartesian mesh method based on an octree approach

    NASA Astrophysics Data System (ADS)

    Péron, Stéphanie; Benoit, Christophe

    2013-01-01

    This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for the flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, assuming each octree leaf node defines a structured Cartesian block. This enables one to take into account the large scale discrepancies in terms of resolution between the different bodies involved in the simulation, with minimum memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first one generates Adaptive Mesh Refinement (AMR) type grid systems, and the second one generates abutting or minimally overlapping Cartesian grid set. We also introduce an algorithm to control the number of points at each adaptation, that automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing tip vortex computation assesses the capability of the method to capture accurately the flow features.
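
    The octree-leaf-to-Cartesian-block conversion can be illustrated, in 2D for brevity, with a toy quadtree sketch. This is not the authors' implementation; the class names and the refinement test are assumptions.

        # Hypothetical 2D analogue (quadtree) of the octree-to-Cartesian-block idea:
        # each leaf node defines a small structured Cartesian block.
        class QuadNode:
            def __init__(self, x0, y0, size, level=0):
                self.x0, self.y0, self.size, self.level = x0, y0, size, level
                self.children = []

            def refine_near(self, bodies, max_level):
                """Recursively refine leaves whose box contains a body point."""
                if self.level >= max_level:
                    return
                half = 0.5 * self.size
                inside = any(self.x0 <= bx < self.x0 + self.size and
                             self.y0 <= by < self.y0 + self.size for bx, by in bodies)
                if inside:
                    self.children = [QuadNode(self.x0 + i * half, self.y0 + j * half,
                                              half, self.level + 1)
                                     for i in (0, 1) for j in (0, 1)]
                    for c in self.children:
                        c.refine_near(bodies, max_level)

            def leaf_blocks(self, n=8):
                """Each leaf becomes an n-by-n Cartesian block (origin and spacing returned)."""
                if not self.children:
                    return [(self.x0, self.y0, self.size / n)]
                return [b for c in self.children for b in c.leaf_blocks(n)]

        root = QuadNode(0.0, 0.0, 1.0)
        root.refine_near(bodies=[(0.3, 0.7)], max_level=3)
        print(len(root.leaf_blocks()), "Cartesian blocks")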

  10. Adaptive unstructured meshing for thermal stress analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Dechaumphai, Pramote

    1992-01-01

    An adaptive unstructured meshing technique for mechanical and thermal stress analysis of built-up structures has been developed. A triangular membrane finite element and a new plate bending element are evaluated on a panel with a circular cutout and a frame stiffened panel. The adaptive unstructured meshing technique, without a priori knowledge of the solution to the problem, generates clustered elements only where needed. An improved solution accuracy is obtained at a reduced problem size and analysis computational time as compared to the results produced by the standard finite element procedure.

  11. Parallelization of Unsteady Adaptive Mesh Refinement for Unstructured Navier-Stokes Solvers

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2014-01-01

    This paper explores the implementation of the MPI parallelization in a Navier-Stokes solver using adaptive mesh refinement. Viscous and inviscid test problems are considered for the purpose of benchmarking, as are implicit and explicit time advancement methods. The main test problem for comparison includes effects from boundary layers and other viscous features and requires a large number of grid points for accurate computation. Experimental validation against double cone experiments in hypersonic flow is shown. The adaptive mesh refinement shows promise for a staple test problem in the hypersonic community. Extension to more advanced techniques for more complicated flows is described.

  12. Using Multi-threading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Saini, Subhash (Technical Monitor)

    1998-01-01

    In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system, which offers sufficient capabilities to tackle this problem. We implement the adaption phase of FE applications on triangular meshes and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.

  13. Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions

    SciTech Connect

    Chen, Xiaodong; Yang, Vigor

    2014-07-15

    In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales. A notable example is liquid-jet atomization and subsequent droplet dynamics. It is essential to characterize the detailed flow physics with variable length scales with high fidelity, in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed based on distance- and topology-oriented criteria for thin regions with confining wall/plane of symmetry and in any situation, respectively. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares against a critical value, the ratio of an interfacial cell size to the distance between the mass center of the cell and a reference plane. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The requirement for interfacial mesh refinement can be detected swiftly, without the need of thickness information, equation solving, variable averaging or mesh repairing. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness. These include the dynamics of colliding droplets, droplet motions in a microchannel, and atomization of liquid impinging jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.
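
    A minimal sketch of a distance-oriented test of this kind follows; it is not the authors' code, and the function signature and the critical ratio are illustrative assumptions.

        # Hypothetical sketch of the distance-oriented test described above: refine an
        # interfacial cell when (cell size) / (distance of its mass center to a
        # reference plane) exceeds a critical ratio.
        import numpy as np

        def needs_refinement(cell_size, cell_center, plane_point, plane_normal,
                             critical_ratio=0.5):
            n = np.asarray(plane_normal, dtype=float)
            n /= np.linalg.norm(n)
            distance = abs(np.dot(np.asarray(cell_center) - np.asarray(plane_point), n))
            distance = max(distance, 1e-12)          # guard against division by zero
            return cell_size / distance > critical_ratio

        # A 0.1-wide interfacial cell whose mass center is 0.15 above a wall at z = 0:
        print(needs_refinement(0.1, (0.0, 0.0, 0.15), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))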

  14. Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature in its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, and local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the restricted solution from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures grid converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module is demonstrated using both simple model problems and complex three dimensional examples using meshes with 10^6 to 10^7 cells.
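
    The core of a tau-extrapolation style indicator, restricting the fine-grid solution and evaluating the coarse-grid operator on it, can be sketched in 1D as follows. This is a simplified illustration, not the paper's multilevel Cartesian implementation; the Laplacian operator and all names are assumptions.

        # Hypothetical 1D sketch: restrict the fine-grid solution to the coarse grid and
        # evaluate the coarse operator on it; where the result is large, the local
        # truncation error is likely large.
        import numpy as np

        def laplacian_residual(u, h):
            """Second-difference residual of u'' = 0 on a uniform grid (interior points)."""
            return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2

        def tau_indicator(u_fine, h_fine):
            u_coarse = u_fine[::2]                       # injection restriction to the coarse grid
            return np.abs(laplacian_residual(u_coarse, 2.0 * h_fine))

        x = np.linspace(0.0, 1.0, 65)
        u = np.tanh((x - 0.5) / 0.05)                    # smooth except near x = 0.5
        tau = tau_indicator(u, x[1] - x[0])
        print(np.argmax(tau))                            # largest indicator near the steep region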

  15. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.

  16. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    SciTech Connect

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.

  17. Adaptive mesh refinement and multilevel iteration for multiphase, multicomponent flow in porous media

    SciTech Connect

    Hornung, R.D.

    1996-12-31

    An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.

  18. Parallel, Gradient-Based Anisotropic Mesh Adaptation for Re-entry Vehicle Configurations

    NASA Technical Reports Server (NTRS)

    Bibb, Karen L.; Gnoffo, Peter A.; Park, Michael A.; Jones, William T.

    2006-01-01

    Two gradient-based adaptation methodologies have been implemented into the Fun3d/refine/GridEx infrastructure. A spring-analogy adaptation which provides for nodal movement to cluster mesh nodes in the vicinity of strong shocks has been extended for general use within Fun3d, and is demonstrated for a 70° sphere cone at Mach 2. A more general feature-based adaptation metric has been developed for use with the adaptation mechanics available in Fun3d, and is applicable to any unstructured, tetrahedral, flow solver. The basic functionality of general adaptation is explored through a case of flow over the forebody of a 70° sphere cone at Mach 6. A practical application of Mach 10 flow over an Apollo capsule, computed with the Felisa flow solver, is given to compare the adaptive mesh refinement with uniform mesh refinement. The examples of the paper demonstrate that the gradient-based adaptation capability as implemented can give an improvement in solution quality.
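
    A 1D toy version of the spring-analogy idea, where edge stiffness grows with the local solution gradient so nodes drift toward high-gradient regions, is sketched below. It is illustrative only, not the Fun3d implementation; all names and constants are assumptions.

        # Hypothetical 1D spring-analogy relaxation: stiffer edges contract more,
        # clustering nodes where the solution gradient is large.
        import numpy as np

        def spring_relax(x, u, n_sweeps=20):
            x = x.copy()
            ux = np.abs(np.gradient(u, x))          # gradient on the initial mesh
            k = 1.0 + 20.0 * ux                     # node-based stiffness factor
            k_edge = 0.5 * (k[:-1] + k[1:])         # one stiffness per edge
            for _ in range(n_sweeps):
                # Interior nodes relax to the stiffness-weighted average of their neighbors.
                x[1:-1] = (k_edge[:-1] * x[:-2] + k_edge[1:] * x[2:]) / (k_edge[:-1] + k_edge[1:])
            return x

        x = np.linspace(0.0, 1.0, 21)
        u = np.where(x < 0.6, 1.0, 0.2)             # a "shock" at x = 0.6
        print(np.round(spring_relax(x, u), 3))      # nodes drift toward the shock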

  19. Failure of Anisotropic Unstructured Mesh Adaption Based on Multidimensional Residual Minimization

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Kleb, William L.

    2003-01-01

    An automated anisotropic unstructured mesh adaptation strategy is proposed, implemented, and assessed for the discretization of viscous flows. The adaption criterion is based upon the minimization of the residual fluctuations of a multidimensional upwind viscous flow solver. For scalar advection, this adaption strategy has been shown to use fewer grid points than gradient-based adaption, naturally aligning mesh edges with discontinuities and characteristic lines. The adaption utilizes a compact stencil and is local in scope, with four fundamental operations: point insertion, point deletion, edge swapping, and nodal displacement. Evaluation of the solution-adaptive strategy is performed for a two-dimensional blunt body laminar wind tunnel case at Mach 10. The results demonstrate that the strategy suffers from a lack of robustness, particularly with regard to alignment of the bow shock in the vicinity of the stagnation streamline. In general, constraining the adaption to such a degree as to maintain robustness results in negligible improvement to the solution. Because the present method fails to consistently or significantly improve the flow solution, it is rejected in favor of simple uniform mesh refinement.

  20. Time-accurate anisotropic mesh adaptation for three-dimensional time-dependent problems with body-fitted moving geometries

    NASA Astrophysics Data System (ADS)

    Barral, N.; Olivier, G.; Alauzet, F.

    2017-02-01

    Anisotropic metric-based mesh adaptation has proved its efficiency to reduce the CPU time of steady and unsteady simulations while improving their accuracy. However, its extension to time-dependent problems with body-fitted moving geometries is far from straightforward. This paper establishes a well-founded framework for multiscale mesh adaptation of unsteady problems with moving boundaries. This framework is based on a novel space-time analysis of the interpolation error, within the continuous mesh theory. An optimal metric field, called ALE metric field, is derived, which takes into account the movement of the mesh during the adaptation. Based on this analysis, the global fixed-point adaptation algorithm for time-dependent simulations is extended to moving boundary problems, within the range of body-fitted moving meshes and ALE simulations. Finally, three dimensional adaptive simulations with moving boundaries are presented to validate the proposed approach.

  1. Multiphase flow modelling of volcanic ash particle settling in water using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Jacobs, C. T.; Collins, G. S.; Piggott, M. D.; Kramer, S. C.; Wilson, C. R. G.

    2013-02-01

    Small-scale experiments of volcanic ash particle settling in water have demonstrated that ash particles can either settle slowly and individually, or rapidly and collectively as a gravitationally unstable ash-laden plume. This has important implications for the emplacement of tephra deposits on the seabed. Numerical modelling has the potential to extend the results of laboratory experiments to larger scales and explore the conditions under which plumes may form and persist, but many existing models are computationally restricted by the fixed mesh approaches that they employ. In contrast, this paper presents a new multiphase flow model that uses an adaptive unstructured mesh approach. As a simulation progresses, the mesh is optimized to focus numerical resolution in areas important to the dynamics and decrease it where it is not needed, thereby potentially reducing computational requirements. Model verification is performed using the method of manufactured solutions, which shows the correct solution convergence rates. Model validation and application considers 2-D simulations of plume formation in a water tank which replicate published laboratory experiments. The numerically predicted settling velocities for both individual particles and plumes, as well as instability behaviour, agree well with experimental data and observations. Plume settling is clearly hindered by the presence of a salinity gradient, and its influence must therefore be taken into account when considering particles in bodies of saline water. Furthermore, individual particles settle in the laminar flow regime while plume settling is shown (by plume Reynolds numbers greater than unity) to be in the turbulent flow regime, which has a significant impact on entrainment and settling rates. Mesh adaptivity maintains solution accuracy while providing a substantial reduction in computational requirements when compared to the same simulation performed using a fixed mesh, highlighting the benefits of an

  2. Adaptive Meshing of Ship Air-Wake Flowfields

    DTIC Science & Technology

    2014-10-21

    resolve gradients of the adaptation function. The third method is a meshless method that uses a physics-based force model to move nodes around to...method that uses a physics-based force model to move nodes around to resolve the geometry and flowfield. The initial phase of the research conducted...three codes all solve the unsteady Euler equations, but use different discretization strategies. The target application is an aircraft in a landing

  3. Time-dependent grid adaptation for meshes of triangles and tetrahedra

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.

    1993-01-01

    This paper presents in viewgraph form a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich in regions of relatively large errors and to locally coarsen in regions of relatively small errors. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high aspect ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown, but in general requires user supervision for a more efficient solution.

  4. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
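
    The Lagrangian rate-distortion selection step can be illustrated with a toy sketch; it is not the authors' codec, and the predictor names, costs, and lambda values are invented for illustration.

        # Hypothetical sketch of Lagrangian rate-distortion predictor selection: for
        # each mesh region, pick the predictor minimizing J = D + lambda * R.
        def select_predictor(costs, lam):
            """costs: list of (name, distortion, rate_bits) for the candidate predictors."""
            return min(costs, key=lambda c: c[1] + lam * c[2])

        region_costs = [("butterfly", 4.2, 1200),
                        ("midpoint",  5.0,  900),
                        ("flat",      9.5,  400)]
        for lam in (0.0005, 0.005, 0.02):       # small lambda favors distortion, large favors rate
            name = select_predictor(region_costs, lam)[0]
            print(f"lambda={lam}: choose {name}")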

  5. Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David; Colella, Phillip

    1995-01-01

    To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.

  6. Adaptive mesh refinement strategies in isogeometric analysis— A computational comparison

    NASA Astrophysics Data System (ADS)

    Hennig, Paul; Kästner, Markus; Morgenstern, Philipp; Peterseim, Daniel

    2017-04-01

    We explain four variants of an adaptive finite element method with cubic splines and compare their performance in simple elliptic model problems. The methods in comparison are Truncated Hierarchical B-splines with two different refinement strategies, T-splines with the refinement strategy introduced by Scott et al. in 2012, and T-splines with an alternative refinement strategy introduced by some of the authors. In four examples, including singular and non-singular problems of linear elasticity and the Poisson problem, the H1-errors of the discrete solutions, the number of degrees of freedom as well as sparsity patterns and condition numbers of the discretized problem are compared.

  7. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  8. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
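
    The threshold-selection idea, choosing the refinement threshold near the knee (maximum curvature) of the flagged-cells-versus-threshold curve, can be sketched as follows. This is an illustrative approximation, not the authors' procedure; all names and the synthetic indicator data are assumptions.

        # Hypothetical sketch of picking a refinement threshold from the "knee" of the
        # curve of number-of-flagged-cells versus threshold.
        import numpy as np

        def flagged_counts(indicator, thresholds):
            return np.array([(indicator > t).sum() for t in thresholds])

        def knee_threshold(indicator, n_samples=50):
            thresholds = np.linspace(indicator.min(), indicator.max(), n_samples)
            counts = flagged_counts(indicator, thresholds).astype(float)
            curvature = np.abs(np.gradient(np.gradient(counts)))   # discrete 2nd derivative
            return thresholds[np.argmax(curvature)]

        rng = np.random.default_rng(0)
        indicator = np.concatenate([rng.normal(0.1, 0.02, 900),    # mostly converged cells
                                    rng.normal(1.0, 0.10, 100)])   # a few poorly resolved cells
        t = knee_threshold(indicator)
        print(round(t, 3), (indicator > t).sum(), "cells flagged")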

  9. Fast animation of lightning using an adaptive mesh.

    PubMed

    Kim, Theodore; Lin, Ming C

    2007-01-01

    We present a fast method for simulating, animating, and rendering lightning using adaptive grids. The "dielectric breakdown model" is an elegant algorithm for electrical pattern formation that we extend to enable animation of lightning. The simulation can be slow, particularly in 3D, because it involves solving a large Poisson problem. Losasso et al. recently proposed an octree data structure for simulating water and smoke, and we show that this discretization can be applied to the problem of lightning simulation as well. However, implementing the incomplete Cholesky conjugate gradient (ICCG) solver for this problem can be daunting, so we provide an extensive discussion of implementation issues. ICCG solvers can usually be accelerated using "Eisenstat's trick," but the trick cannot be directly applied to the adaptive case. Fortunately, we show that an "almost incomplete Cholesky" factorization can be computed so that Eisenstat's trick can still be used. We then present a fast rendering method based on convolution that is competitive with Monte Carlo ray tracing but orders of magnitude faster, and we also show how to further improve the visual results using jittering.

  10. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    NASA Astrophysics Data System (ADS)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution
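
    A toy sketch of random, dynamic down-sampling of soundings with a subset size tied to the regularization parameter is given below. It is purely illustrative; the linear schedule, the fractions, and all names are assumptions, not the authors' adaptive algorithm.

        # Hypothetical sketch: each inversion iteration works on a freshly drawn random
        # subset of soundings, growing as the regularization is cooled.
        import numpy as np

        def sounding_subset(n_soundings, beta, beta0, rng,
                            min_fraction=0.05, max_fraction=0.5):
            """Draw a random subset of sounding indices for this iteration.

            beta / beta0 : current and initial regularization parameters (beta decreases
                           as the inversion proceeds, so the subset grows).
            """
            progress = 1.0 - min(beta / beta0, 1.0)
            fraction = min_fraction + (max_fraction - min_fraction) * progress
            n_pick = max(1, int(fraction * n_soundings))
            return rng.choice(n_soundings, size=n_pick, replace=False)

        rng = np.random.default_rng(1)
        for beta in (1.0, 0.3, 0.05):                   # a cooling regularization schedule
            subset = sounding_subset(10000, beta, beta0=1.0, rng=rng)
            print(f"beta={beta:4.2f}: using {len(subset)} of 10000 soundings")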

  11. Using high-order methods on adaptively refined block-structured meshes - discretizations, interpolations, and filters.

    SciTech Connect

    Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.

    2006-01-01

    Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.

  12. Fabrication Methods for Adaptive Deformable Mirrors

    NASA Technical Reports Server (NTRS)

    Toda, Risaku; White, Victor E.; Manohara, Harish; Patterson, Keith D.; Yamamoto, Namiko; Gdoutos, Eleftherios; Steeves, John B.; Daraio, Chiara; Pellegrino, Sergio

    2013-01-01

    Previously, it was difficult to fabricate deformable mirrors made by piezoelectric actuators. This is because numerous actuators need to be precisely assembled to control the surface shape of the mirror. Two approaches have been developed. Both approaches begin by depositing a stack of piezoelectric films and electrodes over a silicon wafer substrate. In the first approach, the silicon wafer is removed initially by plasma-based reactive ion etching (RIE), and non-plasma dry etching with xenon difluoride (XeF2). In the second approach, the actuator film stack is immersed in a liquid such as deionized water. The adhesion between the actuator film stack and the substrate is relatively weak. Simply by seeping liquid between the film and the substrate, the actuator film stack is gently released from the substrate. The deformable mirror contains multiple piezoelectric membrane layers as well as multiple electrode layers (some are patterned and some are unpatterned). For the piezoelectric layer, polyvinylidene fluoride (PVDF), or its co-polymer, poly(vinylidene fluoride-trifluoroethylene), P(VDF-TrFE), is used. The surface of the mirror is coated with a reflective coating. The actuator film stack is fabricated on silicon, or silicon on insulator (SOI) substrate, by repeatedly spin-coating the PVDF or P(VDF-TrFE) solution and patterned metal (electrode) deposition. In the first approach, the actuator film stack is prepared on SOI substrate. Then, the thick silicon (typically 500-micron thick and called handle silicon) of the SOI wafer is etched by a deep reactive ion etching process tool (SF6-based plasma etching). This deep RIE stops at the middle SiO2 layer. The middle SiO2 layer is etched by either HF-based wet etching or dry plasma etch. The thin silicon layer (generally called a device layer) of SOI is removed by XeF2 dry etch. This XeF2 etch is very gentle and extremely selective, so the released mirror membrane is not damaged. It is possible to replace SOI with silicon

  13. Deformable mirrors for open-loop adaptive optics

    NASA Astrophysics Data System (ADS)

    Kellerer, A.; Vidal, F.; Gendron, E.; Hubert, Z.; Perret, D.; Rousset, G.

    2012-07-01

    We characterize the performance of deformable mirrors for use in open-loop regimes. This is especially relevant for Multi Object Adaptive Optics (MOAO), or for closed-loop schemes that require improved accuracies. Deformable mirrors are usually characterized by standard parameters, such as influence functions, linearity, hysteresis, etc. We show that these parameters are insufficient for characterizing open-loop performance and that a deeper analysis of the mirror's behavior is then required. The measurements on the deformable mirrors were performed in 2007 on the AO test bench of the Meudon observatory, SESAME.

  14. Overview of deformable mirror technologies for adaptive optics and astronomy

    NASA Astrophysics Data System (ADS)

    Madec, P.-Y.

    2012-07-01

    From the ardent bucklers used during the Syracuse battle to set fire to Romans’ ships to more contemporary piezoelectric deformable mirrors widely used in astronomy, from very large voice coil deformable mirrors considered in future Extremely Large Telescopes to very small and compact ones embedded in Multi Object Adaptive Optics systems, this paper aims at giving an overview of Deformable Mirror technology for Adaptive Optics and Astronomy. First the main drivers for the design of Deformable Mirrors are recalled, not only related to atmospheric aberration compensation but also to environmental conditions or mechanical constraints. Then the different technologies available today for the manufacturing of Deformable Mirrors will be described, pros and cons analyzed. A review of the Companies and Institutes with capabilities in delivering Deformable Mirrors to astronomers will be presented, as well as lessons learned from the past 25 years of technological development and operation on sky. In conclusion, perspective will be tentatively drawn for what regards the future of Deformable Mirror technology for Astronomy.

  15. Mesh adaption for efficient multiscale implementation of one-dimensional turbulence

    NASA Astrophysics Data System (ADS)

    Lignell, D. O.; Kerstein, A. R.; Sun, G.; Monson, E. I.

    2013-06-01

    One-Dimensional Turbulence (ODT) is a stochastic model for turbulent flow simulation. In an atmospheric context, it is analogous to single-column modeling (SCM) in that it lives on a 1D spatial domain, but different in that it time advances individual flow realizations rather than ensemble-averaged quantities. The lack of averaging enables a physically sound multiscale treatment, which is useful for resolving sporadic localized phenomena, as seen in stably stratified regimes, and sharp interfaces, as observed where a convective layer encounters a stable overlying zone. In such flows, the relevant scale range is so large that it is beneficial to enhance model performance by introducing an adaptive mesh. An adaptive-mesh algorithm that provides the desired performance characteristics is described and demonstrated, and its implications for the ODT advancement scheme are explained.

  16. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required through dynamic adaption. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  17. Implementation of Implicit Adaptive Mesh Refinement in an Unstructured Finite-Volume Flow Solver

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2013-01-01

    This paper explores the implementation of adaptive mesh refinement in an unstructured, finite-volume solver. Unsteady and steady problems are considered. The effect on the recovery of high-order numerics is explored and the results are favorable. Important to this work is the ability to provide a path for efficient, implicit time advancement. A method using a simple refinement sensor based on undivided differences is discussed and applied to a practical problem: a shock-shock interaction on a hypersonic, inviscid double-wedge. Cases are compared to uniform grids without the use of adapted meshes in order to assess error and computational expense. Discussion of difficulties, advances, and future work prepare this method for additional research. The potential for this method in more complicated flows is described.
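    The refinement sensor itself is only summarized above; as a rough illustration (the function name, scaling, and threshold value are hypothetical, not taken from the paper), an undivided-difference sensor on a 1D array of cell averages could look like the following Python sketch:

      import numpy as np

      def undivided_difference_sensor(q, threshold=0.05):
          # Flag cells whose scaled second undivided difference of q exceeds the
          # threshold; "undivided" means no division by the mesh spacing.
          flags = np.zeros(q.size, dtype=bool)
          d2 = np.abs(q[:-2] - 2.0 * q[1:-1] + q[2:])
          scale = np.abs(q[:-2]) + 2.0 * np.abs(q[1:-1]) + np.abs(q[2:]) + 1e-12
          flags[1:-1] = d2 / scale > threshold
          return flags

      # Example: cells around a density jump get flagged for refinement.
      q = np.where(np.linspace(0.0, 1.0, 50) < 0.5, 1.0, 0.125)
      print(np.where(undivided_difference_sensor(q))[0])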

  18. TRIM: A finite-volume MHD algorithm for an unstructured adaptive mesh

    SciTech Connect

    Schnack, D.D.; Lottati, I.; Mikic, Z.

    1995-07-01

    The authors describe TRIM, an MHD code which uses a finite volume discretization of the MHD equations on an unstructured adaptive grid of triangles in the poloidal plane. They apply it to problems related to modeling tokamak toroidal plasmas. The toroidal direction is treated by a pseudospectral method. Care was taken to center variables appropriately on the mesh and to construct a self-adjoint diffusion operator for cell-centered variables.

  19. Adaptive mesh refinement for time-domain electromagnetics using vector finite elements: a feasibility study.

    SciTech Connect

    Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis

    2005-12-01

    This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.

  20. ADER-WENO finite volume schemes with space-time adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Zanotti, Olindo; Hidalgo, Arturo; Balsara, Dinshaw S.

    2013-09-01

    We present the first high order one-step ADER-WENO finite volume scheme with adaptive mesh refinement (AMR) in multiple space dimensions. High order spatial accuracy is obtained through a WENO reconstruction, while a high order one-step time discretization is achieved using a local space-time discontinuous Galerkin predictor method. Due to the one-step nature of the underlying scheme, the resulting algorithm is particularly well suited for an AMR strategy on space-time adaptive meshes, i.e. with time-accurate local time stepping. The AMR property has been implemented 'cell-by-cell', with a standard tree-type algorithm, while the scheme has been parallelized via the message passing interface (MPI) paradigm. The new scheme has been tested over a wide range of examples for nonlinear systems of hyperbolic conservation laws, including the classical Euler equations of compressible gas dynamics and the equations of magnetohydrodynamics (MHD). High order in space and time have been confirmed via a numerical convergence study and a detailed analysis of the computational speed-up with respect to highly refined uniform meshes is also presented. We also show test problems where the presented high order AMR scheme behaves clearly better than traditional second order AMR methods. The proposed scheme that combines for the first time high order ADER methods with space-time adaptive grids in two and three space dimensions is likely to become a useful tool in several fields of computational physics, applied mathematics and mechanics.

  1. Fluidity: a fully-unstructured adaptive mesh computational framework for geodynamics

    NASA Astrophysics Data System (ADS)

    Kramer, S. C.; Davies, D.; Wilson, C. R.

    2010-12-01

    Fluidity is a finite element, finite volume fluid dynamics model developed by the Applied Modelling and Computation Group at Imperial College London. Several features of the model make it attractive for use in geodynamics. A core finite element library enables the rapid implementation and investigation of new numerical schemes. For example, the function spaces used for each variable can be changed allowing properties of the discretisation, such as stability, conservation and balance, to be easily varied and investigated. Furthermore, unstructured, simplex meshes allow the underlying resolution to vary rapidly across the computational domain. Combined with dynamic mesh adaptivity, where the mesh is periodically optimised to the current conditions, this allows significant savings in computational cost over traditional chessboard-like structured mesh simulations [1]. In this study we extend Fluidity (using the Portable, Extensible Toolkit for Scientific Computation [PETSc, 2]) to Stokes flow problems relevant to geodynamics. However, due to the assumptions inherent in all models, it is necessary to properly verify and validate the code before applying it to any large-scale problems. In recent years this has been made easier by the publication of a series of ‘community benchmarks’ for geodynamic modelling. We discuss the use of several of these to help validate Fluidity [e.g. 3, 4]. The experimental results of Vatteville et al. [5] are then used to validate Fluidity against laboratory measurements. This test case is also used to highlight the computational advantages of using adaptive, unstructured meshes - significantly reducing the number of nodes and total CPU time required to match a fixed mesh simulation. References: 1. C. C. Pain et al. Comput. Meth. Appl. M, 190:3771-3796, 2001. doi:10.1016/S0045-7825(00)00294-2. 2. B. Satish et al. http://www.mcs.anl.gov/petsc/petsc-2/, 2001. 3. Blankenbach et al. Geophys. J. Int., 98:23-28, 1989. 4. Busse et al. Geophys

  2. Patched based methods for adaptive mesh refinement solutions of partial differential equations

    SciTech Connect

    Saltzman, J.

    1997-09-02

    This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundary and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods and elliptically constrained flow solution methods with diffusion.

  3. Adaptive Mesh Euler Equation Computation of Vortex Breakdown in Delta Wing Flow.

    NASA Astrophysics Data System (ADS)

    Modiano, David Laurence

    A solution method for the three-dimensional Euler equations is formulated and implemented. The solver uses an unstructured mesh of tetrahedral cells and performs adaptive refinement by mesh-point embedding to increase mesh resolution in regions of interesting flow features. The fourth-difference artificial dissipation is increased to a higher order of accuracy using the method of Holmes and Connell. A new method of temporal integration is developed to accelerate the explicit computation of unsteady flows. The solver is applied to the solution of the flow around a sharp-edged delta wing, with emphasis on the behavior of the leading edge vortex above the leeside of the wing at high angle of attack, under which conditions the vortex suffers from vortex breakdown. Large deviations in entropy, which indicate vortical regions of the flow, specify the region in which adaptation is performed. Adaptive flow calculations are performed at ten different angles of attack, at seven of which vortex breakdown occurs. The aerodynamic normal force coefficients show excellent agreement with wind tunnel data measured by Jarrah, which demonstrates the importance of adaptation in obtaining an accurate solution. The pitching moment coefficient and the location of vortex breakdown are compared with experimental data measured by Hummel and Srinivasan, with which fairly good agreement is seen in cases in which the location of breakdown is over the wing. A series of unsteady calculations involving a pitching delta wing was performed. The use of the acceleration technique is validated. A hysteresis in the normal force is observed, as in experiments, and a lag in the breakdown position is demonstrated.

  4. Adaptive-mesh-based algorithm for fluorescence molecular tomography using an analytical solution

    NASA Astrophysics Data System (ADS)

    Wang, Daifa; Song, Xiaolei; Bai, Jing

    2007-07-01

    Fluorescence molecular tomography (FMT) has become an important method for in-vivo imaging of small animals. It has been widely used for tumor genesis, cancer detection, metastasis, drug discovery, and gene therapy. In this study, an algorithm for FMT is proposed to obtain accurate and fast reconstruction by combining an adaptive mesh refinement technique and an analytical solution of diffusion equation. Numerical studies have been performed on a parallel plate FMT system with matching fluid. The reconstructions obtained show that the algorithm is efficient in computation time, and they also maintain image quality.

  5. A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model

    SciTech Connect

    Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A

    2009-03-03

    Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR that relaxes the requirement that material boundaries must be along mesh boundaries.

  6. Compact integration factor methods for complex domains and adaptive mesh refinement.

    PubMed

    Liu, Xinfeng; Nie, Qing

    2010-08-10

    The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of the exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates through examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has computational efficiency and stability properties similar to those of cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed.
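    As a rough illustration of the (non-compact) IIF idea that the compact variant builds on, the sketch below advances du/dt = Au + f(u) with the second-order step u_{n+1} = exp(A dt)(u_n + dt/2 f(u_n)) + dt/2 f(u_{n+1}). It is a minimal sketch, not the authors' cIIF implementation: the fixed-point loop standing in for the local implicit reaction solve and all names are illustrative assumptions.

      import numpy as np
      from scipy.linalg import expm

      def iif2_step(u, A, f, dt, sweeps=5):
          # One second-order implicit integration factor step:
          #   u_new = expm(A*dt) @ (u + dt/2 * f(u)) + dt/2 * f(u_new).
          # The diffusion operator A enters only through its matrix exponential,
          # while the (pointwise) reaction term is treated implicitly.
          E = expm(A * dt)
          rhs = E @ (u + 0.5 * dt * f(u))
          u_new = rhs.copy()
          for _ in range(sweeps):      # a few fixed-point sweeps in place of Newton
              u_new = rhs + 0.5 * dt * f(u_new)
          return u_new

      # Example: u_t = D u_xx - u^3 on a periodic 1D grid.
      n, D, dt = 64, 0.1, 0.1
      x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
      dx = x[1] - x[0]
      I = np.eye(n)
      A = D * (np.roll(I, 1, axis=0) - 2.0 * I + np.roll(I, -1, axis=0)) / dx**2
      u = iif2_step(np.sin(x), A, lambda v: -v**3, dt)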

  7. Modeling for deformable mirrors and the adaptive optics optimization program

    SciTech Connect

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-03-18

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient object oriented C++ language implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded-up in an interpreted array processing computer language.

  8. A mesh adaptivity scheme on the Landau-de Gennes functional minimization case in 3D, and its driving efficiency

    NASA Astrophysics Data System (ADS)

    Bajc, Iztok; Hecht, Frédéric; Žumer, Slobodan

    2016-09-01

    This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral meshes by a posteriori error estimates based on metrics derived from the Hessian of a solution. The study is made on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed, with a steepest descent method possibly stepping in. Aspects relating to the driving of mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each single mesh adaptation, and the precision of the correlated remeshing. Each factor is represented by a parameter, with its values possibly varying for every new mesh adaptation. We empirically show that the time of the overall algorithm convergence can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testing and debugging done within this work on the simulation of systems of nematic colloids substantially contributed to the upgrade of the 3D meshing capabilities of an open source finite element-oriented programming language, as well as to an outer 3D remeshing module.
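    To make the Hessian-derived metric concrete, the sketch below turns a scalar field into a 2x2 metric tensor per node whose eigenvalues encode the desired directional mesh sizes h_i = 1/sqrt(mu_i). It is a simplification on a structured sample grid, not the unstructured FEM recovery used in the paper, and the error tolerance and size bounds are hypothetical:

      import numpy as np

      def hessian_metric(u, dx, dy, err=1e-2, h_min=1e-3, h_max=1.0):
          # Hessian by finite differences; an unstructured code would instead
          # recover it from the discrete FEM solution.
          uxx = np.gradient(np.gradient(u, dx, axis=0), dx, axis=0)
          uyy = np.gradient(np.gradient(u, dy, axis=1), dy, axis=1)
          uxy = np.gradient(np.gradient(u, dx, axis=0), dy, axis=1)

          metric = np.empty(u.shape + (2, 2))
          for i in range(u.shape[0]):
              for j in range(u.shape[1]):
                  H = np.array([[uxx[i, j], uxy[i, j]],
                                [uxy[i, j], uyy[i, j]]])
                  lam, vec = np.linalg.eigh(H)
                  # Clip eigenvalues so the implied edge length 1/sqrt(mu)
                  # stays within [h_min, h_max].
                  mu = np.clip(np.abs(lam) / err, 1.0 / h_max**2, 1.0 / h_min**2)
                  metric[i, j] = vec @ np.diag(mu) @ vec.T
          return metric

      # Example: a field with a sharp internal layer calls for anisotropic refinement.
      x, y = np.meshgrid(np.linspace(0, 1, 80), np.linspace(0, 1, 80), indexing="ij")
      M = hessian_metric(np.tanh(20.0 * (x - 0.5)), x[1, 0] - x[0, 0], y[0, 1] - y[0, 0])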

  9. A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method

    NASA Astrophysics Data System (ADS)

    Bush, I. J.; Todorov, I. T.; Smith, W.

    2006-09-01

    The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowtz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe software adaptations undertaken to import this functionality and provide a review of its performance.

  10. Staggered grid lagrangian method with local structured adaptive mesh refinement for modeling shock hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliot, N S

    2000-09-26

    A new method for the solution of the unsteady Euler equations has been developed. The method combines staggered grid Lagrangian techniques with structured local adaptive mesh refinement (AMR). This method is a precursor to a more general adaptive arbitrary Lagrangian Eulerian (ALE-AMR) algorithm under development, which will facilitate the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required. Many of the core issues involved in the development of the ALE-AMR method hinge upon the integration of AMR with a Lagrange step, which is the focus of the work described here. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. These new algorithmic components are first developed in one dimension and are then generalized to two dimensions. Solutions of several model problems involving shock hydrodynamics are presented and discussed.

  11. Improved Simulation of Electrodiffusion in the Node of Ranvier by Mesh Adaptation.

    PubMed

    Dione, Ibrahima; Deteix, Jean; Briffard, Thomas; Chamberland, Eric; Doyon, Nicolas

    2016-01-01

    In neural structures with complex geometries, numerical resolution of the Poisson-Nernst-Planck (PNP) equations is necessary to accurately model electrodiffusion. This formalism allows one to describe ionic concentrations and the electric field (even away from the membrane) with arbitrary spatial and temporal resolution, which is impossible to achieve with models relying on cable theory. However, solving the PNP equations on complex geometries involves handling intricate numerical difficulties related either to the spatial discretization, the temporal discretization or the resolution of the linearized systems, often requiring large computational resources which have limited the use of this approach. In the present paper, we investigate the best ways to use the finite element method (FEM) to solve the PNP equations on domains with discontinuous properties (such as occur at the membrane-cytoplasm interface). 1) Using a simple 2D geometry to allow comparison with an analytical solution, we show that mesh adaptation is a very (if not the most) efficient way to obtain accurate solutions while limiting the computational effort; 2) we use mesh adaptation in a 3D model of a node of Ranvier to reveal details of the solution which are nearly impossible to resolve with other modelling techniques. For instance, we exhibit a nonlinear distribution of the electric potential within the membrane due to the non-uniform width of the myelin and investigate its impact on the spatial profile of the electric field in the Debye layer.

  12. AMRSim: an object-oriented performance simulator for parallel adaptive mesh refinement

    SciTech Connect

    Miller, B; Philip, B; Quinlan, D; Wissink, A

    2001-01-08

    Adaptive mesh refinement is complicated by both the algorithms and the dynamic nature of the computations. In parallel, the complexity of getting good performance depends upon the architecture and the application. Most attempts to address the complexity of AMR have led to the development of library solutions, most of which are object-oriented libraries or frameworks. All attempts to date have made numerous and sometimes conflicting assumptions which make the evaluation of the performance of AMR across different applications and architectures difficult or impracticable. The evaluation of different approaches can alternatively be accomplished through simulation of the different AMR processes. In this paper we outline our research work to simulate the processing of adaptive mesh refinement grids using a distributed array class library (P++). This paper presents a combined analytic and empirical approach, since details of the algorithms can be readily predicted (separated into specific phases), while the performance associated with the dynamic behavior must be studied empirically. The result, AMRSim, provides a simple way to develop bounds on the expected performance of AMR calculations subject to constraints given by the algorithms, frameworks, and architecture.

  13. Parallel grid library with adaptive mesh refinement for development of highly scalable simulations

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2012-04-01

    As the single CPU core performance is saturating while the number of cores in the fastest supercomputers increases exponentially, the parallel performance of simulations on distributed memory machines is crucial. At the same time, utilizing efficiently the large number of available cores presents a challenge, especially in simulations with run-time adaptive mesh refinement. We have developed a generic grid library (dccrg) aimed at finite volume simulations that is easy to use and scales well up to tens of thousands of cores. The grid has several attractive features: It 1) allows an arbitrary C++ class or structure to be used as cell data; 2) provides a simple interface for adaptive mesh refinement during a simulation; 3) encapsulates the details of MPI communication when updating the data of neighboring cells between processes; and 4) provides a simple interface to run-time load balancing, e.g. domain decomposition, through the Zoltan library. Dccrg is freely available for anyone to use, study and modify under the GNU Lesser General Public License v3. We will present the implementation of dccrg, simple and advanced usage examples and scalability results on various supercomputers and problems.

  14. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  15. Numerical study of three-dimensional liquid jet breakup with adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Pavlidis, Dimitrios; Salinas, Pablo; Pain, Christopher; Matar, Omar

    2016-11-01

    Liquid jet breakup is an important fundamental multiphase flow, often found in many industrial engineering applications. The breakup process is very complex, involving jets, liquid films, ligaments, and small droplets, featuring tremendous complexity in interfacial topology and a large range of spatial scales. The objective of this study is to investigate the fluid dynamics of three-dimensional liquid jet breakup problems, such as liquid jet primary breakup and gas-sheared liquid jet breakup. An adaptive unstructured mesh modelling framework is employed here, which can modify and adapt unstructured meshes to optimally represent the underlying physics of multiphase problems and reduce computational effort without sacrificing accuracy. The numerical framework consists of a mixed control volume and finite element formulation, a 'volume of fluid' type method for the interface capturing based on a compressive control volume advection method and second-order finite element methods, and a force-balanced algorithm for the surface tension implementation. Numerical examples of some benchmark tests and the dynamics of liquid jet breakup with and without ambient gas are presented to demonstrate the capability of this method.

  16. Improved Simulation of Electrodiffusion in the Node of Ranvier by Mesh Adaptation

    PubMed Central

    Dione, Ibrahima; Briffard, Thomas; Doyon, Nicolas

    2016-01-01

    In neural structures with complex geometries, numerical resolution of the Poisson-Nernst-Planck (PNP) equations is necessary to accurately model electrodiffusion. This formalism allows one to describe ionic concentrations and the electric field (even away from the membrane) with arbitrary spatial and temporal resolution, which is impossible to achieve with models relying on cable theory. However, solving the PNP equations on complex geometries involves handling intricate numerical difficulties related either to the spatial discretization, the temporal discretization or the resolution of the linearized systems, often requiring large computational resources which have limited the use of this approach. In the present paper, we investigate the best ways to use the finite element method (FEM) to solve the PNP equations on domains with discontinuous properties (such as occur at the membrane-cytoplasm interface). 1) Using a simple 2D geometry to allow comparison with an analytical solution, we show that mesh adaptation is a very (if not the most) efficient way to obtain accurate solutions while limiting the computational effort; 2) we use mesh adaptation in a 3D model of a node of Ranvier to reveal details of the solution which are nearly impossible to resolve with other modelling techniques. For instance, we exhibit a nonlinear distribution of the electric potential within the membrane due to the non-uniform width of the myelin and investigate its impact on the spatial profile of the electric field in the Debye layer. PMID:27548674

  17. Deformation field validation and inversion applied to adaptive radiation therapy

    NASA Astrophysics Data System (ADS)

    Vercauteren, Tom; De Gersem, Werner; Olteanu, Luiza A. M.; Madani, Indira; Duprez, Fréderic; Berwouts, Dieter; Speleers, Bruno; De Neve, Wilfried

    2013-08-01

    Development and implementation of chronological and anti-chronological adaptive dose accumulation strategies in adaptive intensity-modulated radiation therapy (IMRT) for head-and-neck cancer. An algorithm based on Newton iterations was implemented to efficiently compute inverse deformation fields (DFs). Four verification steps were performed to ensure a valid dose propagation: intra-cell folding detection finds zero or negative Jacobian determinants in the input DF; inter-cell folding detection is implemented on the resolution of the output DF; a region growing algorithm detects undefined values in the output DF; DF domains can be composed and displayed on the CT data. In 2011, one patient with nonmetastatic head and neck cancer selected from a three phase adaptive DPBN study was used to illustrate the algorithms implemented for adaptive chronological and anti-chronological dose accumulation. The patient received three 18F-FDG-PET/CTs prior to each treatment phase and one CT after finalizing treatment. Contour propagation and DF generation between two consecutive CTs was performed in Atlas-based autosegmentation (ABAS). Deformable image registration based dose accumulations were performed on CT1 and CT4. Dose propagation was done using combinations of DFs or their inversions. We have implemented a chronological and anti-chronological dose accumulation algorithm based on DF inversion. Algorithms were designed and implemented to detect cell folding.
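    The intra-cell folding check described above amounts to testing the sign of the Jacobian determinant of the mapping x -> x + u(x). A minimal sketch on a regular voxel grid follows; the array layout, spacing convention and names are assumptions for illustration, not the paper's data format:

      import numpy as np

      def detect_folding(df, spacing=(1.0, 1.0, 1.0)):
          # df: displacement field of shape (nx, ny, nz, 3) giving u(x).
          # Returns a boolean folding mask (Jacobian determinant <= 0) and the
          # determinant field itself.
          jac = np.zeros(df.shape[:3] + (3, 3))
          for comp in range(3):                     # derivatives of each component
              grads = np.gradient(df[..., comp], *spacing)
              for axis in range(3):
                  jac[..., comp, axis] = grads[axis]
          jac += np.eye(3)                          # Jacobian of identity + displacement
          det = np.linalg.det(jac)
          return det <= 0.0, det

      # Example: a small, smooth displacement field is fold-free.
      grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 20)] * 3, indexing="ij"), axis=-1)
      folded, det = detect_folding(0.02 * np.sin(2 * np.pi * grid), spacing=(1 / 19,) * 3)
      print(folded.any(), det.min())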

  18. Multi-dimensional upwind fluctuation splitting scheme with mesh adaption for hypersonic viscous flow

    NASA Astrophysics Data System (ADS)

    Wood, William Alfred, III

    production is shown relative to DMFDSFV. Remarkably the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. A viscous Mach 17.6 (perfect gas) cylinder case demonstrates solution monotonicity and heat transfer capability with the fluctuation splitting scheme. While fluctuation splitting is recommended over DMFDSFV, the difference in performance between the schemes is not so great as to obsolete DMFDSFV. The second half of the dissertation develops a local, compact, anisotropic unstructured mesh adaption scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. This alignment behavior stands in contrast to the curvature clustering nature of the local, anisotropic unstructured adaption strategy based upon a posteriori error estimation that is used for comparison. The characteristic alignment is most pronounced for linear advection, with reduced improvement seen for the more complex non-linear advection and advection-diffusion cases. The adaption strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization. The system test case for the adaption strategy is a sting mounted capsule at Mach-10 wind tunnel conditions, considered in both two-dimensional and axisymmetric configurations. For this complex flowfield the adaption results are disappointing since feature alignment does not emerge from the local operations. Aggressive adaption is shown to result in a loss of robustness for the solver, particularly in the bow shock/stagnation point interaction region. Reducing the adaption strength maintains solution robustness but fails to produce significant improvement in the surface heat transfer predictions.

  19. Scalable and Adaptive Streaming of 3D Mesh to Heterogeneous Devices

    NASA Astrophysics Data System (ADS)

    Abderrahim, Zeineb; Bouhlel, Mohamed Salim

    2016-12-01

    This article presents a web platform for the distribution and visualization of compressed 3D data on the web. The major goal of this work is to adapt the transfer of three-dimensional data to the available resources (network bandwidth, type of visualization terminal, display resolution, user preferences, etc.) and to provide effective access tailored to the user's request (preferences, requested level of detail, etc.). The platform can adapt the levels of detail to changes in bandwidth and rendering time when loading the mesh on the client side. In addition, the levels of detail are adapted to the distance between the object and the camera. These features minimize latency and make real-time interaction possible. The experiments, as well as comparisons with existing solutions, show promising results in terms of latency, scalability and the quality of experience offered to users.

  20. Adaptive Mesh Refinement and Adaptive Time Integration for Electrical Wave Propagation on the Purkinje System.

    PubMed

    Ying, Wenjun; Henriquez, Craig S

    2015-01-01

    An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local change of membrane properties. Numerical examples demonstrating efficiency and accuracy of the adaptive algorithm are presented.
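    As a hedged sketch of the kind of splitting described above (not the authors' implementation: a finite-difference diffusion stencil stands in for their piecewise linear finite elements, and the reaction term is a generic cubic), one Lie-split step in Python could read:

      import numpy as np

      def split_step(v, dt, dx, D, f, dfdv, newton_iters=3):
          # One Lie-split step for v_t = D v_xx + f(v) on a 1D edge:
          # implicit (backward Euler) diffusion, then an implicit pointwise
          # reaction solved with a few Newton iterations.
          n = v.size
          r = dt * D / dx**2
          # (I - dt*D*L) v_half = v, with homogeneous Neumann ends.
          A = (np.diag((1 + 2 * r) * np.ones(n))
               + np.diag(-r * np.ones(n - 1), 1)
               + np.diag(-r * np.ones(n - 1), -1))
          A[0, 0] = A[-1, -1] = 1 + r
          v_half = np.linalg.solve(A, v)

          # Backward Euler reaction (space-independent, so solved point by point).
          v_new = v_half.copy()
          for _ in range(newton_iters):
              g = v_new - v_half - dt * f(v_new)
              v_new -= g / (1.0 - dt * dfdv(v_new))
          return v_new

      # Example: cubic (FitzHugh-Nagumo-like) reaction term.
      x = np.linspace(0.0, 1.0, 101)
      v = np.exp(-200 * (x - 0.5) ** 2)
      v = split_step(v, dt=0.01, dx=x[1] - x[0], D=1e-3,
                     f=lambda u: u * (1 - u) * (u - 0.1),
                     dfdv=lambda u: -3 * u**2 + 2.2 * u - 0.1)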

  1. Adaptive Mesh Refinement and Adaptive Time Integration for Electrical Wave Propagation on the Purkinje System

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2015-01-01

    An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local change of membrane properties. Numerical examples demonstrating efficiency and accuracy of the adaptive algorithm are presented. PMID:26581455

  2. Adaptive unstructured triangular mesh generation and flow solvers for the Navier-Stokes equations at high Reynolds number

    NASA Technical Reports Server (NTRS)

    Ashford, Gregory A.; Powell, Kenneth G.

    1995-01-01

    A method for generating high quality unstructured triangular grids for high Reynolds number Navier-Stokes calculations about complex geometries is described. Careful attention is paid in the mesh generation process to resolving efficiently the disparate length scales which arise in these flows. First the surface mesh is constructed in a way which ensures that the geometry is faithfully represented. The volume mesh generation then proceeds in two phases thus allowing the viscous and inviscid regions of the flow to be meshed optimally. A solution-adaptive remeshing procedure which allows the mesh to adapt itself to flow features is also described. The procedure for tracking wakes and refinement criteria appropriate for shock detection are described. Although at present it has only been implemented in two dimensions, the grid generation process has been designed with the extension to three dimensions in mind. An implicit, higher-order, upwind method is also presented for computing compressible turbulent flows on these meshes. Two recently developed one-equation turbulence models have been implemented to simulate the effects of the fluid turbulence. Results for flow about a RAE 2822 airfoil and a Douglas three-element airfoil are presented which clearly show the improved resolution obtainable.

  3. WHITE DWARF MERGERS ON ADAPTIVE MESHES. I. METHODOLOGY AND CODE VERIFICATION

    SciTech Connect

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-10

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  4. White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification

    NASA Astrophysics Data System (ADS)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-01

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  5. Finite-difference lattice Boltzmann method with a block-structured adaptive-mesh-refinement technique.

    PubMed

    Fakhari, Abbas; Lee, Taehun

    2014-03-01

    An adaptive-mesh-refinement (AMR) algorithm for the finite-difference lattice Boltzmann method (FDLBM) is presented in this study. The idea behind the proposed AMR is to remove the need for a tree-type data structure. Instead, pointer attributes are used to determine the neighbors of a certain block via appropriate adjustment of its children identifications. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with an efficient algorithm that is easier to implement and use on parallel machines. To allow different mesh sizes at separate parts of the computational domain, the Eulerian formulation of the streaming process is invoked. As a result, there is no need for rescaling the distribution functions or using a temporal interpolation at the fine-coarse grid boundaries. The accuracy and efficiency of the proposed FDLBM AMR are extensively assessed by investigating a variety of vorticity-dominated flow fields, including Taylor-Green vortex flow, lid-driven cavity flow, thin shear layer flow, and the flow past a square cylinder.
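    The pointer-attribute idea can be pictured with a toy 1D block list in which each block stores its neighbors' indices directly, so refinement rewires links locally and no tree traversal is ever needed. This is a loose illustration with hypothetical field names, not the data layout of the paper:

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class Block:
          # A grid block that carries its neighbor links directly.
          level: int
          left: Optional[int] = None    # index of left neighbor in the block list
          right: Optional[int] = None   # index of right neighbor in the block list

      def refine(blocks, i):
          # Split block i into two children and rewire neighbor links in place;
          # neighbors are found by direct indices, never by walking a tree.
          parent = blocks[i]
          j = len(blocks)                              # index of the new right child
          blocks.append(Block(parent.level + 1, left=i, right=parent.right))
          if parent.right is not None:
              blocks[parent.right].left = j            # old neighbor now points at the child
          blocks[i] = Block(parent.level + 1, left=parent.left, right=j)

      # Example: three coarse blocks, then refine the middle one.
      blocks = [Block(0, None, 1), Block(0, 0, 2), Block(0, 1, None)]
      refine(blocks, 1)
      for k, b in enumerate(blocks):
          print(k, b)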

  6. Finite-difference lattice Boltzmann method with a block-structured adaptive-mesh-refinement technique

    NASA Astrophysics Data System (ADS)

    Fakhari, Abbas; Lee, Taehun

    2014-03-01

    An adaptive-mesh-refinement (AMR) algorithm for the finite-difference lattice Boltzmann method (FDLBM) is presented in this study. The idea behind the proposed AMR is to remove the need for a tree-type data structure. Instead, pointer attributes are used to determine the neighbors of a certain block via appropriate adjustment of its children identifications. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with an efficient algorithm that is easier to implement and use on parallel machines. To allow different mesh sizes at separate parts of the computational domain, the Eulerian formulation of the streaming process is invoked. As a result, there is no need for rescaling the distribution functions or using a temporal interpolation at the fine-coarse grid boundaries. The accuracy and efficiency of the proposed FDLBM AMR are extensively assessed by investigating a variety of vorticity-dominated flow fields, including Taylor-Green vortex flow, lid-driven cavity flow, thin shear layer flow, and the flow past a square cylinder.

  7. Detached Eddy Simulation of the UH-60 Rotor Wake Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.; Ahmad, Jasim U.

    2012-01-01

    Time-dependent Navier-Stokes flow simulations have been carried out for a UH-60 rotor with simplified hub in forward flight and hover flight conditions. Flexible rotor blades and flight trim conditions are modeled and established by loosely coupling the OVERFLOW Computational Fluid Dynamics (CFD) code with the CAMRAD II helicopter comprehensive code. High order spatial differences, Adaptive Mesh Refinement (AMR), and Detached Eddy Simulation (DES) are used to obtain highly resolved vortex wakes, where the largest turbulent structures are captured. Special attention is directed towards ensuring the dual time accuracy is within the asymptotic range, and verifying the loose coupling convergence process using AMR. The AMR/DES simulation produced vortical worms for forward flight and hover conditions, similar to previous results obtained for the TRAM rotor in hover. AMR proved to be an efficient means to capture a rotor wake without a priori knowledge of the wake shape.

  8. On the Computation of Integral Curves in Adaptive Mesh Refinement Vector Fields

    SciTech Connect

    Deines, Eduard; Weber, Gunther H.; Garth, Christoph; Van Straalen, Brian; Borovikov, Sergey; Martin, Daniel F.; Joy, Kenneth I.

    2011-06-27

    Integral curves, such as streamlines, streaklines, pathlines, and timelines, are an essential tool in the analysis of vector field structures, offering straightforward and intuitive interpretation of visualization results. While such curves have a long-standing tradition in vector field visualization, their application to Adaptive Mesh Refinement (AMR) simulation results poses unique problems. AMR is a highly effective discretization method for a variety of physical simulation problems and has recently been applied to the study of vector fields in flow and magnetohydrodynamic applications. The cell-centered nature of AMR data and discontinuities in the vector field representation arising from AMR level boundaries complicate the application of numerical integration methods to compute integral curves. In this paper, we propose a novel approach to alleviate these problems and show its application to streamline visualization in an AMR model of the magnetic field of the solar system as well as to a simulation of two incompressible viscous vortex rings merging.

  9. Galaxy Mergers with Adaptive Mesh Refinement: Star Formation and Hot Gas Outflow

    SciTech Connect

    Kim, Ji-hoon; Wise, John H.; Abel, Tom (KIPAC, Menlo Park; Stanford U., Phys. Dept.)

    2011-06-22

    In hierarchical structure formation, merging of galaxies is frequent and known to dramatically affect their properties. To comprehend these interactions, high-resolution simulations are indispensable because of the nonlinear coupling between pc and Mpc scales. To this end, we present the first adaptive mesh refinement (AMR) simulation of two merging, low mass, initially gas-rich galaxies (1.8 × 10¹⁰ M☉ each), including star formation and feedback. With galaxies resolved by ≈2 × 10⁷ total computational elements, we achieve unprecedented resolution of the multiphase interstellar medium, finding a widespread starburst in the merging galaxies via shock-induced star formation. The high dynamic range of AMR also allows us to follow the interplay between the galaxies and their embedding medium, depicting how galactic outflows and a hot metal-rich halo form. These results demonstrate that AMR provides a powerful tool in understanding interacting galaxies.

  10. Damping of spurious numerical reflections off of coarse-fine adaptive mesh refinement grid boundaries

    NASA Astrophysics Data System (ADS)

    Chilton, Sven; Colella, Phillip

    2010-11-01

    Adaptive mesh refinement (AMR) is an efficient technique for solving systems of partial differential equations numerically. The underlying algorithm determines where and when a base spatial and temporal grid must be resolved further in order to achieve the desired precision and accuracy in the numerical solution. However, propagating wave solutions prove problematic for AMR. In systems with low degrees of dissipation (e.g. the Maxwell-Vlasov system) a wave traveling from a finely resolved region into a coarsely resolved region encounters a numerical impedance mismatch, resulting in spurious reflections off of the coarse-fine grid boundary. These reflected waves then become trapped inside the fine region. Here, we present a scheme for damping these spurious reflections. We demonstrate its application to the scalar wave equation and an implementation for Maxwell's Equations. We also discuss a possible extension to the Maxwell-Vlasov system.

  11. 3D Adaptive Mesh Refinement Simulations of Pellet Injection in Tokamaks

    SciTech Connect

    R. Samtaney; S.C. Jardin; P. Colella; D.F. Martin

    2003-10-20

    We present results of Adaptive Mesh Refinement (AMR) simulations of the pellet injection process, a proven method of refueling tokamaks. AMR is a computationally efficient way to provide the resolution required to simulate realistic pellet sizes relative to device dimensions. The mathematical model comprises single-fluid MHD equations with source terms in the continuity equation along with a pellet ablation rate model. The numerical method developed is an explicit unsplit upwinding treatment of the 8-wave formulation, coupled with a MAC projection method to enforce the solenoidal property of the magnetic field. The Chombo framework is used for AMR. The role of the E x B drift in mass redistribution during inside and outside pellet injections is emphasized.

  12. A GPU implementation of adaptive mesh refinement to simulate tsunamis generated by landslides

    NASA Astrophysics Data System (ADS)

    de la Asunción, Marc; Castro, Manuel J.

    2016-04-01

    In this work we propose a CUDA implementation for the simulation of landslide-generated tsunamis using a two-layer Savage-Hutter type model and adaptive mesh refinement (AMR). The AMR method consists of dynamically increasing the spatial resolution of the regions of interest of the domain while keeping the rest of the domain at low resolution, thus obtaining better runtimes and similar results compared to increasing the spatial resolution of the entire domain. Our AMR implementation uses a patch-based approach, it supports up to three levels, power-of-two ratios of refinement, different refinement criteria and also several user parameters to control the refinement and clustering behaviour. A strategy based on the variation of the cell values during the simulation is used to interpolate and propagate the values of the fine cells. Several numerical experiments using artificial and realistic scenarios are presented.

  13. MPI parallelization of full PIC simulation code with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Matsui, Tatsuki; Nunami, Masanori; Usui, Hideyuki; Moritaka, Toseo

    2010-11-01

    A new parallelization technique developed for the PIC method with adaptive mesh refinement (AMR) is introduced. In the AMR technique, the complicated cell arrangements are organized and managed as interconnected pointers with multiple resolution levels, forming a fully threaded tree structure as a whole. In order to retain this tree structure distributed over multiple processes, remote memory access, an extended feature of the MPI-2 standard, is employed. Another important feature of the present simulation technique is domain decomposition according to a modified Morton ordering. This algorithm can group together equal numbers of particle calculation loops, which allows for better load balance. Using this advanced simulation code, preliminary results for basic physical problems are exhibited as a validity check, together with benchmarks to test the performance and the scalability.
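    The (unmodified) Morton ordering underlying this decomposition simply interleaves the bits of the cell indices; sorting cells by the resulting key and cutting the sorted list into equal pieces yields contiguous, locality-preserving subdomains. A plain-Python sketch follows; the modified ordering and the work weighting used in the paper are not reproduced here:

      def morton_key_3d(i, j, k, bits=10):
          # Interleave the bits of (i, j, k) to obtain a Morton (Z-order) key.
          key = 0
          for b in range(bits):
              key |= ((i >> b) & 1) << (3 * b)
              key |= ((j >> b) & 1) << (3 * b + 1)
              key |= ((k >> b) & 1) << (3 * b + 2)
          return key

      # Example: decompose an 8x8x8 block of cells among 4 processes.
      cells = [(i, j, k) for i in range(8) for j in range(8) for k in range(8)]
      cells.sort(key=lambda c: morton_key_3d(*c))
      chunk = len(cells) // 4
      owner = {c: rank for rank, start in enumerate(range(0, len(cells), chunk))
               for c in cells[start:start + chunk]}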

  14. Thickness distribution of a cooling pyroclastic flow deposit on Augustine Volcano, Alaska: Optimization using InSAR, FEMs, and an adaptive mesh algorithm

    USGS Publications Warehouse

    Masterlark, Timothy; Lu, Zhong; Rykhus, Russell P.

    2006-01-01

    Interferometric synthetic aperture radar (InSAR) imagery documents the consistent subsidence, during the interval 1992–1999, of a pyroclastic flow deposit (PFD) emplaced during the 1986 eruption of Augustine Volcano, Alaska. We construct finite element models (FEMs) that simulate thermoelastic contraction of the PFD to account for the observed subsidence. Three-dimensional problem domains of the FEMs include a thermoelastic PFD embedded in an elastic substrate. The thickness of the PFD is initially determined from the difference between post- and pre-eruption digital elevation models (DEMs). The initial excess temperature of the PFD at the time of deposition, 640°C, is estimated from FEM predictions and an InSAR image via standard least-squares inverse methods. Although the FEM predicts the major features of the observed transient deformation, systematic prediction errors (RMSE = 2.2 cm) are most likely associated with errors in the a priori PFD thickness distribution estimated from the DEM differences. We combine an InSAR image, FEMs, and an adaptive mesh algorithm to iteratively optimize the geometry of the PFD with respect to a minimized misfit between the predicted thermoelastic deformation and observed deformation. Prediction errors from an FEM, which includes an optimized PFD geometry and the initial excess PFD temperature estimated from the least-squares analysis, are sub-millimeter (RMSE = 0.3 mm). The average thickness (9.3 m), maximum thickness (126 m), and volume (2.1×10⁷ m³) of the PFD, estimated using the adaptive mesh algorithm, are about twice as large as the respective estimations for the a priori PFD geometry. Sensitivity analyses suggest unrealistic PFD thickness distributions are required for initial excess PFD temperatures outside of the range 500–800°C.

  15. A Parallel Ocean Model With Adaptive Mesh Refinement Capability For Global Ocean Prediction

    SciTech Connect

    Herrnstein, Aaron R.

    2005-12-01

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No

  16. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    SciTech Connect

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  17. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus, preserving the conservative level set properties, while away from the interfaces the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to need for a finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.
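
    For orientation, the sketch below shows the baseline conservative level set reinitialization that the SCLS method modifies: a compressive flux phi*(1-phi) balanced by diffusion of thickness eps, here reduced to 1D where the interface normal is trivial. It is a minimal illustration with made-up parameters, not the stabilized formulation of the paper.

      import numpy as np

      # 1D sketch of a conservative level set reinitialization step: compressive
      # flux phi*(1-phi) along the (here trivial) normal, plus diffusion eps*d2phi/dx2.
      nx = 400
      dx = 1.0 / nx
      eps = 1.5 * dx                     # interface thickness parameter (assumed)
      dtau = 0.25 * dx                   # pseudo-time step
      x = (np.arange(nx) + 0.5) * dx
      phi = np.clip((x - 0.5) / (30 * dx) + 0.5, 0.0, 1.0)   # smeared initial profile

      for _ in range(200):
          f = phi * (1.0 - phi)                                   # compressive flux
          dfdx = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)
          d2phi = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx ** 2
          phi = phi + dtau * (-dfdx + eps * d2phi)

      # the integral of phi (the "mass") is conserved by construction
      print("integral of phi:", float(phi.sum() * dx))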

  18. Adaptive Mesh Refinement With Spectral Accuracy for Magnetohydrodynamics in Two Space Dimensions

    NASA Astrophysics Data System (ADS)

    Rosenberg, D.; Pouquet, A.; Mininni, P.

    2006-12-01

    We examine the effect of accuracy of high-order adaptive mesh refinement (AMR) in the context of a classical configuration of magnetic reconnection in two space dimensions, the so-called Orszag-Tang vortex made up of a magnetic X-point centered on a stagnation point of the velocity. A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code is applied to simulate this problem. The MHD solver is explicit, and uses the Elsasser formulation on high-order elements. It automatically takes advantage of the adaptive grid mechanics that have been described elsewhere [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] in the fluid context, allowing both statically refined and dynamically refined grids. Comparisons with pseudo-spectral computations are performed. Refinement and coarsening criteria are examined, and several tests are described. We show that low-order truncation--even with a comparable number of global degrees of freedom--fails to correctly model some strong (inf-norm) quantities in this problem, even though it satisfies adequately the weak (integrated) balance diagnostics.

  19. Multiphase flow modelling of explosive volcanic eruptions using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Jacobs, Christian T.; Collins, Gareth S.; Piggott, Matthew D.; Kramer, Stephan C.

    2014-05-01

    Explosive volcanic eruptions generate highly energetic plumes of hot gas and ash particles that produce diagnostic deposits and pose an extreme environmental hazard. The formation, dispersion and collapse of these volcanic plumes are complex multiscale processes that are extremely challenging to simulate numerically. Accurate description of particle and droplet aggregation, movement and settling requires a model capable of capturing the dynamics on a range of scales (from cm to km) and a model that can correctly describe the important multiphase interactions that take place. However, even the most advanced models of eruption dynamics to date are restricted by the fixed mesh-based approaches that they employ. The research presented herein describes the development of a compressible multiphase flow model within Fluidity, a combined finite element / control volume computational fluid dynamics (CFD) code, for the study of explosive volcanic eruptions. Fluidity adopts a state-of-the-art adaptive unstructured mesh-based approach to discretise the domain and focus numerical resolution only in areas important to the dynamics, while decreasing resolution where it is not needed as a simulation progresses. This allows the accurate but economical representation of the flow dynamics throughout time, and potentially allows large multi-scale problems to become tractable in complex 3D domains. The multiphase flow model is verified with the method of manufactured solutions, and validated by simulating published gas-solid shock tube experiments and comparing the numerical results against pressure gauge data. The application of the model considers an idealised 7 km by 7 km domain in which the violent eruption of hot gas and volcanic ash high into the atmosphere is simulated. Although the simulations do not correspond to a particular eruption case study, the key flow features observed in a typical explosive eruption event are successfully captured. These include a shock wave resulting
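
    The model is verified with the method of manufactured solutions. As a generic illustration of that technique (on a far simpler 1D advection-diffusion equation, not the multiphase system above), the sketch below uses sympy to derive the source term that makes a chosen analytic field an exact solution of the forced equation.

      import sympy as sp

      # Method of manufactured solutions: pick an analytic field, substitute it
      # into the model equation, and derive the forcing that makes it exact.
      x, t = sp.symbols("x t")
      u_mms = sp.sin(2 * sp.pi * (x - t)) * sp.exp(-t)        # manufactured solution
      c, nu = sp.Rational(1), sp.Rational(1, 100)             # advection speed, diffusivity

      # Residual of u_t + c u_x - nu u_xx = S  =>  S = u_t + c u_x - nu u_xx
      S = sp.simplify(sp.diff(u_mms, t) + c * sp.diff(u_mms, x) - nu * sp.diff(u_mms, x, 2))
      print("source term S(x, t) =", S)

      # The solver is then run with S added as a forcing term, and the computed
      # field is compared against u_mms to measure the order of convergence.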

  20. Development of a scalable gas-dynamics solver with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Korkut, Burak

    There are various computational physics areas in which Direct Simulation Monte Carlo (DSMC) and Particle in Cell (PIC) methods are being employed. The accuracy of results from such simulations depend on the fidelity of the physical models being used. The computationally demanding nature of these problems make them ideal candidates to make use of modern supercomputers. The software developed to run such simulations also needs special attention so that the maintainability and extendability is considered with the recent numerical methods and programming paradigms. Suited for gas-dynamics problems, a software called SUGAR (Scalable Unstructured Gas dynamics with Adaptive mesh Refinement) has recently been developed and written in C++ and MPI. Physical and numerical models were added to this framework to simulate ion thruster plumes. SUGAR is used to model the charge-exchange (CEX) reactions occurring between the neutral and ion species as well as the induced electric field effect due to ions. Multiple adaptive mesh refinement (AMR) meshes were used in order to capture different physical length scales present in the flow. A multiple-thruster configuration was run to extend the studies to cases for which there is no axial or radial symmetry present that could only be modeled with a three-dimensional simulation capability. The combined plume structure showed interactions between individual thrusters where AMR capability captured this in an automated way. The back flow for ions was found to occur when CEX and momentum-exchange (MEX) collisions are present and strongly enhanced when the induced electric field is considered. The ion energy distributions in the back flow region were obtained and it was found that the inclusion of the electric field modeling is the most important factor in determining its shape. The plume back flow structure was also examined for a triple-thruster, 3-D geometry case and it was found that the ion velocity in the back flow region appears to be

  1. Post-traumatic deformity of the anterior frontal table managed by the placement of a titanium mesh via an endoscopic approach.

    PubMed

    Arcuri, Francesco; Baragiotta, Nicola; Poglio, Giuseppe; Benech, Arnaldo

    2012-06-01

    We describe delayed treatment of a post-traumatic fracture of the anterior table of the frontal sinus with a titanium mesh using an endoscopic approach. To our knowledge this is the first case of a delayed post-traumatic deformity of the anterior table being treated by this method.

  2. Data-Adaptive Detection of Transient Deformation in GNSS Networks

    NASA Astrophysics Data System (ADS)

    Calais, E.; Walwer, D.; Ghil, M.

    2014-12-01

    Dense Global Navigation Satellite System (GNSS) networks have recently been developed in actively deforming regions and elsewhere. Their operation is leading to a rapidly increasing amount of data, and position time series are now routinely provided by several high-quality services. These networks often capture transient-deformation features of geophysical origin that are difficult to separate from the background noise or from seasonal residuals in the time series. In addition, because of the very large number of stations now available, it has become impossible to systematically inspect each time series and visually compare them at all neighboring sites. In order to overcome these limitations, we adapt Multichannel Singular Spectrum Analysis (M-SSA), a method derived from the analysis of dynamical systems, to the spatial and temporal analysis of GNSS position time series in dense networks. We show that this data-adaptive method — previously applied to climate, bio-medical and macro-economic indicators — allows us to extract spatio-temporal features of geophysical interest from GPS time series without a priori knowledge of the system's dynamics or of the data set's noise characteristics. We illustrate our results with examples from seasonal signals in Alaska and from micro-inflation/deflation episodes at an Aleutian-arc volcano.
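
    As a minimal illustration of the core SSA machinery (single-channel only; M-SSA stacks lagged copies of every component of every station), the sketch below builds a trajectory matrix from a synthetic position-like series, takes its SVD, and reconstructs the leading oscillatory pair. Series, window length, and the choice of two leading modes are all illustrative assumptions.

      import numpy as np

      # Synthetic stand-in for a GNSS position time series: seasonal cycle,
      # slow transient, and noise.  Window length M is illustrative.
      rng = np.random.default_rng(1)
      n, M = 1000, 120
      t = np.arange(n)
      series = (3.0 * np.sin(2 * np.pi * t / 365.25)       # seasonal signal
                + 0.004 * np.maximum(t - 600, 0)           # slow transient
                + rng.normal(0, 1.0, n))                   # noise

      # Trajectory matrix of lagged windows (shape M x K).
      K = n - M + 1
      X = np.column_stack([series[i:i + M] for i in range(K)])

      # SVD gives the empirical orthogonal functions; keep the leading pair,
      # which typically captures the dominant oscillatory (seasonal) mode.
      U, s, Vt = np.linalg.svd(X, full_matrices=False)
      X_rec = (U[:, :2] * s[:2]) @ Vt[:2, :]

      # Anti-diagonal averaging maps the low-rank matrix back to a time series.
      rec = np.zeros(n)
      counts = np.zeros(n)
      for j in range(K):
          rec[j:j + M] += X_rec[:, j]
          counts[j:j + M] += 1.0
      rec /= counts
      print("variance captured by leading pair: %.2f" % (np.sum(s[:2] ** 2) / np.sum(s ** 2)))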

  3. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  4. Consistent properties reconstruction on adaptive Cartesian meshes for complex fluids computations

    SciTech Connect

    Xia, Guoping . E-mail: xiag@purdue.edu; Li, Ding; Merkle, Charles L.

    2007-07-01

    An efficient reconstruction procedure for evaluating the constitutive properties of a complex fluid from general or specialized thermodynamic databases is presented. Properties and their pertinent derivatives are evaluated by means of an adaptive Cartesian mesh in the thermodynamic plane that provides user-specified accuracy over any selected domain. The Cartesian grid produces a binary tree data structure whose search efficiency is competitive with that for an equally spaced table or with simple equations of state such as a perfect gas. Reconstruction is accomplished on a triangular subdivision of the 2D Cartesian mesh that ensures function continuity across cell boundaries in equally and unequally spaced portions of the table to C⁰, C¹ or C² levels. The C⁰ and C¹ reconstructions fit the equation of state and enthalpy relations separately, while the C² reconstruction fits the Helmholtz or Gibbs function enabling EOS/enthalpy consistency also. All three reconstruction levels appear effective for CFD solutions obtained to date. The efficiency of the method is demonstrated through storage and data retrieval examples for air, water and carbon dioxide. The time required for property evaluations is approximately two orders of magnitude faster with the reconstruction procedure than with the complete thermodynamic equations resulting in estimated 3D CFD savings of from 30 to 60. Storage requirements are modest for today's computers, with the C¹ method requiring slightly less storage than those for the C⁰ and C² reconstructions when the same accuracy is specified. Sample fluid dynamic calculations based upon the procedure show that the C¹ and C² methods are approximately a factor of two slower than the C⁰ method but that the reconstruction procedure enables arbitrary fluid CFD calculations that are as efficient as those for a perfect gas or an incompressible fluid for all three accuracy
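
    To make the adaptive-table idea concrete, the sketch below builds a quadtree over a two-variable "property" and splits cells until bilinear interpolation reproduces the function at the cell midpoint to a tolerance, then looks values up by tree descent. The property function, variable ranges, and tolerance are all invented for illustration; the paper's triangular C⁰/C¹/C² reconstruction is more elaborate.

      import numpy as np

      def prop(rho, T):                   # hypothetical property standing in for a database
          return rho * T + 0.05 * rho ** 2 * np.sqrt(T)

      def bilinear(cell, rho, T):
          r0, r1, t0, t1 = cell
          fr, ft = (rho - r0) / (r1 - r0), (T - t0) / (t1 - t0)
          return ((1 - fr) * (1 - ft) * prop(r0, t0) + fr * (1 - ft) * prop(r1, t0)
                  + (1 - fr) * ft * prop(r0, t1) + fr * ft * prop(r1, t1))

      def build(cell, tol, depth=0):
          # split the cell while the midpoint interpolation error exceeds tol
          r0, r1, t0, t1 = cell
          rm, tm = 0.5 * (r0 + r1), 0.5 * (t0 + t1)
          err = abs(bilinear(cell, rm, tm) - prop(rm, tm))
          if err < tol or depth > 12:
              return {"cell": cell, "kids": None}
          kids = [build(c, tol, depth + 1) for c in
                  [(r0, rm, t0, tm), (rm, r1, t0, tm), (r0, rm, tm, t1), (rm, r1, tm, t1)]]
          return {"cell": cell, "kids": kids}

      def lookup(node, rho, T):           # descend the tree, then interpolate in the leaf
          while node["kids"] is not None:
              r0, r1, t0, t1 = node["cell"]
              i = (rho > 0.5 * (r0 + r1)) + 2 * (T > 0.5 * (t0 + t1))
              node = node["kids"][i]
          return bilinear(node["cell"], rho, T)

      tree = build((0.1, 10.0, 200.0, 2000.0), tol=1e-2)
      print(lookup(tree, 3.3, 777.0), prop(3.3, 777.0))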

  5. A chimera grid scheme. [multiple overset body-conforming mesh system for finite difference adaptation to complex aircraft configurations

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Dougherty, F. C.; Benek, J. A.

    1983-01-01

    A mesh system composed of multiple overset body-conforming grids is described for adapting finite-difference procedures to complex aircraft configurations. In this so-called 'chimera mesh,' a major grid is generated about a main component of the configuration and overset minor grids are used to resolve all other features. Methods for connecting overset multiple grids and modifications of flow-simulation algorithms are discussed. Computational tests in two dimensions indicate that the use of multiple overset grids can simplify the task of grid generation without an adverse effect on flow-field algorithms and computer code complexity.

  6. Optimal mirror deformation for multi conjugate adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Raffetseder, S.; Ramlau, R.; Yudytskiy, M.

    2016-02-01

    Multi conjugate adaptive optics (MCAO) is a system planned for all future extremely large telescopes to compensate in real-time for the optical distortions caused by atmospheric turbulence over a wide field of view. The principles of MCAO are based on two inverse problems: a stable tomographic reconstruction of the turbulence profile followed by the optimal alignment of multiple deformable mirrors (DMs), conjugated to different altitudes in the atmosphere. We present a novel method to treat the optimal mirror deformation problem for MCAO. Contrary to the standard approach, where the problem is formulated over a discrete set of optimization directions, we focus on the solution of the continuous optimization problem. In the paper we study the existence and uniqueness of the solution and present a Tikhonov-based regularization method. This approach gives us the flexibility to apply quadrature rules for a more sophisticated discretization scheme. Using numerical simulations in the context of the European extremely large telescope we show that our method leads to a significant improvement in the reconstruction quality over the standard approach and allows us to reduce the numerical burden on the computer performing the computations.
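
    The basic building block behind a Tikhonov-regularized mirror fit is an ordinary regularized least-squares solve. The sketch below fits one mirror to a sampled wavefront with a synthetic influence matrix; the continuous multi-mirror formulation of the paper is considerably more involved, and the matrix, wavefront, and regularization parameter here are assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      n_meas, n_act = 400, 60
      A = rng.normal(size=(n_meas, n_act))        # actuator influence functions (columns)
      w = rng.normal(size=n_meas)                 # sampled target wavefront
      alpha = 1e-1                                # Tikhonov regularization parameter

      # Solve (A^T A + alpha I) a = A^T w for the actuator commands a.
      lhs = A.T @ A + alpha * np.eye(n_act)
      a = np.linalg.solve(lhs, A.T @ w)
      residual = np.linalg.norm(A @ a - w) / np.linalg.norm(w)
      print(f"relative residual: {residual:.3f}")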

  7. Adaptive optics ophthalmologic systems using dual deformable mirrors

    SciTech Connect

    Jones, S; Olivier, S; Chen, D; Sadda, S; Joeres, S; Zawadzki, R; Werner, J S; Miller, D

    2007-02-01

    Adaptive optics (AO) has been increasingly combined with a variety of ophthalmic instruments over the last decade to provide cellular-level, in-vivo images of the eye. The use of MEMS deformable mirrors in these instruments has recently been demonstrated to reduce system size and cost while improving performance. However, currently available MEMS mirrors lack the required range of motion for correcting large ocular aberrations, such as defocus and astigmatism. In order to address this problem, we have developed an AO system architecture that uses two deformable mirrors, in a woofer/tweeter arrangement, with a bimorph mirror as the woofer and a MEMS mirror as the tweeter. This setup provides several advantages, including an extended aberration correction range due to the large stroke of the bimorph mirror, high-order aberration correction using the MEMS mirror, and, additionally, the ability to "focus" through the retina. This AO system architecture is currently being used in four instruments, including an Optical Coherence Tomography (OCT) system and a retinal flood-illuminated imaging system at the UC Davis Medical Center, a Scanning Laser Ophthalmoscope (SLO) at the Doheny Eye Institute, and an OCT system at Indiana University. The design, operation and evaluation of this type of AO system architecture will be presented.

  8. Three-dimensional Wavelet-based Adaptive Mesh Refinement for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2013-12-01

    Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and a large number of reacting species. In our previous work we have shown that in order to achieve an adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above, we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid-spacing throughout the entire domain. The method uses a multi-grid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. We have applied the WAMR method for numerical

  9. Methods for high-resolution anisotropic finite element modeling of the human head: automatic MR white matter anisotropy-adaptive mesh generation.

    PubMed

    Lee, Won Hee; Kim, Tae-Seong

    2012-01-01

    This study proposes an advanced finite element (FE) head modeling technique through which high-resolution FE meshes adaptive to the degree of tissue anisotropy can be generated. Our adaptive meshing scheme (called wMesh) uses MRI structural information and fractional anisotropy maps derived from diffusion tensors in the FE mesh generation process, optimally reflecting electrical properties of the human brain. We examined the characteristics of the wMeshes through various qualitative and quantitative comparisons to the conventional FE regular-sized meshes that are non-adaptive to the degree of white matter anisotropy. We investigated numerical differences in the FE forward solutions that include the electrical potential and current density generated by current sources in the brain. The quantitative difference was calculated using two statistical measures, the relative difference measure (RDM) and the magnification factor (MAG). The results show that the wMeshes adapt to the density of the white matter anisotropy and better reflect the density and directionality of tissue conductivity anisotropy. Our comparison results between various anisotropic regular mesh and wMesh models show that there are substantial differences in the EEG forward solutions in the brain (up to RDM=0.48 and MAG=0.63 in the electrical potential, and RDM=0.65 and MAG=0.52 in the current density). Our analysis indicates that the wMeshes produce forward solutions that differ from those of the conventional regular meshes. We present results showing that the wMesh head modeling approach enhances the sensitivity and accuracy of the FE solutions at the interfaces or in the regions where the anisotropic conductivities change sharply or their directional changes are complex. The fully automatic wMesh generation technique should be useful for modeling an individual-specific and high-resolution anisotropic FE head model incorporating realistic anisotropic conductivity distributions
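
    The RDM and MAG metrics quoted above have short closed forms. The sketch below uses one common convention from the EEG/MEG forward-modeling literature (definitions vary slightly between papers, so treat it as a reasonable reading rather than the exact formulas of the study); the potential vectors are synthetic.

      import numpy as np

      def rdm(v_ref, v_test):
          # relative difference measure: norm of the difference of unit-normalized
          # solutions; 0 means identical topography, 2 is the maximum
          v1 = v_ref / np.linalg.norm(v_ref)
          v2 = v_test / np.linalg.norm(v_test)
          return np.linalg.norm(v1 - v2)

      def mag(v_ref, v_test):
          # magnification factor: ratio of overall magnitudes; 1 means identical
          return np.linalg.norm(v_test) / np.linalg.norm(v_ref)

      # Hypothetical potentials from two different head meshes at the same electrodes.
      rng = np.random.default_rng(3)
      v_regular = rng.normal(size=128)
      v_wmesh = 0.8 * v_regular + 0.1 * rng.normal(size=128)
      print("RDM =", round(rdm(v_regular, v_wmesh), 3), " MAG =", round(mag(v_regular, v_wmesh), 3))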

  10. 3D reconstruction method from biplanar radiography using non-stereocorresponding points and elastic deformable meshes.

    PubMed

    Mitton, D; Landry, C; Véron, S; Skalli, W; Lavaste, F; De Guise, J A

    2000-03-01

    Standard 3D reconstruction of bones using stereoradiography is limited by the number of anatomical landmarks visible in more than one projection. The proposed technique enables the 3D reconstruction of additional landmarks that can be identified in only one of the radiographs. The principle of this method is the deformation of an elastic object that respects stereocorresponding and non-stereocorresponding observations available in different projections. This technique is based on the principle that any non-stereocorresponding point belongs to a line joining the X-ray source and the projection of the point in one view. The aim is to determine the 3D position of these points on their line of projection when submitted to geometrical and topological constraints. This technique is used to obtain the 3D geometry of 18 cadaveric upper cervical vertebrae. The reconstructed geometry obtained is compared with direct measurements using a magnetic digitiser. The order of precision determined with the point-to-surface distance between the reconstruction obtained with that technique and reference measurements is about 1 mm, depending on the vertebrae studied. Comparison results indicate that the obtained reconstruction is close to the actual vertebral geometry. This method can therefore be proposed to obtain the 3D geometry of vertebrae.

  11. Materials and noncoplanar mesh designs for integrated circuits with linear elastic responses to extreme mechanical deformations

    PubMed Central

    Kim, Dae-Hyeong; Song, Jizhou; Choi, Won Mook; Kim, Hoon-Sik; Kim, Rak-Hwan; Liu, Zhuangjian; Huang, Yonggang Y.; Hwang, Keh-Chih; Zhang, Yong-wei; Rogers, John A.

    2008-01-01

    Electronic systems that offer elastic mechanical responses to high-strain deformations are of growing interest because of their ability to enable new biomedical devices and other applications whose requirements are impossible to satisfy with conventional wafer-based technologies or even with those that offer simple bendability. This article introduces materials and mechanical design strategies for classes of electronic circuits that offer extremely high stretchability, enabling them to accommodate even demanding configurations such as corkscrew twists with tight pitch (e.g., 90° in ≈1 cm) and linear stretching to “rubber-band” levels of strain (e.g., up to ≈140%). The use of single crystalline silicon nanomaterials for the semiconductor provides performance in stretchable complementary metal-oxide-semiconductor (CMOS) integrated circuits approaching that of conventional devices with comparable feature sizes formed on silicon wafers. Comprehensive theoretical studies of the mechanics reveal the way in which the structural designs enable these extreme mechanical properties without fracturing the intrinsically brittle active materials or even inducing significant changes in their electrical properties. The results, as demonstrated through electrical measurements of arrays of transistors, CMOS inverters, ring oscillators, and differential amplifiers, suggest a valuable route to high-performance stretchable electronics. PMID:19015528

  12. Distributed control in adaptive optics: deformable mirror and turbulence modeling

    NASA Astrophysics Data System (ADS)

    Ellenbroek, Rogier; Verhaegen, Michel; Doelman, Niek; Hamelinck, Roger; Rosielle, Nick; Steinbuch, Maarten

    2006-06-01

    Future large optical telescopes require adaptive optics (AO) systems whose deformable mirrors (DM) have ever more degrees of freedom. This paper describes advances that are made in a project aimed to design a new AO system that is extendible to meet tomorrow's specifications. Advances on the mechanical design are reported in a companion paper [6272-75], whereas this paper discusses the controller design aspects. The numerical complexity of controller designs often used for AO scales with the fourth power in the diameter of the telescope's primary mirror. For future large telescopes this will undoubtedly become a critical aspect. This paper demonstrates the feasibility of solving this issue with a distributed controller design. A distributed framework will be introduced in which each actuator has a separate processor that can communicate with a few direct neighbors. First, the DM will be modeled and shown to be compatible with the framework. Then, adaptive turbulence models that fit the framework will be shown to adequately capture the spatio-temporal behavior of the atmospheric disturbance, constituting a first step towards a distributed optimal control. Finally, the wavefront reconstruction step is fitted into the distributed framework such that the computational complexity for each processor increases only linearly with the telescope diameter.

  13. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    SciTech Connect

    B. Philip; Z. Wang; M.A. Berrill; M. Birke; M. Pernice

    2014-04-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton–Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
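
    The Jacobian-free Newton-Krylov idea mentioned above replaces the Jacobian-vector product with a finite difference of the nonlinear residual, so no Jacobian matrix is ever assembled. The sketch below applies it to a small toy nonlinear diffusion problem (not the radiation-diffusion system, and without the AMR grids or preconditioners of the paper).

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      n = 50

      def residual(u):
          # F(u) = -d/dx( (1+u^2) du/dx ) - 1 on the unit interval, u = 0 at both ends
          up = np.concatenate(([0.0], u, [0.0]))
          h = 1.0 / (n + 1)
          k = 1.0 + (0.5 * (up[1:] + up[:-1])) ** 2          # (1 + u^2) at the faces
          flux = k * (up[1:] - up[:-1]) / h
          return -(flux[1:] - flux[:-1]) / h - 1.0

      u = np.zeros(n)
      for it in range(10):                                    # Newton iterations
          F = residual(u)
          if np.linalg.norm(F) < 1e-10:
              break
          eps = 1e-7
          # matrix-free Jacobian-vector product J v ~ (F(u + eps v) - F(u)) / eps
          Jv = LinearOperator((n, n),
                              matvec=lambda v: (residual(u + eps * v) - F) / eps)
          du, info = gmres(Jv, -F)                            # Krylov solve, no matrix stored
          u = u + du
      print("Newton iterations:", it, " residual norm:", np.linalg.norm(residual(u)))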

  14. Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2012-01-01

    Time-dependent Navier-Stokes simulations have been carried out for a rigid V22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V22. The normal force and pitching moment coefficients were in good agreement with flight-test data.

  15. Multigroup radiation hydrodynamics with flux-limited diffusion and adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    González, M.; Vaytet, N.; Commerçon, B.; Masson, J.

    2015-06-01

    Context. Radiative transfer plays a crucial role in the star formation process. Because of the high computational cost, radiation-hydrodynamics simulations performed up to now have mainly been carried out in the grey approximation. In recent years, multifrequency radiation-hydrodynamics models have started to be developed in an attempt to better account for the large variations in opacities as a function of frequency. Aims: We wish to develop an efficient multigroup algorithm for the adaptive mesh refinement code RAMSES which is suited to heavy proto-stellar collapse calculations. Methods: Because of the prohibitive timestep constraints of an explicit radiative transfer method, we constructed a time-implicit solver based on a stabilized bi-conjugate gradient algorithm, and implemented it in RAMSES under the flux-limited diffusion approximation. Results: We present a series of tests that demonstrate the high performance of our scheme in dealing with frequency-dependent radiation-hydrodynamic flows. We also present a preliminary simulation of a 3D proto-stellar collapse using 20 frequency groups. Differences between grey and multigroup results are briefly discussed, and the large amount of information this new method brings us is also illustrated. Conclusions: We have implemented a multigroup flux-limited diffusion algorithm in the RAMSES code. The method performed well against standard radiation-hydrodynamics tests, and was also shown to be ripe for exploitation in the computational star formation context.

  16. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    NASA Astrophysics Data System (ADS)

    Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.

    2014-04-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.

  17. Output-Based Adaptive Meshing Applied to Space Launch System Booster Separation Analysis

    NASA Technical Reports Server (NTRS)

    Dalle, Derek J.; Rogers, Stuart E.

    2015-01-01

    This paper presents details of Computational Fluid Dynamic (CFD) simulations of the Space Launch System during solid-rocket booster separation using the Cart3D inviscid code with comparisons to Overflow viscous CFD results and a wind tunnel test performed at NASA Langley Research Center's Unitary Plan Wind Tunnel. The Space Launch System (SLS) launch vehicle includes two solid-rocket boosters that burn out before the primary core stage and thus must be discarded during the ascent trajectory. The main challenges for creating an aerodynamic database for this separation event are the large number of basis variables (including orientation of the core, relative position and orientation of the boosters, and rocket thrust levels) and the complex flow caused by the booster separation motors. The solid-rocket boosters are modified from their form when used with the Space Shuttle Launch Vehicle, which has a rich flight history. However, the differences between the SLS core and the Space Shuttle External Tank result in the boosters separating with much narrower clearances, and so reducing aerodynamic uncertainty is necessary to clear the integrated system for flight. This paper discusses an approach that has been developed to analyze about 6000 wind tunnel simulations and 5000 flight vehicle simulations using Cart3D in adaptive-meshing mode. In addition, a discussion is presented of Overflow viscous CFD runs used for uncertainty quantification. Finally, the article presents lessons learned and improvements that will be implemented in future separation databases.

  18. MASS AND MAGNETIC DISTRIBUTIONS IN SELF-GRAVITATING SUPER-ALFVENIC TURBULENCE WITH ADAPTIVE MESH REFINEMENT

    SciTech Connect

    Collins, David C.; Norman, Michael L.; Padoan, Paolo; Xu Hao

    2011-04-10

    In this work, we present the mass and magnetic distributions found in a recent adaptive mesh refinement magnetohydrodynamic simulation of supersonic, super-Alfvenic, self-gravitating turbulence. Power-law tails are found in both mass density and magnetic field probability density functions, with P(ρ) ∝ ρ^-1.6 and P(B) ∝ B^-2.7. A power-law relationship is also found between magnetic field strength and density, with B ∝ ρ^0.5, throughout the collapsing gas. The mass distribution of gravitationally bound cores is shown to be in excellent agreement with recent observation of prestellar cores. The mass-to-flux distribution of cores is also found to be in excellent agreement with recent Zeeman splitting measurements. We also compare the relationship between velocity dispersion and density to the same cores, and find an increasing relationship between the two, with σ ∝ n^0.25, also in agreement with the observations. We then estimate the potential effects of ambipolar diffusion in our cores and find that due to the weakness of the magnetic field in our simulation, the inclusion of ambipolar diffusion in our simulation will not cause significant alterations of the flow dynamics.

  19. Numerical simulation of current sheet formation in a quasiseparatrix layer using adaptive mesh refinement

    SciTech Connect

    Effenberger, Frederic; Thust, Kay; Grauer, Rainer; Dreher, Juergen; Arnold, Lukas

    2011-03-15

    The formation of a thin current sheet in a magnetic quasiseparatrix layer (QSL) is investigated by means of numerical simulation using a simplified ideal, low-β, MHD model. The initial configuration and driving boundary conditions are relevant to phenomena observed in the solar corona and were studied earlier by Aulanier et al. [Astron. Astrophys. 444, 961 (2005)]. In extension to that work, we use the technique of adaptive mesh refinement (AMR) to significantly enhance the local spatial resolution of the current sheet during its formation, which enables us to follow the evolution into a later stage. Our simulations are in good agreement with the results of Aulanier et al. up to the calculated time in that work. In a later phase, we observe a basically unarrested collapse of the sheet to length scales that are more than one order of magnitude smaller than those reported earlier. The current density attains correspondingly larger maximum values within the sheet. During this thinning process, which is finally limited by lack of resolution even in the AMR studies, the current sheet moves upward, following a global expansion of the magnetic structure during the quasistatic evolution. The sheet is locally one-dimensional and the plasma flow in its vicinity, when transformed into a comoving frame, qualitatively resembles a stagnation point flow. In conclusion, our simulations support the idea that extremely high current densities are generated in the vicinities of QSLs as a response to external perturbations, with no sign of saturation.

  20. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

    A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, the cell-based adaptive mesh refinement (AMR) is fully implemented on GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by GPUs agree very well with the exact or experimental results in literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting Shared Memory based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes have achieved 2x speedup on GT9800 and 18x on Tesla C2050, which demonstrates that parallel running of the cell-based AMR method on GPU is feasible and efficient. Our results also indicate that the new development of GPU architecture benefits the fluid dynamics computing significantly.

  1. Relativistic Flows Using Spatial And Temporal Adaptive Structured Mesh Refinement. I. Hydrodynamics

    SciTech Connect

    Wang, Peng; Abel, Tom; Zhang, Weiqun; /KIPAC, Menlo Park

    2007-04-02

    Astrophysical relativistic flow problems require high resolution three-dimensional numerical simulations. In this paper, we describe a new parallel three-dimensional code for simulations of special relativistic hydrodynamics (SRHD) using both spatially and temporally structured adaptive mesh refinement (AMR). We use the method of lines to discretize the SRHD equations spatially and a total variation diminishing (TVD) Runge-Kutta scheme for time integration. For spatial reconstruction, we have implemented the piecewise linear method (PLM), piecewise parabolic method (PPM), third order convex essentially non-oscillatory (CENO) and third and fifth order weighted essentially non-oscillatory (WENO) schemes. Flux is computed using either direct flux reconstruction or approximate Riemann solvers including HLL, modified Marquina flux, local Lax-Friedrichs flux formulas and HLLC. The AMR part of the code is built on top of the cosmological Eulerian AMR code enzo, which uses the Berger-Colella AMR algorithm and is parallelized with dynamic load balancing using the widely available Message Passing Interface library. We discuss the coupling of the AMR framework with the relativistic solvers and show its performance on eleven test problems.
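
    To show the shape of an HLL flux, the sketch below applies it to the 1D inviscid Burgers equation as a scalar stand-in for the relativistic system (the HLL formula has the same structure; only the physical flux and wave-speed estimates change). Grid, initial data, and wave-speed estimates are illustrative assumptions.

      import numpy as np

      def flux(u):
          return 0.5 * u * u

      def hll_flux(uL, uR):
          sL = np.minimum(uL, uR)                     # simple wave-speed estimates for Burgers
          sR = np.maximum(uL, uR)
          fL, fR = flux(uL), flux(uR)
          denom = np.where(sR > sL, sR - sL, 1.0)     # avoid 0/0 when uL == uR
          f_star = (sR * fL - sL * fR + sL * sR * (uR - uL)) / denom
          return np.where(sL >= 0.0, fL, np.where(sR <= 0.0, fR, f_star))

      # First-order finite-volume update on a periodic grid (Riemann-problem data).
      nx = 200
      dx = 1.0 / nx
      u = np.where(np.arange(nx) * dx < 0.5, 1.0, -0.5)
      dt = 0.4 * dx / np.max(np.abs(u))
      for _ in range(100):
          uL, uR = u, np.roll(u, -1)                  # left/right states at each right face
          F = hll_flux(uL, uR)
          u = u - dt / dx * (F - np.roll(F, 1))
      print("min/max after 100 steps:", float(u.min()), float(u.max()))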

  2. Adaptive Mesh Refinement for a High-Symmetry Singular Euler Flow

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Bhattacharjee, A.; Grauer, R.

    2002-11-01

    Starting from a highly symmetric initial condition motivated by the work of Kida [J. Phys. Soc Jpn. 54, 2132 (1995)] and Boratav and Pelz [Phys. Fluids 6, 2757 (1994)], we use the technique of block-structured adaptive mesh refinement (AMR) to numerically investigate the development of a self-similar singular solution to the incompressible Euler equations. The scheme, previously used by Grauer et al [Phys. Rev. Lett. 84, 4850 (1998)], is particularly well suited to follow the development of singular structures as it allows for effective resolutions far beyond those accessible using fixed grid algorithms. A self-similar collapse is observed in the simulation, where the maximum vorticity blows up as 1/(t_crit-t). Ng and Bhattacharjee [Phys Rev E 54, 1530 (1996)] have presented a sufficient condition for a finite-time singularity in this highly symmetric flow involving the fourth-order spatial derivative of the pressure at and near the origin. We test this sufficient condition and investigate the evolution of the spatial range over which this condition holds in our numerical results. We also demonstrate numerically that this singularity is unstable because in a full simulation that does not build in the symmetries of the initial condition, small perturbations introduced by AMR lead to nonsymmetric evolution of the vortices.

  3. 3D Boltzmann Simulation of the Io's Plasma Environment with Adaptive Mesh and Particle Refinement

    NASA Astrophysics Data System (ADS)

    Lipatov, A. S.; Combi, M. R.

    2002-12-01

    The global dynamics of the ionized and neutral components in the environment of Io plays an important role in the interaction of Jupiter's corotating magnetospheric plasma with Io [Combi et al., 2002; 1998; Kabin et al., 2001]. The stationary simulation of this problem was done in the MHD [Combi et al., 1998; Linker et al., 1998; Kabin et al., 2001] and the electrodynamic [Saur et al., 1999] approaches. In this report, we develop a method of kinetic ion-neutral simulation, which is based on a multiscale adaptive mesh, particle and algorithm refinement. This method employs the fluid description for electrons whereas for ions the drift-kinetic and particle approaches are used. This method takes into account charge-exchange and photoionization processes. The first results of such simulation of the dynamics of ions in Io's environment are discussed in this report. References: M. R. Combi et al., J. Geophys. Res., 103, 9071, 1998; M. R. Combi, T. I. Gombosi, and K. Kabin, Atmospheres in the Solar System: Comparative Aeronomy, Geophys. Monograph Series, 130, 151, 2002; K. Kabin et al., Planetary and Space Sci., 49, 337, 2001; J. A. Linker et al., J. Geophys. Res., 103(E9), 19867, 1998; J. Saur et al., J. Geophys. Res., 104, 25105, 1999.

  4. Parallel adaptive mesh refinement method based on WENO finite difference scheme for the simulation of multi-dimensional detonation

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang

    2015-10-01

    For numerical simulation of detonation, computational cost using uniform meshes is large due to the vast separation in both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for numerical investigation of multi-dimensional detonation. A well-designed AMR method based on finite difference weighted essentially non-oscillatory (WENO) scheme, named as AMR&WENO is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that the AMR&WENO is accurate and has a high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide us further insight into the high performance of the parallel AMR&WENO method.
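
    The load-balancing idea above, ordering cells along a space-filling curve and cutting the ordered list into equal-work chunks, can be illustrated compactly. The sketch below uses a Morton (Z-order) key as a simpler stand-in for the Hilbert curve used in the paper; the cell coordinates and per-cell costs are made up.

      import numpy as np

      def morton_key(ix, iy, bits=16):
          # interleave the bits of the integer cell coordinates
          key = 0
          for b in range(bits):
              key |= ((ix >> b) & 1) << (2 * b)
              key |= ((iy >> b) & 1) << (2 * b + 1)
          return key

      rng = np.random.default_rng(4)
      ncell, nproc = 10_000, 8
      ix = rng.integers(0, 1 << 10, ncell)           # integer cell coordinates
      iy = rng.integers(0, 1 << 10, ncell)
      work = rng.uniform(1.0, 3.0, ncell)            # per-cell cost (finer cells cost more)

      # Sort cells along the curve, then cut the cumulative work into nproc chunks.
      order = np.argsort([morton_key(int(a), int(b)) for a, b in zip(ix, iy)])
      cum = np.cumsum(work[order])
      targets = cum[-1] * (np.arange(1, nproc) / nproc)
      cuts = np.searchsorted(cum, targets)
      parts = np.split(order, cuts)
      loads = [work[p].sum() for p in parts]
      print("load imbalance: %.3f" % (max(loads) / (sum(loads) / nproc)))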

  5. Higher-order conservative interpolation between control-volume meshes: Application to advection and multiphase flow problems with dynamic mesh adaptivity

    NASA Astrophysics Data System (ADS)

    Adam, A.; Pavlidis, D.; Percival, J. R.; Salinas, P.; Xie, Z.; Fang, F.; Pain, C. C.; Muggeridge, A. H.; Jackson, M. D.

    2016-09-01

    A general, higher-order, conservative and bounded interpolation for the dynamic and adaptive meshing of control-volume fields dual to continuous and discontinuous finite element representations is presented. Existing techniques such as node-wise interpolation are not conservative and do not readily generalise to discontinuous fields, whilst conservative methods such as Grandy interpolation are often too diffusive. The new method uses control-volume Galerkin projection to interpolate between control-volume fields. Bounded solutions are ensured by using a post-interpolation diffusive correction. Example applications of the method to interface capturing during advection and also to the modelling of multiphase porous media flow are presented to demonstrate the generality and robustness of the approach.

  6. Analysis of adaptive mesh refinement for IMEX discontinuous Galerkin solutions of the compressible Euler equations with application to atmospheric simulations

    NASA Astrophysics Data System (ADS)

    Kopera, Michal A.; Giraldo, Francis X.

    2014-10-01

    The resolutions of interests in atmospheric simulations require prohibitively large computational resources. Adaptive mesh refinement (AMR) tries to mitigate this problem by putting high resolution in crucial areas of the domain. We investigate the performance of a tree-based AMR algorithm for the high order discontinuous Galerkin method on quadrilateral grids with non-conforming elements. We perform a detailed analysis of the cost of AMR by comparing this to uniform reference simulations of two standard atmospheric test cases: density current and rising thermal bubble. The analysis shows up to 15 times speed-up of the AMR simulations with the cost of mesh adaptation below 1% of the total runtime. We pay particular attention to the implicit-explicit (IMEX) time integration methods and show that the ARK2 method is more robust with respect to dynamically adapting meshes than BDF2. Preliminary analysis of preconditioning reveals that it can be an important factor in the AMR overhead. The compiler optimizations provide significant runtime reduction and positively affect the effectiveness of AMR allowing for speed-ups greater than it would follow from the simple performance model.

  7. Numerical Modelling of Volcanic Ash Settling in Water Using Adaptive Unstructured Meshes

    NASA Astrophysics Data System (ADS)

    Jacobs, C. T.; Collins, G. S.; Piggott, M. D.; Kramer, S. C.; Wilson, C. R.

    2011-12-01

    At the bottom of the world's oceans lies layer after layer of ash deposited from past volcanic eruptions. Correct interpretation of these layers can provide important constraints on the duration and frequency of volcanism, but requires a full understanding of the complex multi-phase settling and deposition process. Analogue experiments of tephra settling through a tank of water demonstrate that small ash particles can either settle individually, or collectively as a gravitationally unstable ash-laden plume. These plumes are generated when the concentration of particles exceeds a certain threshold such that the density of the tephra-water mixture is sufficiently large relative to the underlying particle-free water for a gravitational Rayleigh-Taylor instability to develop. These ash-laden plumes are observed to descend as a vertical density current at a velocity much greater than that of single particles, which has important implications for the emplacement of tephra deposits on the seabed. To extend the results of laboratory experiments to large scales and explore the conditions under which vertical density currents may form and persist, we have developed a multi-phase extension to Fluidity, a combined finite element / control volume CFD code that uses adaptive unstructured meshes. As a model validation, we present two- and three-dimensional simulations of tephra plume formation in a water tank that replicate laboratory experiments (Carey, 1997, doi:10.1130/0091-7613(1997)025<0839:IOCSOT>2.3.CO;2). An inflow boundary condition at the top of the domain allows particles to flux in at a constant rate of 0.472 g m⁻² s⁻¹, forming a near-surface layer of tephra particles, which initially settle individually at the predicted Stokes velocity of 1.7 mm s⁻¹. As more tephra enters the water and the particle concentration increases, the layer eventually becomes unstable and plumes begin to form, descending with velocities more than ten times greater than those of individual
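
    The individual-particle settling speed quoted above follows the standard Stokes formula v = (2/9)(ρ_p − ρ_f) g r² / μ. The sketch below evaluates it with illustrative values for fine tephra in water; the particle radius and density are assumptions, not the experiment's actual parameters, so the result should only land in the mm/s range rather than reproduce 1.7 mm/s exactly.

      # Stokes settling velocity of a single small sphere in water.
      rho_p = 2300.0        # particle density, kg/m^3 (assumed)
      rho_f = 1000.0        # water density, kg/m^3
      mu = 1.0e-3           # dynamic viscosity of water, Pa s
      g = 9.81              # gravitational acceleration, m/s^2
      r = 24e-6             # particle radius, m (assumed ~48 micron diameter)

      v = 2.0 / 9.0 * (rho_p - rho_f) * g * r ** 2 / mu
      print(f"Stokes velocity: {v * 1000:.2f} mm/s")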

  8. A learning heuristic for space mapping and searching self-organizing systems using adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Phillips, Carolyn L.

    2014-09-01

    In a complex self-organizing system, small changes in the interactions between the system's components can result in different emergent macrostructures or macrobehavior. In chemical engineering and material science, such spontaneously self-assembling systems, using polymers, nanoscale or colloidal-scale particles, DNA, or other precursors, are an attractive way to create materials that are precisely engineered at a fine scale. Changes to the interactions can often be described by a set of parameters. Different contiguous regions in this parameter space correspond to different ordered states. Since these ordered states are emergent, often experiment, not analysis, is necessary to create a diagram of ordered states over the parameter space. By issuing queries to points in the parameter space (e.g., performing a computational or physical experiment), ordered states can be discovered and mapped. Queries can be costly in terms of resources or time, however. In general, one would like to learn the most information using the fewest queries. Here we introduce a learning heuristic for issuing queries to map and search a two-dimensional parameter space. Using a method inspired by adaptive mesh refinement, the heuristic iteratively issues batches of queries to be executed in parallel based on past information. By adjusting the search criteria, different types of searches (for example, a uniform search, exploring boundaries, sampling all regions equally) can be flexibly implemented. We show that this method will densely search the space, while preferentially targeting certain features. Using numerical examples, including a study simulating the self-assembly of complex crystals, we show how this heuristic can discover new regions and map boundaries more accurately than a uniformly distributed set of queries.
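
    As a minimal illustration of an AMR-style parameter-space search (not the paper's heuristic, which batches queries and supports several search criteria), the sketch below queries the corners of each cell and splits any cell whose corners disagree about the ordered state, so queries concentrate near phase boundaries. The labelling function is a synthetic stand-in for an expensive self-assembly simulation, and corner queries are repeated here rather than cached.

      import numpy as np

      def query(x, y):                            # hypothetical "experiment" returning a label
          return int(x ** 2 + y ** 2 < 0.5) + int(y > x)   # three regions in [0, 1]^2

      def refine(cell, depth, max_depth, out):
          x0, x1, y0, y1 = cell
          corners = {query(x0, y0), query(x1, y0), query(x0, y1), query(x1, y1)}
          if len(corners) == 1 or depth == max_depth:
              out.append((cell, corners))
              return
          xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
          for sub in [(x0, xm, y0, ym), (xm, x1, y0, ym),
                      (x0, xm, ym, y1), (xm, x1, ym, y1)]:
              refine(sub, depth + 1, max_depth, out)

      leaves = []
      refine((0.0, 1.0, 0.0, 1.0), 0, 7, leaves)
      mixed = sum(1 for _, c in leaves if len(c) > 1)
      print(len(leaves), "leaf cells,", mixed, "straddle a phase boundary")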

  9. Parallel computation of three-dimensional flows using overlapping grids with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Henshaw, William D.; Schwendeman, Donald W.

    2008-08-01

    This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the accuracy of the numerical solutions is assessed quantitatively through an estimation of the errors from a grid convergence study. The parallel performance of the
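
    The work-distribution step above can be illustrated with a generic greedy, bin-packing-style heuristic: assign each grid, largest first, to the currently least-loaded processor. This is not necessarily the modified algorithm of the paper, and the per-grid workloads below are invented.

      import heapq

      grid_work = [940, 410, 380, 260, 220, 180, 150, 90, 60, 40]   # e.g. cell counts per grid
      nproc = 4

      heap = [(0.0, p, []) for p in range(nproc)]     # (load, processor id, assigned grids)
      heapq.heapify(heap)
      for gid, w in sorted(enumerate(grid_work), key=lambda kv: -kv[1]):
          load, p, grids = heapq.heappop(heap)        # least-loaded processor so far
          grids.append(gid)
          heapq.heappush(heap, (load + w, p, grids))

      for load, p, grids in sorted(heap, key=lambda t: t[1]):
          print(f"proc {p}: load {load:6.0f}  grids {grids}")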

  10. Parallel Computation of Three-Dimensional Flows using Overlapping Grids with Adaptive Mesh Refinement

    SciTech Connect

    Henshaw, W; Schwendeman, D

    2007-11-15

    This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.

  11. Projections of grounding line retreat in West Antarctica carried out with an adaptive mesh model

    NASA Astrophysics Data System (ADS)

    Cornford, Stephen; Payne, Antony; Martin, Daniel; Le Brocq, Anne

    2013-04-01

    Present and future sea level rise associated with mass loss from West Antarctica is typically attributed to marine glaciers retreating in response to a warming ocean. Warmer waters melt the floating ice shelves that restrain some, if not all, marine glaciers, and the glaciers themselves respond by speeding up. That leads to thinning and in turn grounding line retreat. Satellite observations indicate that the Amundsen Sea Embayment and, in particular, Pine Island Glacier, are undergoing this kind of dynamic change today. Numerical models, however, struggle to reproduce the observed behavior because either high resolution or some other kind of special treatment is required at the grounding line. We present 200-year projections of three major glacier systems of West Antarctica: those that drain into the Amundsen Sea, the Filchner-Ronne Ice Shelf and the Ross Ice Shelf. We do so using the newly developed BISICLES ice-sheet model, which employs adaptive mesh refinement to maintain sub-kilometer resolution close to the grounding line and coarser resolution elsewhere. Ice accumulation and ice-shelf melt rate are derived from a range of models of the Antarctic atmosphere and ocean forced by the SRES A1B and E1 scenarios. We find that a substantial proportion of the grounding line in West Antarctica retreats; however, the total sea level rise is less than 50 mm by 2100, and less than 100 mm by 2200. The lion's share of the mass loss is attributed to Pine Island Glacier, while its immediate neighbor Thwaites Glacier does not retreat until the end of the simulations.

  12. General relativistic hydrodynamics with Adaptive-Mesh Refinement (AMR) and modeling of accretion disks

    NASA Astrophysics Data System (ADS)

    Donmez, Orhan

    We present a general procedure to solve the General Relativistic Hydrodynamical (GRH) equations with Adaptive-Mesh Refinement (AMR) and to model an accretion disk around a black hole. To do this, the GRH equations are written in a conservative form to exploit their hyperbolic character. The general relativistic hydrodynamic equations are solved numerically with High Resolution Shock Capturing (HRSC) schemes, specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. We use Marquina fluxes with MUSCL left and right states to solve the GRH equations. First, we carry out different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations to verify the second-order convergence of the code in 1D, 2D and 3D. Second, we solve the GRH equations and use the general relativistic test problems to compare the numerical solutions with analytic ones. In order to do this, we couple the flux part of the general relativistic hydrodynamic equations with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time. The test problems examined include shock tubes, geodesic flows, and circular motion of a particle around the black hole. Finally, we apply this code to accretion disk problems around the black hole, using the Schwarzschild metric as the background of the computational domain. We find spiral shocks on the accretion disk, which are expected from observations. We also examine the star-disk interaction near a massive black hole. We find that when stars are ground down or a hole is punched in the accretion disk, they create shock waves which destroy the accretion disk.
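
    The abstract refers to coupling the flux and source parts of the GRH equations with Strang splitting to retain second-order accuracy. The following sketch only illustrates the splitting pattern itself, using toy first-order operators for a scalar advection-reaction equation; the operators, grid, and time step are illustrative assumptions, not the paper's GRH solver.

```python
import numpy as np

def strang_step(u, dt, flux_update, source_update):
    """One Strang-split step: source(dt/2) -> flux(dt) -> source(dt/2).

    The splitting itself is second-order accurate provided the two
    sub-operators are; the toy operators below are only first order and
    serve purely to illustrate the call pattern.
    """
    u = source_update(u, 0.5 * dt)
    u = flux_update(u, dt)
    u = source_update(u, 0.5 * dt)
    return u

# Toy scalar problem du/dt + a du/dx = s, split into advection and source parts.
a, s, dx, dt = 1.0, 0.5, 0.02, 0.01
x = np.arange(0.0, 1.0, dx)
u = np.exp(-100.0 * (x - 0.5) ** 2)
advect = lambda v, tau: v - a * tau / dx * (v - np.roll(v, 1))  # periodic upwind
react = lambda v, tau: v + s * tau                              # constant source
for _ in range(50):
    u = strang_step(u, dt, advect, react)
print(u.max())
```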

  13. GAMMA-RAY BURST DYNAMICS AND AFTERGLOW RADIATION FROM ADAPTIVE MESH REFINEMENT, SPECIAL RELATIVISTIC HYDRODYNAMIC SIMULATIONS

    SciTech Connect

    De Colle, Fabio; Ramirez-Ruiz, Enrico; Granot, Jonathan; Lopez-Camara, Diego

    2012-02-20

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^-k, bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.

  14. Gamma-Ray Burst Dynamics and Afterglow Radiation from Adaptive Mesh Refinement, Special Relativistic Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico

    2012-02-01

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^-k, bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.
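
    Both versions of this abstract compare the deceleration scale of the blast wave with the Sedov length in a medium with ρ ∝ r^-k. The sketch below evaluates one commonly used generalization of the Sedov length for a power-law ambient density; the exact normalization used by the authors may differ, and the input numbers are purely illustrative.

```python
import math

def sedov_length(E, A, k, c=2.998e10):
    """Generalized Sedov length for an ambient density rho(r) = A * r**(-k).

    One commonly used convention (an assumption of this sketch, not
    necessarily the paper's normalization) is the radius at which the
    rest-mass energy of the swept-up medium equals the blast-wave energy E:
        l = [ (3 - k) * E / (4 * pi * A * c**2) ] ** (1 / (3 - k))
    CGS units are assumed throughout.
    """
    return ((3.0 - k) * E / (4.0 * math.pi * A * c ** 2)) ** (1.0 / (3.0 - k))

# Illustrative numbers only: E = 1e53 erg in a uniform medium with n = 1 cm^-3.
m_proton = 1.6726e-24                                   # proton mass [g]
print(sedov_length(E=1e53, A=1.0 * m_proton, k=0.0))    # of order 1e18 cm
```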

  15. Robust moving mesh algorithms for hybrid stretched meshes: Application to moving boundaries problems

    NASA Astrophysics Data System (ADS)

    Landry, Jonathan; Soulaïmani, Azzeddine; Luke, Edward; Ben Haj Ali, Amine

    2016-12-01

    A robust Mesh-Mover Algorithm (MMA) approach is designed to adapt meshes for moving-boundary problems. A new methodology is developed from the best combination of well-known algorithms in order to preserve the quality of the initial meshes. In most situations, MMAs distribute mesh deformation while preserving a good mesh quality. However, invalid meshes are generated when the motion is complex and/or involves multiple bodies. After studying a few MMA limitations, we propose the following approach: use the Inverse Distance Weighting (IDW) function to produce the displacement field, then apply the Geometric Element Transformation Method (GETMe) smoothing algorithms to improve the resulting mesh quality, and use an untangler to correct inverted (negative) elements. The proposed approach has proven efficient at adapting meshes for various realistic aerodynamic motions: a symmetric wing undergoing large tip bending and twisting, and the high-lift components of a swept wing moving through different flight stages. Finally, the fluid flow problem has been solved on the moved meshes, producing results close to experimental ones. However, for situations where moving boundaries are too close to each other, more improvements need to be made or other approaches should be taken, such as an overset grid method.
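
    The first step of the approach summarized above is an Inverse Distance Weighting (IDW) interpolation of the prescribed boundary displacements onto the interior mesh nodes. The sketch below shows a minimal IDW propagation step; the weighting exponent and the toy rotating boundary are assumptions for illustration and do not reproduce the authors' MMA, GETMe smoothing, or untangling stages.

```python
import numpy as np

def idw_displacement(nodes, boundary_pts, boundary_disp, power=3.0, eps=1e-12):
    """Propagate boundary displacements to interior nodes by IDW.

    nodes:         (N, d) interior mesh node coordinates
    boundary_pts:  (M, d) moving-boundary node coordinates
    boundary_disp: (M, d) prescribed displacements of the boundary nodes
    The exponent 'power' is an illustrative choice, not the paper's value.
    """
    disp = np.zeros_like(nodes)
    for i, x in enumerate(nodes):
        d = np.linalg.norm(boundary_pts - x, axis=1)
        w = 1.0 / (d ** power + eps)
        disp[i] = (w[:, None] * boundary_disp).sum(axis=0) / w.sum()
    return disp

# Tiny 2D example: rotate a four-point "boundary" and move three interior nodes.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
bpts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
bdisp = bpts @ R.T - bpts
interior = np.array([[0.3, 0.2], [2.0, 0.0], [0.0, 3.0]])
print(idw_displacement(interior, bpts, bdisp))
```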

  16. An object-oriented and quadrilateral-mesh based solution adaptive algorithm for compressible multi-fluid flows

    NASA Astrophysics Data System (ADS)

    Zheng, H. W.; Shu, C.; Chew, Y. T.

    2008-07-01

    In this paper, an object-oriented and quadrilateral-mesh based solution adaptive algorithm for the simulation of compressible multi-fluid flows is presented. The HLLC scheme (Harten, Lax and van Leer approximate Riemann solver with the Contact wave restored) is extended to adaptively solve compressible multi-fluid flows under complex geometry on unstructured meshes. It is also extended to second-order accuracy by using MUSCL extrapolation. The node, edge and cell objects are arranged in such an object-oriented manner that each of them inherits from a basic object. A custom doubly linked list is designed to manage these objects so that inserting new objects and removing existing ones (nodes, edges and cells) is independent of the number of objects, with O(1) complexity. In addition, the cells at different refinement levels are stored in separate lists. This avoids the recursive calculation of the solution on mother (non-leaf) cells. Thus, high efficiency is obtained due to these features. Moreover, compared with other cell-edge adaptive methods, the separate storage of nodes reduces the memory required for redundant nodes, especially when the number of levels is large or the space dimension is three. Five two-dimensional examples are used to examine its performance. These examples include a vortex evolution problem, an interface-only problem on structured and unstructured meshes, a bubble explosion under water, bubble-shock interaction, and shock-interface interaction inside a cylindrical vessel. Numerical results indicate that there is no oscillation of pressure or velocity across the interface, and that the method can be applied to compressible multi-fluid flows with a large density ratio (1000) and a strong shock wave (pressure ratio of 10,000) interacting with the interface.
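
    The O(1) insertion and removal of mesh entities described above relies on a doubly linked list rather than array storage. A minimal sketch of that data-structure choice is given below; the class and member names are invented for the example and do not mirror the authors' implementation.

```python
class Cell:
    """A mesh entity (node, edge or cell) kept in an intrusive doubly linked list."""
    __slots__ = ("data", "prev", "next")

    def __init__(self, data):
        self.data, self.prev, self.next = data, None, None

class CellList:
    """Doubly linked list: inserting or removing a known element is O(1),
    independent of how many entities the mesh currently holds."""

    def __init__(self):
        self.head = None

    def insert(self, cell):
        cell.prev, cell.next = None, self.head
        if self.head is not None:
            self.head.prev = cell
        self.head = cell

    def remove(self, cell):
        if cell.prev is not None:
            cell.prev.next = cell.next
        else:
            self.head = cell.next
        if cell.next is not None:
            cell.next.prev = cell.prev
        cell.prev = cell.next = None

# Usage: refine a cell by dropping it from the leaf list and inserting children.
leaves = CellList()
parent = Cell("cell_0")
leaves.insert(parent)
leaves.remove(parent)                      # O(1): no search required
for child in ("cell_0.0", "cell_0.1", "cell_0.2", "cell_0.3"):
    leaves.insert(Cell(child))
```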

  17. Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Modiano, David; Colella, Phillip

    1994-01-01

    A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.

  18. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    DOE PAGES

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2017-01-27

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Lastly, dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods.
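
    The key ingredient above is a weight function derived from a level set of the interface, which the weighted condition number relaxation then uses to cluster cells. The sketch below builds one plausible weight of this kind; the Gaussian form, its parameters, and the circular test interface are assumptions for illustration, not the paper's discontinuous Galerkin projection of the weight.

```python
import numpy as np

def interface_weight(phi, alpha=20.0, width=0.05):
    """Illustrative mesh-relaxation weight built from a level set phi.

    Cells near the interface (phi ~ 0) get a large weight, which a weighted
    condition number relaxation would interpret as "make these cells small".
    The Gaussian form and its parameters are assumptions of this sketch.
    """
    return 1.0 + alpha * np.exp(-(phi / width) ** 2)

# Example: weight on a Cartesian grid for a circular interface of radius 0.3.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
phi = np.sqrt(x ** 2 + y ** 2) - 0.3       # signed distance to the circle
w = interface_weight(phi)
print(w.min(), w.max())                    # ~1 far away, ~21 on the interface
```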

  19. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    NASA Astrophysics Data System (ADS)

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2017-04-01

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods.

  20. Modeling gravitational instabilities in self-gravitating protoplanetary disks with adaptive mesh refinement techniques

    NASA Astrophysics Data System (ADS)

    Lichtenberg, Tim; Schleicher, Dominik R. G.

    2015-07-01

    The astonishing diversity in the observed planetary population requires theoretical efforts and advances in planet formation theories. The use of numerical approaches provides a method to tackle the weaknesses of current models and is an important tool to close gaps in poorly constrained areas such as the rapid formation of giant planets in highly evolved systems. So far, most numerical approaches make use of Lagrangian-based smoothed-particle hydrodynamics techniques or grid-based 2D axisymmetric simulations. We present a new global disk setup to model the first stages of giant planet formation via gravitational instabilities (GI) in 3D with the block-structured adaptive mesh refinement (AMR) hydrodynamics code enzo. With this setup, we explore the potential impact of AMR techniques on the fragmentation and clumping due to large-scale instabilities using different AMR configurations. Additionally, we seek to derive general resolution criteria for global simulations of self-gravitating disks of variable extent. We run a grid of simulations with varying AMR settings, including runs with a static grid for comparison. Additionally, we study the effects of varying the disk radius. The physical settings involve disks with Rdisk = 10, 100, and 300 AU, with a mass of Mdisk ≈ 0.05 M⊙ and a central object of subsolar mass (M⋆ = 0.646 M⊙). To validate our thermodynamical approach we include a set of simulations with a dynamically stable profile (Qinit = 3) and similar grid parameters. The development of fragmentation and the buildup of distinct clumps in the disk is strongly dependent on the chosen AMR grid settings. By combining our findings from the resolution and parameter studies we find a general lower limit criterion to be able to resolve GI-induced fragmentation features and distinct clumps, which induce turbulence in the disk and seed giant planet formation. Irrespective of the physical extension of the disk, topologically disconnected clump features are only

  1. Simulations of recoiling black holes: adaptive mesh refinement and radiative transfer

    NASA Astrophysics Data System (ADS)

    Meliani, Zakaria; Mizuno, Yosuke; Olivares, Hector; Porth, Oliver; Rezzolla, Luciano; Younsi, Ziri

    2017-01-01

    Context. In many astrophysical phenomena, and especially in those that involve the high-energy regimes that always accompany the astronomical phenomenology of black holes and neutron stars, physical conditions that are achieved are extreme in terms of speeds, temperatures, and gravitational fields. In such relativistic regimes, numerical calculations are the only tool to accurately model the dynamics of the flows and the transport of radiation in the accreting matter. Aims: We here continue our effort of modelling the behaviour of matter when it orbits or is accreted onto a generic black hole by developing a new numerical code that employs advanced techniques geared towards solving the equations of general-relativistic hydrodynamics. Methods: More specifically, the new code employs a number of high-resolution shock-capturing Riemann solvers and reconstruction algorithms, exploiting the enhanced accuracy and the reduced computational cost of adaptive mesh-refinement (AMR) techniques. In addition, the code makes use of sophisticated ray-tracing libraries that, coupled with general-relativistic radiation-transfer calculations, allow us to accurately compute the electromagnetic emissions from such accretion flows. Results: We validate the new code by presenting an extensive series of stationary accretion flows either in spherical or axial symmetry that are performed either in two or three spatial dimensions. In addition, we consider the highly nonlinear scenario of a recoiling black hole produced in the merger of a supermassive black-hole binary interacting with the surrounding circumbinary disc. In this way, we can present for the first time ray-traced images of the shocked fluid and the light curve resulting from consistent general-relativistic radiation-transport calculations from this process. Conclusions: The work presented here lays the ground for the development of a generic computational infrastructure employing AMR techniques to accurately and self

  2. Modelling MEMS deformable mirrors for astronomical adaptive optics

    NASA Astrophysics Data System (ADS)

    Blain, Celia

    As of July 2012, 777 exoplanets have been discovered, mainly using indirect detection techniques. The direct imaging of exoplanets is the next goal for astronomers, because it will reveal the diversity of planets and planetary systems, and will give access to the exoplanet's chemical composition via spectroscopy. With this spectroscopic knowledge, astronomers will be able to know if a planet is terrestrial and, possibly, even find evidence of life. With so much potential, this branch of astronomy has also captivated the general public's attention. The direct imaging of exoplanets remains a challenging task, due to (i) the extremely high contrast between the parent star and the orbiting exoplanet and (ii) their small angular separation. For ground-based observatories, this task is made even more difficult due to the presence of atmospheric turbulence. High Contrast Imaging (HCI) instruments have been designed to meet this challenge. HCI instruments are usually composed of a coronagraph coupled with the full on-axis corrective capability of an Extreme Adaptive Optics (ExAO) system. An efficient coronagraph separates the faint planet's light from the much brighter starlight, but the dynamic boiling speckles, created by the stellar image, make exoplanet detection impossible without the help of a wavefront correction device. The Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system is a high performance HCI instrument developed at Subaru Telescope. The wavefront control system of SCExAO consists of three wavefront sensors (WFS) coupled with a 1024-actuator Micro-Electro-Mechanical-System (MEMS) deformable mirror (DM). MEMS DMs offer a large actuator density, allowing high-count DMs to be deployed in small-size beams. Therefore, MEMS DMs are an attractive technology for Adaptive Optics (AO) systems and are particularly well suited for HCI instruments employing ExAO technologies. SCExAO uses coherent light modulation in the focal plane introduced by the DM, for

  3. Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates

    SciTech Connect

    Lipnikov, Konstantin; Agouzal, Abdellatif; Vassilevski, Yuri

    2009-01-01

    We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^{-1} and the gradient of the error is proportional to N_h^{-1/2}, which are the optimal asymptotics. The methodology is verified with numerical experiments.

  4. A solution-adaptive mesh algorithm for dynamic/static refinement of two and three dimensional grids

    NASA Technical Reports Server (NTRS)

    Benson, Rusty A.; Mcrae, D. S.

    1991-01-01

    An adaptive grid algorithm has been developed in two and three dimensions that can be used dynamically with a solver or as part of a grid refinement process. The algorithm employs a transformation from the Cartesian coordinate system to a general coordinate space, which is defined as a parallelepiped in three dimensions. A weighting function, independent for each coordinate direction, is developed that will provide the desired refinement criteria in regions of high solution gradient. The adaptation is performed in the general coordinate space and the new grid locations are returned to the Cartesian space via a simple, one-step inverse mapping. The algorithm for relocation of the mesh points in the parametric space is based on the center of mass for distributed weights. Dynamic solution-adaptive results are presented for laminar flows in two and three dimensions.
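
    The relocation step described above moves mesh points in the parametric space using a center of mass of distributed weights. The 1D sketch below applies a Jacobi-style weighted averaging of neighboring nodes as a stand-in for that idea; the weight function and iteration count are illustrative assumptions, and the paper's 2D/3D algorithm with its one-step inverse mapping is not reproduced.

```python
import numpy as np

def relocate_nodes(xi, weight, iterations=200):
    """Jacobi-style relocation of interior nodes in the unit parametric space.

    Each interior node moves to the weighted center of mass of its two
    neighbors, with weights evaluated at the adjacent interval midpoints, so
    intervals shrink where the weight (e.g. a solution-gradient measure) is
    large. This is an illustrative 1D stand-in, not the paper's algorithm.
    """
    xi = xi.copy()
    for _ in range(iterations):
        wl = weight(0.5 * (xi[:-2] + xi[1:-1]))   # weight of the left interval
        wr = weight(0.5 * (xi[1:-1] + xi[2:]))    # weight of the right interval
        xi[1:-1] = (wl * xi[:-2] + wr * xi[2:]) / (wl + wr)
    return xi

# Cluster 41 nodes around a steep feature at xi = 0.5 (assumed weight profile).
w = lambda s: 1.0 + 50.0 * np.exp(-((s - 0.5) / 0.05) ** 2)
xi0 = np.linspace(0.0, 1.0, 41)
print(relocate_nodes(xi0, w)[18:23])
```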

  5. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    NASA Astrophysics Data System (ADS)

    Greene, Patrick; Schofield, Sam; Nourgaliev, Robert

    2016-11-01

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin (DG) projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. The method retains the excellent smoothing capabilities of condition number relaxation, while providing a method for clustering mesh cells near regions of interest. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  6. Using adaptive sampling and triangular meshes for the processing and inversion of potential field data

    NASA Astrophysics Data System (ADS)

    Foks, Nathan Leon

    The interpretation of geophysical data plays an important role in the analysis of potential field data in resource exploration industries. Two categories of interpretation techniques are discussed in this thesis: boundary detection and geophysical inversion. Fault or boundary detection is a method to interpret the locations of subsurface boundaries from measured data, while inversion is a computationally intensive method that provides 3D information about subsurface structure. My research focuses on these two aspects of interpretation techniques. First, I develop a method to aid in the interpretation of faults and boundaries from magnetic data. These processes are traditionally carried out using raster grid and image processing techniques. Instead, I use unstructured meshes of triangular facets that can extract inferred boundaries using mesh edges. Next, to address the computational issues of geophysical inversion, I develop an approach to reduce the number of data in a data set. The approach selects the data points according to a user-specified proxy for their signal content. The approach is performed in the data domain and requires no modification to existing inversion codes. This technique adds to the existing suite of compressive inversion algorithms. Finally, I develop an algorithm to invert gravity data for an interfacing surface using an unstructured mesh of triangular facets. A pertinent property of unstructured meshes is their flexibility at representing oblique, or arbitrarily oriented structures. This flexibility makes unstructured meshes an ideal candidate for geometry based interface inversions. The approaches I have developed provide a suite of algorithms geared towards large-scale interpretation of potential field data, by using an unstructured representation of both the data and model parameters.
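
    The data-reduction approach described above keeps only the observations ranked highest by a user-specified proxy for signal content. A minimal sketch of that selection step follows; the gradient-magnitude proxy, the retained fraction, and the synthetic profile are assumptions for illustration.

```python
import numpy as np

def reduce_by_proxy(data, proxy, fraction=0.2):
    """Keep the fraction of observations with the largest proxy values.

    data:  (N,) or (N, ...) array of potential-field observations
    proxy: (N,) user-specified measure of signal content
    Returns the kept indices (in original order) and the reduced data.
    """
    n_keep = max(1, int(fraction * len(proxy)))
    keep = np.sort(np.argsort(proxy)[-n_keep:])
    return keep, data[keep]

# Synthetic gravity profile: favour samples where the gradient is steep.
x = np.linspace(0.0, 10.0, 500)
g = 1.0 / (1.0 + (x - 4.0) ** 2)          # anomaly centred at x = 4
proxy = np.abs(np.gradient(g, x))         # assumed signal-content proxy
idx, g_reduced = reduce_by_proxy(g, proxy, fraction=0.2)
print(len(g_reduced), x[idx].min(), x[idx].max())
```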

  7. Adaptive mesh compression and transmission in Internet-based interactive walkthrough virtual environments

    NASA Astrophysics Data System (ADS)

    Yang, Sheng; Kuo, C.-C. Jay

    2002-07-01

    An Internet-based interactive walkthrough virtual environment is presented in this work to facilitate interactive streaming and browsing of 3D graphic models across the Internet. The models are compressed by the view-dependent progressive mesh compression algorithm to enable the decorrelation of partitions and finer granularity. Following the fundamental framework of mesh representation, an interactive protocol based on the real time streaming protocol (RTSP) is developed to enhance the interaction between the server and the client. Finally, the data of the virtual world is re-organized and transmitted according to the viewer's requests. Experimental results demonstrate that the proposed algorithm reduces the required transmission bandwidth, and provides an acceptable visual quality even at low bit rates.

  8. Comparative analysis of deformable mirrors for ocular adaptive optics.

    PubMed

    Dalimier, Eugenie; Dainty, Chris

    2005-05-30

    We have evaluated the ability of three commercially available deformable mirrors to compensate for the aberrations of the eye, using a model for aberrations developed by Thibos, Bradley and Hong. The mirrors evaluated were a 37-actuator membrane mirror and a 19-actuator piezo mirror (OKO Technologies) and a 35-actuator bimorph mirror (AOptix Inc). For each mirror, Zernike polynomials and typical ocular aberrated wavefronts were fitted with the mirror modes measured using a Twyman-Green interferometer. The bimorph mirror showed the lowest root mean square error, although the 19-actuator piezo device showed promise if extended to more actuators. The methodology can be used to evaluate new deformable mirrors as they become available.
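
    The comparison above fits target wavefronts with the measured mirror modes and reports the residual root mean square error. The sketch below shows the generic least-squares step behind such a comparison; the synthetic wavefront and random "modes" are placeholders and carry none of the interferometric data used in the paper.

```python
import numpy as np

def best_fit_rms(target, modes):
    """Residual RMS after fitting a target wavefront with mirror modes.

    target: (P,) flattened wavefront samples
    modes:  (P, M) matrix whose columns are measured influence/mirror modes
    """
    coeffs, *_ = np.linalg.lstsq(modes, target, rcond=None)
    residual = target - modes @ coeffs
    return np.sqrt(np.mean(residual ** 2))

# Synthetic placeholder data: 200 wavefront samples, 19 random "modes".
rng = np.random.default_rng(0)
modes = rng.standard_normal((200, 19))
wavefront = rng.standard_normal(200)
print(best_fit_rms(wavefront, modes))
```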

  9. Adaptive optics control system for segmented MEMS deformable mirrors

    NASA Astrophysics Data System (ADS)

    Kempf, Carl J.; Helmbrecht, Michael A.; Besse, Marc

    2010-02-01

    Iris AO has developed a full closed-loop control system for control of segmented MEMS deformable mirrors. It is based on a combination of matched wavefront sensing, modal wavefront estimation, and well-calibrated open-loop characteristics. This assures closed-loop operation free of problems related to co-phasing segments or undetectable waffle patterns. This controller strategy results in relatively simple on-line computations which are suitable for implementation on low cost digital signal processors. It has been successfully implemented on Iris AO's 111 actuator (37 segment) deformable mirrors used in test-beds and research systems.

  10. Optimization of multiple turbine arrays in a channel with tidally reversing flow by numerical modelling with adaptive mesh.

    PubMed

    Divett, T; Vennell, R; Stevens, C

    2013-02-28

    At tidal energy sites, large arrays of hundreds of turbines will be required to generate economically significant amounts of energy. Owing to wake effects, the placement of turbines within the array will be vital to capturing the maximum energy from the resource. This study presents preliminary results using Gerris, an adaptive mesh flow solver, to investigate the flow through four different arrays of 15 turbines each. The goal is to optimize the position of turbines within an array in an idealized channel. The turbines are represented as areas of increased bottom friction in an adaptive mesh model so that the flow and power capture in tidally reversing flow through large arrays can be studied. The effect of oscillating tides is studied, with interesting dynamics generated as the tidal current reverses direction, forcing turbulent flow through the array. The energy removed from the flow by each of the four arrays is compared over a tidal cycle. A staggered array is found to extract 54 per cent more energy than a non-staggered array. Furthermore, an array positioned to one side of the channel is found to remove a similar amount of energy compared with an array in the centre of the channel.

  11. Modelling of fluid-solid interactions using an adaptive mesh fluid model coupled with a combined finite-discrete element model

    NASA Astrophysics Data System (ADS)

    Viré, Axelle; Xiang, Jiansheng; Milthaler, Frank; Farrell, Patrick Emmet; Piggott, Matthew David; Latham, John-Paul; Pavlidis, Dimitrios; Pain, Christopher Charles

    2012-12-01

    Fluid-structure interactions are modelled by coupling the finite element fluid/ocean model `Fluidity-ICOM' with a combined finite-discrete element solid model `Y3D'. Because separate meshes are used for the fluids and solids, the present method is flexible in terms of discretisation schemes used for each material. Also, it can tackle multiple solids impacting on one another, without having ill-posed problems in the resolution of the fluid's equations. Importantly, the proposed approach ensures that Newton's third law is satisfied at the discrete level. This is done by first computing the action-reaction force on a supermesh, i.e. a function superspace of the fluid and solid meshes, and then projecting it to both meshes to use it as a source term in the fluid and solid equations. This paper demonstrates the properties of spatial conservation and accuracy of the method for a sphere immersed in a fluid, with prescribed fluid and solid velocities. While spatial conservation is shown to be independent of the mesh resolutions, accuracy requires fine resolutions in both fluid and solid meshes. It is further highlighted that unstructured meshes adapted to the solid concentration field reduce the numerical errors, in comparison with uniformly structured meshes with the same number of elements. The method is verified on flow past a falling sphere. Its potential for ocean applications is further shown through the simulation of vortex-induced vibrations of two cylinders and the flow past two flexible fibres.

  12. An Immersed Boundary - Adaptive Mesh Refinement solver (IB-AMR) for high fidelity fully resolved wind turbine simulations

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2015-11-01

    The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high-fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations past wind farms located over complex terrain. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes, and the fractional step method has been employed. The overall performance and robustness of the second-order accurate, parallel, unstructured solver are evaluated by comparing the numerical simulations against conforming-grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field, including the geometry of the tower, the nacelle and especially the rotor blades of a wind-tunnel-scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.

  13. Adaptive-mesh-refinement simulation of partial coalescence cascade of a droplet at a liquid-liquid interface

    NASA Astrophysics Data System (ADS)

    Fakhari, Abbas; Bolster, Diogo

    2016-11-01

    A three-dimensional (3D) adaptive mesh refinement (AMR) algorithm on structured Cartesian grids is developed, and supplemented by a mesoscopic multiphase-flow solver based on state-of-the-art lattice Boltzmann methods (LBM). Using this in-house AMR-LBM routine, we present fully 3D simulations of the partial coalescence of a liquid drop with an initially flat interface at small Ohnesorge and Bond numbers. Qualitatively, our numerical simulations are in excellent agreement with experimental observations. Partial coalescence cascades are successfully observed at very small Ohnesorge numbers (Oh ~ 10^-4). The fact that the partial coalescence is absent in similar 2D simulations suggests that the Rayleigh-Plateau instability may be the principal driving mechanism responsible for this phenomenon.
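
    The regimes discussed above are characterized by the Ohnesorge and Bond numbers. The sketch below evaluates the standard definitions of these two dimensionless groups; the fluid properties used are generic water-like values, not the parameters of the simulations.

```python
import math

def ohnesorge(mu, rho, sigma, d):
    """Oh = mu / sqrt(rho * sigma * d): viscous vs. inertial-capillary forces."""
    return mu / math.sqrt(rho * sigma * d)

def bond(delta_rho, g, d, sigma):
    """Bo = delta_rho * g * d**2 / sigma: gravity vs. surface tension."""
    return delta_rho * g * d ** 2 / sigma

# Generic water-like values for a 1 mm drop (illustrative, not the paper's):
mu, rho, sigma, d, g = 1.0e-3, 1000.0, 0.072, 1.0e-3, 9.81
print(ohnesorge(mu, rho, sigma, d))   # ~3.7e-3
print(bond(rho, g, d, sigma))         # ~0.14
```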

  14. THREE-DIMENSIONAL ADAPTIVE MESH REFINEMENT SIMULATIONS OF LONG-DURATION GAMMA-RAY BURST JETS INSIDE MASSIVE PROGENITOR STARS

    SciTech Connect

    Lopez-Camara, D.; Lazzati, Davide; Morsony, Brian J.; Begelman, Mitchell C.

    2013-04-10

    We present the results of special relativistic, adaptive mesh refinement, 3D simulations of gamma-ray burst jets expanding inside a realistic stellar progenitor. Our simulations confirm that relativistic jets can propagate and break out of the progenitor star while remaining relativistic. This result is independent of the resolution, even though the amount of turbulence and variability observed in the simulations is greater at higher resolutions. We find that the propagation of the jet head inside the progenitor star is slightly faster in 3D simulations compared to 2D ones at the same resolution. This behavior seems to be due to the fact that the jet head in 3D simulations can wobble around the jet axis, finding the spot of least resistance to proceed. Most of the average jet properties, such as density, pressure, and Lorentz factor, are only marginally affected by the dimensionality of the simulations and therefore results from 2D simulations can be considered reliable.

  15. Eutectic pattern transition under different temperature gradients: A phase field study coupled with the parallel adaptive-mesh-refinement algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, A.; Guo, Z.; Xiong, S.-M.

    2017-03-01

    Eutectic pattern transition under an externally imposed temperature gradient was studied using the phase field method coupled with a novel parallel adaptive-mesh-refinement (Para-AMR) algorithm. Numerical tests revealed that the Para-AMR algorithm could improve the computational efficiency by two orders of magnitude and thus made it possible to perform large-scale simulations without compromising accuracy. Results showed that the direction of the temperature gradient played a crucial role in determining the eutectic patterns during solidification, which agreed well with experimental observations. In particular, the presence of the transverse temperature gradient could tilt the eutectic patterns, and in 3D simulations, the eutectic microstructure would alter from lamellar to rod-like and/or from rod-like to dumbbell-shaped. Furthermore, under a radial temperature gradient, the eutectic would evolve from a dumbbell-shaped or clover-shaped pattern to an isolated rod-like pattern.

  16. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    SciTech Connect

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-02-15

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ≈ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  17. Curved mesh generation and mesh refinement using Lagrangian solid mechanics

    SciTech Connect

    Persson, P.-O.; Peraire, J.

    2008-12-31

    We propose a method for generating well-shaped curved unstructured meshes using a nonlinear elasticity analogy. The geometry of the domain to be meshed is represented as an elastic solid. The undeformed geometry is the initial mesh of linear triangular or tetrahedral elements. The external loading results from prescribing a boundary displacement to be that of the curved geometry, and the final configuration is determined by solving for the equilibrium configuration. The deformations are represented using piecewise polynomials within each element of the original mesh. When the mesh is sufficiently fine to resolve the solid deformation, this method guarantees non-intersecting elements even for highly distorted or anisotropic initial meshes. We describe the method and the solution procedures, and we show a number of examples of two and three dimensional simplex meshes with curved boundaries. We also demonstrate how to use the technique for local refinement of non-curved meshes in the presence of curved boundaries.

  18. Data-Adaptive Detection of Transient Deformation in GNSS Networks

    NASA Astrophysics Data System (ADS)

    Walwer, Damian; Calais, Eric; Ghil, Michael

    2015-04-01

    Transient deformations of the Earth's surface are now commonly measured by dense and continuously operating GNSS networks. Such transients are, however, challenging to extract from the background noise inherent to GNSS time series and to unravel from other geophysical signals such as seasonal oscillations caused by mass variations of the atmosphere, the ocean and hydrological reservoirs. In addition, because of the very large number of GNSS stations now available, it has become impossible to systematically inspect each time series and visually compare them at all neighboring sites. The issue is then to efficiently comb through large amounts of GNSS data to extract signals of geophysical importance. Here we show that Multichannel Singular Spectrum Analysis (M-SSA), a method derived from the analysis of dynamical systems, can be used to automatically extract transient deformation, seasonal oscillations, and noise present in GNSS time series. M-SSA is a multivariate non-parametric statistical method which simultaneously exploits the spatial and temporal correlations of geophysical fields. It consists of estimating Spatio-Temporal Empirical Orthogonal Functions (ST-EOFs) onto which the GNSS time series can be projected and represented. It allows for the extraction of common modes of variability such as non-linear trends and oscillations shared across time series. Contrary to other methods that first clean the data assuming some a priori stochastic structure and then search for transients using a library of a priori functions, M-SSA allows for the extraction of transients without any a priori hypothesis about their spatio-temporal structure or the noise characteristics of the time series. We illustrate our results using synthetic examples and show application examples of M-SSA on real data from Alaska and southern California to detect seasonal signals and micro-inflation/subsidence events.
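
    At the core of M-SSA as described above is the estimation of spatio-temporal EOFs from lag-embedded, multichannel time series. The sketch below builds a basic trajectory matrix and diagonalizes its covariance as a minimal stand-in for that step; the window length, synthetic seasonal signal, and noise level are assumptions for illustration, and no real GNSS data are involved.

```python
import numpy as np

def mssa_modes(series, window):
    """Eigenvalues and ST-EOFs from a minimal M-SSA (illustration only).

    series: (N, L) array, N epochs for L channels (e.g. GNSS components).
    window: embedding window M (number of lags per channel).
    """
    n, n_channels = series.shape
    n_rows = n - window + 1
    # Lag-embed each channel and concatenate: trajectory matrix (n_rows, L*M).
    traj = np.hstack([
        np.column_stack([series[j:j + n_rows, ch] for j in range(window)])
        for ch in range(n_channels)
    ])
    cov = traj.T @ traj / n_rows
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]
    return eigval[order], eigvec[:, order]

# Synthetic example: a shared annual cycle plus noise at three "stations".
t = np.arange(1000)
rng = np.random.default_rng(1)
seasonal = np.sin(2.0 * np.pi * t / 365.25)
X = np.column_stack([seasonal + 0.5 * rng.standard_normal(t.size) for _ in range(3)])
vals, vecs = mssa_modes(X, window=60)
print(vals[:4])   # the leading pair should capture the common oscillation
```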

  19. Polyhedral shape model for terrain correction of gravity and gravity gradient data based on an adaptive mesh

    NASA Astrophysics Data System (ADS)

    Guo, Zhikui; Chen, Chao; Tao, Chunhui

    2016-04-01

    Since 2007, four China Dayang cruises (CDCs) have been carried out to investigate polymetallic sulfides along the Southwest Indian Ridge (SWIR), acquiring both gravity and bathymetry data on the corresponding survey lines (Tao et al., 2014). Sandwell et al. (2014) published a new global marine gravity model including free-air gravity data and its first-order vertical gradient (Vzz). Gravity data and their gradients can be used to extract information about unknown density structure (e.g. crustal thickness) beneath the surface of the Earth, but they contain the effect of all mass below the observation point. Therefore, how to accurately compute the gravity and gravity gradient effects of known density structure (e.g. terrain) is a key issue. Using bathymetry data or the ETOPO1 model (http://www.ngdc.noaa.gov/mgg/global/global.html) at full resolution to calculate the terrain effect requires too much computation time. We aim to develop an effective method that takes less time but still yields the desired accuracy. In this study, a constant-density polyhedral model is used to calculate the gravity field and its vertical gradient, based on the work of Tsoulis (2012). Guided by the attenuation of the gravity field with distance and the variance of the bathymetry, we present adaptive mesh refinement and coarsening strategies to merge global topography data and multi-beam bathymetry data. The local coarsening, or mesh size, depends on a user-defined accuracy and on the terrain variation (Davis et al., 2011). To depict the terrain better, triangular and rectangular surface elements are used in the fine and coarse meshes, respectively. This strategy can also be applied in spherical coordinates for large regions and at global scale. Finally, we applied this method to calculate the Bouguer gravity anomaly (BGA), the mantle Bouguer anomaly (MBA) and their vertical gradients in the SWIR. Further, we compared the results with previous results in the literature. Both synthetic model

  20. Lyapunov exponents and adaptive mesh refinement for high-speed flows using a discontinuous Galerkin scheme

    NASA Astrophysics Data System (ADS)

    Moura, R. C.; Silva, A. F. C.; Bigarella, E. D. V.; Fazenda, A. L.; Ortega, M. A.

    2016-08-01

    This paper proposes two important improvements to shock-capturing strategies using a discontinuous Galerkin scheme, namely, accurate shock identification via finite-time Lyapunov exponent (FTLE) operators and efficient shock treatment through a point-implicit discretization of a PDE-based artificial viscosity technique. The advocated approach is based on the FTLE operator, originally developed in the context of dynamical systems theory to identify certain types of coherent structures in a flow. We propose the application of FTLEs in the detection of shock waves and demonstrate the operator's ability to identify strong and weak shocks equally well. The detection algorithm is coupled with a mesh refinement procedure and applied to transonic and supersonic flows. While the proposed strategy can be used potentially with any numerical method, a high-order discontinuous Galerkin solver is used in this study. In this context, two artificial viscosity approaches are employed to regularize the solution near shocks: an element-wise constant viscosity technique and a PDE-based smooth viscosity model. As the latter approach is more sophisticated and preferable for complex problems, a point-implicit discretization in time is proposed to reduce the extra stiffness introduced by the PDE-based technique, making it more competitive in terms of computational cost.
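
    The shock-identification step above relies on finite-time Lyapunov exponent (FTLE) fields. The sketch below computes an FTLE field from a discrete flow map via the Cauchy-Green tensor, using the common definition sigma = ln(sqrt(lambda_max)) / |T|; the linear saddle flow used to test it is an illustrative assumption, not one of the paper's transonic or supersonic cases.

```python
import numpy as np

def ftle(flow_map_x, flow_map_y, dx, dy, T):
    """FTLE field from a discrete 2D flow map on a regular grid.

    flow_map_x, flow_map_y: final tracer positions for seeds on a grid with
    spacings dx (axis 0) and dy (axis 1), advected over a time horizon T.
    Uses sigma = ln(sqrt(lambda_max(C))) / |T| with C = F^T F the
    Cauchy-Green tensor built from finite-difference flow-map gradients.
    """
    dXdx, dXdy = np.gradient(flow_map_x, dx, dy)
    dYdx, dYdy = np.gradient(flow_map_y, dx, dy)
    sigma = np.zeros_like(flow_map_x)
    for i in range(flow_map_x.shape[0]):
        for j in range(flow_map_x.shape[1]):
            F = np.array([[dXdx[i, j], dXdy[i, j]],
                          [dYdx[i, j], dYdy[i, j]]])
            lam_max = np.linalg.eigvalsh(F.T @ F)[-1]
            sigma[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
    return sigma

# Check on a linear saddle flow x' = x exp(T), y' = y exp(-T): FTLE -> 1.
x, y = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41), indexing="ij")
T = 2.0
field = ftle(x * np.exp(T), y * np.exp(-T), x[1, 0] - x[0, 0], y[0, 1] - y[0, 0], T)
print(field.mean())   # close to 1.0
```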

  1. Meshing Preprocessor for the Mesoscopic 3D Finite Element Simulation of 2D and Interlock Fabric Deformation

    NASA Astrophysics Data System (ADS)

    Wendling, A.; Daniel, J. L.; Hivet, G.; Vidal-Sallé, E.; Boisse, P.

    2015-12-01

    Numerical simulation is a powerful tool to predict the mechanical behavior and the feasibility of composite parts. Among the available numerical approaches, as far as woven reinforced composites are concerned, 3D finite element simulation at the mesoscopic scale offers a good compromise between realism and complexity. At this scale, the fibrous reinforcement is modeled by an interlacement of yarns assumed to be homogeneous, which have to be accurately represented. Among the numerous issues raised by these simulations, the first consists of providing a representative meshed geometrical model of the unit cell at the mesoscopic scale. The second consists of enabling fast data input in the finite element software (contact definitions, boundary conditions, element reorientation, etc.) so as to obtain results within a reasonable time. Based on a previously developed parameterized 3D CAD modeling tool for unit cells of dry fabrics, this paper presents an efficient strategy which permits automated meshing of the models with 3D hexahedral elements and accelerates the simulation data input by several orders of magnitude. Finally, the overall modeling strategy is illustrated by examples of finite element simulation of the mechanical behavior of fabrics.

  2. Automated registration of large deformations for adaptive radiation therapy of prostate cancer

    SciTech Connect

    Godley, Andrew; Ahunbay, Ergun; Peng Cheng; Li, X. Allen

    2009-04-15

    Available deformable registration methods are often inaccurate over the large organ variations encountered, for example, in the rectum and bladder. The authors developed a novel approach to accurately and effectively register large deformations in the prostate region for adaptive radiation therapy. A software tool combining a fast symmetric demons algorithm and the use of masks was developed in C++ based on ITK libraries to register CT images acquired at planning and before treatment fractions. The deformation field determined was subsequently used to deform the delivered dose to match the anatomy of the planning CT. The large deformations involved required that the bladder and rectum volumes be masked with uniform intensities of -1000 and 1000 HU, respectively, in both the planning and treatment CTs. The tool was tested for five prostate IGRT patients. The average rectum planning-to-treatment contour overlap improved from 67% to 93%; the lowest initial overlap was 43%. The average bladder overlap improved from 83% to 98%, with a lowest initial overlap of 60%. Registration regions were set to include a volume receiving 4% of the maximum dose. The average region was 320 x 210 x 63, taking approximately 9 min to register on a dual 2.8 GHz Linux system. The prostate and seminal vesicles were correctly placed even though they were not masked. The accumulated doses for multiple fractions with large deformation were computed and verified. The tool developed can effectively supply the previously delivered dose for adaptive planning to correct for interfractional changes.
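
    A distinctive step in the registration workflow above is masking the bladder and rectum with uniform intensities of -1000 and 1000 HU before running the demons algorithm. The array-based sketch below shows that masking step only; the synthetic volume and block-shaped "organs" are placeholders, and the authors' C++/ITK symmetric demons registration itself is not reproduced.

```python
import numpy as np

def mask_organs(ct, bladder_mask, rectum_mask,
                bladder_hu=-1000, rectum_hu=1000):
    """Overwrite bladder and rectum voxels with uniform intensities.

    ct:            3D array of Hounsfield units
    bladder_mask,
    rectum_mask:   boolean arrays of the same shape (from the contours)
    Returns a copy of the CT with the organ regions homogenized, as done
    before deformable registration in the workflow described above.
    """
    masked = ct.copy()
    masked[bladder_mask] = bladder_hu
    masked[rectum_mask] = rectum_hu
    return masked

# Tiny synthetic volume with two block-shaped "organs" (illustration only).
ct = np.zeros((32, 32, 32), dtype=np.int16)
bladder = np.zeros_like(ct, dtype=bool); bladder[8:14, 8:14, 8:14] = True
rectum = np.zeros_like(ct, dtype=bool);  rectum[20:26, 20:26, 20:26] = True
print(np.unique(mask_organs(ct, bladder, rectum)))
```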

  3. Vertical Scan (V-SCAN) for 3-D Grid Adaptive Mesh Refinement for an atmospheric Model Dynamical Core

    NASA Astrophysics Data System (ADS)

    Andronova, N. G.; Vandenberg, D.; Oehmke, R.; Stout, Q. F.; Penner, J. E.

    2009-12-01

    One of the major building blocks of a rigorous representation of cloud evolution in global atmospheric models is a parallel adaptive grid MPI-based communication library (an Adaptive Blocks for Locally Cartesian Topologies library -- ABLCarT), which manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits a block as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells - the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. A major performance concern with adaptive mesh refinement is how to represent calculations that need to be sequenced in a particular order along a direction, such as calculating integrals along a specific path (e.g. atmospheric pressure or geopotential in the vertical dimension). This concern is compounded if the blocks have varying levels of refinement, or are scattered across different processors, as can be the case in parallel computing. In this paper we describe an implementation in ABLCarT of a vertical scan operation, which allows computing along vertical paths in the correct order across blocks, transparently with respect to their resolution and processor location. We test this functionality on a 2D and a 3D advection problem, which tests the performance of the model's dynamics (transport) and physics (sources and sinks) for different model resolutions needed for inclusion of cloud formation.

  4. 3-D grid refinement using the University of Michigan adaptive mesh library for a pure advective test

    NASA Astrophysics Data System (ADS)

    Oehmke, R.; Vandenberg, D.; Andronova, N.; Penner, J.; Stout, Q.; Zubov, V.; Jablonowski, C.

    2008-05-01

    The numerical representation of the partial differential equations (PDE) for high resolution atmospheric dynamical and physical features requires division of the atmospheric volume into a set of 3D grids, each of which has a not quite rectangular form. Each location on the grid contains multiple data that together represent the state of Earth's atmosphere. For successful numerical integration of the PDEs, the size of each grid box is used to define the Courant-Friedrichs-Lewy criterion in setting the time step. 3D adaptive representations of a sphere are needed to represent the evolution of clouds. In this paper we present the University of Michigan adaptive mesh library - a library that supports the production of parallel codes with use of adaptation on a sphere. The library manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits blocks as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells - the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. Users provide data manipulation functions for performing interpolation of user data when refining blocks. We rigorously test the library using refinement of the modeled vertical transport of a tracer with prescribed atmospheric sources and sinks. It is both a 2D and a 3D test, and bridges the performance of the model's dynamics and physics needed for inclusion of cloud formation.

  5. Medical case-based retrieval: integrating query MeSH terms for query-adaptive multi-modal fusion

    NASA Astrophysics Data System (ADS)

    Seco de Herrera, Alba G.; Foncubierta-Rodríguez, Antonio; Müller, Henning

    2015-03-01

    Advances in medical knowledge give clinicians more objective information for a diagnosis. Therefore, there is an increasing need for bibliographic search engines that can provide services to help facilitate faster information searches. The ImageCLEFmed benchmark proposes a medical case-based retrieval task. This task aims at retrieving articles from the biomedical literature that are relevant for the differential diagnosis of query cases including a textual description and several images. In the context of this campaign many approaches have been investigated, showing that the fusion of visual and text information can improve the precision of the retrieval. However, fusion does not always lead to better results. In this paper, a new query-adaptive fusion criterion to decide when to use multi-modal (text and visual) or text-only approaches is presented. The proposed method integrates text information contained in extracted MeSH (Medical Subject Headings) terms with visual features of the images to find synonym relations between them. Given a text query, the query-adaptive fusion criterion decides when it is suitable to also use visual information for the retrieval. Results show that this approach can decide whether a text-only or multi-modal approach should be used with an accuracy of 77.15%.

  6. Total enthalpy-based lattice Boltzmann method with adaptive mesh refinement for solid-liquid phase change

    NASA Astrophysics Data System (ADS)

    Huang, Rongzong; Wu, Huiying

    2016-06-01

    A total enthalpy-based lattice Boltzmann (LB) method with adaptive mesh refinement (AMR) is developed in this paper to efficiently simulate solid-liquid phase change problems, where variables vary significantly near the phase interface and a finer grid is thus required. For the total enthalpy-based LB method, the velocity field is solved by an incompressible LB model with a multiple-relaxation-time (MRT) collision scheme, and the temperature field is solved by a total enthalpy-based MRT LB model with the phase interface effects considered and the deviation term eliminated. With a kinetic assumption that the density distribution function for the solid phase is at equilibrium state, a volumetric LB scheme is proposed to accurately realize the nonslip velocity condition on the diffusive phase interface and in the solid phase. As compared with the previous schemes, this scheme can avoid nonphysical flow in the solid phase. As for the AMR approach, it is developed based on multiblock grids. An indicator function is introduced to control the adaptive generation of multiblock grids, which can guarantee the existence of an overlap area between adjacent blocks for information exchange. Since MRT collision schemes are used, the information exchange is directly carried out in the moment space. Numerical tests are firstly performed to validate the strict satisfaction of the nonslip velocity condition, and then melting problems in a square cavity with different Prandtl numbers and Rayleigh numbers are simulated, which demonstrate that the present method can handle solid-liquid phase change problems with high efficiency and accuracy.
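
    The total-enthalpy formulation above tracks phase change through the enthalpy rather than the temperature alone. The sketch below shows the generic enthalpy-method relation between total enthalpy and liquid fraction that such formulations rely on; the lumped form H = cp*T + fl*L and the example material properties are assumptions for illustration, not the paper's lattice Boltzmann discretization.

```python
def liquid_fraction(H, cp, L, Ts, Tl):
    """Liquid fraction from total enthalpy H = cp*T + fl*L (lumped sketch).

    Hs = cp*Ts and Hl = cp*Tl + L bound the mushy region; a linear profile is
    assumed in between. This is the generic enthalpy-method relation, not the
    specific LB scheme of the paper.
    """
    Hs, Hl = cp * Ts, cp * Tl + L
    if H <= Hs:
        return 0.0
    if H >= Hl:
        return 1.0
    return (H - Hs) / (Hl - Hs)

# Example: a substance melting at T = 0 with cp = 1 and latent heat L = 10.
print([liquid_fraction(H, cp=1.0, L=10.0, Ts=0.0, Tl=0.0)
       for H in (-2.0, 5.0, 12.0)])   # solid, half melted, fully liquid
```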

  7. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
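
    As a complement to the description of the total-velocity splitting, here is a minimal 1D sketch of the idea: an elliptic pressure solve yields the total Darcy velocity, which then drives an explicit upwind update of the hyperbolic saturation equation. Quadratic relative permeabilities, unit porosity and permeability, Dirichlet pressure boundaries, and a fixed uniform grid are assumed for illustration; the adaptive space-time refinement of the paper is not reproduced here.

```python
import numpy as np

def total_velocity_splitting(n=100, steps=200, dt=2e-3, mu_w=1.0, mu_o=5.0):
    """1D sketch of the total-velocity splitting: an elliptic pressure solve gives
    the total Darcy velocity, which then drives an explicit upwind saturation update.

    Quadratic relative permeabilities and unit permeability/porosity are assumed.
    """
    dx = 1.0 / n
    S = np.zeros(n)                      # water saturation, initially all oil
    S[0] = 1.0                           # water injected at the left boundary
    for _ in range(steps):
        # Total mobility at cell faces (arithmetic average of cell values).
        lam = S ** 2 / mu_w + (1 - S) ** 2 / mu_o
        lam_f = 0.5 * (lam[:-1] + lam[1:])
        # Assemble and solve the elliptic pressure equation with p(0)=1, p(1)=0.
        A = np.zeros((n, n)); b = np.zeros(n)
        for i in range(n):
            if i == 0 or i == n - 1:
                A[i, i] = 1.0; b[i] = 1.0 if i == 0 else 0.0
            else:
                A[i, i - 1] = -lam_f[i - 1]
                A[i, i + 1] = -lam_f[i]
                A[i, i] = lam_f[i - 1] + lam_f[i]
        p = np.linalg.solve(A, b)
        # Total velocity at the interior faces (divergence-free in 1D).
        u_t = -lam_f * (p[1:] - p[:-1]) / dx
        # Fractional flow of water, upwinded from the left (flow is left to right).
        fw = (S ** 2 / mu_w) / lam
        flux = u_t * fw[:-1]
        # Explicit conservative saturation update in the interior cells.
        S[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
        S = np.clip(S, 0.0, 1.0)
    return S

print(total_velocity_splitting()[:10].round(3))   # saturation near the inlet
```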

  8. A Parallel Second-Order Adaptive Mesh Algorithm for Incompressible Flow in Porous Media

    SciTech Connect

    Pau, George Shu Heng; Almgren, Ann S.; Bell, John B.; Lijewski, Michael J.

    2008-04-01

    In this paper we present a second-order accurate adaptive algorithm for solving multiphase, incompressible flows in porous media. We assume a multiphase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting the total velocity, defined to be the sum of the phase velocities, is divergence-free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behavior of the method.

  9. A deformable head and neck phantom with in-vivo dosimetry for adaptive radiotherapy quality assurance

    SciTech Connect

    Graves, Yan Jiang; Smith, Arthur-Allen; Mcilvena, David; Manilay, Zherrina; Lai, Yuet Kong; Rice, Roger; Mell, Loren; Cerviño, Laura; Jia, Xun; Jiang, Steve B. (E-mail: steve.jiang@utsouthwestern.edu)

    2015-04-15

    Purpose: Patients’ interfractional anatomic changes can compromise the initial treatment plan quality. To overcome this issue, adaptive radiotherapy (ART) has been introduced. Deformable image registration (DIR) is an important tool for ART, and several deformable phantoms have been built to evaluate the algorithms’ accuracy. However, there is a lack of deformable phantoms that can also provide dosimetric information to verify the accuracy of the whole ART process. The goal of this work is to design and construct a deformable head and neck (HN) ART quality assurance (QA) phantom with in vivo dosimetry. Methods: An axial slice of a HN patient is taken as a model for the phantom construction. Six anatomic materials are considered, with HU numbers similar to a real patient. A filled balloon inserted inside the phantom tissue simulates a tumor. Deflation of the balloon simulates tumor shrinkage. Nonradiopaque surface markers, which do not influence DIR algorithms, provide the deformation ground truth. Fixed and movable holders are built into the phantom to hold a diode for dosimetric measurements. Results: The measured deformations at the surface marker positions can be compared with deformations calculated by a DIR algorithm to evaluate its accuracy. In this study, the authors selected a Demons algorithm as a DIR algorithm example for demonstration purposes. The average error magnitude is 2.1 mm. The point dose measurements from the in vivo diode dosimeters show a good agreement with the calculated doses from the treatment planning system, with a maximum difference of 3.1% of the prescription dose, when the treatment plans are delivered to the phantom with original or deformed geometry. Conclusions: In this study, the authors have presented the functionality of this deformable HN phantom for testing the accuracy of DIR algorithms and verifying the ART dosimetric accuracy. The authors’ experiments demonstrate the feasibility of this phantom serving as an end

  10. High-resolution adaptive optics scanning laser ophthalmoscope with multiple deformable mirrors

    DOEpatents

    Chen, Diana C.; Olivier, Scot S.; Jones, Steven M.

    2010-02-23

    An adaptive optics scanning laser ophthalmoscope is introduced to produce non-invasive views of the human retina. The use of dual deformable mirrors improved the dynamic range for correction of the wavefront aberrations compared with the use of the MEMS mirror alone, and improved the quality of the wavefront correction compared with the use of the bimorph mirror alone. The large-stroke bimorph deformable mirror improved the capability for axial sectioning with the confocal imaging system by providing an easier way to move the focus axially through different layers of the retina.

  11. High-Resolution Adaptive Optics Scanning Laser Ophthalmoscope with Dual Deformable Mirrors

    SciTech Connect

    Chen, D C; Jones, S M; Silva, D A; Olivier, S S

    2006-08-11

    Adaptive optics scanning laser ophthalmoscopy (AO SLO) has demonstrated superior optical quality for non-invasive viewing of the living retina, but with limited capability for aberration compensation. In this paper, we demonstrate that the use of dual deformable mirrors can effectively compensate large aberrations in the human retina. We used a bimorph mirror to correct large-stroke, low-order aberrations and a MEMS mirror to correct low-stroke, high-order aberrations. The measured ocular RMS wavefront error of a test subject was 240 nm without AO compensation. We were able to reduce the RMS wavefront error to 90 nm in clinical settings using one deformable mirror for the phase compensation, and further reduced the wavefront error to 48 nm using two deformable mirrors. Compared with a single-deformable-mirror SLO system, the dual-mirror AO SLO offers much improved dynamic range and better correction of the wavefront aberrations. The use of large-stroke deformable mirrors provided the system with the capability of axially sectioning different layers of the retina. We have achieved diffraction-limited in-vivo retinal images of targeted retinal layers such as the photoreceptor layer, blood vessel layer and nerve fiber layer with the combined phase compensation of the two deformable mirrors in the AO SLO.

  12. Adaptivity via mesh movement with three-dimensional block-structured grids

    SciTech Connect

    Catherall, D.

    1996-12-31

    The method described here is one in which grid nodes are redistributed so that they are attracted towards regions of high solution activity. The major difficulty in attempting this arises from the degree of grid smoothness and orthogonality required by the flow solver. These requirements are met by suitable choice of grid equations, to be satisfied by the adapted grid, and by the inclusion of certain source terms, for added control in regions where grid movement is limited by the local geometry. The method has been coded for multiblock grids, so that complex configurations may be treated. It is demonstrated here for inviscid supercritical flow with two test cases: an ONERA M6 wing with a rounded tip, and a forward-swept wing/fuselage configuration (M151).

  13. Adaptive optimal quantization for 3D mesh representation in the spherical coordinate system

    NASA Astrophysics Data System (ADS)

    Ahn, Jeong-Hwan; Ho, Yo-Sung

    1998-12-01

    In recent years, applications using 3D models have been increasing. Since a 3D model contains a huge amount of information, compression of the 3D model data is necessary for efficient storage or transmission. In this paper, we propose an adaptive encoding scheme to compress the geometry information of the 3D model. Using the Levinson-Durbin algorithm, the encoder first predicts vertex positions along a vertex spanning tree. After each prediction error is normalized, the prediction error vector of each vertex point is represented in the spherical coordinate system (r, θ, φ). Each r is then quantized by an optimal uniform quantizer. Each pair (θ, φ) is then successively encoded by partitioning the surface of the sphere according to the quantized value of r. The proposed scheme demonstrates improved coding efficiency by exploiting the statistical properties of r and (θ, φ).
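
    The encoding order described above (quantize the radius, then partition the sphere more finely for larger radii) can be sketched as follows. The conversion to spherical coordinates and the uniform quantizer are standard; the rule linking angular resolution to the quantized radius is an assumed placeholder, not the optimal allocation of the paper.

```python
import numpy as np

def to_spherical(e):
    """Convert Nx3 Cartesian prediction errors to (r, theta, phi)."""
    r = np.linalg.norm(e, axis=1)
    theta = np.arccos(np.divide(e[:, 2], r, out=np.zeros_like(r), where=r > 0))
    phi = np.arctan2(e[:, 1], e[:, 0])
    return r, theta, phi

def quantize_uniform(x, x_max, levels):
    """Uniform quantizer on [0, x_max] with the given number of levels."""
    step = x_max / levels
    idx = np.clip(np.floor(x / step).astype(int), 0, levels - 1)
    return idx, (idx + 0.5) * step    # index and reconstructed value

# Hypothetical prediction errors for a small mesh.
rng = np.random.default_rng(1)
errors = rng.normal(scale=0.05, size=(100, 3))
r, theta, phi = to_spherical(errors)

# Quantize the radius first ...
r_idx, r_hat = quantize_uniform(r, r.max(), levels=32)
# ... then encode the angles with a resolution that grows with the quantized
# radius, i.e. partition the sphere surface more finely for larger errors.
ang_levels = 4 + 4 * r_idx                      # assumed rule, for illustration only
_, theta_hat = quantize_uniform(theta, np.pi, ang_levels)
_, phi_hat = quantize_uniform(phi + np.pi, 2 * np.pi, ang_levels)
print("mean radial quantization error:", np.mean(np.abs(r - r_hat)))
```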

  14. Temperature structure of the intracluster medium from smoothed-particle hydrodynamics and adaptive-mesh refinement simulations

    SciTech Connect

    Rasia, Elena; Lau, Erwin T.; Nagai, Daisuke; Avestruz, Camille; Borgani, Stefano; Dolag, Klaus; Granato, Gian Luigi; Murante, Giuseppe; Ragone-Figueroa, Cinthia; Mazzotta, Pasquale; Nelson, Kaylea

    2014-08-20

    Analyses of cosmological hydrodynamic simulations of galaxy clusters suggest that X-ray masses can be underestimated by 10%-30%. The largest bias originates from both violation of hydrostatic equilibrium (HE) and an additional temperature bias caused by inhomogeneities in the X-ray-emitting intracluster medium (ICM). To elucidate this large dispersion among theoretical predictions, we evaluate the degree of temperature structures in cluster sets simulated either with smoothed-particle hydrodynamics (SPH) or adaptive-mesh refinement (AMR) codes. We find that the SPH simulations produce larger temperature variations connected to the persistence of both substructures and their stripped cold gas. This difference is more evident in nonradiative simulations, whereas it is reduced in the presence of radiative cooling. We also find that the temperature variation in radiative cluster simulations is generally in agreement with that observed in the central regions of clusters. Around R_500 the temperature inhomogeneities of the SPH simulations can generate twice the typical HE mass bias of the AMR sample. We emphasize that a detailed understanding of the physical processes responsible for the complex thermal structure in ICM requires improved resolution and high-sensitivity observations in order to extend the analysis to higher temperature systems and larger cluster-centric radii.

  15. Spherical mesh adaptive direct search for separating quasi-uncorrelated sources by range-based independent component analysis.

    PubMed

    Selvan, S Easter; Borckmans, Pierre B; Chattopadhyay, A; Absil, P-A

    2013-09-01

    It seems paradoxical, given the classical definition of independent component analysis (ICA), that in reality the true sources are often not strictly uncorrelated. With this in mind, this letter concerns a framework to extract quasi-uncorrelated sources with finite supports by optimizing a range-based contrast function under unit-norm constraints (to handle the inherent scaling indeterminacy of ICA) but without orthogonality constraints. Despite the appealing contrast properties of the range-based function (e.g., the absence of mixing local optima), the function is not differentiable everywhere. Unfortunately, there is a dearth of literature on derivative-free optimizers that effectively handle such a nonsmooth yet promising contrast function. This is the compelling reason for the design of a nonsmooth optimization algorithm on a manifold of matrices having unit-norm columns with the following objectives: to ascertain convergence to a Clarke stationary point of the contrast function and to adhere to the necessary unit-norm constraints more naturally. The proposed nonsmooth optimization algorithm crucially relies on the design and analysis of an extension of the mesh adaptive direct search (MADS) method to handle locally Lipschitz objective functions defined on the sphere. The applicability of the algorithm in the ICA domain is demonstrated with simulations involving natural, face, aerial, and texture images.

  16. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating near-shore tsunami waves from the 2011 Tohoku earthquake, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing - we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we will show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of each of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capabilities of running GeoClaw efficiently on many-core systems. We will also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima Nuclear Power Plants, over which the finest grid distance of 20 meters is achieved through a 4-level AMR. This simulation yields good predictions of the wave heights and travel time of the tsunami waves. © 2011 IEEE.

  17. A Block-Structured Adaptive Mesh Refinement Technique with a Finite-Difference-Based Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Fakhari, Abbas; Lee, Taehun

    2013-11-01

    A novel adaptive mesh refinement (AMR) algorithm for the numerical solution of fluid flow problems is presented in this study. The proposed AMR algorithm can be used to solve partial differential equations including, but not limited to, the Navier-Stokes equations. Here, the lattice Boltzmann method (LBM) is employed as a substitute for the nearly incompressible Navier-Stokes equations. The proposed AMR algorithm is simple, straightforward, and efficient. The idea is to remove the need for a tree-type data structure by using pointer attributes in a unique way, along with an appropriate adjustment of the child blocks' IDs, to determine the neighbors of a given block. Thanks to this unique way of invoking pointers, there is no need to construct a quad-tree (in 2D) or oct-tree (in 3D) data structure for maintaining the connectivity data between different blocks. As a result, the memory and time required for tree traversal are completely eliminated, leaving a clean and efficient algorithm that is easier to implement and use on parallel machines. Several benchmark studies are carried out to assess the accuracy and efficiency of the proposed AMR-LBM, including lid-driven cavity flow, vortex shedding past a square cylinder, and the Kelvin-Helmholtz instability for single-phase and multiphase fluids.
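
    A toy illustration of the tree-free idea, refining a 2D block into four children that hold direct references to their neighbors, is sketched below. The child-ID convention and the neighbor table are assumptions standing in for the pointer scheme described above, not a reproduction of it.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """A quadtree-free AMR block that stores direct references to its neighbors.

    Instead of walking a tree to locate adjacent blocks, each block keeps a
    'neighbors' table keyed by direction; the table is patched when a block is
    refined, so neighbor lookup is a single dictionary access.
    """
    block_id: int
    level: int
    neighbors: dict = field(default_factory=dict)   # direction -> Block
    children: list = field(default_factory=list)

def refine(block, next_id):
    """Split 'block' into four children (2D) and wire sibling neighbor links.

    'next_id' is the first free block ID; child IDs are assigned consecutively
    (an assumed convention, standing in for the ID adjustment described above).
    Returns the list of children and the next free ID.
    """
    kids = [Block(next_id + k, block.level + 1) for k in range(4)]
    # Children laid out as 0:SW, 1:SE, 2:NW, 3:NE; link siblings directly.
    kids[0].neighbors.update(E=kids[1], N=kids[2])
    kids[1].neighbors.update(W=kids[0], N=kids[3])
    kids[2].neighbors.update(E=kids[3], S=kids[0])
    kids[3].neighbors.update(W=kids[2], S=kids[1])
    block.children = kids
    return kids, next_id + 4

root = Block(block_id=0, level=0)
children, _ = refine(root, next_id=1)
# Neighbor lookup without any tree traversal:
print(children[0].neighbors["E"].block_id)   # prints 2 (the SE child)
```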

  18. The 2D and 3D hypersonic flows with unstructured meshes

    NASA Technical Reports Server (NTRS)

    Thareja, Rajiv

    1993-01-01

    Viewgraphs on 2D and 3D hypersonic flows with unstructured meshes are presented. Topics covered include: mesh generation, mesh refinement, shock-shock interaction, velocity contours, mesh movement, vehicle bottom surface, and adapted meshes.

  19. Incidence of Deformation and Fracture of Twisted File Adaptive Instruments after Repeated Clinical Use

    PubMed Central

    Gambarini, Gianluca; Piasecki, Lucila; Miccoli, Gabriele; Di Giorgio, Gianni; Carneiro, Everdan; Al-Sudani, Dina; Testarelli, Luca

    2016-01-01

    Objectives The aim of the present study was to investigate the incidence of deformation and fracture of twisted file adaptive nickel-titanium instruments after repeated clinical use and to check whether the three instruments within the small/medium sequence showed similar or different visible signs of metal fatigue. Material and Methods One hundred twenty twisted file adaptive (TFA) packs were collected after clinical use to prepare three molars and were inspected for deformation and fracture. Results The overall incidence of deformation was 22.2%, which was not evenly distributed within the instruments: 15% for small/medium (SM)1 (n = 18), 38.33% for SM2 (n = 46) and 13.33% for the SM3 instruments (n = 16). The defect rate of SM2 instruments was statistically higher than that of the other two (P < 0.001). The fracture rate was 0.83% (n = 3), comprising two SM2 instruments and one SM3. Conclusions A very low defect rate was observed after clinical use of twisted file adaptive rotary instruments. The untwisting of flutes was significantly more frequent than fracture, which might act as prevention against breakage. The results highlight the fact that clinicians should be aware that instruments within a sequence might be differently subjected to intracanal stress. PMID:28154749

  20. Deformable image registration of CT and truncated cone-beam CT for adaptive radiation therapy

    NASA Astrophysics Data System (ADS)

    Zhen, Xin; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.

    2013-11-01

    Truncation of a cone-beam computed tomography (CBCT) image, mainly caused by the limited field of view (FOV) of CBCT imaging, poses challenges to the problem of deformable image registration (DIR) between computed tomography (CT) and CBCT images in adaptive radiation therapy (ART). The missing information outside the CBCT FOV usually causes incorrect deformations when a conventional DIR algorithm is utilized, which may introduce significant errors in subsequent operations such as dose calculation. In this paper, based on the observation that the missing information in the CBCT image domain does exist in the projection image domain, we propose to solve this problem by developing a hybrid deformation/reconstruction algorithm. As opposed to deforming the CT image to match the truncated CBCT image, the CT image is deformed such that its projections match all the corresponding projection images for the CBCT image. An iterative forward-backward projection algorithm is developed. Six head-and-neck cancer patient cases are used to evaluate our algorithm, five with simulated truncation and one with real truncation. It is found that our method can accurately register the CT image to the truncated CBCT image and is robust against image truncation when the portion of the truncated image is less than 40% of the total image. Part of this work was presented at the 54th AAPM Annual Meeting (Charlotte, NC, USA, 29 July-2 August 2012).

  1. Deformable Image Registration of CT and Truncated Cone-beam CT for Adaptive Radiation Therapy*

    PubMed Central

    Zhen, Xin; Yan, Hao; Zhou, Linghong; Jia, Xun; Jiang, Steve B.

    2013-01-01

    Truncation of a cone-beam computed tomography (CBCT) image, mainly caused by the limited field of view (FOV) of CBCT imaging, poses challenges to the problem of deformable image registration (DIR) between CT and CBCT images in adaptive radiation therapy (ART). The missing information outside the CBCT FOV usually causes incorrect deformations when a conventional DIR algorithm is utilized, which may introduce significant errors in subsequent operations such as dose calculation. In this paper, based on the observation that the missing information in the CBCT image domain does exist in the projection image domain, we propose to solve this problem by developing a hybrid deformation/reconstruction algorithm. As opposed to deforming the CT image to match the truncated CBCT image, the CT image is deformed such that its projections match all the corresponding projection images for the CBCT image. An iterative forward-backward projection algorithm is developed. Six head-and-neck cancer patient cases are used to evaluate our algorithm, five with simulated truncation and one with real truncation. It is found that our method can accurately register the CT image to the truncated CBCT image and is robust against image truncation when the portion of the truncated image is less than 40% of the total image. PMID:24169817

  2. High-resolution adaptive optics scanning laser ophthalmoscope with dual deformable mirrors for large aberration correction

    SciTech Connect

    Chen, D; Jones, S M; Silva, D A; Olivier, S S

    2007-01-25

    Scanning laser ophthalmoscopes with adaptive optics (AOSLO) have been shown previously to provide a noninvasive, cellular-scale view of the living human retina. However, the clinical utility of these systems has been limited by the available deformable mirror technology. In this paper, we demonstrate that the use of dual deformable mirrors can effectively compensate large aberrations in the human retina, making the AOSLO system a viable, non-invasive, high-resolution imaging tool for clinical diagnostics. We used a bimorph deformable mirror to correct low-order aberrations with relatively large amplitudes. The bimorph mirror is manufactured by Aoptix, Inc. with 37 elements and 18 µm stroke in a 10 mm aperture. We used a MEMS deformable mirror to correct high-order aberrations with lower amplitudes. The MEMS mirror is manufactured by Boston Micromachine, Inc. with 144 elements and 1.5 µm stroke in a 3 mm aperture. We have achieved near diffraction-limited retina images using the dual deformable mirrors to correct large aberrations up to ±3 D of defocus and ±3 D of cylindrical aberrations with test subjects. This increases the range of spectacle corrections by the AO systems by a factor of 10, which is crucial for use in the clinical environment. This ability for large phase compensation can eliminate accurate refractive error fitting for the patients, which greatly improves the system ease of use and efficiency in the clinical environment.

  3. Three-dimensional modeling of a thermal dendrite using the phase field method with automatic anisotropic and unstructured adaptive finite element meshing

    NASA Astrophysics Data System (ADS)

    Sarkis, C.; Silva, L.; Gandin, Ch-A.; Plapp, M.

    2016-03-01

    Dendritic growth is computed with automatic adaptation of an anisotropic and unstructured finite element mesh. The energy conservation equation is formulated for solid and liquid phases considering an interface balance that includes the Gibbs-Thomson effect. An equation for a diffuse interface is also developed by considering a phase field function with constant negative value in the liquid and constant positive value in the solid. Unknowns are the phase field function and a dimensionless temperature, as proposed by [1]. Linear finite element interpolation is used for both variables, and discretization stabilization techniques ensure convergence towards a correct non-oscillating solution. In order to perform quantitative computations of dendritic growth on a large domain, two additional numerical ingredients are necessary: automatic anisotropic unstructured adaptive meshing [2, 3] and parallel implementations [4], both made available with the numerical platform used (CimLib) based on C++ developments. Mesh adaptation is found to greatly reduce the number of degrees of freedom. Results of phase field simulations for dendritic solidification of a pure material in two and three dimensions are shown and compared with reference work [1]. Discussion on algorithm details and the CPU time will be outlined.

  4. Characterization of the non-uniqueness of used nuclear fuel burnup signatures through a Mesh-Adaptive Direct Search

    NASA Astrophysics Data System (ADS)

    Skutnik, Steven E.; Davis, David R.

    2016-05-01

    The use of passive gamma and neutron signatures from fission indicators is a common means of estimating used fuel burnup, enrichment, and cooling time. However, while characteristic fission product signatures such as 134Cs, 137Cs, 154Eu, and others are generally reliable estimators for used fuel burnup within the context where the assembly initial enrichment and the discharge time are known, in the absence of initial enrichment and/or cooling time information (such as when applying NDA measurements in a safeguards/verification context), these fission product indicators no longer yield a unique solution for assembly enrichment, burnup, and cooling time after discharge. Through the use of a new Mesh-Adaptive Direct Search (MADS) algorithm, it is possible to directly probe the shape of this "degeneracy space" characteristic of individual nuclides (and combinations thereof), both as a function of constrained parameters (such as the assembly irradiation history) and unconstrained parameters (e.g., the cooling time before measurement and the measurement precision for particular indicator nuclides). In doing so, this affords the identification of potential means of narrowing the uncertainty space of potential assembly enrichment, burnup, and cooling time combinations, thereby bounding estimates of assembly plutonium content. In particular, combinations of gamma-emitting nuclides with distinct half-lives (e.g., 134Cs with 137Cs and 154Eu) in conjunction with gross neutron counting (via 244Cm) are able to reasonably constrain the degeneracy space of possible solutions to a space small enough to perform useful discrimination and verification of fuel assemblies based on their irradiation history.

  5. Moving Overlapping Grids with Adaptive Mesh Refinement for High-Speed Reactive and Non-reactive Flow

    SciTech Connect

    Henshaw, W D; Schwendeman, D W

    2005-08-30

    We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows in order to demonstrate the use and accuracy of the numerical approach.

  6. Adaptive mesh refinement for singular structures in incompressible MHD and compressible Hall-MHD with electron and ion inertia

    NASA Astrophysics Data System (ADS)

    Grauer, R.; Germaschewski, K.

    The goal of this presentation is threefold. First, the role of singular structures like shocks, vortex tubes and current sheets for understanding intermittency in small scale turbulence is demonstrated. Secondly, in order to investigate the time evolution of singular structures, effective numerical techniques have to be applied, like block-structured adaptive mesh refinement combined with recent advances in treating hyperbolic equations. And thirdly, the developed numerical techniques can perfectly be applied to the question of fast reconnection, demonstrated by the example of compressible Hall-MHD including electron and ion inertia. 1 Why is it worth studying singular structures? The motivation for studying singular structures has several sources. In turbulent fluid and plasma flows the formation of nearly singular structures like shocks, vortex tubes or current sheets provides an effective mechanism to transport energy from large to small scales. In recent years it has become clear that the nature of the singular structures is a key feature of small scale intermittency. In a phenomenological way this is established in She-Leveque-like models (She and Leveque, 1994; Grauer, Krug and Marliani, 1994; Politano and Pouquet, 1995; Müller and Biskamp, 2000), which are able to describe some of the scaling properties of high-order structure functions. An additional source which highlights the importance of singular structures originates from studies of a toy model of turbulence, the so-called Burgers turbulence. The very left tail of the probability distribution of velocity increments can be calculated using the instanton approach (Balkovsky, Falkovich, Kolokolov and Lebedev, 1997). Here it is interesting to note that the main contribution in the relevant path integral stems from the singular structures, which are shocks in Burgers turbulence. From a mathematical point of view the question whether

  7. Adaptive optics OCT using 1060nm swept source and dual deformable lenses for human retinal imaging

    NASA Astrophysics Data System (ADS)

    Jian, Yifan; Lee, Sujin; Cua, Michelle; Miao, Dongkai; Bonora, Stefano; Zawadzki, Robert J.; Sarunic, Marinko V.

    2016-03-01

    Adaptive optics concepts have been applied to the advancement of biological imaging and microscopy. In particular, AO has also been very successfully applied to cellular resolution imaging of the retina, enabling visualization of the characteristic mosaic patterns of the outer retinal layers using flood illumination fundus photography, Scanning Laser Ophthalmoscopy (SLO), and Optical Coherence Tomography (OCT). Despite the high quality of the in vivo images, there has been a limited uptake of AO imaging into the clinical environment. The high resolution afforded by AO comes at the price of limited field of view and specialized equipment. The implementation of a typical adaptive optics imaging system results in a relatively large and complex optical setup. The wavefront measurement is commonly performed using a Hartmann-Shack Wavefront Sensor (HS-WFS) placed at an image plane that is optically conjugated to the eye's pupil. The deformable mirror is also placed at a conjugate plane, relaying the wavefront corrections to the pupil. Due to the sensitivity of the HS-WFS to back-reflections, the imaging system is commonly constructed from spherical mirrors. In this project, we present a novel adaptive optics OCT retinal imaging system with significant potential to overcome many of the barriers to integration with a clinical environment. We describe in detail the implementation of a compact lens based wavefront sensorless adaptive optics (WSAO) 1060nm swept source OCT human retinal imaging system with dual deformable lenses, and present retinal images acquired in vivo from research volunteers.

  8. A new Control Volume Finite Element Method with Discontinuous Pressure Representation for Multi-phase Flow with Implicit Adaptive time Integration and Dynamic Unstructured mesh Optimization

    NASA Astrophysics Data System (ADS)

    Salinas, Pablo; Pavlidis, Dimitrios; Percival, James; Adam, Alexander; Xie, Zhihua; Pain, Christopher; Jackson, Matthew

    2015-11-01

    We present a new, high-order, control-volume-finite-element (CVFE) method with discontinuous representation for pressure and velocity to simulate multiphase flow in heterogeneous porous media. Time is discretized using an adaptive, fully implicit method. Heterogeneous geologic features are represented as volumes bounded by surfaces. Our approach conserves mass and does not require the use of CVs that span domain boundaries. Computational efficiency is increased by use of dynamic mesh optimization. We demonstrate that the approach, amongst other features, accurately preserves sharp saturation changes associated with high aspect ratio geologic domains, allowing efficient simulation of flow in highly heterogeneous models. Moreover, accurate solutions are obtained at lower cost than an equivalent fine, fixed mesh and conventional CVFE methods. The use of implicit time integration allows the method to efficiently converge using highly anisotropic meshes without having to reduce the time-step. The work is significant for two key reasons. First, it resolves a long-standing problem associated with the use of classical CVFE methods. Second, it reduces computational cost/increases solution accuracy through the use of dynamic mesh optimization and time-stepping with large Courant number. Funding for Dr P. Salinas from ExxonMobil is gratefully acknowledged.

  9. Adaptive optics vision simulation and perceptual learning system based on a 35-element bimorph deformable mirror.

    PubMed

    Dai, Yun; Zhao, Lina; Xiao, Fei; Zhao, Haoxin; Bao, Hua; Zhou, Hong; Zhou, Yifeng; Zhang, Yudong

    2015-02-10

    An adaptive optics visual simulation combined with a perceptual learning (PL) system based on a 35-element bimorph deformable mirror (DM) was established. The larger stroke and smaller size of the bimorph DM give the system greater aberration correction and superposition capability in a more compact form. By simply modifying the control matrix or the reference matrix, selective correction or superposition of aberrations is realized in real time, similar to a conventional adaptive optics closed-loop correction. A PL function was integrated for the first time in addition to conventional adaptive optics visual simulation. PL training undertaken with high-order aberration correction clearly improved the visual function of adult anisometropic amblyopia. The preliminary application of high-order aberration correction with PL training for amblyopia treatment is being validated with a large-scale population, which might have great potential in amblyopia treatment and visual performance maintenance.

  10. Extreme Adaptive Optics Testbed: Performance and Characterization of a 1024 Deformable Mirror

    SciTech Connect

    Evans, J W; Morzinski, K; Severson, S; Poyneer, L; Macintosh, B; Dillon, D; Reza, L; Gavel, D; Palmer, D

    2005-10-30

    We have demonstrated that a microelectromechanical systems (MEMS) deformable mirror can be flattened to < 1 nm RMS within controllable spatial frequencies over a 9.2-mm aperture, making it a viable option for high-contrast adaptive optics systems (also known as extreme adaptive optics). The Extreme Adaptive Optics Testbed at UC Santa Cruz is being used to investigate and develop technologies for high-contrast imaging, especially wavefront control. A phase-shifting diffraction interferometer (PSDI) measures wavefront errors with sub-nm precision and accuracy for metrology and wavefront control. Consistent flattening required testing and characterization of the individual actuator response, including the effects of dead and low-response actuators. Stability and repeatability of the MEMS devices were also tested. An error budget for MEMS closed-loop performance will summarize the MEMS characterization.

  11. WE-G-BRF-01: Adaptation to Intrafraction Tumor Deformation During Intensity-Modulated Radiotherapy: First Proof-Of-Principle Demonstration

    SciTech Connect

    Ge, Y; OBrien, R; Shieh, C; Booth, J; Keall, P

    2014-06-15

    Purpose: Intrafraction tumor deformation limits targeting accuracy in radiotherapy and cannot be adapted to by current motion management techniques. This study simulated intrafractional treatment adaptation to tumor deformations using a dynamic Multi-Leaf Collimator (DMLC) tracking system during Intensity-modulated radiation therapy (IMRT) treatment for the first time. Methods: The DMLC tracking system was developed to adapt to the intrafraction tumor deformation by warping the planned beam aperture guided by the calculated deformation vector field (DVF) obtained from deformable image registration (DIR) at the time of treatment delivery. Seven single phantom deformation images up to 10.4 mm deformation and eight tumor system phantom deformation images up to 21.5 mm deformation were acquired and used in tracking simulation. The intrafraction adaptation was simulated at the DMLC tracking software platform, which was able to communicate with the image registration software, reshape the instantaneous IMRT field aperture and log the delivered MLC fields. The deformation adaptation accuracy was evaluated by a geometric target coverage metric defined as the sum of the area incorrectly outside and inside the reference aperture. The incremental deformations were arbitrarily determined to take place equally over the delivery interval. The geometric target coverage of delivery with deformation adaptation was compared against the delivery without adaptation. Results: Intrafraction deformation adaptation during dynamic IMRT plan delivery was simulated for single and system deformable phantoms. For the two particular delivery situations, over the treatment course, deformation adaptation improved the target coverage by 89% for single target deformation and 79% for tumor system deformation compared with no-tracking delivery. Conclusion: This work demonstrated the principle of real-time tumor deformation tracking using a DMLC. This is the first step towards the development of an
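
    The geometric target-coverage metric mentioned above (area incorrectly inside plus area incorrectly outside the reference aperture) can be sketched with binary aperture masks; the mask construction below is a hypothetical example, not the study's data.

```python
import numpy as np

def coverage_error(delivered_mask, reference_mask, pixel_area=1.0):
    """Geometric target-coverage error: area wrongly treated plus area wrongly missed.

    delivered_mask, reference_mask : boolean arrays marking the beam aperture
    pixel_area                     : area of one mask pixel (assumed units)
    """
    over = np.logical_and(delivered_mask, ~reference_mask)   # incorrectly inside the field
    under = np.logical_and(~delivered_mask, reference_mask)  # incorrectly outside the field
    return (over.sum() + under.sum()) * pixel_area

# Toy example: a circular reference aperture versus a shifted delivered aperture.
yy, xx = np.mgrid[0:100, 0:100]
reference = (xx - 50) ** 2 + (yy - 50) ** 2 < 20 ** 2
delivered = (xx - 53) ** 2 + (yy - 50) ** 2 < 20 ** 2
print("coverage error (pixels):", coverage_error(delivered, reference))
```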

  12. Deformation measurement using digital image correlation by adaptively adjusting the parameters

    NASA Astrophysics Data System (ADS)

    Zhao, Jian

    2016-12-01

    As a contactless full-field displacement and strain measurement technique, two-dimensional digital image correlation (DIC) has been increasingly employed to reconstruct in-plane deformation in the field of experimental mechanics. In practical applications, it has been demonstrated that the selection of subset size and search zone size exerts a critical influence on DIC measurement results, especially when decorrelation occurs between the reference image and the deformed image due to large deformation over the search zone. The correlation coefficient is an important parameter in DIC, and it provides the most direct connection between the subset size and the search zone. A self-adaptive correlation parameter adjustment method is proposed that uses a correlation coefficient threshold to adjust the subset and search zone sizes, enabling efficient measurement. The feasibility and effectiveness of the proposed method are verified through a set of experiments, which indicate that the presented algorithm is able to significantly reduce the cumbersome trial calculations required by traditional DIC, in which the initial correlation parameters need to be manually selected in advance based on practical experience.
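
    A minimal sketch of the self-adaptive idea, growing the subset and search zone until a zero-normalized cross-correlation threshold is met, is given below. The integer-pixel search, the growth increments, and the synthetic speckle images are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation coefficient between two equal-size patches."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_point(ref, cur, y, x, half, search):
    """Best integer displacement of the subset centred at (y, x) within +/- 'search' px."""
    sub = ref[y - half:y + half + 1, x - half:x + half + 1]
    best = (-1.0, (0, 0))
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[y + dy - half:y + dy + half + 1, x + dx - half:x + dx + half + 1]
            c = zncc(sub, cand)
            if c > best[0]:
                best = (c, (dy, dx))
    return best

def adaptive_match(ref, cur, y, x, half=7, search=5, c_min=0.9, half_max=25):
    """Grow the subset (and search zone) until the correlation exceeds the threshold."""
    while True:
        c, disp = match_point(ref, cur, y, x, half, search)
        if c >= c_min or half >= half_max:
            return disp, c, half
        half += 4          # assumed growth rule, for illustration
        search += 2

# Synthetic speckle pattern shifted by (2, 3) pixels.
rng = np.random.default_rng(2)
ref = rng.random((200, 200))
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(adaptive_match(ref, cur, y=100, x=100))   # expect displacement (2, 3)
```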

  13. Adaptive optical beam shaping for compensating projection-induced focus deformation

    NASA Astrophysics Data System (ADS)

    Pütsch, Oliver; Stollenwerk, Jochen; Loosen, Peter

    2016-02-01

    Scanner-based applications are already widely used for the processing of surfaces, as they allow for highly dynamic deflection of the laser beam. Particularly, the processing of three-dimensional surfaces with laser radiation initiates the development of highly innovative manufacturing techniques. Unfortunately, the focused laser beam suffers from deformation caused by the involved projection mechanisms. The degree of deformation is field variant and depends on both the surface geometry and the working position of the laser beam. Depending on the process sensitivity, the deformation affects the process quality, which motivates a method of compensation. Current approaches are based on a local adaption of the laser power to maintain constant intensity within the interaction zone. For advanced manufacturing, this approach is insufficient, as the residual deformation of the initial circular laser spot is not taken into account. In this paper, an alternative approach is discussed. Additional beam-shaping devices are integrated between the laser source and the scanner, and allow for an in situ compensation to ensure a field-invariant circular focus spot within the interaction zone. Beyond the optical design, the approach is challenging with respect to the control theory's point of view, as both the beam deflection and the compensation have to be synchronized.

  14. Finite element analysis of low-cost membrane deformable mirrors for high-order adaptive optics

    NASA Astrophysics Data System (ADS)

    Winsor, Robert S.; Sivaramakrishnan, Anand; Makidon, Russell B.

    1999-10-01

    We demonstrate the feasibility of glass membrane deformable mirror (DM) support structures intended for very high order low-stroke adaptive optics systems. We investigated commercially available piezoelectric ceramics. Piezoelectric tubes were determined to offer the largest amount of stroke for a given amount of space on the mirror surface that each actuator controls. We estimated the minimum spacing and the maximum expected stroke of such actuators. We developed a quantitative understanding of the response of a membrane mirror surface by performing a Finite Element Analysis (FEA) study. The results of the FEA analysis were used to develop a design and fabrication process for membrane deformable mirrors of 200 - 500 micron thicknesses. Several different values for glass thickness and actuator spacing were analyzed to determine the best combination of actuator stroke and surface deformation quality. We considered two deformable mirror configurations. The first configuration uses a vacuum membrane attachment system where the actuator tubes' central holes connect to an evacuated plenum, and atmospheric pressure holds the membrane against the actuators. This configuration allows the membrane to be removed from the actuators, facilitating easy replacement of the glass. The other configuration uses precision bearing balls epoxied to the ends of the actuator tubes, with the glass membrane epoxied to the ends of the ball bearings. While this kind of DM is not serviceable, it allows actuator spacings of 4 mm, in addition to large stroke. Fabrication of a prototype of the latter kind of DM was started.

  15. Complex lung motion estimation via adaptive bilateral filtering of the deformation field.

    PubMed

    Papiez, Bartlomiej W; Heinrich, Mattias Paul; Risser, Laurent; Schnabel, Julia A

    2013-01-01

    Estimation of physiologically plausible deformations is critical for several medical applications. For example, lung cancer diagnosis and treatment requires accurate image registration which preserves sliding motion in the pleural cavity, and the rigidity of chest bones. This paper addresses these challenges by introducing a novel approach for regularisation of non-linear transformations derived from a bilateral filter. For this purpose, the classic Gaussian kernel is replaced by a new kernel that smoothes the estimated deformation field with respect to the spatial position, intensity and deformation dissimilarity. The proposed regularisation is a spatially adaptive filter that is able to preserve the discontinuity between the lungs and the pleura and to reduce deformations of rigid structures within the volumes. Moreover, the presented framework is fully automatic and no prior knowledge of the underlying anatomy is required. The performance of our novel regularisation technique is demonstrated on phantom data for a proof of concept as well as 3D inhale and exhale pairs of clinical CT lung volumes. The results of the quantitative evaluation exhibit a significant improvement when compared to the corresponding state-of-the-art method using classic Gaussian smoothing.
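
    The kernel described above, combining spatial distance, intensity difference, and deformation dissimilarity, can be sketched with a (slow, direct) bilateral filter over a 2D displacement field. The parameter values and the toy sliding-motion example are assumptions for illustration only.

```python
import numpy as np

def bilateral_regularise(deform, image, sigma_s=2.0, sigma_i=0.1, sigma_d=1.0, radius=3):
    """Smooth a 2D deformation field with weights that respect spatial distance,
    image intensity difference, and deformation dissimilarity, so that sliding
    discontinuities (e.g. lung vs. pleura) are preserved.

    deform : (H, W, 2) displacement field
    image  : (H, W) fixed-image intensities used for the range term
    """
    H, W, _ = deform.shape
    out = np.zeros_like(deform)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            w_i = np.exp(-((image[y0:y1, x0:x1] - image[y, x]) ** 2) / (2 * sigma_i ** 2))
            d_diff = deform[y0:y1, x0:x1] - deform[y, x]
            w_d = np.exp(-(d_diff ** 2).sum(-1) / (2 * sigma_d ** 2))
            w = w_s * w_i * w_d
            out[y, x] = (w[..., None] * deform[y0:y1, x0:x1]).sum((0, 1)) / w.sum()
    return out

# Tiny example: two "tissues" sliding in opposite directions across a boundary.
H, W = 20, 20
image = np.zeros((H, W)); image[:, W // 2:] = 1.0
deform = np.zeros((H, W, 2)); deform[:, : W // 2, 0] = 1.0; deform[:, W // 2:, 0] = -1.0
smoothed = bilateral_regularise(deform, image)
print("discontinuity preserved:", smoothed[10, W // 2 - 1, 0], smoothed[10, W // 2, 0])
```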

  16. Development of a deformable dosimetric phantom to verify dose accumulation algorithms for adaptive radiotherapy

    PubMed Central

    Zhong, Hualiang; Adams, Jeffrey; Glide-Hurst, Carri; Zhang, Hualin; Li, Haisen; Chetty, Indrin J.

    2016-01-01

    Adaptive radiotherapy may improve treatment outcomes for lung cancer patients. Because of the lack of an effective tool for quality assurance, this therapeutic modality is not yet accepted in the clinic. The purpose of this study is to develop a deformable physical phantom for validation of dose accumulation algorithms in regions with heterogeneous mass. A three-dimensional (3D) deformable phantom was developed containing a tissue-equivalent tumor and heterogeneous sponge inserts. Thermoluminescent dosimeters (TLDs) were placed at multiple locations in the phantom each time before dose measurement. Doses were measured with the phantom in both the static and deformed cases. The deformation of the phantom was actuated by a motor-driven piston. 4D computed tomography images were acquired to calculate 3D doses at each phase using Pinnacle and EGSnrc/DOSXYZnrc. These images were registered using two registration software packages: VelocityAI and Elastix. With the resultant displacement vector fields (DVFs), the calculated 3D doses were accumulated using a mass- and energy-congruent mapping method and compared to those measured by the TLDs at four typical locations. In the static case, TLD measurements agreed with all the algorithms to within 1.8% at the center of the tumor volume and within 4.0% in the penumbra. In the deformable case, the phantom's deformation was reproduced within 1.1 mm. For the 3D dose calculated by Pinnacle, the total dose accumulated with the Elastix DVF agreed well with the TLD measurements, with differences <2.5% at the four measured locations. When the VelocityAI DVF was used, the difference increased to up to 11.8%. For the 3D dose calculated by EGSnrc/DOSXYZnrc, the total doses accumulated with the two DVFs were within 5.7% of the TLD measurements, which is slightly above the 5% rate for clinical acceptance. The detector-embedded deformable phantom allows radiation dose to be measured in a dynamic environment, similar to deforming lung tissues, supporting

  17. Multi-dimensional Upwind Fluctuation Splitting Scheme with Mesh Adaption for Hypersonic Viscous Flow. Degree awarded by Virginia Polytechnic Inst. and State Univ., 9 Nov. 2001

    NASA Technical Reports Server (NTRS)

    Wood, William A., III

    2002-01-01

    A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. A Blasius flat plate viscous validation case reveals a more accurate upsilon-velocity profile for fluctuation splitting, and the reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. The second half of the report develops a local, compact, anisotropic unstructured mesh adaptation scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. The adaptation strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization.

  18. Cosmology on a Mesh

    NASA Astrophysics Data System (ADS)

    Gill, Stuart P. D.; Knebe, Alexander; Gibson, Brad K.; Flynn, Chris; Ibata, Rodrigo A.; Lewis, Geraint F.

    2003-04-01

    An adaptive multigrid approach to simulating the formation of structure from collisionless dark matter is described. MLAPM (Multi-Level Adaptive Particle Mesh) is one of the most efficient serial codes available on the cosmological "market" today. As part of Swinburne University's role in the development of the Square Kilometer Array, we are implementing hydrodynamics, feedback, and radiative transfer within the MLAPM adaptive mesh, in order to simulate baryonic processes relevant to the interstellar and intergalactic media at high redshift. We will outline our progress to date in applying the existing MLAPM to a study of the decay of satellite galaxies within massive host potentials.

  19. Hard X-ray nanofocusing using adaptive focusing optics based on piezoelectric deformable mirrors

    SciTech Connect

    Goto, Takumi; Nakamori, Hiroki; Sano, Yasuhisa; Matsuyama, Satoshi; Kimura, Takashi; Kohmura, Yoshiki; Tamasaku, Kenji; Yabashi, Makina; Ishikawa, Tetsuya

    2015-04-15

    An adaptive Kirkpatrick–Baez mirror focusing optics based on piezoelectric deformable mirrors was constructed at SPring-8 and its focusing performance characteristics were demonstrated. By adjusting the voltages applied to the deformable mirrors, the shape errors (compared to a target elliptical shape) were finely corrected on the basis of the mirror shape determined using the pencil-beam method, which is a type of at-wavelength figure metrology in the X-ray region. The mirror shapes were controlled with a peak-to-valley height accuracy of 2.5 nm. A focused beam with an intensity profile having a full width at half maximum of 110 × 65 nm (V × H) was achieved at an X-ray energy of 10 keV.

  20. Hard X-ray nanofocusing using adaptive focusing optics based on piezoelectric deformable mirrors.

    PubMed

    Goto, Takumi; Nakamori, Hiroki; Kimura, Takashi; Sano, Yasuhisa; Kohmura, Yoshiki; Tamasaku, Kenji; Yabashi, Makina; Ishikawa, Tetsuya; Yamauchi, Kazuto; Matsuyama, Satoshi

    2015-04-01

    An adaptive Kirkpatrick-Baez mirror focusing optics based on piezoelectric deformable mirrors was constructed at SPring-8 and its focusing performance characteristics were demonstrated. By adjusting the voltages applied to the deformable mirrors, the shape errors (compared to a target elliptical shape) were finely corrected on the basis of the mirror shape determined using the pencil-beam method, which is a type of at-wavelength figure metrology in the X-ray region. The mirror shapes were controlled with a peak-to-valley height accuracy of 2.5 nm. A focused beam with an intensity profile having a full width at half maximum of 110 × 65 nm (V × H) was achieved at an X-ray energy of 10 keV.

  1. Controlling depth of focus in 3D image reconstructions by flexible and adaptive deformation of digital holograms.

    PubMed

    Ferraro, P; Paturzo, M; Memmolo, P; Finizio, A

    2009-09-15

    We show here that, through an adaptive deformation of digital holograms, it is possible to manage the depth of focus in 3D imaging reconstruction. The deformation is applied to the original hologram with the aim of bringing into focus simultaneously, in one reconstructed image plane, different objects lying at different distances from the hologram plane (i.e., the CCD sensor). In the same way, by adapting the deformation it is possible to extend the depth of field, having a tilted object entirely in focus. We demonstrate the method in both lensless and microscope configurations.

  2. Control of the unilluminated deformable mirror actuators in an altitude-conjugated adaptive optics system

    PubMed

    Veran

    2000-07-01

    Off-axis observations made with adaptive optics are severely limited by anisoplanatism errors. However, conjugating the deformable mirror to an optimal altitude can reduce these errors; it is then necessary to control, through extrapolation, actuators that are not measured by the wave-front sensor (unilluminated actuators). In this study various common extrapolation schemes are investigated, and an optimal method that achieves a significantly better performance is proposed. This extrapolation method involves a simple matrix multiplication and will be implemented in ALTAIR, the Gemini North Telescope adaptive optics system located on Mauna Kea, Hawaii. With this optimal method, the relative H-band Strehl reduction due to extrapolation errors is only 5%, 16%, and 30% when the angular distance between the guide source and the science target is 20, 40 and 60 arc sec, respectively. For a site such as Mauna Kea, these errors are largely outweighed by the increase in the size of the isoplanatic field.
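
    The extrapolation step described above reduces to a matrix multiplication mapping the commands of the illuminated actuators to the unilluminated ones. The sketch below uses a simple inverse-distance-weighting matrix as a stand-in; the paper's optimal matrix is derived differently, so the weighting rule here is purely illustrative.

```python
import numpy as np

def extrapolation_matrix(pos_lit, pos_unlit, power=2.0):
    """Inverse-distance-weighting matrix E such that c_unlit = E @ c_lit.

    pos_lit   : (N, 2) positions of actuators seen by the wavefront sensor
    pos_unlit : (M, 2) positions of actuators outside the illuminated pupil
    (A stand-in scheme; the optimal matrix in the paper is derived statistically.)
    """
    d = np.linalg.norm(pos_unlit[:, None, :] - pos_lit[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return w / w.sum(axis=1, keepdims=True)

# Hypothetical layout: lit actuators at x <= 4, unlit actuators beyond the pupil edge.
pos_lit = np.array([[x, 0.0] for x in range(5)])
pos_unlit = np.array([[5.0, 0.0], [6.0, 0.0]])
E = extrapolation_matrix(pos_lit, pos_unlit)
c_lit = np.array([0.0, 0.1, 0.2, 0.3, 0.4])     # measured actuator commands
c_unlit = E @ c_lit                              # extrapolated commands
print(c_unlit)
```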

  3. Image based deformable mirror control for adaptive optics in satellite telescope

    NASA Astrophysics Data System (ADS)

    Miyamura, Norihide

    2012-07-01

    We are developing an adaptive optics system (AOS) for an Earth-observing remote sensing sensor. In this system, high spatial resolution has to be achieved by a lightweight sensor system due to the launcher's requirements. Moreover, a simple hardware architecture has to be selected to achieve high reliability. An image-based AOS realizes these requirements without a wavefront sensor. In remote sensing, it is difficult to use a reference point source unless the satellite controls its attitude toward a star or carries a reference point source itself. We propose a control algorithm for the deformable mirror based on the extended scene instead of a point source. In our AOS, a cost function is defined using acquired images on the basis of the contrast in the spatial or Fourier domain. The cost function is optimized by varying the input signal of each actuator of the deformable mirror. In our system, the deformable mirror has 140 actuators. We use basis functions to reduce the number of input parameters and realize real-time control. We constructed the AOS for a laboratory test and showed that the wavefront modulated by the DM is nearly consistent with the ideal one, measured directly using a Shack-Hartmann wavefront sensor as a reference.
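
    A minimal sketch of this wavefront-sensorless scheme follows: an image-contrast metric is maximized by hill-climbing a few modal (basis-function) coefficients driving the deformable mirror. The camera model, the sharpness metric, and the coordinate-wise search are stand-in assumptions; a real system would acquire images from hardware rather than simulate blur.

```python
import numpy as np

def sharpness(img):
    """Image-contrast metric (normalised variance) used as the optimisation target."""
    return img.var() / (img.mean() ** 2 + 1e-12)

def acquire_image(coeffs, scene, true_coeffs):
    """Hypothetical camera model: the residual aberration blurs the scene.

    Stand-in for real image acquisition; blur width grows with the modal residual.
    """
    sigma = 0.5 + 2.0 * np.linalg.norm(coeffs - true_coeffs)
    k = np.arange(-6, 7)
    g = np.exp(-k ** 2 / (2 * sigma ** 2)); g /= g.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, scene)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, blurred)

def optimise_modes(scene, true_coeffs, n_modes=3, probe=0.2, iters=3):
    """Hill-climb each basis-function coefficient by probing +/- 'probe' per iteration."""
    coeffs = np.zeros(n_modes)
    for _ in range(iters):
        for m in range(n_modes):
            best = coeffs.copy()
            best_val = sharpness(acquire_image(coeffs, scene, true_coeffs))
            for step in (-probe, probe):
                trial = coeffs.copy(); trial[m] += step
                val = sharpness(acquire_image(trial, scene, true_coeffs))
                if val > best_val:
                    best, best_val = trial, val
            coeffs = best
    return coeffs

rng = np.random.default_rng(3)
scene = rng.random((64, 64))                  # extended scene, no point source needed
true_coeffs = np.array([0.4, -0.2, 0.6])      # unknown aberration in the modal basis
print(optimise_modes(scene, true_coeffs))     # converges toward the true coefficients
```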

  4. Adaptive Optics: Arroyo Simulation Tool and Deformable Mirror Actuation Using Golay Cells

    NASA Technical Reports Server (NTRS)

    Lint, Adam S.

    2005-01-01

    The Arroyo C++ libraries, written by Caltech postdoctoral researcher Matthew Britton, have the ability to simulate optical systems and atmospheric signal interference. This program was chosen for use in an end-to-end simulation model of a laser communication system because it is freely distributed and can be controlled by a remote system or "smart agent." Proposed operation of this program by a smart agent has been demonstrated, and the results show it to be a suitable simulation tool. Deformable mirrors, as a part of modern adaptive optics systems, may contain thousands of tiny, independently controlled actuators used to modify the shape of the mirror. Each actuator is connected to two wires, creating a cumbersome and expensive device. Recently, an alternative actuation method that uses gas-filled tubes known as Golay cells has been explored. Golay cells, operated by infrared lasers instead of electricity, would replace the actuator system, thereby creating a more compact deformable mirror. The operation of Golay cells and their ability to move a deformable mirror in excess of the required 20 microns have been demonstrated. Experimentation has shown them to be extremely sensitive to pressure and temperature, making them ideal for use in a controlled environment.

  5. TU-AB-303-11: Predict Parotids Deformation Applying SIS Epidemiological Model in H&N Adaptive RT

    SciTech Connect

    Maffei, N; Guidi, G; Vecchi, C; Bertoni, F; Costi, T

    2015-06-15

    Purpose: The aim is to investigate the use of epidemiological models to predict morphological variations in patients undergoing radiation therapy (RT). The susceptible-infected-susceptible (SIS) deterministic model was applied to simulate warping within a focused region of interest (ROI). The hypothesis is to consider each voxel as a single subject of the whole sample and to treat the displacement vector fields like an infection. Methods: Using Raystation hybrid deformation algorithms and automatic re-contouring based on a mesh grid, we post-processed 360 MVCT images of 12 H&N patients treated with Tomotherapy. The study focused on the parotid glands, identified by the literature and previous analysis as the ROIs most susceptible to warping in the H&N region. Susceptible (S) and infectious (I) cases were identified as voxels with inter-fraction movement respectively under and over a set threshold. IronPython scripting was used to export positions and displacement data of surface voxels for every fraction. An in-house MATLAB toolbox was developed to model the SIS. Results: The SIS model was validated by simulating organ motion on a QUASAR phantom. Applying the model to patients, within a [0–1 cm] range, a single-voxel movement of 0.4 cm was selected as the displacement threshold. SIS indexes were evaluated by MATLAB simulations. A dynamic time warping algorithm was used to assess the match between the model and the parotids' behavior over the days of treatment. The best fit of the model was obtained with a contact rate of 7.89±0.94 and a recovery rate of 2.36±0.21. Conclusion: The SIS model can follow daily structure evolution, making it possible to compare warping conditions and highlighting challenges due to abnormal variations and set-up errors. With an epidemiological approach, organ motion could be assessed and predicted not as an average over the whole ROI but as a voxel-by-voxel deterministic trend. By identifying anatomical regions subject to variation, it would be possible to focus clinical controls on a cohort of pre-selected patients.
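
    The deterministic SIS dynamics referred to above reduce to a single ordinary differential equation for the "infected" (i.e., displaced) voxel fraction. The sketch below uses the contact and recovery rates quoted in the abstract; the normalization, initial condition, and time span are assumptions for demonstration only.

```python
# Deterministic SIS model: voxels are "subjects", a voxel is "infected" when its
# inter-fraction displacement exceeds the 0.4 cm threshold. Rates are the fitted
# values quoted in the abstract (contact ~7.89, recovery ~2.36); time units are
# arbitrary here.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 7.89, 2.36          # contact and recovery rates
N = 1.0                           # normalized population (fraction of surface voxels)

def sis(t, y):
    i = y[0]                      # infected (displaced) fraction
    s = N - i                     # susceptible fraction
    return [beta * s * i / N - gamma * i]

sol = solve_ivp(sis, t_span=(0.0, 6.0), y0=[0.05], dense_output=True)
t = np.linspace(0.0, 6.0, 7)
print(np.round(sol.sol(t)[0], 3))  # fraction of voxels exceeding the threshold
# Endemic equilibrium of the SIS model: I* = N * (1 - gamma/beta) ~ 0.70
```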

  6. The need for application-based adaptation of deformable image registration

    SciTech Connect

    Kirby, Neil; Chuang, Cynthia; Ueda, Utako; Pouliot, Jean

    2013-01-15

    Purpose: To utilize a deformable phantom to objectively evaluate the accuracy of 11 different deformable image registration (DIR) algorithms. Methods: The phantom represents an axial plane of the pelvic anatomy. Urethane plastic serves as the bony anatomy and urethane rubber with three levels of Hounsfield units (HU) is used to represent fat and organs, including the prostate. A plastic insert is placed into the phantom to simulate bladder filling. Nonradiopaque markers reside on the phantom surface. Optical camera images of these markers are used to measure the positions and determine the deformation from the bladder insert. Eleven different DIR algorithms are applied to the full and empty-bladder computed tomography images of the phantom (fixed and moving volumes, respectively) to calculate the deformation. The algorithms include those from MIM Software (MIM) and Velocity Medical Solutions (VEL) and nine different implementations from the deformable image registration and adaptive radiotherapy toolbox for Matlab. These algorithms warp one image to make it similar to another, but must utilize a method for regularization to avoid physically unrealistic deformation scenarios. The mean absolute difference (MAD) between the HUs at the marker locations on one image and the calculated location on the other serves as a metric to evaluate the balance between image similarity and regularization. To demonstrate the effect of regularization on registration accuracy, an additional beta version of MIM was created with a variable smoothness factor that controls the emphasis of the algorithm on regularization. The distance to agreement between the measured and calculated marker deformations is used to compare the overall spatial accuracy of the DIR algorithms. This overall spatial accuracy is also utilized to evaluate the phantom geometry and the ability of the phantom soft-tissue heterogeneity to represent patient data. To evaluate the ability of the DIR algorithms to

  7. Polish adaptation of Bad Sobernheim Stress Questionnaire-Brace and Bad Sobernheim Stress Questionnaire-Deformity.

    PubMed

    Misterska, Ewa; Głowacki, Maciej; Harasymczuk, Jerzy

    2009-12-01

    Bad Sobernheim Stress Questionnaire-Brace and Bad Sobernheim Stress Questionnaire-Deformity are relatively new tools aimed at facilitating the evaluation of long-term results of therapy in persons with idiopathic scoliosis undergoing conservative treatment. To use these tools properly in Poland, they must be translated into Polish and adapted to the Polish cultural setting. The process of cultural adaptation of the questionnaires was compliant with the guidelines of the International Quality of Life Assessment (IQOLA) Project. In the first stage, two independent translators converted the originals into Polish. Stage two consisted of a comparison of the originals and the two translated versions; during that stage, the team of two translators and the authors of the project identified differences in those translations and created a combination of the two. In the third stage, two independent translators, who were native speakers of German, translated the adjusted Polish version back into the language of the original document. At the last stage, a commission composed of specialists in orthopedics, translators, a statistician, and a psychologist reviewed all translations and drafted a pre-final version of the questionnaires. Thirty-five adolescent girls with idiopathic scoliosis who were treated with a Cheneau brace were subjected to the questionnaire assessment. All patients were treated in an out-patient setting by a specialist in orthopedics at the Chair and Clinic of Orthopedics and Traumatology. Median age of patients was 14.8 SD 1.5, and the median value of the Cobb's angle was 27.8 degrees SD 7.4. 48.6% of patients had thoracic scoliosis, 31.4% had thoracolumbar scoliosis, and 20% had lumbar scoliosis. Median results obtained by means of the Polish versions of the BSSQ-Brace and BSSQ-Deformity questionnaires were 17.9 SD 5.0 and 11.3 SD 4.7, respectively. Internal consistency of BSSQ-Brace and BSSQ-Deformity was at the level of 0.80 and 0.87, whereas the value of

  8. Characterizing the potential of MEMS deformable mirrors for astronomical adaptive optics

    NASA Astrophysics Data System (ADS)

    Morzinski, Katie M.; Evans, Julia W.; Severson, Scott; Macintosh, Bruce; Dillon, Daren; Gavel, Don; Max, Claire; Palmer, Dave

    2006-06-01

    Current high-contrast "extreme" adaptive optics (ExAO) systems are partially limited by deformable mirror technology. Mirror requirements specify thousands of actuators, all of which must be functional within the clear aperture, and which give nanometer flatness yet micron stroke when operated in closed loop. Micro-electro-mechanical systems (MEMS) deformable mirrors have been shown to meet ExAO actuator yield, wavefront error, and cost considerations. This study presents the performance of Boston Micromachines' 1024-actuator continuous-facesheet MEMS deformable mirrors under tests for actuator stability, position repeatability, and practical operating stroke. To explore whether MEMS actuators are susceptible to temporal variation, a series of long-term stability experiments were conducted. Each actuator was held fixed and its motion over 40 minutes was measured. The median displacement of all the actuators tested was 0.08 nm surface, inclusive of system error. MEMS devices are also appealing for adaptive optics architectures based on open-loop correction. In experiments of actuator position repeatability, 100% of the tested actuators returned repeatedly to their starting point with a precision of < 1 nm surface. Finally, MEMS devices were tested for maximum stroke achieved under application of spatially varying one-dimensional sinusoids. Given a specified amplitude in voltage, the measured stroke was 1 μm surface at the low spatial frequencies, decreasing to 0.2 μm surface for the highest spatial frequency. Stroke varied roughly linearly with inverse spatial frequency, with a flattening of the relation at the high-spatial-frequency end.

  9. Accelerated gradient-based free form deformable registration for online adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Yu, Gang; Liang, Yueqiang; Yang, Guanyu; Shu, Huazhong; Li, Baosheng; Yin, Yong; Li, Dengwang

    2015-04-01

    The registration of planning fan-beam computed tomography (FBCT) and daily cone-beam CT (CBCT) is a crucial step in adaptive radiation therapy. The current intensity-based registration algorithms, such as Demons, may fail when they are used to register FBCT and CBCT, because the CT numbers in CBCT cannot exactly correspond to the electron densities. In this paper, we investigated the effects of CBCT intensity inaccuracy on the registration accuracy and developed an accurate gradient-based free form deformation algorithm (GFFD). GFFD distinguishes itself from other free form deformable registration algorithms by (a) measuring the similarity using the 3D gradient vector fields to avoid the effect of inconsistent intensities between the two modalities; (b) accommodating image sampling anisotropy using the local polynomial approximation-intersection of confidence intervals (LPA-ICI) algorithm to ensure a smooth and continuous displacement field; and (c) introducing a ‘bi-directional’ force along with an adaptive force strength adjustment to accelerate the convergence process. It is expected that such a strategy can decrease the effect of the inconsistent intensities between the two modalities, thus improving the registration accuracy and robustness. Moreover, for clinical application, the algorithm was implemented on graphics processing units (GPUs) through the OpenCL framework. The registration time of the GFFD algorithm for each set of CT data ranges from 8 to 13 s. The applications of on-line adaptive image-guided radiation therapy, including auto-propagation of contours, aperture optimization, and dose volume histogram (DVH) analysis in the course of radiation therapy, were also studied using in-house-developed software.
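
    A minimal sketch of the idea in point (a), comparing two volumes through their 3D gradient vector fields rather than raw intensities, is given below. This is not the authors' GFFD implementation; the cosine-based measure and the synthetic volumes are illustrative assumptions.

```python
# Gradient-field similarity sketch: compare two volumes through their 3D
# gradient vector fields instead of raw intensities, so a global intensity
# offset/scaling (as between FBCT and CBCT) does not drive the metric.
import numpy as np

rng = np.random.default_rng(1)
fbct = rng.random((32, 32, 32))
cbct = 0.8 * fbct + 50.0          # same "anatomy", inconsistent CT numbers

def gradient_similarity(a, b, eps=1e-8):
    ga = np.stack(np.gradient(a))             # shape (3, z, y, x)
    gb = np.stack(np.gradient(b))
    na = np.linalg.norm(ga, axis=0) + eps
    nb = np.linalg.norm(gb, axis=0) + eps
    # mean absolute cosine between local gradient directions
    return np.mean(np.abs(np.sum(ga * gb, axis=0)) / (na * nb))

print(gradient_similarity(fbct, cbct))        # ~1.0 despite the intensity mismatch
print(np.mean(np.abs(fbct - cbct)))           # large raw intensity difference
```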

  10. Technical Note: DIRART- A software suite for deformable image registration and adaptive radiotherapy research

    SciTech Connect

    Yang Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu Yu; Murty Goddu, S.; Mutic, Sasa; Deasy, Joseph O.; Low, Daniel A.

    2011-01-15

    Purpose: Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). Methods: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. Results: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for DIR results visualization, evaluation, and validation. Conclusions: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research.

  11. Three dimensional hydrodynamic calculations with adaptive mesh refinement of the evolution of Rayleigh Taylor and Richtmyer Meshkov instabilities in converging geometry: Multi-mode perturbations

    SciTech Connect

    Klein, R.I.; Bell, J.; Pember, R.; Kelleher, T.

    1993-04-01

    The authors present results for high resolution hydrodynamic calculations of the growth and development of instabilities in shock driven imploding spherical geometries in both 2D and 3D. They solve the Eulerian equations of hydrodynamics with a high order Godunov approach using local adaptive mesh refinement to study the temporal and spatial development of the turbulent mixing layer resulting from both Richtmyer Meshkov and Rayleigh Taylor instabilities. The use of a high resolution Eulerian discretization with adaptive mesh refinement permits them to study the detailed three-dimensional growth of multi-mode perturbations far into the non-linear regime for converging geometries. They discuss convergence properties of the simulations by calculating global properties of the flow. They discuss the time evolution of the turbulent mixing layer and compare its development to a simple theory for a turbulent mix model in spherical geometry based on Plesset's equation. Their 3D calculations show that the constant found in the planar incompressible experiments of Read and Youngs may not be universal for converging compressible flow. They show the 3D time trace of transitional onset to a mixing state using the temporal evolution of volume rendered imaging. Their preliminary results suggest that the turbulent mixing layer loses memory of its initial perturbations for classical Richtmyer Meshkov and Rayleigh Taylor instabilities in spherically imploding shells. They discuss the time evolution of mixed volume fraction and the role of vorticity in converging 3D flows in enhancing the growth of a turbulent mixing layer.

  12. Experience with wavefront sensor and deformable mirror interfaces for wide-field adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Basden, A. G.; Atkinson, D.; Bharmal, N. A.; Bitenc, U.; Brangier, M.; Buey, T.; Butterley, T.; Cano, D.; Chemla, F.; Clark, P.; Cohen, M.; Conan, J.-M.; de Cos, F. J.; Dickson, C.; Dipper, N. A.; Dunlop, C. N.; Feautrier, P.; Fusco, T.; Gach, J. L.; Gendron, E.; Geng, D.; Goodsell, S. J.; Gratadour, D.; Greenaway, A. H.; Guesalaga, A.; Guzman, C. D.; Henry, D.; Holck, D.; Hubert, Z.; Huet, J. M.; Kellerer, A.; Kulcsar, C.; Laporte, P.; Le Roux, B.; Looker, N.; Longmore, A. J.; Marteaud, M.; Martin, O.; Meimon, S.; Morel, C.; Morris, T. J.; Myers, R. M.; Osborn, J.; Perret, D.; Petit, C.; Raynaud, H.; Reeves, A. P.; Rousset, G.; Sanchez Lasheras, F.; Sanchez Rodriguez, M.; Santos, J. D.; Sevin, A.; Sivo, G.; Stadler, E.; Stobie, B.; Talbot, G.; Todd, S.; Vidal, F.; Younger, E. J.

    2016-06-01

    Recent advances in adaptive optics (AO) have led to the implementation of wide field-of-view AO systems. A number of wide-field AO systems are also planned for the forthcoming Extremely Large Telescopes. Such systems have multiple wavefront sensors of different types, and usually multiple deformable mirrors (DMs). Here, we report on our experience integrating cameras and DMs with the real-time control systems of two wide-field AO systems. These are CANARY, which has been operating on-sky since 2010, and DRAGON, which is a laboratory AO real-time demonstrator instrument. We detail the issues and difficulties that arose, along with the solutions we developed. We also provide recommendations for consideration when developing future wide-field AO systems.

  13. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations

    PubMed Central

    Zhao, Liya; Jia, Kebin

    2015-01-01

    This paper proposes a new framework for capturing large and complex deformation in image registration. Traditionally, this challenging problem relies firstly on a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration step, the directly calculated affine matrix, which is obtained from limited pixel information, may misregister when large biases exist, severely misleading the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences by triplanar 2D CNNs. Then deformation removal is done iteratively through preregistration and demons registration. By comparison with state-of-the-art registration frameworks, our method gains more accurate registration results on both synthetic and real datasets. In addition, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman correlation to form new similarity standards in 2D and 3D registration. Experimental results also show a faster convergence speed. PMID:26120356

  14. Multigrid techniques for unstructured meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1995-01-01

    An overview of current multigrid techniques for unstructured meshes is given. The basic principles of the multigrid approach are first outlined. Application of these principles to unstructured mesh problems is then described, illustrating various different approaches and giving examples of practical applications. Advanced multigrid topics, such as the use of algebraic multigrid methods and the combination of multigrid techniques with adaptive meshing strategies, are dealt with in subsequent sections. These represent current areas of research, and the unresolved issues are discussed. The presentation is organized in an educational manner for readers familiar with computational fluid dynamics who wish to learn more about current unstructured mesh techniques.

  15. Accurate Adaptive Level Set Method and Sharpening Technique for Three Dimensional Deforming Interfaces

    NASA Technical Reports Server (NTRS)

    Kim, Hyoungin; Liou, Meng-Sing

    2011-01-01

    In this paper, we demonstrate improved accuracy of the level set method for resolving deforming interfaces by proposing two key elements: (1) accurate level set solutions on adapted Cartesian grids by judiciously choosing interpolation polynomials in regions of different grid levels and (2) enhanced reinitialization by an interface sharpening procedure. The level set equation is solved using a fifth order WENO scheme or a second order central differencing scheme depending on the availability of uniform stencils at each grid point. Grid adaptation criteria are determined so that the Hamiltonian functions at nodes adjacent to interfaces are always calculated by the fifth order WENO scheme. This selective usage of the fifth order WENO and second order central differencing schemes is confirmed to give more accurate results compared to those in the literature for standard test problems. In order to further improve accuracy, especially near thin filaments, we suggest an artificial sharpening method, which takes a form similar to the conventional re-initialization method but utilizes the sign of the curvature instead of the sign of the level set function. Consequently, volume loss due to numerical dissipation on thin filaments is remarkably reduced for the test problems.

  16. Sierra Toolkit computational mesh conceptual model.

    SciTech Connect

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-03-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  17. Modeling, mesh generation and adaptive numerical methods for partial differential equations: IMA summer program. Final report, April 1, 1993--March 31, 1994

    SciTech Connect

    Friedman, A.; Miller, W. Jr.

    1993-12-31

    The program was divided into segments: (Week 1) geometric modeling and mesh generation; (Weeks 2 and 3) error estimation and adaptive strategies. Participants in the program came from a wide variety of disciplines dealing with remarkably analogous problems in this area. Ideas were exchanged and interdisciplinary collaboration was initiated in informal contexts as well as in the talks and question periods. In the talks, a number of algorithms were described along with specific applications to problems of great current interest in various scientific and engineering disciplines. In this emerging field, participants developed criteria for evaluation of algorithms and established guidelines for selection of algorithms appropriate to any specific problem. Special features of a problem may include curved surfaces, complicated boundaries, evolving interfaces (such as occur in coating flows), and/or criticality of error estimation.

  18. Unstructured mesh algorithms for aerodynamic calculations

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1992-01-01

    The use of unstructured mesh techniques for solving complex aerodynamic flows is discussed. The principal advantages of unstructured mesh strategies, as they relate to complex geometries, adaptive meshing capabilities, and parallel processing are emphasized. The various aspects required for the efficient and accurate solution of aerodynamic flows are addressed. These include mesh generation, mesh adaptivity, solution algorithms, convergence acceleration, and turbulence modeling. Computations of viscous turbulent two-dimensional flows and inviscid three-dimensional flows about complex configurations are demonstrated. Remaining obstacles and directions for future research are also outlined.

  19. Adaptive Optics System with Deformable Composite Mirror and High Speed, Ultra-Compact Electronics

    NASA Astrophysics Data System (ADS)

    Chen, Peter C.; Knowles, G. J.; Shea, B. G.

    2006-06-01

    We report development of a novel adaptive optics system for optical astronomy. Key components are very thin Deformable Mirrors (DM) made of fiber reinforced polymer resins, subminiature PMN-PT actuators, and a low power, high bandwidth electronics drive system with compact packaging and minimal wiring. By using specific formulations of fibers, resins, and laminate construction, we are able to fabricate mirror face sheets that are thin (< 2 mm), have smooth surfaces, and hold excellent optical shape. The mirrors are not astigmatic and do not develop surface irregularities when cooled. The actuators are small-footprint multilayer PMN-PT ceramic devices with large stroke (2-20 microns), high linearity, low hysteresis, low power, and flat frequency response to >2 kHz. By utilizing QorTek's proprietary synthetic impedance power supply technology, all the power, control, and signal extraction for many hundreds to thousands of actuators and sensors can be implemented on a single matrix controller printed circuit board co-mounted with the DM. The matrix controller, in turn, requires only a single serial bus interface, thereby obviating the need for massive wiring harnesses. The technology can be scaled up to multi-meter aperture DMs with >100K actuators.

  20. A Freestream-Preserving High-Order Finite-Volume Method for Mapped Grids with Adaptive-Mesh Refinement

    SciTech Connect

    Guzik, S; McCorquodale, P; Colella, P

    2011-12-16

    A fourth-order accurate finite-volume method is presented for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Novel considerations for formulating the semi-discrete system of equations in computational space combined with detailed mechanisms for accommodating the adapting grids ensure that conservation is maintained and that the divergence of a constant vector field is always zero (freestream-preservation property). Advancement in time is achieved with a fourth-order Runge-Kutta method.

  1. Evaluation of Deformable Image Coregistration in Adaptive Dose Painting by Numbers for Head-and-Neck Cancer

    SciTech Connect

    Olteanu, Luiza A.M.; Madani, Indira; De Neve, Wilfried; Vercauteren, Tom; De Gersem, Werner

    2012-06-01

    Purpose: To assess the accuracy of contour deformation and the feasibility of dose summation applying deformable image coregistration in adaptive dose painting by numbers (DPBN) for head and neck cancer. Methods and Materials: Data of 12 head-and-neck-cancer patients treated within a Phase I trial on adaptive ¹⁸F-FDG positron emission tomography (PET)-guided DPBN were used. Each patient had two DPBN treatment plans: the initial plan was based on a pretreatment PET/CT scan; the second, adapted plan was based on a PET/CT scan acquired after 8 fractions. The median prescription dose to the dose-painted volume was 30 Gy for both DPBN plans. To obtain deformed contours and dose distributions, the pretreatment CT was deformed to the per-treatment CT using deformable image coregistration. Deformed contours of regions of interest (ROI_def) were visually inspected and, if necessary, adjusted (ROI_def_ad), and both were compared with manually redrawn ROIs (ROI_m) using Jaccard (JI) and overlap indices (OI). Dose summation was done on the ROI_m, ROI_def_ad, or their unions with the ROI_def. Results: Almost all deformed ROIs were adjusted. The largest adjustment was made in patients with substantially regressing tumors: ROI_def = 11.8 ± 10.9 cm³ vs. ROI_def_ad = 5.9 ± 7.8 cm³ vs. ROI_m = 7.7 ± 7.2 cm³ (p = 0.57). The swallowing structures were the most frequently adjusted ROIs, with the lowest indices for the upper esophageal sphincter: JI = 0.3 (ROI_def) and 0.4 (ROI_def_ad); OI = 0.5 (both ROIs). The mandible needed the least adjustment, with the highest indices: JI = 0.8 (both ROIs), OI = 0.9 (ROI_def), and 1.0 (ROI_def_ad). Summed doses differed non-significantly. There was a trend of higher doses in the targets and lower doses in the spinal cord when doses were summed on unions. Conclusion: Visual inspection and adjustment were necessary for most ROIs. Fast automatic ROI
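
    The Jaccard and overlap indices used above are straightforward to compute from binary ROI masks. The sketch below is a generic illustration, not the study's software: the overlap index is taken here as intersection over the smaller volume, one common definition, and the spherical test ROIs are arbitrary.

```python
# Jaccard index (JI) and overlap index (OI) between two binary ROI masks,
# e.g. a deformed contour versus a manually redrawn one. OI is taken here as
# intersection over the smaller volume.
import numpy as np

def jaccard_and_overlap(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    ji = inter / union if union else 1.0
    oi = inter / min(a.sum(), b.sum()) if min(a.sum(), b.sum()) else 1.0
    return ji, oi

# Toy example: two overlapping spheres on a voxel grid.
z, y, x = np.mgrid[:40, :40, :40]
roi_def = (x - 18) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 10 ** 2
roi_man = (x - 22) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 9 ** 2
print(jaccard_and_overlap(roi_def, roi_man))   # JI is always <= OI
```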

  2. Code Development of Three-Dimensional General Relativistic Hydrodynamics with AMR (Adaptive-Mesh Refinement) and Results from Special and General Relativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dönmez, Orhan

    2004-09-01

    In this paper, the general procedure to solve the general relativistic hydrodynamical (GRH) equations with adaptive-mesh refinement (AMR) is presented. In order to achieve this, the GRH equations are written in conservation form to exploit their hyperbolic character. The numerical solutions of the GRH equations are obtained by high-resolution shock-capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. The Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two and three dimensions. Results from uniform and AMR grids are compared. It is found that the adaptive grid performs better as the resolution is increased. Second, the GRH equations are tested using two different test problems, geodesic flow and circular motion of a particle. In order to do this, the flux part of the GRH equations is coupled with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time.

  3. Robust Wave-front Correction in a Small Scale Adaptive Optics System Using a Membrane Deformable Mirror

    NASA Astrophysics Data System (ADS)

    Choi, Y.; Park, S.; Baik, S.; Jung, J.; Lee, S.; Yoo, J.

    A small scale laboratory adaptive optics system using a Shack-Hartmann wave-front sensor (WFS) and a membrane deformable mirror (DM) has been built for robust image acquisition. In this study, an adaptive limited control technique is adopted to maintain the long-term correction stability of the adaptive optics system. To avoid wasting dynamic correction range on small residual wave-front distortions that are inefficient to correct, the system limits wave-front correction when a similar small-difference wave-front pattern is repeatedly generated. Also, the effect of mechanical distortion in an adaptive optics system is studied, and a pre-recognition method for the distortion is devised to prevent low-performance system operation. A confirmation process for a balanced work assignment among deformable mirror (DM) actuators is adopted for the pre-recognition. The corrected experimental results obtained with the built small-scale adaptive optics system are described in this paper.

  4. Mersiline mesh in premaxillary augmentation.

    PubMed

    Foda, Hossam M T

    2005-01-01

    Premaxillary retrusion may distort the aesthetic appearance of the columella, lip, and nasal tip. This defect is characteristically seen in, but not limited to, patients with cleft lip nasal deformity. This study investigated 60 patients presenting with premaxillary deficiencies in which Mersiline mesh was used to augment the premaxilla. All the cases had surgery using the external rhinoplasty technique. Two methods of augmentation with Mersiline mesh were used: the Mersiline roll technique, for the cases with central symmetric deficiencies, and the Mersiline packing technique, for the cases with asymmetric deficiencies. Premaxillary augmentation with Mersiline mesh proved to be simple technically, easy to perform, and not associated with any complications. Periodic follow-up evaluation for a mean period of 32 months (range, 12-98 months) showed that an adequate degree of premaxillary augmentation was maintained with no clinically detectable resorption of the mesh implant.

  5. A correction algorithm to simultaneously control dual deformable mirrors in a woofer-tweeter adaptive optics system

    PubMed Central

    Li, Chaohong; Sredar, Nripun; Ivers, Kevin M.; Queener, Hope; Porter, Jason

    2010-01-01

    We present a direct slope-based correction algorithm to simultaneously control two deformable mirrors (DMs) in a woofer-tweeter adaptive optics system. A global response matrix was derived from the response matrices of each deformable mirror and the voltages for both deformable mirrors were calculated simultaneously. This control algorithm was tested and compared with a 2-step sequential control method in five normal human eyes using an adaptive optics scanning laser ophthalmoscope. The mean residual total root-mean-square (RMS) wavefront errors across subjects after adaptive optics (AO) correction were 0.128 ± 0.025 μm and 0.107 ± 0.033 μm for simultaneous and 2-step control, respectively (7.75-mm pupil). The mean intensity of reflectance images acquired after AO convergence was slightly higher for 2-step control. Radially-averaged power spectra calculated from registered reflectance images were nearly identical for all subjects using simultaneous or 2-step control. The correction performance of our new simultaneous dual DM control algorithm is comparable to 2-step control, but is more efficient. This method can be applied to any woofer-tweeter AO system. PMID:20721058
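
    The simultaneous scheme can be illustrated by concatenating the two mirrors' response matrices into one global matrix and solving a single least-squares problem for both command vectors. The sketch below uses random placeholder matrices; the slope and actuator counts are assumptions, not the instrument's calibration.

```python
# Simultaneous two-DM control sketch: stack the woofer and tweeter response
# (influence) matrices into one global response matrix and solve one
# least-squares problem for both command vectors. Matrices here are random
# placeholders standing in for calibrated WFS-slope responses.
import numpy as np

rng = np.random.default_rng(2)
n_slopes, n_woofer, n_tweeter = 200, 52, 140
R_woofer = rng.standard_normal((n_slopes, n_woofer))
R_tweeter = rng.standard_normal((n_slopes, n_tweeter))

R_global = np.hstack([R_woofer, R_tweeter])          # global response matrix
slopes = rng.standard_normal(n_slopes)               # measured WFS slopes

# One pseudo-inverse solve yields both DM command vectors at once.
v, *_ = np.linalg.lstsq(R_global, slopes, rcond=None)
v_woofer, v_tweeter = v[:n_woofer], v[n_woofer:]
print(v_woofer.shape, v_tweeter.shape)               # (52,) (140,)
```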

  6. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2002-10-19

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  7. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2004-01-28

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.

  8. Stabilised dG-FEM for incompressible natural convection flows with boundary and moving interior layers on non-adapted meshes

    NASA Astrophysics Data System (ADS)

    Schroeder, Philipp W.; Lube, Gert

    2017-04-01

    This paper presents heavily grad-div and pressure jump stabilised, equal- and mixed-order discontinuous Galerkin finite element methods for non-isothermal incompressible flows based on the Oberbeck-Boussinesq approximation. In this framework, the enthalpy-porosity model for multiphase flow in melting and solidification problems can be employed. By considering the differentially heated cavity and the melting of pure gallium in a rectangular enclosure, it is shown that both boundary layers and sharp moving interior layers can be handled naturally by the proposed class of non-conforming methods. Due to the stabilising effect of the grad-div term and the robustness of discontinuous Galerkin methods, it is possible to solve the underlying problems accurately on coarse, non-adapted meshes. The interaction of heavy grad-div stabilisation and discontinuous Galerkin methods significantly improves the mass conservation properties and the overall accuracy of the numerical scheme which is observed for the first time. Hence, it is inferred that stabilised discontinuous Galerkin methods are highly robust as well as computationally efficient numerical methods to deal with natural convection problems arising in incompressible computational thermo-fluid dynamics.

  9. A Stable, Accurate Methodology for High Mach Number, Strong Magnetic Field MHD Turbulence with Adaptive Mesh Refinement: Resolution and Refinement Studies

    NASA Astrophysics Data System (ADS)

    Li, Pak Shing; Martin, Daniel F.; Klein, Richard I.; McKee, Christopher F.

    2012-02-01

    Performing a stable, long-duration simulation of driven MHD turbulence with a high thermal Mach number and a strong initial magnetic field is a challenge to high-order Godunov ideal MHD schemes because of the difficulty in guaranteeing positivity of the density and pressure. We have implemented a robust combination of reconstruction schemes, Riemann solvers, limiters, and constrained transport electromotive force averaging schemes that can meet this challenge, and using this strategy, we have developed a new adaptive mesh refinement (AMR) MHD module of the ORION2 code. We investigate the effects of AMR on several statistical properties of a turbulent ideal MHD system with a thermal Mach number of 10 and a plasma β0 of 0.1 as initial conditions; our code is shown to be stable for simulations with higher Mach numbers (M_rms = 17.3) and smaller plasma beta (β0 = 0.0067) as well. Our results show that the quality of the turbulence simulation is generally related to the volume-averaged refinement. Our AMR simulations show that the turbulent dissipation coefficient for supersonic MHD turbulence is about 0.5, in agreement with unigrid simulations.

  10. A STABLE, ACCURATE METHODOLOGY FOR HIGH MACH NUMBER, STRONG MAGNETIC FIELD MHD TURBULENCE WITH ADAPTIVE MESH REFINEMENT: RESOLUTION AND REFINEMENT STUDIES

    SciTech Connect

    Li, Pak Shing; Klein, Richard I.; Martin, Daniel F.; McKee, Christopher F. E-mail: klein@astron.berkeley.edu E-mail: cmckee@astro.berkeley.edu

    2012-02-01

    Performing a stable, long-duration simulation of driven MHD turbulence with a high thermal Mach number and a strong initial magnetic field is a challenge to high-order Godunov ideal MHD schemes because of the difficulty in guaranteeing positivity of the density and pressure. We have implemented a robust combination of reconstruction schemes, Riemann solvers, limiters, and constrained transport electromotive force averaging schemes that can meet this challenge, and using this strategy, we have developed a new adaptive mesh refinement (AMR) MHD module of the ORION2 code. We investigate the effects of AMR on several statistical properties of a turbulent ideal MHD system with a thermal Mach number of 10 and a plasma β0 of 0.1 as initial conditions; our code is shown to be stable for simulations with higher Mach numbers (M_rms = 17.3) and smaller plasma beta (β0 = 0.0067) as well. Our results show that the quality of the turbulence simulation is generally related to the volume-averaged refinement. Our AMR simulations show that the turbulent dissipation coefficient for supersonic MHD turbulence is about 0.5, in agreement with unigrid simulations.

  11. A density driven mesh generator guided by a neural network

    SciTech Connect

    Lowther, D.A.; Dyck, D.N.

    1993-03-01

    A neural network guided mesh generator is described. The mesh generator uses density information provided by the neural network to determine the size and placement of elements. This system is coupled with an adaptive meshing and solving process and is shown to have major computational benefits compared with adaptation alone.
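
    One simple way to turn a density field into element sizes is equidistribution: each element receives the same integrated density, so elements shrink where the density is large. The sketch below illustrates this in 1D with an analytic density standing in for the network's output; it is not the generator described in the record.

```python
# Density-driven node placement in 1D (equidistribution): each element carries
# the same integrated density, so elements shrink where the density (here an
# analytic stand-in for a neural network's output) is large.
import numpy as np

def place_nodes(density, a, b, n_elems, n_samples=2000):
    x = np.linspace(a, b, n_samples)
    cdf = np.cumsum(density(x))
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])          # normalized cumulative density
    targets = np.linspace(0.0, 1.0, n_elems + 1)
    return np.interp(targets, cdf, x)                  # invert the CDF

density = lambda x: 1.0 + 20.0 * np.exp(-200.0 * (x - 0.5) ** 2)   # refine near x = 0.5
nodes = place_nodes(density, 0.0, 1.0, n_elems=16)
print(np.round(np.diff(nodes), 3))   # small elements where the density peaks
```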

  12. 6th International Meshing Roundtable '97

    SciTech Connect

    White, D.

    1997-09-01

    The goal of the 6th International Meshing Roundtable is to bring together researchers and developers from industry, academia, and government labs in a stimulating, open environment for the exchange of technical information related to the meshing process. In the past, the Roundtable has enjoyed significant participation from each of these groups from a wide variety of countries. The Roundtable will consist of technical presentations from contributed papers and abstracts, two invited speakers, and two invited panels of experts discussing topics related to the development and use of automatic mesh generation tools. In addition, this year we will feature a "Bring Your Best Mesh" competition and poster session to encourage discussion and participation from a wide variety of mesh generation tool users. The schedule and evening social events are designed to provide numerous opportunities for informal dialog. A proceedings will be published by Sandia National Laboratories and distributed at the Roundtable. In addition, papers of exceptionally high quality will be submitted to a special issue of the International Journal of Computational Geometry and Applications. Papers and one-page abstracts were sought that present original results on the meshing process. Potential topics include but are not limited to: unstructured triangular and tetrahedral mesh generation; unstructured quadrilateral and hexahedral mesh generation; automated blocking and structured mesh generation; mixed element meshing; surface mesh generation; geometry decomposition and clean-up techniques; geometry modification techniques related to meshing; adaptive mesh refinement and mesh quality control; mesh visualization; special purpose meshing algorithms for particular applications; theoretical or novel ideas with practical potential; and technical presentations from industrial researchers.

  13. Quadrilateral/hexahedral finite element mesh coarsening

    DOEpatents

    Staten, Matthew L; Dewey, Mark W; Scott, Michael A; Benzley, Steven E

    2012-10-16

    A technique for coarsening a finite element mesh ("FEM") is described. This technique includes identifying a coarsening region within the FEM to be coarsened. Perimeter chords running along perimeter boundaries of the coarsening region are identified. The perimeter chords are redirected to create an adaptive chord separating the coarsening region from a remainder of the FEM. The adaptive chord runs through mesh elements residing along the perimeter boundaries of the coarsening region. The adaptive chord is then extracted to coarsen the FEM.

  14. 3D Adaptive Mesh Refinement Simulations of the Gas Cloud G2 Born within the Disks of Young Stars in the Galactic Center

    NASA Astrophysics Data System (ADS)

    Schartmann, M.; Ballone, A.; Burkert, A.; Gillessen, S.; Genzel, R.; Pfuhl, O.; Eisenhauer, F.; Plewa, P. M.; Ott, T.; George, E. M.; Habibi, M.

    2015-10-01

    The dusty, ionized gas cloud G2 is currently passing the massive black hole in the Galactic Center at a distance of roughly 2400 Schwarzschild radii. We explore the possibility of a starting point of the cloud within the disks of young stars. We make use of the large amount of new observations in order to put constraints on G2's origin. Interpreting the observations as a diffuse cloud of gas, we employ three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations with the PLUTO code and do a detailed comparison with observational data. The simulations presented in this work update our previously obtained results in multiple ways: (1) high resolution three-dimensional hydrodynamical AMR simulations are used, (2) the cloud follows the updated orbit based on the Brackett-γ data, (3) a detailed comparison to the observed high-quality position-velocity (PV) diagrams and the evolution of the total Brackett-γ luminosity is done. We concentrate on two unsolved problems of the diffuse cloud scenario: the unphysical formation epoch only shortly before the first detection and the too steep Brackett-γ light curve obtained in simulations, whereas the observations indicate a constant Brackett-γ luminosity between 2004 and 2013. For a given atmosphere and cloud mass, we find a consistent model that can explain both, the observed Brackett-γ light curve and the PV diagrams of all epochs. Assuming initial pressure equilibrium with the atmosphere, this can be reached for a starting date earlier than roughly 1900, which is close to apo-center and well within the disks of young stars.

  15. 3D ADAPTIVE MESH REFINEMENT SIMULATIONS OF THE GAS CLOUD G2 BORN WITHIN THE DISKS OF YOUNG STARS IN THE GALACTIC CENTER

    SciTech Connect

    Schartmann, M.; Ballone, A.; Burkert, A.; Gillessen, S.; Genzel, R.; Pfuhl, O.; Eisenhauer, F.; Plewa, P. M.; Ott, T.; George, E. M.; Habibi, M.

    2015-10-01

    The dusty, ionized gas cloud G2 is currently passing the massive black hole in the Galactic Center at a distance of roughly 2400 Schwarzschild radii. We explore the possibility of a starting point of the cloud within the disks of young stars. We make use of the large amount of new observations in order to put constraints on G2's origin. Interpreting the observations as a diffuse cloud of gas, we employ three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations with the PLUTO code and do a detailed comparison with observational data. The simulations presented in this work update our previously obtained results in multiple ways: (1) high resolution three-dimensional hydrodynamical AMR simulations are used, (2) the cloud follows the updated orbit based on the Brackett-γ data, (3) a detailed comparison to the observed high-quality position–velocity (PV) diagrams and the evolution of the total Brackett-γ luminosity is done. We concentrate on two unsolved problems of the diffuse cloud scenario: the unphysical formation epoch only shortly before the first detection and the too steep Brackett-γ light curve obtained in simulations, whereas the observations indicate a constant Brackett-γ luminosity between 2004 and 2013. For a given atmosphere and cloud mass, we find a consistent model that can explain both, the observed Brackett-γ light curve and the PV diagrams of all epochs. Assuming initial pressure equilibrium with the atmosphere, this can be reached for a starting date earlier than roughly 1900, which is close to apo-center and well within the disks of young stars.

  16. The role of regularization in deformable image registration for head and neck adaptive radiotherapy.

    PubMed

    Ciardo, D; Peroni, M; Riboldi, M; Alterio, D; Baroni, G; Orecchia, R

    2013-08-01

    Deformable image registration provides a robust mathematical framework to quantify morphological changes that occur along the course of external beam radiotherapy treatments. As the clinical reliability of deformable image registration is not always guaranteed, algorithm regularization is commonly introduced to prevent sharp discontinuities in the quantified deformation and achieve anatomically consistent results. In this work we analyzed the influence of regularization on two different registration methods, i.e. B-Splines and Log Domain Diffeomorphic Demons, implemented in an open-source platform. We retrospectively analyzed the simulation computed tomography (CTsim) and the corresponding re-planning computed tomography (CTrepl) scans in 30 head and neck cancer patients. First, we investigated the influence of regularization levels on Hounsfield unit (HU) information in 10 test patients for each considered method. Then, we compared the registration results of the open-source implementation at the selected best-performing regularization levels with a clinical commercial software on the remaining 20 patients in terms of mean volume overlap, surface distance, and center-of-mass distance between manual outlines and propagated structures. The regularized B-Splines method was not statistically different from the commercial software. The tuning of the regularization parameters allowed the open-source algorithms to achieve better results in deformable image registration for head and neck patients, with the additional benefit of a framework where regularization can be tuned on a patient-specific basis.
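
    The role of the regularization weight can be illustrated with a toy 1D objective combining data fidelity and a smoothness penalty on the displacement field. The example below is a schematic stand-in, not the B-Splines or Log Domain Diffeomorphic Demons implementations evaluated in the study; the signals, candidate fields, and weights are arbitrary choices.

```python
# Role of regularization in miniature: an unregularized, per-point "best"
# displacement chases image noise and is jagged; adding lam * smoothness makes
# the smooth, anatomically plausible field the preferred solution.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 300)
fixed = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)
true_u = 0.05 * np.exp(-((x - 0.5) ** 2) / 0.02)                    # smooth ground-truth warp
moving = np.interp(x + true_u, x, np.sin(2 * np.pi * x)) + 0.05 * rng.standard_normal(x.size)

def ssd(u):
    return np.sum((np.interp(x + u, x, moving) - fixed) ** 2)

def smoothness(u):
    return np.sum(np.gradient(u, x) ** 2)

# Per-point exhaustive search (no regularization) overfits the noise.
candidates = np.linspace(-0.1, 0.1, 81)
errs = np.array([(np.interp(x + c, x, moving) - fixed) ** 2 for c in candidates])
u_jagged = candidates[np.argmin(errs, axis=0)]
u_smooth = -true_u                                                  # plausible smooth field

for lam in (0.0, 1e-3, 1e-1):
    j = ssd(u_jagged) + lam * smoothness(u_jagged)
    s = ssd(u_smooth) + lam * smoothness(u_smooth)
    print(f"lam={lam:g}: jagged={j:.2f}  smooth={s:.2f}")           # smooth wins once lam > 0
```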

  17. Scale invariant feature transform in adaptive radiation therapy: a tool for deformable image registration assessment and re-planning indication

    NASA Astrophysics Data System (ADS)

    Paganelli, Chiara; Peroni, Marta; Riboldi, Marco; Sharp, Gregory C.; Ciardo, Delia; Alterio, Daniela; Orecchia, Roberto; Baroni, Guido

    2013-01-01

    Adaptive radiation therapy (ART) aims at compensating for anatomic and pathological changes to improve delivery along a treatment fraction sequence. Current ART protocols require time-consuming manual updating of all volumes of interest on the images acquired during treatment. Deformable image registration (DIR) and contour propagation stand as a state-of-the-art method to automate the process, but the lack of DIR quality control methods hinders an introduction into clinical practice. We investigated the scale invariant feature transform (SIFT) method as a quantitative automated tool (1) for DIR evaluation and (2) for re-planning decision-making in the framework of ART treatments. As a preliminary test, SIFT invariance properties under shape-preserving and deformable transformations were studied on a computational phantom, yielding residual matching errors below the voxel dimension. Then a clinical dataset composed of 19 head and neck ART patients was used to quantify the performance in ART treatments. For goal (1), results demonstrated SIFT potential as an operator-independent DIR quality assessment metric. We measured DIR group systematic residual errors up to 0.66 mm against 1.35 mm provided by rigid registration. The group systematic errors of both bony and all other structures were also analyzed, attesting to the presence of anatomical deformations. The correct automated identification, using SIFT, of 18 patients who might benefit from ART out of the total 22 cases demonstrated its capabilities toward achieving goal (2).
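
    A rough sketch of the SIFT-based assessment idea is given below: keypoints are detected and matched between a reference image and its (supposedly) registered counterpart, and the residual keypoint displacements serve as an operator-independent error estimate. It assumes opencv-python (cv2.SIFT_create) and SciPy are available and uses a synthetic texture in place of clinical CT data; it is not the study's pipeline.

```python
# SIFT-based spot check of a registration result: the residual distances
# between matched keypoints estimate the remaining misalignment.
import cv2
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
base = gaussian_filter(rng.random((256, 256)), 3.0)                 # synthetic "CT slice"
reference = cv2.normalize(base, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
registered = np.roll(reference, shift=(2, -3), axis=(0, 1))         # leftover misalignment

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(reference, None)
kp2, des2 = sift.detectAndCompute(registered, None)

matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
matches = matcher.match(des1, des2)

residuals = [np.hypot(kp1[m.queryIdx].pt[0] - kp2[m.trainIdx].pt[0],
                      kp1[m.queryIdx].pt[1] - kp2[m.trainIdx].pt[1])
             for m in matches]
print(f"{len(matches)} matches, median residual = {np.median(residuals):.2f} px")
```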

  18. SU-E-J-254: Utility of Pinnacle Dynamic Planning Module Utilizing Deformable Image Registration in Adaptive Radiotherapy

    SciTech Connect

    Jani, S

    2014-06-01

    Purpose: For certain highly conformal treatment techniques, changes in patient anatomy due to weight loss and/or tumor shrinkage can result in significant changes in dose distribution. Recently, the Pinnacle treatment planning system added a Dynamic Planning module utilizing Deformable Image Registration (DIR). The objective of this study was to evaluate the effectiveness of this software in adapting to altered anatomy and adjusting treatment plans to account for it. Methods: We simulated significant tumor response by changing patient thickness and altered chin positions using a commercially available head and neck (H and N) phantom. In addition, we studied 23 CT image sets of fifteen (15) patients with H and N tumors and eight (8) patients with prostate cancer. In each case, we applied deformable image registration through the Dynamic Planning module of our Pinnacle treatment planning system. The dose distribution of the original CT image set was compared to the newly computed dose without altering any treatment parameter. The result was the dose that would have been delivered had we not adjusted the plan to reflect anatomical changes. Results: For the H and N phantom, a tumor response of up to 3.5 cm was correctly deformed by the Pinnacle Dynamic module. Recomputed isodose contours on new anatomies were within 1 mm of the expected distribution. The Pinnacle system configuration allowed dose computations resulting from original plans on new anatomies without leaving the planning system. Original and new doses were available side-by-side with both CT image sets. Based on DIR, about 75% of H and N patients (11/15) required a re-plan using the new anatomy. Among prostate patients, the DIR predicted near-correct bladder volume in 62% of the patients (5/8). Conclusions: The Dynamic Planning module of the Pinnacle system proved to be an accurate and useful tool for adapting to changes in patient anatomy during a course of radiotherapy.

  19. Spherical geodesic mesh generation

    SciTech Connect

    Fung, Jimmy; Kenamond, Mark Andrew; Burton, Donald E.; Shashkov, Mikhail Jurievich

    2015-02-27

    In ALE simulations with moving meshes, mesh topology has a direct influence on feature representation and code robustness. In three-dimensional simulations, modeling spherical volumes and features is particularly challenging for a hydrodynamics code. Calculations on traditional spherical meshes (such as spin meshes) often lead to errors and symmetry breaking. Although the underlying differencing scheme may be modified to rectify this, the differencing scheme may not be accessible. This work documents the use of spherical geodesic meshes to mitigate solution-mesh coupling. These meshes are generated notionally by connecting geodesic surface meshes to produce triangular-prismatic volume meshes. This mesh topology is fundamentally different from traditional mesh topologies and displays superior qualities such as topological symmetry. This work describes the geodesic mesh topology as well as motivating demonstrations with the FLAG hydrocode.
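
    For illustration, a geodesic surface mesh can be generated by recursive subdivision of a Platonic solid inscribed in the sphere. The sketch below starts from an octahedron for brevity (an icosahedron is the more common seed) and only produces the surface triangulation; stacking such shells radially would give the triangular-prismatic volume mesh described in the record. It is not the FLAG mesh generator itself.

```python
# Geodesic surface mesh sketch: subdivide each triangle of an octahedron into
# four, projecting new vertices back onto the unit sphere at every level.
import numpy as np

def geodesic_sphere(levels):
    verts = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    faces = [(0,2,4), (2,1,4), (1,3,4), (3,0,4),
             (2,0,5), (1,2,5), (3,1,5), (0,3,5)]
    verts = [np.array(v, dtype=float) for v in verts]
    cache = {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = verts[i] + verts[j]
            verts.append(m / np.linalg.norm(m))      # project onto the sphere
            cache[key] = len(verts) - 1
        return cache[key]

    for _ in range(levels):
        new_faces = []
        for a, b, c in faces:
            ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
            new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
        faces = new_faces
    return np.array(verts), np.array(faces)

verts, faces = geodesic_sphere(3)
print(len(verts), "vertices,", len(faces), "triangles")   # 258 vertices, 512 triangles
```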

  20. Study of muscular deformation based on surface slope estimation

    NASA Astrophysics Data System (ADS)

    Carli, M.; Goffredo, M.; Schmid, M.; Neri, A.

    2006-02-01

    During contraction and stretching, muscles change shape and size and produce a deformation of skin tissues and a modification of the body segment shape. In human motion analysis, it is very important to take these phenomena into account. The aim of this work is the evaluation of skin and muscular deformation, and the modeling of body segment elastic behavior, obtained by analysing video sequences that capture a muscle contraction. The soft tissue modeling is accomplished by using triangular meshes that automatically adapt to the body segment during the execution of a static muscle contraction. The adaptive triangular mesh is built on reference points whose motion is estimated by using nonlinear operators. Experimental results, obtained by applying the proposed method to several video sequences in which an isometric contraction of the biceps brachii was present, show the effectiveness of this technique.

  1. Conformal refinement of unstructured quadrilateral meshes

    SciTech Connect

    Garmella, Rao

    2009-01-01

    We present a multilevel adaptive refinement technique for unstructured quadrilateral meshes in which the mesh is kept conformal at all times. This means that the refined mesh, like the original, is formed of only quadrilateral elements that intersect strictly along edges or at vertices, i.e., vertices of one quadrilateral element do not lie in an edge of another quadrilateral. Elements are refined using templates based on 1:3 refinement of edges. We demonstrate that by careful design of the refinement and coarsening strategy, we can maintain high quality elements in the refined mesh. We demonstrate the method on a number of examples with dynamically changing refinement regions.

  2. Mesh infrastructure for coupled multiprocess geophysical simulations

    SciTech Connect

    Garimella, Rao V.; Perkins, William A.; Buksas, Mike W.; Berndt, Markus; Lipnikov, Konstantin; Coon, Ethan; Moulton, John D.; Painter, Scott L.

    2014-01-01

    We have developed a sophisticated mesh infrastructure capability to support large scale multiphysics simulations such as subsurface flow and reactive contaminant transport at storage sites as well as the analysis of the effects of a warming climate on the terrestrial arctic. These simulations involve a wide range of coupled processes including overland flow, subsurface flow, freezing and thawing of ice rich soil, accumulation, redistribution and melting of snow, biogeochemical processes involving plant matter and finally, microtopography evolution due to melting and degradation of ice wedges below the surface. In addition to supporting the usual topological and geometric queries about the mesh, the mesh infrastructure adds capabilities such as identifying columnar structures in the mesh, enabling deforming of the mesh subject to constraints and enabling the simultaneous use of meshes of different dimensionality for subsurface and surface processes. The generic mesh interface is capable of using three different open source mesh frameworks (MSTK, MOAB and STKmesh) under the hood allowing the developers to directly compare them and choose one that is best suited for the application's needs. We demonstrate the results of some simulations using these capabilities as well as present a comparison of the performance of the different mesh frameworks.

  3. Mesh infrastructure for coupled multiprocess geophysical simulations

    DOE PAGES

    Garimella, Rao V.; Perkins, William A.; Buksas, Mike W.; ...

    2014-01-01

    We have developed a sophisticated mesh infrastructure capability to support large scale multiphysics simulations such as subsurface flow and reactive contaminant transport at storage sites as well as the analysis of the effects of a warming climate on the terrestrial arctic. These simulations involve a wide range of coupled processes including overland flow, subsurface flow, freezing and thawing of ice rich soil, accumulation, redistribution and melting of snow, biogeochemical processes involving plant matter and finally, microtopography evolution due to melting and degradation of ice wedges below the surface. In addition to supporting the usual topological and geometric queries about the mesh, the mesh infrastructure adds capabilities such as identifying columnar structures in the mesh, enabling deforming of the mesh subject to constraints and enabling the simultaneous use of meshes of different dimensionality for subsurface and surface processes. The generic mesh interface is capable of using three different open source mesh frameworks (MSTK, MOAB and STKmesh) under the hood allowing the developers to directly compare them and choose one that is best suited for the application's needs. We demonstrate the results of some simulations using these capabilities as well as present a comparison of the performance of the different mesh frameworks.

  4. Ear Deformations Give Bats a Physical Mechanism for Fast Adaptation of Ultrasonic Beam Patterns

    NASA Astrophysics Data System (ADS)

    Gao, Li; Balakrishnan, Sreenath; He, Weikai; Yan, Zhen; Müller, Rolf

    2011-11-01

    A large number of mammals, including humans, have intricate outer ear shapes that diffract incoming sound in a direction- and frequency-specific manner. Through this physical process, the outer ear shapes encode sound-source information into the sensory signals from each ear. Our results show that horseshoe bats could dynamically control these diffraction processes through fast nonrigid ear deformations. The bats’ ear shapes can alter between extreme configurations in about 100 ms and thereby change their acoustic properties in ways that would suit different acoustic sensing tasks.

  5. Fast Dynamic Meshing Method Based on Delaunay Graph and Inverse Distance Weighting Interpolation

    NASA Astrophysics Data System (ADS)

    Wang, Yibin; Qin, Ning; Zhao, Ning

    2016-06-01

    A novel mesh deformation technique is developed based on the Delaunay graph mapping method and inverse distance weighting (IDW) interpolation. The algorithm retains the efficiency of Delaunay-graph-mapping mesh deformation while offering better control of the near-surface mesh quality. The Delaunay graph is used to divide the mesh domain into a number of sub-domains. On each sub-domain, inverse distance weighting interpolation is applied to build a much smaller translation matrix between the original mesh and the deformed mesh, resulting in an efficiency similar to that of the fast Delaunay graph mapping method. The paper shows how the near-wall mesh quality is controlled and improved by the new method, and compares the computational time with that of the original Delaunay graph mapping method.
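
    A minimal sketch of the inverse distance weighting step in isolation (the Delaunay-graph partitioning is omitted), assuming known displacements at boundary/control nodes; the function name and the exponent power = 2 are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def idw_displacement(interior_pts, boundary_pts, boundary_disp, power=2.0, eps=1e-12):
        """Interpolate boundary displacements onto interior mesh nodes with
        inverse distance weighting: w_i = 1 / d_i**power.

        interior_pts : (m, dim) nodes to move
        boundary_pts : (n, dim) control/boundary nodes
        boundary_disp: (n, dim) prescribed displacements at the boundary nodes
        """
        disp = np.zeros_like(interior_pts, dtype=float)
        for k, p in enumerate(interior_pts):
            d = np.linalg.norm(boundary_pts - p, axis=1)
            w = 1.0 / (d**power + eps)        # eps guards against d == 0
            disp[k] = (w[:, None] * boundary_disp).sum(axis=0) / w.sum()
        return disp

    # Example: translate one boundary node and see how nearby nodes follow.
    boundary = np.array([[0.0, 0.0], [1.0, 0.0]])
    motion = np.array([[0.0, 0.1], [0.0, 0.0]])
    interior = np.array([[0.25, 0.5], [0.75, 0.5]])
    print(interior + idw_displacement(interior, boundary, motion))
    ```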

  6. MHD simulations on an unstructured mesh

    SciTech Connect

    Strauss, H.R.; Park, W.; Belova, E.; Fu, G.Y.; Longcope, D.W.; Sugiyama, L.E.

    1998-12-31

    Two reasons for using an unstructured computational mesh are adaptivity, and alignment with arbitrarily shaped boundaries. Two codes which use finite element discretization on an unstructured mesh are described. FEM3D solves 2D and 3D RMHD using an adaptive grid. MH3D++, which incorporates methods of FEM3D into the MH3D generalized MHD code, can be used with shaped boundaries, which might be 3D.

  7. Size-changeable x-ray beam collimation using an adaptive x-ray optical system based on four deformable mirrors

    NASA Astrophysics Data System (ADS)

    Goto, T.; Matsuyama, S.; Nakamori, H.; Hayashi, H.; Sano, Y.; Kohmura, Y.; Yabashi, M.; Ishikawa, T.; Yamauchi, K.

    2016-09-01

    A two-stage adaptive optical system using four piezoelectric deformable mirrors was constructed at SPring-8 to form collimated X-ray beams. The deformable mirrors were finely deformed to target shapes (elliptical for the upstream mirrors and parabolic for the downstream mirrors) based on shape data measured with the X-ray pencil beam scanning method. Ultraprecise control of the mirror shapes enables us to obtain various collimated beams with different beam sizes of 314 μm (358 μm) and 127 μm (65 μm) in the horizontal (vertical) directions, respectively, with parallelism accuracy of 1 μrad rms.

  8. An adaptive patient specific deformable registration for breast images of positron emission tomography and magnetic resonance imaging using finite element approach

    NASA Astrophysics Data System (ADS)

    Xue, Cheng; Tang, Fuk-Hay

    2014-03-01

    A patient-specific registration model based on the finite element method was investigated in this study. Image registration of Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI) has been studied extensively, and surface-based registration is widely applied in medical imaging. We develop and evaluate a registration method that combines surface-based registration with biomechanical modeling. Four sample cases of patients with PET and MRI breast scans performed within 30 days were collected from a hospital. A K-means clustering algorithm was used to segment the images into two parts, fat tissue and neoplasm [2]. Instead of placing extrinsic landmarks on the patient's body, which may be invasive, we proposed a new boundary condition to simulate breast deformation between the two scans. A three-dimensional meshed model was then built and material properties were assigned to it according to previous studies. The whole registration was based on a biomechanical finite element model, which can simulate deformation of the breast under pressure.

  9. Telescope Wavefront Aberration Compensation with a Deformable Mirror in an Adaptive Optics System

    NASA Technical Reports Server (NTRS)

    Hemmati, Hamid; Chen, Yijiang; Crossfield, Ian

    2005-01-01

    With the goal of reducing the surface wavefront error of low-cost multi-meter-diameter mirrors from about 10 waves peak-to-valley (P-V), at 1 μm wavelength, to approximately 1 wave or less, we describe a method to compensate for slowly varying wavefront aberrations of telescope mirrors. A deformable mirror is utilized in an active optical compensation system. The RMS wavefront error of a 0.3 m telescope improved to 0.05 waves (0.26 waves P-V) from the original value of 1.4 waves RMS (6.5 waves P-V), measured at 633 nm, and the Strehl ratio improved to 89% from the original value of 0.08%.

  10. Tuning of patient-specific deformable models using an adaptive evolutionary optimization strategy.

    PubMed

    Vidal, Franck P; Villard, Pierre-Frédéric; Lutton, Evelyne

    2012-10-01

    We present and analyze the behavior of an evolutionary algorithm designed to estimate the parameters of a complex organ behavior model. The model is adaptable to account for patient-specific characteristics. The aim is to finely tune the model so that it is accurately adapted to various real patient datasets; it can then be embedded, for example, in high-fidelity simulations of human physiology. We present here an application focused on respiration modeling. The algorithm is automatic and adaptive. A compound fitness function has been designed to take into account the various quantities that have to be minimized. The algorithm's efficiency is experimentally analyzed on several real test cases: 1) three patient datasets acquired with the "breath hold" protocol, and 2) two datasets corresponding to 4-D CT scans. Its performance is compared with two traditional methods (downhill simplex and conjugate gradient descent), a random search, and a basic real-valued genetic algorithm. The results show that our evolutionary scheme provides significantly more stable and accurate results.
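
    For illustration of the general approach only (not the authors' algorithm, organ model, or fitness terms), the sketch below tunes two parameters of a toy periodic "respiration" signal with a basic real-valued (mu + lambda) evolution strategy whose compound fitness is a weighted sum of a data misfit and a penalty term; all names, weights, and the test signal are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def compound_fitness(params, data_t, data_y):
        """Weighted sum of two quantities to minimize: data misfit plus a
        small regularization penalty (both purely illustrative)."""
        amp, period = params
        model = amp * np.sin(2 * np.pi * data_t / max(period, 1e-6))
        misfit = np.mean((model - data_y) ** 2)
        penalty = 0.01 * abs(amp)          # toy regularization term
        return misfit + penalty

    def mu_plus_lambda(fitness, x0, sigma=0.5, mu=5, lam=20, generations=50):
        """Basic (mu + lambda) evolution strategy with a fixed mutation scale."""
        pop = x0 + sigma * rng.normal(size=(mu, len(x0)))
        for _ in range(generations):
            parents = pop[rng.integers(0, mu, size=lam)]
            children = parents + sigma * rng.normal(size=parents.shape)
            union = np.vstack([pop, children])
            scores = np.array([fitness(ind) for ind in union])
            pop = union[np.argsort(scores)[:mu]]          # keep the best mu
        return pop[0]

    # Synthetic "patient" data generated from known parameters.
    t = np.linspace(0, 10, 200)
    y = 1.5 * np.sin(2 * np.pi * t / 4.0) + 0.05 * rng.normal(size=t.size)
    best = mu_plus_lambda(lambda p: compound_fitness(p, t, y), x0=np.array([1.0, 3.0]))
    print("estimated amplitude/period:", best)
    ```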

  11. Some aspects of adapting computational mesh to complex flow domains and structures with application to blown shock layer and base flow

    NASA Technical Reports Server (NTRS)

    Lombard, C. K.; Lombard, M. P.; Menees, G. P.; Yang, J. Y.

    1980-01-01

    Several aspects connected with the notion of computation with flow oriented mesh systems are presented. Simple, effective approaches to the ideas discussed are demonstrated in current applications to blown forebody shock layer flow and full bluff body shock layer flow including the massively separated wake region.

  12. Mesh quality control for multiply-refined tetrahedral grids

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger

    1994-01-01

    A new algorithm for controlling the quality of multiply-refined tetrahedral meshes is presented in this paper. The basic dynamic mesh adaption procedure allows localized grid refinement and coarsening to efficiently capture aerodynamic flow features in computational fluid dynamics problems; however, repeated application of the procedure may significantly deteriorate the quality of the mesh. Results presented show the effectiveness of this mesh quality algorithm and its potential in the area of helicopter aerodynamics and acoustics.

  13. 4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling

    NASA Astrophysics Data System (ADS)

    Zhong, Zichun; Gu, Xuejun; Mao, Weihua; Wang, Jing

    2016-02-01

    A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on a voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computational efficiency and motion estimation accuracy of SMEIR for 4D-CBCT through developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This will help with the accurate visualization and motion estimation of the tumor on the organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh based strategy outperformed the conventional Feldkamp-Davis-Kress, iterative total variation minimization, original SMEIR and single meshing method based on both qualitative and quantitative evaluations.

  14. 4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling

    PubMed Central

    Zhong, Zichun; Gu, Xuejun; Mao, Weihua; Wang, Jing

    2016-01-01

    A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on a voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computational efficiency and motion estimation accuracy of SMEIR for 4D-CBCT through developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This will help with the accurate visualization and motion estimation of the tumor on the organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh based strategy outperformed the conventional Feldkamp–Davis–Kress, iterative total variation minimization, original SMEIR and single meshing method based on both qualitative and quantitative evaluations. PMID:26758496

  15. A method to individualize adaptive planning target volumes for deformable targets

    NASA Astrophysics Data System (ADS)

    Wright, Pauliina; Redpath, Anthony Thomas; Høyer, Morten; Muren, Ludvig Paul

    2009-12-01

    We have investigated a method to individualize the planning target volume (PTV) for deformable targets in radiotherapy by combining a computer tomography (CT) scan with multiple cone beam (CB)CT scans. All combinations of the CT and up to five initial CBCTs were considered. To exclude translational motion, the clinical target volumes (CTVs) in the CBCTs were matched to the CTV in the CT. PTVs investigated were the unions, the intersections and all other structures defined by a volume with a constant CTV location frequency. The method was investigated for three bladder cancer patients with a CT and 20-27 CBCTs. Reliable alternatives to a standard PTV required use of at least four scans for planning. The CTV unions of four or five scans gave similar results when considering the fraction of individual repeat scan CTVs they volumetrically covered to at least 99%. For patient 1, 64% of the repeat scan CTVs were covered by these unions and for patient 2, 86% were covered. Further, the PTVs defined by the volume occupied by the CTV in all except one of the four or five planning scans seemed clinically feasible. On average, 52% of the repeat CBCT CTVs for patient 1 and 64% for patient 2 were covered to minimum 99% of their total volume. For patient 3, the method failed due to poor volume control of the bladder. The suggested PTVs could, with considerably improved conformity, complement the standard PTV.

  16. Demonstration of a 17 cm robust carbon fiber deformable mirror for adaptive optics

    SciTech Connect

    Ammons, S M; Hart, M; Coughenour, B; Romeo, R; Martin, R; Rademacher, M

    2011-09-12

    Carbon-fiber reinforced polymer (CFRP) composite is an attractive material for fabrication of optics due to its high stiffness-to-weight ratio, robustness, zero coefficient of thermal expansion (CTE), and the ability to replicate multiple optics from the same mandrel. We use 8 and 17 cm prototype CFRP thin-shell deformable mirrors to show that residual CTE variation may be addressed with mounted actuators for a variety of mirror sizes. We present measurements of surface quality at a range of temperatures characteristic of mountaintop observatories. For the 8 cm piece, the figure error of the Al-coated reflective surface under best actuator correction is ~43 nm RMS. The 8 cm mirror has a low surface error internal to the outer ring of actuators (17 nm RMS at 20 °C and 33 nm RMS at -5 °C). Surface roughness is low (< 3 nm P-V) at a variety of temperatures. We present new figure quality measurements of the larger 17 cm mirror, showing that the intra-actuator figure error internal to the outer ring of actuators (38 nm RMS surface with one-third the actuator density of the 8 cm mirror) does not scale sharply with mirror diameter.

  17. Adaptive Liver Stereotactic Body Radiation Therapy: Automated Daily Plan Reoptimization Prevents Dose Delivery Degradation Caused by Anatomy Deformations

    SciTech Connect

    Leinders, Suzanne M.; Breedveld, Sebastiaan; Méndez Romero, Alejandra; Schaart, Dennis; Seppenwoolde, Yvette; Heijmen, Ben J.M.

    2013-12-01

    Purpose: To investigate how dose distributions for liver stereotactic body radiation therapy (SBRT) can be improved by using automated, daily plan reoptimization to account for anatomy deformations, compared with setup corrections only. Methods and Materials: For 12 tumors, 3 strategies for dose delivery were simulated. In the first strategy, computed tomography scans made before each treatment fraction were used only for patient repositioning before dose delivery for correction of detected tumor setup errors. In the adaptive second and third strategies, in addition to the isocenter shift, intensity modulated radiation therapy beam profiles were reoptimized or both intensity profiles and beam orientations were reoptimized, respectively. All optimizations were performed with a recently published algorithm for automated, multicriteria optimization of both beam profiles and beam angles. Results: In 6 of 12 cases, violations of organ-at-risk (i.e., heart, stomach, kidney) constraints of 1 to 6 Gy in single fractions occurred in cases of tumor repositioning only. By using the adaptive strategies, these could be avoided (<1 Gy). For 1 case, this required slightly underdosing the planning target volume. For 2 cases with restricted tumor dose in the planning phase to avoid organ-at-risk constraint violations, fraction doses could be increased by 1 and 2 Gy because of more favorable anatomy. Daily reoptimization of both beam profiles and beam angles (third strategy) performed slightly better than reoptimization of profiles only, but the latter required only a few minutes of computation time, whereas full reoptimization took several hours. Conclusions: This simulation study demonstrated that replanning based on daily acquired computed tomography scans can improve liver stereotactic body radiation therapy dose delivery.

  18. Aeroelastic Deformation: Adaptation of Wind Tunnel Measurement Concepts to Full-Scale Vehicle Flight Testing

    NASA Technical Reports Server (NTRS)

    Burner, Alpheus W.; Lokos, William A.; Barrows, Danny A.

    2005-01-01

    The adaptation of a proven wind tunnel test technique, known as Videogrammetry, to flight testing of full-scale vehicles is presented. A description is presented of the technique used at NASA's Dryden Flight Research Center for the measurement of the change in wing twist and deflection of an F/A-18 research aircraft as a function of both time and aerodynamic load. Requirements for in-flight measurements are compared and contrasted with those for wind tunnel testing. The methodology for the flight-testing technique and differences compared to wind tunnel testing are given. Measurement and operational comparisons to an older in-flight system known as the Flight Deflection Measurement System (FDMS) are presented.

  19. Random walks with efficient search and contextually adapted image similarity for deformable registration.

    PubMed

    Tang, Lisa Y W; Hamarneh, Ghassan

    2013-01-01

    We develop a random walk-based image registration method that incorporates two novelties: 1) a progressive optimization scheme that conducts the solution search efficiently via a novel use of information derived from the obtained probabilistic solution, and 2) a data-likelihood re-weighting step that contextually performs feature selection in a spatially adaptive manner so that the data costs are based primarily on trusted information sources. Synthetic experiments on three public datasets of different anatomical regions and modalities showed that our method performed efficient search without sacrificing registration accuracy. Experiments performed on 60 real brain image pairs from a public dataset also demonstrated our method's better performance over existing non-probabilistic image registration methods.

  20. Non-rigid image registration for adaptive radiotherapy using free-form deformation

    NASA Astrophysics Data System (ADS)

    Wurst, Gernot; Bendl, Rolf

    In adaptive radiotherapy, deviations of the current patient geometry from the planning data must be known at treatment time so that the treatment plan can be adapted. The state of the art in this context is rigid registration of planning and control data, which, however, does not adequately account for more complex, non-rigid deformations. A method was therefore developed that describes these complex deformations with a free-form deformation model. The translation vectors required as input were determined by template matching. The existing deformations were found to be largely detected. Furthermore, owing to its favorable run-time behavior, the method is well suited for adaptive radiotherapy.

  1. Exact and Adaptive Signed Distance Fields Computation for Rigid and Deformable Models on GPUs.

    PubMed

    Liu, Fuchang; Kim, Young J

    2014-05-01

    Most techniques for real-time construction of a signed distance field, whether on a CPU or GPU, involve approximate distances. We use a GPU to build an exact adaptive distance field, constructed from an octree by using the Morton code. We use rectangle-swept spheres to construct a bounding volume hierarchy (BVH) around a triangulated model. To speed up BVH construction, we can use a multi-BVH structure to improve the workload balance between GPU processors. An upper bound on distance to the model provided by the octree itself allows us to reduce the number of BVHs involved in determining the distances from the centers of octree nodes at successively lower levels, prior to an exact distance query involving the remaining BVHs. Distance fields can be constructed 35-64 times as fast as a serial CPU implementation of a similar algorithm, allowing us to simulate a piece of fabric interacting with the Stanford Bunny at 20 frames per second.
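
    A minimal sketch of the Morton (Z-order) encoding commonly used to linearize octree nodes, as mentioned above; this is a standard CPU bit-interleaving formulation, not the paper's GPU kernel.

    ```python
    def part1by2(n: int) -> int:
        """Spread the low 10 bits of n so there are two zero bits between them."""
        n &= 0x3FF
        n = (n | (n << 16)) & 0x030000FF
        n = (n | (n << 8)) & 0x0300F00F
        n = (n | (n << 4)) & 0x030C30C3
        n = (n | (n << 2)) & 0x09249249
        return n

    def morton3d(x: int, y: int, z: int) -> int:
        """Interleave the bits of integer octree coordinates (x, y, z) into a
        single Morton code, so spatially close nodes get close codes."""
        return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

    # Example: codes for the eight children of an octree node at one level.
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                print((dx, dy, dz), "->", morton3d(dx, dy, dz))
    ```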

  2. MO-C-17A-13: Uncertainty Evaluation of CT Image Deformable Registration for H and N Cancer Adaptive Radiotherapy

    SciTech Connect

    Qin, A; Yan, D

    2014-06-15

    Purpose: To evaluate uncertainties of organ specific Deformable Image Registration (DIR) for H and N cancer Adaptive Radiation Therapy (ART). Methods: A commercial DIR evaluation tool, which includes a digital phantom library of 8 patients, and the corresponding “Ground truth Deformable Vector Field” (GT-DVF), was used in the study. Each patient in the phantom library includes the GT-DVF created from a pair of CT images acquired prior to and at the end of the treatment course. Five DIR tools, including 2 commercial tools (CMT1, CMT2), 2 in-house (IH-FFD1, IH-FFD2), and a classic DEMON algorithm, were applied on the patient images. The resulting DVF was compared to the GT-DVF voxel by voxel. Organ specific DVF uncertainty was calculated for 10 ROIs: Whole Body, Brain, Brain Stem, Cord, Lips, Mandible, Parotid, Esophagus and Submandibular Gland. A registration error-volume histogram was constructed for comparison. Results: The uncertainty is relatively small for brain stem, cord and lips, while large in the parotid and submandibular gland. CMT1 achieved the best overall accuracy (on whole body, mean vector error of 8 patients: 0.98±0.29 mm). For brain, mandible, right parotid, left parotid and submandibular gland, the classic DEMON algorithm had the lowest uncertainty (0.49±0.09, 0.51±0.16, 0.46±0.11, 0.50±0.11 and 0.69±0.47 mm respectively). For brain stem, cord and lips, the DVF from CMT1 has the best accuracy (0.28±0.07, 0.22±0.08 and 0.27±0.12 mm respectively). All algorithms had the largest right parotid uncertainty on patient #7, which has an image artifact caused by tooth implantation. Conclusion: The uncertainty of deformable CT image registration depends strongly on the registration algorithm and is organ specific. Large uncertainty most likely appears at the location of soft-tissue organs far from the bony structures. Among all 5 DIR methods, the classic DEMON and CMT1 seem to be the best to limit the uncertainty within 2 mm for all OARs. Partially supported by

  3. A comparison of tetrahedral mesh improvement techniques

    SciTech Connect

    Freitag, L.A.; Ollivier-Gooch, C.

    1996-12-01

    Automatic mesh generation and adaptive refinement methods for complex three-dimensional domains have proven to be very successful tools for the efficient solution of complex application problems. These methods can, however, produce poorly shaped elements that cause the numerical solution to be less accurate and more difficult to compute. Fortunately, the shape of the elements can be improved through several mechanisms, including face-swapping techniques that change local connectivity and optimization-based mesh smoothing methods that adjust grid point location. The authors consider several criteria for each of these two methods and compare the quality of several meshes obtained by using different combinations of swapping and smoothing. Computational experiments show that swapping is critical to the improvement of general mesh quality and that optimization-based smoothing is highly effective in eliminating very small and very large angles. The highest quality meshes are obtained by using a combination of swapping and smoothing techniques.
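
    As a simpler relative of the optimization-based smoothing compared above (not the authors' algorithm), the sketch below applies plain Laplacian smoothing, repeatedly moving each free node toward the average of its edge-connected neighbors; the data layout is an assumption.

    ```python
    import numpy as np

    def laplacian_smooth(coords, edges, fixed, iterations=20, relax=0.5):
        """Smooth a mesh by repeatedly moving each free node toward the
        average position of its edge-connected neighbors.

        coords : (n, dim) node coordinates (a modified copy is returned)
        edges  : iterable of (i, j) node index pairs
        fixed  : set of node indices that must not move (e.g. boundary nodes)
        """
        coords = coords.astype(float).copy()
        neighbors = {i: set() for i in range(len(coords))}
        for i, j in edges:
            neighbors[i].add(j)
            neighbors[j].add(i)
        for _ in range(iterations):
            for i, nbrs in neighbors.items():
                if i in fixed or not nbrs:
                    continue
                target = coords[list(nbrs)].mean(axis=0)
                coords[i] += relax * (target - coords[i])
        return coords

    # Example: a badly placed interior node inside a square of fixed corners.
    coords = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.9, 0.9]], dtype=float)
    edges = [(0, 4), (1, 4), (2, 4), (3, 4), (0, 1), (1, 2), (2, 3), (3, 0)]
    print(laplacian_smooth(coords, edges, fixed={0, 1, 2, 3})[4])  # -> ~[0.5, 0.5]
    ```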

  4. Design of a Compact, Bimorph Deformable Mirror-Based Adaptive Optics Scanning Laser Ophthalmoscope.

    PubMed

    He, Yi; Deng, Guohua; Wei, Ling; Li, Xiqi; Yang, Jinsheng; Shi, Guohua; Zhang, Yudong

    2016-01-01

    We have designed, constructed and tested an adaptive optics scanning laser ophthalmoscope (AOSLO) using a bimorph mirror. The simulated AOSLO system achieves diffraction-limited criterion through all the raster scanning fields (6.4 mm pupil, 3° × 3° on pupil). The bimorph mirror-based AOSLO corrected ocular aberrations in model eyes to less than 0.1 μm RMS wavefront error with a closed-loop bandwidth of a few Hz. Facilitated with a bimorph mirror at a stroke of ±15 μm with 35 elements and an aperture of 20 mm, the new AOSLO system has a size only half that of the first-generation AOSLO system. The significant increase in stroke allows for large ocular aberrations such as defocus in the range of ±600° and astigmatism in the range of ±200°, thereby fully exploiting the AO correcting capabilities for diseased human eyes in the future.

  5. An Adaptive Flow Solver for Air-Borne Vehicles Undergoing Time-Dependent Motions/Deformations

    NASA Technical Reports Server (NTRS)

    Singh, Jatinder; Taylor, Stephen

    1997-01-01

    This report describes a concurrent Euler flow solver for flows around complex 3-D bodies. The solver is based on a cell-centered finite volume methodology on 3-D unstructured tetrahedral grids. In this algorithm, spatial discretization for the inviscid convective term is accomplished using an upwind scheme. A localized reconstruction is done for flow variables which is second order accurate. Evolution in time is accomplished using an explicit three-stage Runge-Kutta method which has second order temporal accuracy. This is adapted for concurrent execution using another proven methodology based on concurrent graph abstraction. This solver operates on heterogeneous network architectures. These architectures may include a broad variety of UNIX workstations and PCs running Windows NT, symmetric multiprocessors and distributed-memory multi-computers. The unstructured grid is generated using commercial grid generation tools. The grid is automatically partitioned using a concurrent algorithm based on heat diffusion. This results in memory requirements that are inversely proportional to the number of processors. The solver uses automatic granularity control and resource management techniques both to balance load and communication requirements, and deal with differing memory constraints. These ideas are again based on heat diffusion. Results are subsequently combined for visualization and analysis using commercial CFD tools. Flow simulation results are demonstrated for a constant section wing at subsonic, transonic, and a supersonic case. These results are compared with experimental data and numerical results of other researchers. Performance results are under way for a variety of network topologies.

  6. MeshEZW: an image coder using mesh and finite elements

    NASA Astrophysics Data System (ADS)

    Landais, Thomas; Bonnaud, Laurent; Chassery, Jean-Marc

    2003-08-01

    In this paper, we present a new method to compress the information in an image, called MeshEZW. The proposed approach is based on the finite element method, a mesh construction and a zerotree method. The zerotree method is an adaptation of the EZW algorithm with two new symbols that increase performance. These steps allow a progressive representation of the image through the automatic construction of a bitstream. The mesh structure is adapted to the image compression domain and is defined to allow video compression. The coder is described and some preliminary results are discussed.

  7. Characterization of the cohesion of the SMA/polymer interface in an adaptive deformable structure

    NASA Astrophysics Data System (ADS)

    Fischer-Rousseau, Charles

    Adaptive deformable structures are expected to play an important role in aeronautics, among other fields, and shape memory alloys (SMA) are among the most promising candidates. Much work remains, however, before such structures meet the demanding requirements of integration in an aeronautical context. Previous research has shown that the debonding resistance of the SMA/polymer interface can be a limiting factor in the performance of adaptive deformable structures. In this work, the effect of various surface treatments, wire geometries and polymer types on the debonding resistance of the SMA/polymer interface is evaluated. The wire geometry is modified by a specific combination of cold rolling and post-deformation annealing that preserves the shape-memory properties while reducing the cross-sectional area of the wire. The most promising thermomechanical treatment is proposed. A new method for evaluating debonding resistance is developed: rather than pulling the wires out and measuring the maximum force, the contraction tests rely on the ability of SMA wires to contract when they have been embedded in a stretched state and are heated by the Joule effect. The hypothesis is that these tests better approximate the conditions encountered in an adaptive deformable structure, where the wires contract rather than being pulled out by a force external to the structure. Although partial debonding was observed for all samples, the debonded area was larger for samples with a larger pre-strain. The debonding front appeared to stop progressing after the initial heating cycles when the heating rate was low. A numerical model simulating the

  8. First light of the deformable secondary mirror-based adaptive optics system on 1.8m telescope

    NASA Astrophysics Data System (ADS)

    Guo, Youming; Zhang, Ang; Fan, Xinlong; Rao, Changhui; Wei, Ling; Xian, Hao; Wei, Kai; Zhang, Xiaojun; Guan, Chunlin; Li, Min; Zhou, Luchun; Jin, Kai; Zhang, Junbo; Zhou, Longfeng; Zhang, Xuejun; Zhang, Yudong

    2016-07-01

    An adaptive optics system (AOS), which consists of a 73-element piezoelectric deformable secondary mirror (DSM), a 9x9 Shack-Hartmann wavefront sensor and a real-time controller, has been integrated on the 1.8 m telescope at the Gaomeigu site of Yunnan Astronomical Observatory, Chinese Academy of Sciences. Compared to a traditional AOS at the Coude focus, the DSM-based AOS uses far fewer reflections, which reduces thermal noise and increases the energy transmitted to the system. Before the first on-sky test, the system was demonstrated in the laboratory by compensating simulated atmospheric turbulence generated by a rotating phase screen. A new multichannel-modulation calibration method for measuring the interaction matrix of the DSM-based AOS is proposed. After integration on the 1.8 m telescope, closed-loop compensation of atmospheric turbulence with the DSM-based AOS was achieved, and the first-light results from the on-sky experiment are reported.

  9. Dynamic performance of microelectromechanical systems deformable mirrors for use in an active/adaptive two-photon microscope

    NASA Astrophysics Data System (ADS)

    Archer-Zhang, Christian Chunzi; Foster, Warren B.; Downey, Ryan D.; Arrasmith, Christopher L.; Dickensheets, David L.

    2016-12-01

    Active optics such as deformable mirrors can be used to control both focal depth and aberrations during scanning laser microscopy. If the focal depth can be changed dynamically during scanning, then imaging of oblique surfaces becomes possible. If aberrations can be corrected dynamically during scanning, an image can be optimized throughout the field of view. Here, we characterize the speed and dynamic precision of a Boston Micromachines Corporation Multi-DM 140 element aberration correction mirror and a Revibro Optics 4-zone focus control mirror to assess suitability for use in an active and adaptive two-photon microscope. Tests for the multi-DM include both step response and sinusoidal frequency sweeps of specific Zernike modes (defocus, spherical aberration, coma, astigmatism, and trefoil). We find wavefront error settling times for mode amplitude steps as large as 400 nm to be less than 52 μs, with 3 dB frequencies ranging from 6.5 to 10 kHz. The Revibro Optics mirror was tested for step response only, with wavefront error settling time less than 80 μs for defocus steps up to 3000 nm, and less than 45 μs for spherical aberration steps up to 600 nm. These response speeds are sufficient for intrascan correction at scan rates typical of two-photon microscopy.

  10. 3D meshes from medical volume data

    NASA Astrophysics Data System (ADS)

    Zelzer, Sascha; Meinzer, Hans-Peter

    This work describes a template-based method for generating adaptive hexahedral meshes from volume data that may contain complicated concave structures. A complete set of templates is generated that allows the boundaries of concave regions to be decomposed more finely than adjacent areas, thereby reducing the total number of hexahedra. The algorithm works on arbitrary labeled volume data and produces an adaptive, conformal, all-hexahedral mesh.

  11. Toward adaptive radiotherapy for head and neck patients: Uncertainties in dose warping due to the choice of deformable registration algorithm

    SciTech Connect

    Veiga, Catarina; Royle, Gary; Lourenço, Ana Mónica; Mouinuddin, Syed; Herk, Marcel van; Modat, Marc; Ourselin, Sébastien; McClelland, Jamie R.

    2015-02-15

    Purpose: The aims of this work were to evaluate the performance of several deformable image registration (DIR) algorithms implemented in our in-house software (NiftyReg) and the uncertainties inherent to using different algorithms for dose warping. Methods: The authors describe a DIR based adaptive radiotherapy workflow, using CT and cone-beam CT (CBCT) imaging. The transformations that mapped the anatomy between the two time points were obtained using four different DIR approaches available in NiftyReg. These included a standard unidirectional algorithm and more sophisticated bidirectional ones that encourage or ensure inverse consistency. The forward (CT-to-CBCT) deformation vector fields (DVFs) were used to propagate the CT Hounsfield units and structures to the daily geometry for “dose of the day” calculations, while the backward (CBCT-to-CT) DVFs were used to remap the dose of the day onto the planning CT (pCT). Data from five head and neck patients were used to evaluate the performance of each implementation based on geometrical matching, physical properties of the DVFs, and similarity between warped dose distributions. Geometrical matching was verified in terms of dice similarity coefficient (DSC), distance transform, false positives, and false negatives. The physical properties of the DVFs were assessed calculating the harmonic energy, determinant of the Jacobian, and inverse consistency error of the transformations. Dose distributions were displayed on the pCT dose space and compared using dose difference (DD), distance to dose difference, and dose volume histograms. Results: All the DIR algorithms gave similar results in terms of geometrical matching, with an average DSC of 0.85 ± 0.08, but the underlying properties of the DVFs varied in terms of smoothness and inverse consistency. When comparing the doses warped by different algorithms, we found a root mean square DD of 1.9% ± 0.8% of the prescribed dose (pD) and that an average of 9% ± 4% of

  12. Evaluation of skin and muscular deformations in a non-rigid motion analysis

    NASA Astrophysics Data System (ADS)

    Goffredo, Michela; Carli, Marco; Conforto, Silvia; Bibbo, Daniele; Neri, Alessandro; D'Alessio, Tommaso

    2005-04-01

    During contraction and stretching, muscles change shape and size, producing deformation of skin tissues and a modification of the body segment shape. In human motion analysis this phenomenon must be taken into account, so approximating body limbs as rigid structures is restrictive. The present work aims at evaluating skin and muscular deformation and at modeling the elastic behavior of body segments by analysing video sequences that capture a sport gesture. The soft tissue modeling is accomplished using triangular meshes that automatically adapt to the body segment during the execution of a static muscle contraction. The adaptive triangular mesh is built on reference points whose motion is estimated with a technique based on the Gauss-Laguerre expansion. Promising results have been obtained by applying the proposed method to a video sequence containing an isometric contraction of the upper arm.

  13. Dynamic performance of MEMS deformable mirrors for use in an active/adaptive two-photon microscope

    NASA Astrophysics Data System (ADS)

    Zhang, Christian C.; Foster, Warren B.; Downey, Ryan D.; Arrasmith, Christopher L.; Dickensheets, David L.

    2016-03-01

    Active optics can facilitate two-photon microscopic imaging deep in tissue. We are investigating fast focus control mirrors used in concert with an aberration correction mirror to control the axial position of focus and system aberrations dynamically during scanning. With an adaptive training step, sample-induced aberrations may be compensated as well. If sufficiently fast and precise, active optics may be able to compensate under-corrected imaging optics as well as sample aberrations to maintain diffraction-limited performance throughout the field of view. Toward this end we have measured a Boston Micromachines Corporation Multi-DM 140 element deformable mirror, and a Revibro Optics electrostatic 4-zone focus control mirror to characterize dynamic performance. Tests for the Multi-DM included both step response and sinusoidal frequency sweeps of specific Zernike modes. For the step response we measured 10%-90% rise times for the target Zernike amplitude, and wavefront rms error settling times. Frequency sweeps identified the 3 dB bandwidth of the mirror when attempting to follow a sinusoidal amplitude trajectory for a specific Zernike mode. For five tested Zernike modes (defocus, spherical aberration, coma, astigmatism and trefoil) we find error settling times for mode amplitudes up to 400 nm to be less than 52 μs, and 3 dB frequencies range from 6.5 kHz to 10 kHz. The Revibro Optics mirror was tested for step response only, with error settling time of 80 μs for a large 3 μm defocus step, and settling time of only 18 μs for a 400 nm spherical aberration step. These response speeds are sufficient for intra-scan correction at scan rates typical of two-photon microscopy.

  14. Earth As An Unstructured Mesh and Its Recovery from Seismic Waveform Data

    NASA Astrophysics Data System (ADS)

    De Hoop, M. V.

    2015-12-01

    We consider multi-scale representations of Earth's interior from the point of view of their possible recovery from multi- and high-frequency seismic waveform data. These representations are intrinsically connected to (geologic, tectonic) structures, that is, geometric parametrizations of Earth's interior. Indeed, we address the construction and recovery of such parametrizations using local iterative methods with appropriately designed data misfits and guaranteed convergence. The geometric parametrizations contain interior boundaries (defining, for example, faults, salt bodies, tectonic blocks, slabs) which can, in principle, be obtained from successive segmentation. We make use of unstructured meshes. For the adaptation and recovery of an unstructured mesh we introduce an energy functional which is derived from the Hausdorff distance. Via an augmented Lagrangian method, we incorporate the mentioned data misfit. The recovery is constrained by shape optimization of the interior boundaries, and is reminiscent of Hausdorff warping. We use elastic deformation via finite elements as a regularization while following a two-step procedure. The first step is an update determined by the energy functional; in the second step, we modify the outcome of the first step where necessary to ensure that the new mesh is regular. This modification entails an array of techniques including topology correction involving interior boundary contacting and breakup, edge warping and edge removal. We implement this as a feedback mechanism from volume to interior boundary mesh optimization. We invoke and apply a criterion of mesh quality control for coarsening, and for dynamical local multi-scale refinement. We present a novel (fluid-solid) numerical framework based on the Discontinuous Galerkin method.

  15. Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics

    NASA Technical Reports Server (NTRS)

    Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy

    2006-01-01

    This viewgraph presentation reviews the use of Adaptive Mesh Refinement (AMR) in simulating the crustal dynamics of the Earth's surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating/running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's surface is spurred by proposed future NASA missions, such as InSAR for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computational fluid dynamics for predictive simulation of complex flows around complex structures.

  16. Evolution of the mandibular mesh implant.

    PubMed

    Salyer, K E; Johns, D F; Holmes, R E; Layton, J G

    1977-07-01

    Between 1960 and 1972, the Dallas Veterans Administration Hospital Maxillofacial Research Laboratory developed and made over 150 cast-mesh implants. Successive designs were ovoid, circular, and double-lumened in cross section to improve implant strength, surface area for bioattachment, and adjustability. Sleeves, collars, and bows were employed in the assembly of these implants, with an acrylic condylar head attached when indicated. In 1972, our laboratory developed a mandibular mesh tray, cast in one piece on a single sprue, with preservation of the vertically adjustable ramus. Stainless steel replaced Vitallium because of its greater malleability. Essentially, a lost-wax technique is used to cast the mesh tray. The model of a mandibular segment is duplicated as a refractory model. Mesh wax, made in our own custom-made die, is adapted to the refractory model. The unit is then sprued and invested. The wax is fired out of the mold in a gas furnace. Casting is done by the transferral of molten stainless steel from the crucible to the mold by centrifugal force in an electro-induction casting machine. Other mesh implants that have been developed are made from wire mesh, Dacron mesh, cast Ticonium, and hydroformed titanium.

  17. A peeling mesh.

    PubMed

    Bohmer, R D; Byrne, P D; Maddern, G J

    2002-07-01

    A number of different materials are available for incisional hernia repair. Benefits of the various types are controversial and are partly dependent on the anatomical placement of the mesh. Composite mesh has been introduced to provide tissue ingrowth for strength and a non-adherent side to protect the bowel, these layers being laminated together. This report is on the separation of layers in an infected mesh and adherence of the expanded polytetrafluoroethylene layer to the small bowel.

  18. Lagrange-mesh calculations in momentum space.

    PubMed

    Lacroix, Gwendolyn; Semay, Claude; Buisseret, Fabien

    2012-08-01

    The Lagrange-mesh method is a powerful method to solve eigenequations written in configuration space. It is very easy to implement and very accurate. Using a Gauss quadrature rule, the method requires only the evaluation of the potential at some mesh points. The eigenfunctions are expanded in terms of regularized Lagrange functions which vanish at all mesh points except one. It is shown that this method can be adapted to solve eigenequations written in momentum space, keeping the convenience and the accuracy of the original technique. In particular, the kinetic operator is a diagonal matrix. Observables and wave functions in both configuration space and momentum space can also be easily computed with good accuracy using only eigenfunctions computed in the momentum space. The method is tested with Gaussian and Yukawa potentials, requiring, respectively, a small and a large mesh to reach convergence. Corresponding wave functions in both spaces are compared with each other using the Fourier transform.

  19. A finite-element mesh generator based on growing neural networks.

    PubMed

    Triantafyllidis, D G; Labridis, D P

    2002-01-01

    A mesh generator for the production of high-quality finite-element meshes is proposed. The mesh generator uses an artificial neural network, which grows during the training process in order to adapt itself to a prespecified probability distribution. The initial mesh is a constrained Delaunay triangulation of the domain to be triangulated. Two new algorithms to accelerate the location of the best matching unit are introduced. The mesh generator has been found to produce high-quality meshes in a number of classic cases examined and is well suited for problems where the mesh density vector can be calculated in advance.
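
    The node-adaptation idea behind such self-organizing mesh generators can be illustrated with a neural-gas-style update, a close relative of the growing networks used here; the sketch below only redistributes a node cloud according to a prescribed density and omits the constrained Delaunay triangulation, the growth rule, and the accelerated best-matching-unit search. All names and parameters are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def neural_gas_adapt(nodes, sample, iterations=5000, lr0=0.5, lam0=10.0):
        """Move a cloud of prospective mesh nodes toward samples drawn from a
        target density (neural-gas update: all nodes move, weighted by their
        distance rank to the sample)."""
        nodes = nodes.copy()
        for t in range(iterations):
            frac = t / iterations
            lr = lr0 * (0.01 / lr0) ** frac       # exponentially decaying step
            lam = lam0 * (0.5 / lam0) ** frac     # shrinking neighborhood
            x = sample()
            ranks = np.argsort(np.argsort(np.linalg.norm(nodes - x, axis=1)))
            nodes += lr * np.exp(-ranks / lam)[:, None] * (x - nodes)
        return nodes

    # Target density: finer node spacing near the origin of the unit square,
    # standing in for a prespecified mesh-density distribution.
    def sample():
        return np.clip(np.abs(rng.normal(scale=0.25, size=2)), 0.0, 1.0)

    nodes = rng.uniform(size=(200, 2))            # initial uniform node cloud
    adapted = neural_gas_adapt(nodes, sample)
    print("node count:", len(adapted), " mean position:", adapted.mean(axis=0))
    ```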

  20. A voxel-based finite element model for the prediction of bladder deformation

    SciTech Connect

    Chai Xiangfei; Herk, Marcel van; Hulshof, Maarten C. C. M.; Bel, Arjan

    2012-01-15

    manual contours and <0.02 cm difference in mean standard deviation of residual errors). The average equation solving time (without manual intervention) for the first two types of hexahedral meshes increased to 2.3 h and 2.6 h compared to the 1.1 h needed for the tetrahedral mesh; however, the low-resolution nonuniform hexahedral mesh dramatically decreased the equation-solving time to 3 min without reducing accuracy. Conclusions: Voxel-based mesh generation allows fast, automatic, and robust creation of finite element bladder models directly from binary segmentation images without user intervention. Even the low-resolution voxel-based hexahedral mesh yields comparable accuracy in bladder shape prediction and is more than 20 times faster than the tetrahedral mesh. This approach makes it more feasible and accessible to apply the FE method to model bladder deformation in adaptive radiotherapy.
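
    A minimal sketch of the basic voxel-based idea, one hexahedral element per foreground voxel with shared corner nodes; it illustrates the general approach only, not the paper's nonuniform meshing or finite element solver. Names and spacing are assumptions.

    ```python
    import numpy as np

    def voxels_to_hex_mesh(mask, spacing=(1.0, 1.0, 1.0)):
        """Turn a 3D binary mask into a hexahedral mesh: one hex element per
        foreground voxel, with corner nodes shared between neighboring voxels."""
        node_ids, nodes, elements = {}, [], []

        def node(i, j, k):
            key = (i, j, k)
            if key not in node_ids:
                node_ids[key] = len(nodes)
                nodes.append((i * spacing[0], j * spacing[1], k * spacing[2]))
            return node_ids[key]

        for i, j, k in zip(*np.nonzero(mask)):
            # Standard 8-node hexahedron connectivity for the voxel (i, j, k).
            elements.append([node(i,     j,     k),     node(i + 1, j,     k),
                             node(i + 1, j + 1, k),     node(i,     j + 1, k),
                             node(i,     j,     k + 1), node(i + 1, j,     k + 1),
                             node(i + 1, j + 1, k + 1), node(i,     j + 1, k + 1)])
        return np.array(nodes), np.array(elements)

    # Example: a 2x2x1 block of foreground voxels.
    mask = np.zeros((3, 3, 2), dtype=bool)
    mask[:2, :2, 0] = True
    nodes, elements = voxels_to_hex_mesh(mask)
    print(len(nodes), "nodes,", len(elements), "hexahedra")   # 18 nodes, 4 hexahedra
    ```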

  1. Automatic finite-element mesh generation using artificial neural networks. Part 1: Prediction of mesh density

    SciTech Connect

    Chedid, R.; Najjar, N.

    1996-09-01

    One of the inconveniences associated with the existing finite-element packages is the need for an educated user to develop a correct mesh at the preprocessing level. Procedures which start with a coarse mesh and attempt serious refinements, as is the case in most adaptive finite-element packages, are time-consuming and costly. Hence, it is very important to develop a tool that can provide a mesh that either leads immediately to an acceptable solution, or would require fewer correcting steps to achieve better results. In this paper, the authors present a technique for automatic mesh generation based on artificial neural networks (ANN). The essence of this technique is to predict the mesh density distribution of a given model, and then supply this information to a Kohonen neural network which provides the final mesh. Prediction of mesh density is accomplished by a simple feedforward neural network which has the ability to learn the relationship between mesh density and model geometric features. It will be shown that ANNs are able to recognize delicate areas where a sharp variation of the magnetic field is expected. Examples of 2-D models are provided to illustrate the usefulness of the proposed technique.

  2. A study of the anatomic changes and dosimetric consequences in adaptive CRT of non-small-cell lung cancer using deformable CT and CBCT image registration.

    PubMed

    Ma, Changsheng; Hou, Yong; Li, Hongsheng; Li, Dengwang; Zhang, Yingjie; Chen, Siye; Yin, Yong

    2014-04-01

    The aim of this study is to evaluate anatomic lung tumor changes and dosimetric consequences utilizing deformable daily kilovolt (KV) cone-beam computed tomography (CBCT) image registration. Five patients diagnosed with NSCLC were treated with three-dimensional conformal radiotherapy (3D CRT) and 10 daily KV CBCT image sets were acquired for each patient. Each CBCT image and plan CT were imported into the deformable image registration (DIR) system. The plan CT image was deformed by the DIR system and a new contour on CBCT was obtained by using the auto-contouring function of the DIR. These contours were individually marked as CBCT f1, CBCT f2,..., and CBCT f10, and imported into a treatment planning system (TPS). The daily CBCT plan was individually generated with the same planning criteria based on the new contours. These plans were individually marked as CBCTp1, CBCTp2,..., and CBCTp10, followed by generating a dose accumulation plan (DA plan) in the original pCT image contour sets by adding all CBCT plans using the Varian Eclipse TPS. The maximum, minimum and mean doses to the planning target volume (PTV) in the 5 DA plans were the same as in the CT plans. However, the total-lung volumes receiving 5, 10, 20, 30, and 50 Gy in the DA plans were less than those of the CT plans. The maximum dose to the spinal cord in the DA plans was on average 27.96% less than in the CT plans. The mean doses to the left, right, and total lungs in the DA plans were reduced by 13.80%, 23.65%, and 12.96%, respectively. Adaptive 3D CRT based on deformable registration can reduce the dose to the lungs and the spinal cord while maintaining the same PTV dose coverage. Moreover, it provides a method for further adaptive radiotherapy exploration.

  3. An adaptive multiblock high-order finite-volume method for solving the shallow-water equations on the sphere

    DOE PAGES

    McCorquodale, Peter; Ullrich, Paul; Johansen, Hans; ...

    2015-09-04

    We present a high-order finite-volume approach for solving the shallow-water equations on the sphere, using multiblock grids on the cubed-sphere. This approach combines a Runge-Kutta time discretization with a fourth-order accurate spatial discretization, and includes adaptive mesh refinement and refinement in time. Results of tests show fourth-order convergence for the shallow-water equations as well as for advection in a highly deformational flow. Hierarchical adaptive mesh refinement allows solution error to be achieved that is comparable to that obtained with uniform resolution of the most refined level of the hierarchy, but with many fewer operations.

  4. Dosimetric and geometric evaluation of the use of deformable image registration in adaptive intensity-modulated radiotherapy for head-and-neck cancer

    PubMed Central

    Eiland, R.B.; Maare, C.; Sjöström, D.; Samsøe, E.; Behrens, C.F.

    2014-01-01

    The aim of this study was to carry out geometric and dosimetric evaluation of the usefulness of a deformable image registration algorithm utilized for adaptive head-and-neck intensity-modulated radiotherapy. Data consisted of seven patients, each with a planning CT (pCT), a rescanning CT (ReCT) and a cone beam CT (CBCT). The CBCT was acquired on the same day (±1 d) as the ReCT (i.e. at Fraction 17, 18, 23, 24 or 29). The ReCT served as ground truth. A deformed CT (dCT) with structures was created by deforming the pCT to the CBCT. The geometrical comparison was based on the volumes of the deformed, and the manually delineated structures on the ReCT. Likewise, the center of mass shift (CMS) and the Dice similarity coefficient were determined. The dosimetric comparison was performed by recalculating the initial treatment plan on the dCT and the ReCT. Dose–volume histogram (DVH) points and a range of conformity measures were used for the evaluation. We found a significant difference in the median volume of the dCT relative to that of the ReCT. Median CMS values were ∼2–5 mm, except for the spinal cord, where the median CMS was 8 mm. Dosimetric evaluation of target structures revealed small differences, while larger differences were observed for organs at risk. The deformed structures cannot fully replace manually delineated structures. Based on both geometrical and dosimetrical measures, there is a tendency for the dCT to overestimate the need for replanning, compared with the ReCT. PMID:24907340
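
    For reference, the Dice similarity coefficient used in such geometric comparisons can be computed from two binary masks as below; this is a generic formulation, not the study's implementation.

    ```python
    import numpy as np

    def dice_coefficient(mask_a, mask_b):
        """Dice similarity coefficient between two binary masks:
        DSC = 2 * |A intersect B| / (|A| + |B|); equal to 1 for identical structures."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        denom = a.sum() + b.sum()
        if denom == 0:
            return 1.0          # both structures empty: treat as perfect overlap
        return 2.0 * np.logical_and(a, b).sum() / denom

    # Example: two overlapping "structures" on a small grid.
    a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
    b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
    print(round(dice_coefficient(a, b), 3))   # 0.64
    ```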

  5. Prolene mesh mentoplasty.

    PubMed

    Ilhan, A Emre; Kayabasoglu, Gurkan; Kazikdas, K Cagdas; Goksel, Abdulkadir

    2011-04-01

    Augmentation mentoplasty is a cosmetic surgical procedure to correct chin retrusion or microgenia which usually requires placement of an alloplastic material over the pogonion, and which results in increased chin projection and a more aesthetically balanced facial profile. Polypropylene mesh is easy to purchase, widely available in a general hospital and most commonly used by general surgeons. In this series of 192 patients, we demonstrate our simple mentoplasty technique using prolene mesh, which can easily be combined with a rhinoplasty procedure, and discuss possible causes of infection and the rationale for using prolene mesh in such procedures.

  6. Optimized testing of meshes

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Ozden, Banu

    1990-01-01

    Efficient testing techniques for two-dimensional mesh interconnection networks are presented. The tests cover faults in the arbitration logic of the switches; this includes an examination of fault detection in the data paths, routing, and control circuitry, including the conflict resolution capabilities of mesh interconnection networks using topological test methods. The proposed methods are not implementation specific and can be applied to any design with a mesh topology. The topology and behavior of the network are described and definitions are presented. The fault model is defined and parallel testing methods for the entire network are given.

  7. NON-CONFORMING FINITE ELEMENTS; MESH GENERATION, ADAPTIVITY AND RELATED ALGEBRAIC MULTIGRID AND DOMAIN DECOMPOSITION METHODS IN MASSIVELY PARALLEL COMPUTING ENVIRONMENT

    SciTech Connect

    Lazarov, R; Pasciak, J; Jones, J

    2002-02-01

    Construction, analysis and numerical testing of efficient solution techniques for solving elliptic PDEs that allow for parallel implementation have been the focus of the research. A number of discretization and solution methods for solving second order elliptic problems that include mortar and penalty approximations and domain decomposition methods for finite elements and finite volumes have been investigated and analyzed. Techniques for parallel domain decomposition algorithms in the framework of PETC and HYPRE have been studied and tested. Hierarchical parallel grid refinement and adaptive solution methods have been implemented and tested on various model problems. A parallel code implementing the mortar method with algebraically constructed multiplier spaces was developed.

  8. A study on moving mesh finite element solution of the porous medium equation

    NASA Astrophysics Data System (ADS)

    Ngo, Cuong; Huang, Weizhang

    2017-02-01

    An adaptive moving mesh finite element method is studied for the numerical solution of the porous medium equation with and without variable exponents and absorption. The method is based on the moving mesh partial differential equation (MMPDE) approach and employs its newly developed implementation. The implementation has several improvements over the traditional one, including an explicit, compact form of the mesh velocities, ease of programming, and a lower likelihood of producing singular meshes. Three types of metric tensor, corresponding to uniform, arclength-based, and Hessian-based adaptive meshes, are considered. The method shows first-order convergence for uniform and arclength-based adaptive meshes, and second-order convergence for Hessian-based adaptive meshes. It is also shown that the method can be used for situations with complex free boundaries, emerging and splitting free boundaries, and the porous medium equation with variable exponents and absorption. Two-dimensional numerical results are presented.
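
    A one-dimensional illustration of the mesh-adaptation principle behind moving mesh methods (not the paper's MMPDE implementation): de Boor-style equidistribution of an arclength monitor function concentrates mesh points where the solution varies rapidly. Function names and the monitor parameter are assumptions.

    ```python
    import numpy as np

    def equidistribute(x, u, alpha=1.0):
        """Return a new 1D mesh with the same number of points that
        equidistributes the arclength monitor M = sqrt(1 + alpha*u_x^2)."""
        ux = np.gradient(u, x)
        monitor = np.sqrt(1.0 + alpha * ux**2)
        # Cumulative integral of the monitor (trapezoidal rule).
        cumulative = np.concatenate(
            ([0.0], np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x))))
        # Invert: place new nodes at equal increments of the cumulative monitor.
        targets = np.linspace(0.0, cumulative[-1], len(x))
        return np.interp(targets, cumulative, x)

    # Example: a steep front at x = 0.5 attracts mesh points.
    x = np.linspace(0.0, 1.0, 41)
    u = np.tanh(50.0 * (x - 0.5))
    x_new = equidistribute(x, u, alpha=10.0)
    print("smallest spacing moved from %.4f to %.4f"
          % (np.diff(x).min(), np.diff(x_new).min()))
    ```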

  9. Mesh implants: An overview of crucial mesh parameters

    PubMed Central

    Zhu, Lei-Ming; Schuster, Philipp; Klinge, Uwe

    2015-01-01

    Hernia repair is one of the most frequently performed surgical interventions that use mesh implants. This article evaluates crucial mesh parameters to facilitate selection of the most appropriate mesh implant, considering raw materials, mesh composition, structure parameters and mechanical parameters. A literature review was performed using the PubMed database. The most important mesh parameters in the selection of a mesh implant are the raw material, structural parameters and mechanical parameters, which should match the physiological conditions. The structural parameters, especially the porosity, are the most important predictors of the biocompatibility performance of synthetic meshes. Meshes with large pores exhibit less inflammatory infiltrate, connective tissue and scar bridging, which allows increased soft tissue ingrowth. The raw material and combination of raw materials of the used mesh, including potential coatings and textile design, strongly impact the inflammatory reaction to the mesh. Synthetic meshes made from innovative polymers combined with surface coating have been demonstrated to exhibit advantageous behavior in specialized fields. Monofilament, large-pore synthetic meshes exhibit advantages. The value of mesh classification based on mesh weight seems to be overestimated. Mechanical properties of meshes, such as anisotropy/isotropy, elasticity and tensile strength, are crucial parameters for predicting mesh performance after implantation. PMID:26523210

  10. Urogynecologic Surgical Mesh Implants

    MedlinePlus

    ... urogynecologic repair. Absorbable mesh will degrade and lose strength over time. It is not intended to provide long-term reinforcement to the repair site. As the material degrades, new tissue growth is intended to provide ...

  11. Hernia Surgical Mesh Implants

    MedlinePlus

    ... repaired hernia. Absorbable mesh will degrade and lose strength over time. It is not intended to provide long-term reinforcement to the repair site. As the material degrades, new tissue growth is intended to provide ...

  12. 2D Mesh Manipulation

    DTIC Science & Technology

    2011-11-01

    A two-dimensional flat plate mesh was created using the Gridgen software package (Ref. 13). This mesh (shown in Fig. 10) closely resembled a ... desired tolerance of the projection onto the surface. The geometry file on which the geometry surface is based can be easily generated using Gridgen ... by exporting a curve (or number of curves) under the INPUT/OUTPUT commands in the Gridgen interface (Ref. 13). Initially, the floating boundary

  13. Wireless Mesh Networks

    NASA Astrophysics Data System (ADS)

    Ishmael, Johnathan; Race, Nicholas

    Wireless Mesh Networks have emerged as an important technology in building next-generation networks. They are seen to have a range of benefits over traditional wired and wireless networks including low deployment costs, high scalability and resiliency to faults. Moreover, Wireless Mesh Networks (WMNs) are often described as being autonomic with self-* (healing and configuration) properties and their popularity has grown both as a research platform and as a commercially exploitable technology.

  14. A three-dimensional head-and-neck phantom for validation of multimodality deformable image registration for adaptive radiotherapy

    SciTech Connect

    Singhrao, Kamal; Kirby, Neil; Pouliot, Jean

    2014-12-15

    Purpose: To develop a three-dimensional (3D) deformable head-and-neck (H and N) phantom with realistic tissue contrast for both kilovoltage (kV) and megavoltage (MV) imaging modalities and use it to objectively evaluate deformable image registration (DIR) algorithms. Methods: The phantom represents H and N patient anatomy. It is constructed from thermoplastic, which becomes pliable in boiling water, and hardened epoxy resin. Using a system of additives, the Hounsfield unit (HU) values of these materials were tuned to mimic anatomy for both kV and MV imaging. The phantom opens along a sagittal midsection to reveal radiotransparent markers, which were used to characterize the phantom deformation. The deformed and undeformed phantoms were scanned with kV and MV imaging modalities. Additionally, a calibration curve was created to change the HUs of the MV scans to be similar to kV HUs, (MC). The extracted ground-truth deformation was then compared to the results of two commercially available DIR algorithms, from Velocity Medical Solutions and MIM software. Results: The phantom produced a 3D deformation, representing neck flexion, with a magnitude of up to 8 mm and was able to represent tissue HUs for both kV and MV imaging modalities. The two tested deformation algorithms yielded vastly different results. For kV–kV registration, MIM produced mean and maximum errors of 1.8 and 11.5 mm, respectively. These same numbers for Velocity were 2.4 and 7.1 mm, respectively. For MV–MV, kV–MV, and kV–MC Velocity produced similar mean and maximum error values. MIM, however, produced gross errors for all three of these scenarios, with maximum errors ranging from 33.4 to 41.6 mm. Conclusions: The application of DIR across different imaging modalities is particularly difficult, due to differences in tissue HUs and the presence of imaging artifacts. For this reason, DIR algorithms must be validated specifically for this purpose. The developed H and N phantom is an effective tool

  15. An adaptive Lagrangian boundary element approach for three-dimensional transient free-surface Stokes flow as applied to extrusion, thermoforming, and rheometry

    NASA Astrophysics Data System (ADS)

    Khayat, Roger E.; Genouvrier, Delphine

    2001-05-01

    An adaptive (Lagrangian) boundary element approach is proposed for the general three-dimensional simulation of confined free-surface Stokes flow. The method is stable as it includes remeshing capabilities of the deforming free surface and thus can handle large deformations. A simple algorithm is developed for mesh refinement of the deforming free-surface mesh. Smooth transition between large and small elements is achieved without significant degradation of the aspect ratio of the elements in the mesh. Several flow problems are presented to illustrate the utility of the approach, particularly as encountered in polymer processing and rheology. These problems illustrate the transient nature of the flow during the processes of extrusion and thermoforming, the elongation of a fluid sample in an extensional rheometer, and the coating of a sphere. Surface tension effects are also explored.

  16. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time-step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.

  17. High-Resolution Numerical Simulation and Analysis of Mach Reflection Structures in Detonation Waves in Low-Pressure H 2 –O 2 –Ar Mixtures: A Summary of Results Obtained with the Adaptive Mesh Refinement Framework AMROC

    DOE PAGES

    Deiterding, Ralf

    2011-01-01

    Numerical simulation can be key to the understanding of the multidimensional nature of transient detonation waves. However, the accurate approximation of realistic detonations is demanding as a wide range of scales needs to be resolved. This paper describes a successful solution strategy that utilizes logically rectangular dynamically adaptive meshes. The hydrodynamic transport scheme and the treatment of the nonequilibrium reaction terms are sketched. A ghost fluid approach is integrated into the method to allow for embedded geometrically complex boundaries. Large-scale parallel simulations of unstable detonation structures of Chapman-Jouguet detonations in low-pressure hydrogen-oxygen-argon mixtures demonstrate the efficiency of the described techniques in practice. In particular, computations of regular cellular structures in two and three space dimensions and their development under transient conditions, that is, under diffraction and for propagation through bends are presented. Some of the observed patterns are classified by shock polar analysis, and a diagram of the transition boundaries between possible Mach reflection structures is constructed.

  18. Optimal Throughput and Self-adaptability of Robust Real-Time IEEE 802.15.4 MAC for AMI Mesh Network

    NASA Astrophysics Data System (ADS)

    Shabani, Hikma; Mohamud Ahmed, Musse; Khan, Sheroz; Hameed, Shahab Ahmed; Hadi Habaebi, Mohamed

    2013-12-01

    A smart grid refers to a modernization of the electricity system that brings intelligence, reliability, efficiency and optimality to the power grid. To provide automated and widely distributed energy delivery, the smart grid will be characterized by a two-way flow of electricity and information between energy suppliers and their customers. Thus, the smart grid is a power grid that integrates data communication networks which provide the collected and analysed data at all levels in real time. Therefore, the performance of communication systems is vital for the success of the smart grid. Thanks to the low cost, low power, low data rate, short range, simplicity and license-free spectrum of the ZigBee/IEEE 802.15.4 standard, wireless sensor networks (WSNs) are the most suitable wireless technology for smart grid applications. Unfortunately, almost all ZigBee channels overlap with wireless local area network (WLAN) channels, resulting in severe performance degradation due to interference. In order to improve the performance of communication systems, this paper proposes optimal throughput and self-adaptability of the ZigBee/IEEE 802.15.4 standard for the smart grid.

  19. WE-AB-BRA-09: Sensitivity of Plan Re-Optimization to Errors in Deformable Image Registration in Online Adaptive Image-Guided Radiation Therapy

    SciTech Connect

    McClain, B; Olsen, J; Green, O; Yang, D; Santanam, L; Olsen, L; Zhao, T; Rodriguez, V; Wooten, H; Mutic, S; Kashani, R; Victoria, J; Dempsey, J

    2015-06-15

    Purpose: Online adaptive therapy (ART) relies on auto-contouring using deformable image registration (DIR). DIR’s inherent uncertainties require user intervention and manual edits while the patient is on the table. We investigated the dosimetric impact of DIR errors on the quality of re-optimized plans, and used the findings to establish regions for focusing manual edits to where DIR errors can result in clinically relevant dose differences. Methods: Our clinical implementation of online adaptive MR-IGRT involves using DIR to transfer contours from CT to daily MR, followed by a physician's edits. The plan is then re-optimized to meet the organs at risk (OARs) constraints. Re-optimized abdomen and pelvis plans generated based on physician edited OARs were selected as the baseline for evaluation. Plans were then re-optimized on auto-deformed contours with manual edits limited to pre-defined uniform rings (0 to 5cm) around the PTV. A 0cm ring indicates that the auto-deformed OARs were used without editing. The magnitude of the variations caused by the non-deterministic optimizer was quantified by repeat re-optimizations on the same geometry to determine the mean and standard deviation (STD). For each re-optimized plan, various volumetric parameters for the PTV and the OARs were extracted along with DVH and isodose evaluation. A plan was deemed acceptable if the variation from the baseline plan was within one STD. Results: Initial results show that for abdomen and pancreas cases, a minimum 5 cm margin around the PTV is required for contour corrections, while for pelvic and liver cases a 2–3 cm margin is sufficient. Conclusion: Focusing manual contour edits to regions of dosimetric relevance can reduce contouring time in the online ART process while maintaining a clinically comparable plan. Future work will further refine the contouring region by evaluating the path along the beams, dose gradients near the target and OAR dose metrics.

  20. Toward adaptive radiotherapy for head and neck patients: Feasibility study on using CT-to-CBCT deformable registration for “dose of the day” calculations

    SciTech Connect

    Veiga, Catarina; Lourenço, Ana; Ricketts, Kate; Annkah, James; Royle, Gary; McClelland, Jamie; Modat, Marc; Ourselin, Sébastien; Moinuddin, Syed; D’Souza, Derek

    2014-03-15

    a replan CT. The DD is smaller than 2% of the prescribed dose on 90% of the body's voxels and it passes a 2% and 2 mm gamma-test on over 95% of the voxels. Target coverage similarity was assessed in terms of the 95%-isodose volumes. A mean value of 0.962 was obtained for the DSC, while the distance between surfaces is less than 2 mm in 95.4% of the pixels. The method proposed provided adequate dose estimation, closer to the gold standard than the other two approaches. Differences in DVH curves were mainly due to differences in the OARs definition (manual vs warped) and not due to differences in dose estimation (dose calculated in replan CT vs dose calculated in deformed CT). Conclusions: Deforming a planning CT to match a daily CBCT provides the tools needed for the calculation of the “dose of the day” without the need to acquire a new CT. The initial clinical application of our method will be weekly offline calculations of the “dose of the day,” and use this information to inform adaptive radiotherapy (ART). The work here presented is a first step into a full implementation of a “dose-driven” online ART.

  1. Algebraic mesh quality metrics

    SciTech Connect

    KNUPP,PATRICK

    2000-04-24

    Quality metrics for structured and unstructured mesh generation are placed within an algebraic framework to form a mathematical theory of mesh quality metrics. The theory, based on the Jacobian and related matrices, provides a means of constructing, classifying, and evaluating mesh quality metrics. The Jacobian matrix is factored into geometrically meaningful parts. A nodally-invariant Jacobian matrix can be defined for simplicial elements using a weight matrix derived from the Jacobian matrix of an ideal reference element. Scale and orientation-invariant algebraic mesh quality metrics are defined. The singular value decomposition is used to study relationships between metrics. Equivalence of the element condition number and mean ratio metrics is proved. Condition number is shown to measure the distance of an element to the set of degenerate elements. Algebraic measures for skew, length ratio, shape, volume, and orientation are defined abstractly, with specific examples given. Combined metrics for shape and volume and for shape, volume, and orientation are algebraically defined and examples of such metrics are given. Algebraic mesh quality metrics are extended to non-simplicial elements. A series of numerical tests verify the theoretical properties of the metrics defined.
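
    As a hedged illustration of the Jacobian-based framework described above (not code from the cited report), the sketch below evaluates the condition-number quality metric for a single triangle, using an equilateral reference element as the weight matrix; the function and variable names are ours.

```python
import numpy as np

# Illustrative Jacobian-based quality metric for one triangle: the condition-
# number metric q = 2 / (||S||_F * ||S^{-1}||_F), where S maps an ideal
# (equilateral) reference element to the physical element. q is 1 for an
# equilateral triangle and tends to 0 as the element degenerates.

def triangle_condition_quality(p0, p1, p2):
    A = np.column_stack([np.asarray(p1) - p0, np.asarray(p2) - p0])  # physical edge matrix
    W = np.array([[1.0, 0.5],
                  [0.0, np.sqrt(3.0) / 2.0]])                        # ideal equilateral element
    S = A @ np.linalg.inv(W)                                         # weighted Jacobian
    kappa = np.linalg.norm(S, 'fro') * np.linalg.norm(np.linalg.inv(S), 'fro')
    return 2.0 / kappa

print(triangle_condition_quality([0, 0], [1, 0], [0.5, np.sqrt(3) / 2]))  # ~1.0 (equilateral)
print(triangle_condition_quality([0, 0], [1, 0], [0.5, 0.05]))            # near 0 (sliver)
```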

  2. Multislope MUSCL method for general unstructured meshes

    NASA Astrophysics Data System (ADS)

    Le Touze, C.; Murrone, A.; Guillard, H.

    2015-03-01

    The multislope concept has been recently introduced in the literature to deal with MUSCL reconstructions on triangular and tetrahedral unstructured meshes in the finite volume cell-centered context. Dedicated scalar slopes are used to compute the interpolations on each face of a given element, in contrast to monoslope methods in which a unique limited gradient is used. The multislope approach proves less expensive and potentially more accurate than the classical gradient techniques. It may also improve robustness when dealing with hyperbolic systems involving complex solutions, with large discontinuities and high density ratios. However, some important limitations on the mesh topology still have to be overcome with the initial multislope formalism. In this paper, a generalized multislope MUSCL method is introduced for cell-centered finite volume discretizations. The method is freed from constraints on the mesh topology, thereby operating on completely general unstructured meshes. Moreover, optimal second-order accuracy is reached at the face centroids. The scheme can be written with nonnegative coefficients, which makes it L∞-stable. Special attention has also been paid to equip the reconstruction procedure with well-adapted dedicated limiters, potentially CFL-dependent. Numerical tests are provided to prove the ability of the method to deal with completely general meshes, while exhibiting second-order accuracy.
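
    The following is a minimal one-dimensional, monoslope MUSCL sketch with a minmod limiter, included only to illustrate the reconstruct-and-limit idea that the multislope method generalizes to the faces of unstructured cells; it is not the scheme of the cited paper, and the setup is illustrative.

```python
import numpy as np

# Minimal 1D MUSCL-type reconstruction with a minmod limiter on a uniform grid.
# This is the classical monoslope idea; the multislope method of the abstract
# instead builds one dedicated limited slope per face of an unstructured cell.

def minmod(a, b):
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def muscl_face_values(u, dx):
    """Return left/right extrapolated states for the interior cells of cell-averaged data u."""
    du_left = (u[1:-1] - u[:-2]) / dx           # backward differences
    du_right = (u[2:] - u[1:-1]) / dx           # forward differences
    slope = minmod(du_left, du_right)           # limited cell slope
    u_face_right = u[1:-1] + 0.5 * dx * slope   # value extrapolated to the right face
    u_face_left = u[1:-1] - 0.5 * dx * slope    # value extrapolated to the left face
    return u_face_left, u_face_right

x = np.linspace(0.0, 1.0, 11)
u = np.where(x < 0.5, 1.0, 0.0)                 # step profile
print(muscl_face_values(u, x[1] - x[0]))
```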

  3. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  4. A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Solution of the Euler Equations

    SciTech Connect

    Anderson, R W; Elliott, N S; Pember, R B

    2003-02-14

    A new method that combines staggered grid arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the methods are driven by the need to reconcile traditional AMR techniques with the staggered variables and moving, deforming meshes associated with Lagrange based ALE schemes. We develop interlevel solution transfer operators and interlevel boundary conditions first in the case of purely Lagrangian hydrodynamics, and then extend these ideas into an ALE method by developing adaptive extensions of elliptic mesh relaxation techniques. Conservation properties of the method are analyzed, and a series of test problem calculations are presented which demonstrate the utility and efficiency of the method.

  5. Documentation for MeshKit - Reactor Geometry (&mesh) Generator

    SciTech Connect

    Jain, Rajeev; Mahadevan, Vijay

    2015-09-30

    This report gives documentation for using MeshKit’s Reactor Geometry (and mesh) Generator (RGG) GUI and also briefly documents other algorithms and tools available in MeshKit. RGG is a program designed to aid in modeling and meshing of complex/large hexagonal and rectilinear reactor cores. RGG uses Argonne’s SIGMA interfaces, Qt and VTK to produce an intuitive user interface. By integrating a 3D view of the reactor with the meshing tools and combining them into one user interface, RGG streamlines the task of preparing a simulation mesh and enables real-time feedback that reduces accidental scripting mistakes that could waste hours of meshing. RGG interfaces with MeshKit tools to consolidate the meshing process, meaning that going from model to mesh is as easy as a button click. This report is designed to explain the RGG v2.0 interface and provide users with the knowledge and skills to pilot RGG successfully. Brief documentation of the MeshKit source code, tools and other available algorithms is also presented for developers to extend and add new algorithms to MeshKit. RGG tools work in serial and parallel and have been used to model complex reactor core models consisting of conical pins, load pads, several thousands of axially varying material properties of instrumentation pins and other interstices meshes.

  6. Investigation of Closed Loop Adaptive Optics with the Deformable Mirror not in Pupil- Part 2: Theory (POSTPRINT)

    DTIC Science & Technology

    2008-07-01

    An adaptive secondary minimizes surfaces, thereby maximizing sensitivity, making it particularly suitable for thermal infrared astronomy. For infrared astronomy ... to be 40% of the diffraction-limited peak. The Large Binocular Telescope (LBT) on Mt. Graham, Arizona has two 8.4m primary mirrors and two 0.91m

  7. Euler and Navier-Stokes Computations for Two-Dimensional Geometries Using Unstructured Meshes

    DTIC Science & Technology

    1990-01-01

    An unstructured mesh solver for steady-state two-dimensional inviscid and viscous flows is described. The efficiency and accuracy of the method are enhanced by the simultaneous use of adaptive meshing and an unstructured multigrid technique. A method for generating highly stretched triangulations in regions ...

  8. Comparison of the fracture resistances of glass fiber mesh- and metal mesh-reinforced maxillary complete denture under dynamic fatigue loading

    PubMed Central

    2017-01-01

    PURPOSE The aim of this study was to investigate the effect of reinforcing materials on the fracture resistances of glass fiber mesh- and Cr–Co metal mesh-reinforced maxillary complete dentures under fatigue loading. MATERIALS AND METHODS Glass fiber mesh- and Cr–Co mesh-reinforced maxillary complete dentures were fabricated using silicone molds and acrylic resin. A control group was prepared with no reinforcement (n = 15 per group). After fatigue loading was applied using a chewing simulator, fracture resistance was measured by a universal testing machine. The fracture patterns were analyzed and the fractured surfaces were observed by scanning electron microscopy. RESULTS After cyclic loading, none of the dentures showed cracks or fractures. During fracture resistance testing, all unreinforced dentures experienced complete fracture. The mesh-reinforced dentures primarily showed posterior framework fracture. Deformation of the all-metal framework caused the metal mesh-reinforced denture to exhibit the highest fracture resistance, followed by the glass fiber mesh-reinforced denture (P<.05) and the control group (P<.05). The glass fiber mesh-reinforced denture primarily maintained its original shape with unbroken fibers. River line pattern of the control group, dimples and interdendritic fractures of the metal mesh group, and radial fracture lines of the glass fiber group were observed on the fractured surfaces. CONCLUSION The glass fiber mesh-reinforced denture exhibits a fracture resistance higher than that of the unreinforced denture, but lower than that of the metal mesh-reinforced denture because of the deformation of the metal mesh. The glass fiber mesh-reinforced denture maintains its shape even after fracture, indicating the possibility of easier repair. PMID:28243388

  9. Finite element based electrostatic-structural coupled analysis with automated mesh morphing

    SciTech Connect

    OWEN,STEVEN J.; ZHULIN,V.I.; OSTERGAARD,D.F.

    2000-02-29

    A co-simulation tool based on finite element principles has been developed to solve coupled electrostatic-structural problems. An automated mesh morphing algorithm has been employed to update the field mesh after structural deformation. The co-simulation tool has been successfully applied to the hysteretic behavior of a MEMS switch.

  10. SU-E-J-102: Performance Variations Among Clinically Available Deformable Image Registration Tools in Adaptive Radiotherapy: How Should We Evaluate and Interpret the Result?

    SciTech Connect

    Nie, K; Pouliot, J; Smith, E; Chuang, C

    2015-06-15

    Purpose: To evaluate the performance variations in commercial deformable image registration (DIR) tools for adaptive radiation therapy. Methods: Representative plans from three different anatomical sites, prostate, head-and-neck (HN) and cranial spinal irradiation (CSI) with L-spine boost, were included. Computerized deformed CT images were first generated using virtual DIR QA software (ImSimQA) for each case. The corresponding transformations served as the “reference”. Three commercial software packages, MIMVista v5.5 and MIMMaestro v6.0, VelocityAI v2.6.2, and OnQ rts v2.1.15, were tested. The warped contours and doses were compared with the “reference” and with each other. Results: The performance in transferring contours was comparable among all three tools with an average DICE coefficient of 0.81 for all the organs. However, the performance of dose warping accuracy appeared to rely on the evaluation end points. Volume-based DVH comparisons were not sensitive enough to illustrate all the detailed variations, while isodose assessment on a slice-by-slice basis could be tedious. Point-based evaluation was over-sensitive, showing up to 30% hot/cold-spot differences. When the 3 mm/3% gamma analysis was adapted to the evaluation of dose warping, all three algorithms presented a reasonable level of equivalence. One algorithm had over 10% of the voxels not meeting this criterion for the HN case, while another showed disagreement for the CSI case. Conclusion: Overall, our results demonstrated that evaluation based only on the performance of contour transformation could not guarantee accuracy in dose warping. However, the assessed dose warping accuracy relied on the evaluation methodology. Nevertheless, as more DIR tools become available for clinical use, their performance could vary to a certain degree. A standard quality assurance criterion with clinical meaning should be established for DIR QA, similar to the gamma index concept, in the near future.
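
    For readers unfamiliar with the 3 mm/3% gamma analysis mentioned above, the sketch below computes a simple one-dimensional global gamma index; clinical tools operate on 3D dose grids with finer dose interpolation, which this illustration omits, and the dose profiles are made up for the example.

```python
import numpy as np

# Illustrative 1D global gamma-index computation (3%/3 mm). For each reference
# point, gamma is the minimum over evaluated points of the combined distance-
# to-agreement and dose-difference measure; gamma <= 1 counts as a pass.

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_frac=0.03):
    d_norm = dd_frac * d_ref.max()                          # global dose criterion
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        dist2 = ((x_eval - xr) / dta_mm) ** 2
        dose2 = ((d_eval - dr) / d_norm) ** 2
        gammas.append(np.sqrt(np.min(dist2 + dose2)))
    return np.array(gammas)

x = np.arange(0.0, 50.0, 1.0)                               # positions in mm
d_ref = 2.0 / (1.0 + np.exp(0.3 * (x - 25.0)))              # reference dose profile (Gy)
d_eval = 2.0 / (1.0 + np.exp(0.3 * (x - 26.0)))             # evaluated profile, 1 mm shift
g = gamma_1d(x, d_ref, x, d_eval)
print("pass rate (gamma <= 1):", np.mean(g <= 1.0))
```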

  11. Toward An Unstructured Mesh Database

    NASA Astrophysics Data System (ADS)

    Rezaei Mahdiraji, Alireza; Baumann, Peter

    2014-05-01

    Unstructured meshes are used in several application domains such as earth sciences (e.g., seismology), medicine, oceanography, climate modeling, and GIS as approximate representations of physical objects. Meshes subdivide a domain into smaller geometric elements (called cells) which are glued together by incidence relationships. The subdivision of a domain allows computational manipulation of complicated physical structures. For instance, seismologists model earthquakes using elastic wave propagation solvers on hexahedral meshes. Such a hexahedral mesh contains several hundred million grid points and millions of hexahedral cells. Each vertex node in the hexahedral mesh stores a multitude of data fields. To run simulations on such meshes, one needs to iterate over all the cells, iterate over the cells incident to a given cell, retrieve coordinates of cells, assign data values to cells, etc. Although meshes are used in many application domains, to the best of our knowledge there is no database vendor that supports unstructured mesh features. Currently, the main tools for querying and manipulating unstructured meshes are mesh libraries, e.g., CGAL and GRAL. Mesh libraries are dedicated libraries which include mesh algorithms and can be run on mesh representations. The libraries do not scale with dataset size, do not have a declarative query language, and need deep C++ knowledge for query implementations. Furthermore, due to the high coupling between the implementations and the input file structure, the implementations are less reusable and costly to maintain. A dedicated mesh database offers the following advantages: 1) declarative querying, 2) ease of maintenance, 3) hiding the mesh storage structure from applications, and 4) transparent query optimization. To design a mesh database, the first challenge is to define a suitable generic data model for unstructured meshes. We proposed the ImG-Complexes data model as a generic topological mesh data model which extends the incidence graph model to multi
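
    As a toy illustration of the incidence relationships described above (cells glued together through shared vertices) and of the kind of adjacency query a mesh database would need to support, consider the following sketch; the class and field names are purely illustrative and are not part of any existing mesh library.

```python
from collections import defaultdict

# Toy unstructured-mesh store: cells reference vertices, and the derived
# vertex->cell incidence lets us find the cells adjacent to a given cell.

class ToyMesh:
    def __init__(self, vertices, cells):
        self.vertices = vertices                      # id -> coordinates
        self.cells = cells                            # id -> tuple of vertex ids
        self.vertex_to_cells = defaultdict(set)       # upward incidence
        for cid, vs in cells.items():
            for v in vs:
                self.vertex_to_cells[v].add(cid)

    def incident_cells(self, cell_id):
        """Cells sharing at least one vertex with cell_id."""
        neighbours = set()
        for v in self.cells[cell_id]:
            neighbours |= self.vertex_to_cells[v]
        neighbours.discard(cell_id)
        return neighbours

mesh = ToyMesh(
    vertices={0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)},
    cells={0: (0, 1, 2), 1: (0, 2, 3)},               # two triangles
)
print(mesh.incident_cells(0))                          # {1}
```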

  12. Constant-mesh, multiple-shaft transmission

    SciTech Connect

    Rea, J.E.; Mills, D.D.; Sewell, J.S.

    1992-04-21

    This patent describes a multiple-shaft, constant-mesh transmission adapted to establish selectively a reverse torque delivery path and a forward drive torque delivery path and having a torque input means including a torque input shaft, a mainshaft aligned with the input shaft, a countershaft geared to the input shaft in spaced, parallel relationship with respect to the mainshaft, and a torque output shaft joined to the mainshaft; multiple mainshaft gear elements journalled on the mainshaft, multiple cluster gear elements carried by the countershaft in meshing engagement with the mainshaft gear elements, one of the cluster gear elements being rotatably journalled on the countershaft; a reverse idler gear, a reverse gear journalled on the countershaft, the reverse idler gear being in constant mesh with the reverse gear and one of the mainshaft gear elements; first clutch means for connecting selectively the reverse gear and the countershaft; second synchronizer clutch means for connecting selectively the one of the mainshaft gear elements to the mainshaft; and third synchronizer clutch means for selectively connecting another of the mainshaft gear elements to the mainshaft; the first clutch means being a double-acting clutch with a first common axially movable clutch element adapted upon movement in one axial direction to drivably connect the reverse gear to the countershaft and adapted upon movement in the opposite axial direction to connect the one cluster gear element to the countershaft.

  13. Downgoing plate controls on overriding plate deformation in subduction zones

    NASA Astrophysics Data System (ADS)

    Garel, Fanny; Davies, Rhodri; Goes, Saskia; Davies, Huw; Kramer, Stephan; Wilson, Cian

    2014-05-01

    Although subduction zones are convergent margins, deformation in the upper plate can be extensional or compressional and tends to change through time, sometimes in repeated episodes of strong deformation, e.g., phases of back-arc extension. It is not well understood what factors control this upper plate deformation. We use the code Fluidity, which uses an adaptive mesh and a free-surface formulation, to model a two-plate subduction system in 2-D. The model includes a composite temperature- and stress-dependent rheology, and plates are decoupled by a weak layer, which allows for free trench motion. We investigate the evolution of the state of stress and topography of the overriding plate during the different phases of the subduction process: onset of subduction, free-fall sinking in the upper mantle and interaction of the slab with the transition zone, here represented by a viscosity contrast between upper and lower mantle. We focus on (i) how overriding plate deformation varies with subducting plate age; (ii) how spontaneous and episodic back-arc spreading develops for some subduction settings; (iii) the correlation between overriding plate deformation and slab interaction with the transition zone; (iv) whether these trends resemble observations on Earth.

  14. Evaluation of 4-dimensional Computed Tomography to 4-dimensional Cone-Beam Computed Tomography Deformable Image Registration for Lung Cancer Adaptive Radiation Therapy

    SciTech Connect

    Balik, Salim; Weiss, Elisabeth; Jan, Nuzhat; Roman, Nicholas; Sleeman, William C.; Fatyga, Mirek; Christensen, Gary E.; Zhang, Cheng; Murphy, Martin J.; Lu, Jun; Keall, Paul; Williamson, Jeffrey F.; Hugo, Geoffrey D.

    2013-06-01

    Purpose: To evaluate 2 deformable image registration (DIR) algorithms for the purpose of contour mapping to support image-guided adaptive radiation therapy with 4-dimensional cone-beam CT (4DCBCT). Methods and Materials: One planning 4D fan-beam CT (4DFBCT) and 7 weekly 4DCBCT scans were acquired for 10 locally advanced non-small cell lung cancer patients. The gross tumor volume was delineated by a physician in all 4D images. End-of-inspiration phase planning 4DFBCT was registered to the corresponding phase in weekly 4DCBCT images for day-to-day registrations. For phase-to-phase registration, the end-of-inspiration phase from each 4D image was registered to the end-of-expiration phase. Two DIR algorithms—small deformation inverse consistent linear elastic (SICLE) and Insight Toolkit diffeomorphic demons (DEMONS)—were evaluated. Physician-delineated contours were compared with the warped contours by using the Dice similarity coefficient (DSC), average symmetric distance, and false-positive and false-negative indices. The DIR results are compared with rigid registration of tumor. Results: For day-to-day registrations, the mean DSC was 0.75 ± 0.09 with SICLE, 0.70 ± 0.12 with DEMONS, 0.66 ± 0.12 with rigid-tumor registration, and 0.60 ± 0.14 with rigid-bone registration. Results were comparable to intraobserver variability calculated from phase-to-phase registrations as well as measured interobserver variation for 1 patient. SICLE and DEMONS, when compared with rigid-bone (4.1 mm) and rigid-tumor (3.6 mm) registration, respectively reduced the average symmetric distance to 2.6 and 3.3 mm. On average, SICLE and DEMONS increased the DSC to 0.80 and 0.79, respectively, compared with rigid-tumor (0.78) registrations for 4DCBCT phase-to-phase registrations. Conclusions: Deformable image registration achieved comparable accuracy to reported interobserver delineation variability and higher accuracy than rigid-tumor registration. Deformable image registration
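
    The Dice similarity coefficient (DSC) used above compares two binary masks as DSC = 2|A ∩ B| / (|A| + |B|); a minimal illustration on synthetic contours follows (the masks are made up for the example and are not from the cited study).

```python
import numpy as np

# Illustrative Dice similarity coefficient between two binary masks.

def dice(mask_a, mask_b):
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((20, 20), dtype=bool); a[5:15, 5:15] = True    # "delineated" contour
b = np.zeros((20, 20), dtype=bool); b[6:16, 5:15] = True    # "warped" contour, shifted 1 voxel
print(round(dice(a, b), 3))                                 # 0.9
```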

  15. SU-E-J-109: Evaluation of Deformable Accumulated Parotid Doses Using Different Registration Algorithms in Adaptive Head and Neck Radiotherapy

    SciTech Connect

    Xu, S; Liu, B

    2015-06-15

    Purpose: Three deformable image registration (DIR) algorithms are utilized to perform deformable dose accumulation for head and neck tomotherapy treatment, and the differences of the accumulated doses are evaluated. Methods: Daily MVCT data for 10 patients with pathologically proven nasopharyngeal cancers were analyzed. The data were acquired using tomotherapy (TomoTherapy, Accuray) at the PLA General Hospital. The prescription dose to the primary target was 70 Gy in 33 fractions. Three DIR methods (B-spline, Diffeomorphic Demons and MIMvista) were used to propagate parotid structures from planning CTs to the daily CTs and accumulate fractionated dose on the planning CTs. The mean accumulated doses of parotids were quantitatively compared and the uncertainties of the propagated parotid contours were evaluated using the Dice similarity index (DSI). Results: The planned mean dose of the ipsilateral parotids (32.42±3.13Gy) was slightly higher than that of the contralateral parotids (31.38±3.19Gy) in 10 patients. The difference between the accumulated mean doses of the ipsilateral parotids in the B-spline, Demons and MIMvista deformation algorithms (36.40±5.78Gy, 34.08±6.72Gy and 33.72±2.63Gy) was statistically significant (B-spline vs Demons, p < 0.0001; B-spline vs MIMvista, p = 0.002). The difference between those of the contralateral parotids in the B-spline, Demons and MIMvista deformation algorithms (34.08±4.82Gy, 32.42±4.80Gy and 33.92±4.65Gy) was also significant (B-spline vs Demons, p = 0.009; B-spline vs MIMvista, p = 0.074). For the DSI analysis, the scores of B-spline, Demons and MIMvista DIRs were 0.90, 0.89 and 0.76. Conclusion: Shrinkage of parotid volumes results in a dose increase to the parotid glands in adaptive head and neck radiotherapy. The accumulated doses of parotids show significant differences between the different DIR algorithms applied between kVCT and MVCT. Therefore, the volume-based criterion (i.e. DSI) as a quantitative evaluation of

  16. Fabrication of compliant hybrid grafts supported with elastomeric meshes.

    PubMed

    Kobashi, T; Matsuda, T

    1999-01-01

    We devised tubular hybrid medial tissues with mechanical properties similar to those of native arteries, which were composed of bovine smooth muscle cells (SMCs) and type I collagen with minimal reinforcement with knitted fabric meshes made of synthetic elastomers. Three hybrid medial tissue models that incorporated segmented polyester (mesh A) or polyurethane-nylon (mesh B) meshes were designed: the inner, sandwich, and wrapping models. Hybrid medial tissues were prepared by pouring a cold mixed solution of SMCs and collagen into a tubular glass mold consisting of an inner mandrel and an outer sheath and subsequent thermal gelation, followed by further culture for 7 days. For the inner model, the mandrel was wrapped with a mesh. For the sandwich model, a cylindrically shaped mesh was incorporated into a space between the mandrel and the sheath. The wrapping model was prepared by wrapping a 7-day-incubated nonmesh gel with a mesh. The inner diameter was 3 mm, irrespective of the model, and the length was 2.5-4.0 cm, depending on the model. The intraluminal pressure-external diameter relationship showed that nonmesh and inner models had a very low burst strength below 50 mmHg, while the sandwich model ruptured at around 110-120 mmHg; no rupturing below 240 mmHg was observed for the wrapping model, regardless of the type of mesh used. Compliance values of wrapping and sandwich models were close to those of native arteries. Pressure-dependent distensibility characteristics similar to native arteries were observed for a mesh A wrapping model, whereas a mesh B wrapping model expanded almost linearly as intraluminal pressure increased, which appeared to be due to elasticity of the incorporated mesh. Thus, design criteria for hybrid vascular grafts with appropriate biomechanical matching with host arteries were established. Such hybrid grafts may be mechanically adapted in an arterial system.

  17. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  18. Surgical mesh for ventral incisional hernia repairs: Understanding mesh design

    PubMed Central

    Rastegarpour, Ali; Cheung, Michael; Vardhan, Madhurima; Ibrahim, Mohamed M; Butler, Charles E; Levinson, Howard

    2016-01-01

    Surgical mesh has become an indispensable tool in hernia repair to improve outcomes and reduce costs; however, efforts are constantly being undertaken in mesh development to overcome postoperative complications. Common complications include infection, pain, adhesions, mesh extrusion and hernia recurrence. Reducing the complications of mesh implantation is of utmost importance given that hernias occur in hundreds of thousands of patients per year in the United States. In the present review, the authors present the different types of hernia meshes, discuss the key properties of mesh design, and demonstrate how each design element affects performance and complications. The present article will provide a basis for surgeons to understand which mesh to choose for patient care and why, and will explain the important technological aspects that will continue to evolve over the ensuing years. PMID:27054138

  19. A general boundary capability embedded in an orthogonal mesh

    SciTech Connect

    Hewett, D.W.; Yu-Jiuan Chen

    1995-07-01

    The authors describe how they hold onto orthogonal mesh discretization when dealing with curved boundaries. Special difference operators were constructed to approximate numerical zones split by the domain boundary; the operators are particularly simple for this rectangular mesh. The authors demonstrated that this simple numerical approach, termed Dynamic Alternating Direction Implicit, turned out to be considerably more efficient than more complex grid-adaptive algorithms that were tried previously.

  20. SUPERIMPOSED MESH PLOTTING IN MCNP

    SciTech Connect

    J. HENDRICKS

    2001-02-01

    The capability to plot superimposed meshes has been added to MCNP{trademark}. MCNP4C featured a superimposed mesh weight window generator which enabled users to set up geometries without having to subdivide geometric cells for variance reduction. The variance reduction was performed with weight windows on a rectangular or cylindrical mesh superimposed over the physical geometry. Experience with the new capability was favorable but also indicated that a number of enhancements would be very beneficial, particularly a means of visualizing the mesh and its values. The mathematics for plotting the mesh and its values is described here along with a description of other upgrades.

  1. Iterative Mesh Transformation for 3D Segmentation of Livers with Cancers in CT Images

    PubMed Central

    Lu, Difei; Wu, Yin; Harris, Gordon; Cai, Wenli

    2015-01-01

    Segmentation of diseased liver remains a challenging task in clinical applications due to the high inter-patient variability in liver shapes, sizes and pathologies caused by cancers or other liver diseases. In this paper, we present a multi-resolution mesh segmentation algorithm for 3D segmentation of livers, called iterative mesh transformation that deforms the mesh of a region-of-interest (ROI) in a progressive manner by iterations between mesh transformation and contour optimization. Mesh transformation deforms the 3D mesh based on the deformation transfer model that searches the optimal mesh based on the affine transformation subjected to a set of constraints of targeting vertices. Besides, contour optimization searches the optimal transversal contours of the ROI by applying the dynamic-programming algorithm to the intersection polylines of the 3D mesh on 2D transversal image planes. The initial constraint set for mesh transformation can be defined by a very small number of targeting vertices, namely landmarks, and progressively updated by adding the targeting vertices selected from the optimal transversal contours calculated in contour optimization. This iterative 3D mesh transformation constrained by 2D optimal transversal contours provides an efficient solution to a progressive approximation of the mesh of the targeting ROI. Based on this iterative mesh transformation algorithm, we developed a semi-automated scheme for segmentation of diseased livers with cancers using as little as five user-identified landmarks. The evaluation study demonstrates that this semiautomated liver segmentation scheme can achieve accurate and reliable segmentation results with significant reduction of interaction time and efforts when dealing with diseased liver cases. PMID:25728595

  2. Iterative mesh transformation for 3D segmentation of livers with cancers in CT images.

    PubMed

    Lu, Difei; Wu, Yin; Harris, Gordon; Cai, Wenli

    2015-07-01

    Segmentation of diseased liver remains a challenging task in clinical applications due to the high inter-patient variability in liver shapes, sizes and pathologies caused by cancers or other liver diseases. In this paper, we present a multi-resolution mesh segmentation algorithm for 3D segmentation of livers, called iterative mesh transformation that deforms the mesh of a region-of-interest (ROI) in a progressive manner by iterations between mesh transformation and contour optimization. Mesh transformation deforms the 3D mesh based on the deformation transfer model that searches the optimal mesh based on the affine transformation subjected to a set of constraints of targeting vertices. Besides, contour optimization searches the optimal transversal contours of the ROI by applying the dynamic-programming algorithm to the intersection polylines of the 3D mesh on 2D transversal image planes. The initial constraint set for mesh transformation can be defined by a very small number of targeting vertices, namely landmarks, and progressively updated by adding the targeting vertices selected from the optimal transversal contours calculated in contour optimization. This iterative 3D mesh transformation constrained by 2D optimal transversal contours provides an efficient solution to a progressive approximation of the mesh of the targeting ROI. Based on this iterative mesh transformation algorithm, we developed a semi-automated scheme for segmentation of diseased livers with cancers using as little as five user-identified landmarks. The evaluation study demonstrates that this semi-automated liver segmentation scheme can achieve accurate and reliable segmentation results with significant reduction of interaction time and efforts when dealing with diseased liver cases.

  3. Particle-mesh techniques

    NASA Technical Reports Server (NTRS)

    Macneice, Peter

    1995-01-01

    This is an introduction to numerical Particle-Mesh techniques, which are commonly used to model plasmas, gravitational N-body systems, and both compressible and incompressible fluids. The theory behind this approach is presented, and its practical implementation, both for serial and parallel machines, is discussed. This document is based on a four-hour lecture course presented by the author at the NASA Summer School for High Performance Computational Physics, held at Goddard Space Flight Center.
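
    As a hedged sketch of the particle-to-mesh step at the heart of particle-mesh methods (illustrative only, not taken from the lecture notes above), the following implements one-dimensional cloud-in-cell charge deposition; a full code would add a grid field solve and an interpolation of forces back to the particles.

```python
import numpy as np

# Minimal 1D cloud-in-cell (CIC) charge deposition onto a periodic grid:
# each particle's charge is shared linearly between its two nearest grid points.

def deposit_cic(positions, charges, n_cells, dx):
    rho = np.zeros(n_cells)
    for x, q in zip(positions, charges):
        s = x / dx
        i = int(np.floor(s)) % n_cells          # left grid point (periodic)
        w = s - np.floor(s)                     # fractional offset within the cell
        rho[i] += q * (1.0 - w) / dx
        rho[(i + 1) % n_cells] += q * w / dx
    return rho

rho = deposit_cic(positions=[0.25, 1.9], charges=[1.0, -1.0], n_cells=4, dx=1.0)
print(rho)    # charge split linearly between the two nearest grid points
```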

  4. Delaunay Refinement Mesh Generation

    DTIC Science & Technology

    1997-05-18

  5. An Interpreted Language and System for the Visualization of Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Gerald-Yamasaki, Michael (Technical Monitor)

    1998-01-01

    We present an interpreted language and system supporting the visualization of unstructured meshes and the manipulation of shapes defined in terms of mesh subsets. The language features primitives inspired by geometric modeling, mathematical morphology and algebraic topology. The adaptation of the topology ideas to an interpreted environment, along with support for programming constructs such as user function definition, provides a flexible system for analyzing a mesh and for calculating with shapes defined in terms of the mesh. We present results demonstrating some of the capabilities of the language, based on an implementation called the Shape Calculator, for tetrahedral meshes in R^3.

  6. Smooth Rotation Enhanced As-Rigid-As-Possible Mesh Animation.

    PubMed

    Levi, Zohar; Gotsman, Craig

    2015-02-01

    In recent years, the As-Rigid-As-Possible (ARAP) shape deformation and shape interpolation techniques gained popularity, and the ARAP energy was successfully used in other applications as well. We improve the ARAP animation technique in two aspects. First, we introduce a new ARAP-type energy, named SR-ARAP, which has a consistent discretization for surfaces (triangle meshes). The quality of our new surface deformation scheme competes with the quality of the volumetric ARAP deformation (for tetrahedral meshes). Second, we propose a new ARAP shape interpolation method that is superior to prior art also based on the ARAP energy. This method is compatible with our new SR-ARAP energy, as well as with the ARAP volume energy.
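
    For reference, the classical ARAP deformation energy that this line of work builds on can be written as below; the SR-ARAP term is indicated only schematically, as we read the abstract, and the exact weighting used in the paper may differ.

```latex
% Classical ARAP deformation energy (schematic): p_i are rest positions,
% p'_i deformed positions, R_i per-vertex rotations, w_ij cotangent weights.
E_{\mathrm{ARAP}}(\mathbf{p}') \;=\; \sum_{i} \sum_{j \in \mathcal{N}(i)}
  w_{ij} \,\bigl\| (\mathbf{p}'_i - \mathbf{p}'_j) - R_i (\mathbf{p}_i - \mathbf{p}_j) \bigr\|^2 .

% SR-ARAP (schematically, as we understand the abstract) augments this with a
% rotation-smoothness term of the form
%   + \alpha \sum_{i} \sum_{j \in \mathcal{N}(i)} \| R_i - R_j \|_F^2 .
```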

  7. Mesh Algorithms for PDE with Sieve I: Mesh Distribution

    DOE PAGES

    Knepley, Matthew G.; Karpeev, Dmitry A.

    2009-01-01

    We have developed a new programming framework, called Sieve, to support parallel numerical partial differential equation(s) (PDE) algorithms operating over distributed meshes. We have also developed a reference implementation of Sieve in C++ as a library of generic algorithms operating on distributed containers conforming to the Sieve interface. Sieve makes instances of the incidence relation, or arrows, the conceptual first-class objects represented in the containers. Further, generic algorithms acting on this arrow container are systematically used to provide natural geometric operations on the topology and also, through duality, on the data. Finally, coverings and duality are used to encode not only individual meshes, but all types of hierarchies underlying PDE data structures, including multigrid and mesh partitions. In order to demonstrate the usefulness of the framework, we show how the mesh partition data can be represented and manipulated using the same fundamental mechanisms used to represent meshes. We present the complete description of an algorithm to encode a mesh partition and then distribute a mesh, which is independent of the mesh dimension, element shape, or embedding. Moreover, data associated with the mesh can be similarly distributed with exactly the same algorithm. The use of a high level of abstraction within the Sieve leads to several benefits in terms of code reuse, simplicity, and extensibility. We discuss these benefits and compare our approach to other existing mesh libraries.

  8. Characterizing the ex vivo mechanical properties of synthetic polypropylene surgical mesh.

    PubMed

    Li, Xinxin; Kruger, Jennifer A; Jor, Jessica W Y; Wong, Vivien; Dietz, Hans P; Nash, Martyn P; Nielsen, Poul M F

    2014-09-01

    The use of synthetic polypropylene mesh for hernia surgical repair and the correction of female pelvic organ prolapse have been controversial due to increasing post-operative complications, including mesh erosion, chronic pain, infection and support failure. These morbidities may be related to a mismatch of mechanical properties between soft tissues and the mesh. The aim of this study was to gain a better understanding of the biomechanical behavior of Prolene polypropylene mesh (Ethicon, Sommerville, NJ, USA), which is widely used for a variety of surgical repair procedures. The stiffness and permanent deformation of Prolene mesh were compared in different directions by performing uniaxial tensile failure tests, cyclic tests and creep tests at simulated physiological loads in the coursewise (0°), walewise (90°) and diagonal (45°) directions. Failure tests suggest that the mechanical properties of the mesh are anisotropic, with the response at 0° being the most compliant and that at 90° the stiffest. Irreversible deformation and viscoelastic behavior were observed in both cyclic and creep tests. The anisotropic property may be relevant to the placement of mesh in surgery to maximize long-term mesh performance. The considerable permanent deformation may be associated with an increased risk of post-operative support failure.

  9. Mesh generation and computational modeling techniques for bioimpedance measurements: an example using the VHP data

    NASA Astrophysics Data System (ADS)

    Danilov, A. A.; Salamatova, V. Yu; Vassilevski, Yu V.

    2012-12-01

    Here, a workflow for high-resolution, efficient numerical modeling of bioimpedance measurements is suggested that includes 3D image segmentation, adaptive mesh generation, finite-element discretization, and the analysis of simulation results. Using adaptive unstructured tetrahedral meshes makes it possible to significantly decrease the number of mesh elements while maintaining model accuracy. The numerical results illustrate current, potential, and sensitivity field distributions for a conventional Kubicek-like scheme of bioimpedance measurements using a segmented geometric model of the human torso based on Visible Human Project data. A whole-body VHP man computational mesh is constructed that contains 574 thousand vertices and 3.3 million tetrahedra.

  10. Robust, multidimensional mesh motion based on Monge-Kantorovich equidistribution

    SciTech Connect

    Chacon De La Rosa, Luis; Delzanno, Gian Luca; Finn, John M.

    2011-01-01

    Mesh-motion (r-refinement) grid adaptivity schemes are attractive due to their potential to minimize the numerical error for a prescribed number of degrees of freedom. However, a key roadblock to a widespread deployment of this class of techniques has been the formulation of robust, reliable mesh-motion governing principles, which (1) guarantee a solution in multiple dimensions (2D and 3D), (2) avoid grid tangling (or folding of the mesh, whereby edges of a grid cell cross somewhere in the domain), and (3) can be solved effectively and efficiently. In this study, we formulate such a mesh-motion governing principle, based on volume equidistribution via Monge-Kantorovich optimization (MK). In earlier publications [1] and [2], the advantages of this approach with regard to these points have been demonstrated for the time-independent case. In this study, we demonstrate that Monge-Kantorovich equidistribution can in fact be used effectively in a time-stepping context, and delivers an elegant solution to the otherwise pervasive problem of grid tangling in mesh-motion approaches, without resorting to ad hoc time-dependent terms (as in moving-mesh PDEs, or MMPDEs [3] and [4]). We explore two distinct r-refinement implementations of MK: the direct method, where the current mesh relates to an initial, unchanging mesh, and the sequential method, where the current mesh is related to the previous one in time. We demonstrate that the direct approach is superior with regard to mesh distortion and robustness. The properties of the approach are illustrated with a hyperbolic PDE, the advection of a passive scalar, in 2D and 3D. Velocity flow fields with and without flow shear are considered. Three-dimensional grid, time-step, and nonlinear tolerance convergence studies are presented which demonstrate the optimality of the approach.
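
    Schematically, and with the caveat that the cited papers give the precise formulation and boundary conditions, L2 Monge-Kantorovich equidistribution seeks a potential whose gradient map moves the grid so that a monitor density is equidistributed:

```latex
% Schematic Monge-Kantorovich volume equidistribution: the new grid is the
% gradient map x' = x + \nabla\phi, and \phi satisfies a Monge-Ampere-type
% equation balancing the monitor density \rho against the reference density \rho_0.
\mathbf{x}' = \mathbf{x} + \nabla \phi , \qquad
\rho\!\left(\mathbf{x}'\right)\,\det\!\left(I + \mathrm{H}(\phi)\right) = \rho_0(\mathbf{x}) ,
% where H(\phi) denotes the Hessian of the potential \phi.
```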

  11. Robust, multidimensional mesh motion based on Monge-Kantorovich equidistribution

    SciTech Connect

    Delzanno, G L; Finn, J M

    2009-01-01

    Mesh-motion (r-refinement) grid adaptivity schemes are attractive due to their potential to minimize the numerical error for a prescribed number of degrees of freedom. However, a key roadblock to a widespread deployment of the technique has been the formulation of robust, reliable mesh-motion governing principles, which (1) guarantee a solution in multiple dimensions (2D and 3D), (2) avoid grid tangling (or folding of the mesh, whereby edges of a grid cell cross somewhere in the domain), and (3) can be solved effectively and efficiently. In this study, we formulate such a mesh-motion governing principle, based on volume equidistribution via Monge-Kantorovich optimization (MK). In earlier publications [1, 2], the advantages of this approach with regard to these points have been demonstrated for the time-independent case. In this study, we demonstrate that Monge-Kantorovich equidistribution can in fact be used effectively in a time-stepping context, and delivers an elegant solution to the otherwise pervasive problem of grid tangling in mesh-motion approaches, without resorting to ad hoc time-dependent terms (as in moving-mesh PDEs, or MMPDEs [3, 4]). We explore two distinct r-refinement implementations of MK: direct, where the current mesh relates to an initial, unchanging mesh, and sequential, where the current mesh is related to the previous one in time. We demonstrate that the direct approach is superior with regard to mesh distortion and robustness. The properties of the approach are illustrated with a paradigmatic hyperbolic PDE, the advection of a passive scalar. Imposed velocity flow fields of varying vorticity levels and flow shears are considered.

  12. Stress Recovery Based h-Adaptive Finite Element Simulation of Sheet Forming Operations

    NASA Astrophysics Data System (ADS)

    Ahmed, Mohd.; Singh, Devinder

    2016-07-01

    In the present work, a stress-recovery-based adaptive finite element analysis of sheet forming operations is presented. An adaptive two-dimensional finite element code has been developed that allows the analysis of sheet forming operations and yields the distribution of the adaptively refined mesh, the effective strain, the punch load, and the stress and strain rate tensors in the domain. The recovery scheme for determining a more accurate stress field is based on least-squares fitting of the computed stresses over an element patch surrounding and including a particular node. The solution error is estimated in an energy norm. It is shown, with the help of an illustrative example of axisymmetric stretching of a metal blank by a hemispherical punch, that the adaptive analysis can be usefully employed to accurately predict the deformation process, the seats of large deformation and the locations of possible instability.
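
    The patch-recovery step lends itself to a compact illustration. The sketch below is a minimal stand-in rather than the authors' code: it fits a linear polynomial to one stress component at the sampling points of an element patch by least squares and evaluates it at the patch node. The patch coordinates and stress values are made up for the example.

      import numpy as np

      def recover_nodal_stress(xy_samples, sigma_samples, xy_node):
          """Least-squares fit of sigma*(x, y) = a0 + a1*x + a2*y to stresses
          computed at the sampling points of an element patch, then evaluation
          at the patch node.  One stress component at a time."""
          P = np.column_stack([np.ones(len(xy_samples)),
                               xy_samples[:, 0], xy_samples[:, 1]])
          a, *_ = np.linalg.lstsq(P, sigma_samples, rcond=None)   # least-squares coefficients
          return a[0] + a[1] * xy_node[0] + a[2] * xy_node[1]

      # toy patch: four sampling points around the node at (0, 0), one sigma_xx each
      xy = np.array([[-0.5, -0.5], [0.5, -0.5], [0.5, 0.5], [-0.5, 0.5]])
      sxx = np.array([10.0, 12.0, 13.0, 11.0])
      print(recover_nodal_stress(xy, sxx, np.array([0.0, 0.0])))  # ~11.5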

  13. Multiresolution mesh segmentation based on surface roughness and wavelet analysis

    NASA Astrophysics Data System (ADS)

    Roudet, Céline; Dupont, Florent; Baskurt, Atilla

    2007-01-01

    During the last decades, three-dimensional objects have begun to compete with traditional multimedia (images, sounds and videos) and are used by more and more applications. The common model used to represent them is a surface mesh, due to its intrinsic simplicity and efficiency. In this paper, we present a new algorithm for the segmentation of semi-regular triangle meshes via multiresolution analysis. Our method uses several measures which reflect the roughness of the surface for all meshes resulting from the decomposition of the initial model into different fine-to-coarse multiresolution meshes. The geometric data decomposition is based on the lifting scheme. Using that formulation, we have compared various interpolating prediction operators, with and without an update step. For each resolution level, the resulting approximation mesh is then partitioned into classes of almost constant roughness by a clustering algorithm. The resulting classes gather regions having the same visual appearance in terms of roughness. The last step consists of decomposing the mesh into connected groups of triangles using region growing and merging algorithms. These connected surface patches are of particular interest for adaptive mesh compression, visualisation, indexing or watermarking.

  14. Fast simulated annealing and adaptive Monte Carlo sampling based parameter optimization for dense optical-flow deformable image registration of 4DCT lung anatomy

    NASA Astrophysics Data System (ADS)

    Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.

    2016-03-01

    Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieving the best registration performance with a specific algorithm. Methods: In this paper, we investigated a parameter optimization strategy for optical-flow-based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The registration error for a given parameter set was computed using the landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time in the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses of the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimum parameters for optical flow and closely matched the best registration parameters obtained using an exhaustive parameter search method.
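
    The optimization loop described here can be sketched generically. The following is a minimal simulated-annealing routine, not the authors' FSA-AMC implementation: the `cost` callable stands in for one run of the optical-flow DIR returning the landmark-based mTRE, and the proposal scale, cooling schedule and toy cost function are illustrative assumptions.

      import math
      import random

      def simulated_annealing(cost, x0, bounds, n_iter=500, t0=1.0, seed=0):
          # cost: callable returning the registration error (e.g. mTRE) for a parameter set
          rng = random.Random(seed)
          x, fx = list(x0), cost(x0)
          best, fbest = list(x), fx
          for k in range(1, n_iter + 1):
              t = t0 / k                                   # fast-annealing style cooling
              cand = [min(hi, max(lo, xi + rng.gauss(0.0, 0.1 * (hi - lo))))
                      for xi, (lo, hi) in zip(x, bounds)]  # bounded Gaussian proposal (fixed scale)
              fc = cost(cand)
              # accept downhill moves always, uphill moves with Metropolis probability
              if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
                  x, fx = cand, fc
              if fx < fbest:
                  best, fbest = list(x), fx
          return best, fbest

      # toy stand-in for mTRE(parameters), e.g. a smoothness weight and pyramid levels
      mtre = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 4.0) ** 2
      print(simulated_annealing(mtre, [1.0, 1.0], [(0.01, 2.0), (1.0, 8.0)]))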

  15. Tangle-Free Mesh Motion for Ablation Simulations

    NASA Technical Reports Server (NTRS)

    Droba, Justin

    2016-01-01

    Problems involving mesh motion (which should not be mistakenly associated with moving mesh methods, a class of adaptive mesh redistribution techniques) are of critical importance in numerical simulations of the thermal response of melting and ablative materials. Ablation is the process by which material vaporizes or otherwise erodes due to strong heating. Accurate modeling of such materials is of the utmost importance in the design of passive thermal protection systems ("heatshields") for spacecraft, the layer of the vehicle that ensures survival of crew and craft during re-entry. In an explicit mesh motion approach, a complete thermal solve is first performed. Afterwards, the thermal response is used to determine surface recession rates. These values are then used to generate boundary conditions for an a posteriori correction designed to update the location of the mesh nodes. Most often, linear elastic or biharmonic equations are used to model this material response, traditionally in a finite element framework so that complex geometries can be simulated. A simple scheme for moving the boundary nodes involves receding along the surface normals. However, for all but the simplest problem geometries, evolution in time following such a scheme will eventually bring the mesh to intersect and "tangle" with itself, inducing failure. This presentation demonstrates a comprehensive and sophisticated scheme that analyzes the local geometry of each node, with help from user-provided clues, to eliminate the tangle and enable simulations on a wide class of difficult problem geometries. The method developed is demonstrated for linear elastic equations but is general enough that it may be adapted to other modeling equations. The presentation will explicate the inner workings of the tangle-free mesh motion algorithm for both two- and three-dimensional meshes. It will show abstract examples of the method's success, including a verification problem that demonstrates its accuracy and
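
    The "recede along the surface normals" scheme that the presentation starts from is simple enough to write down. The sketch below is that naive 2D version (not the tangle-free algorithm itself), using node normals averaged from the adjacent edges; repeatedly applying it to concave geometry is exactly what produces the tangling the work sets out to eliminate. The array shapes and the square example are illustrative.

      import numpy as np

      def recede_boundary(nodes, rates, dt):
          """Naive recession of a closed, counter-clockwise 2D boundary polyline.
          nodes: (n, 2) node coordinates; rates: per-node recession speed taken
          from the thermal solve; dt: time step.  Each node moves inward along
          the average of its two adjacent outward edge normals."""
          n = len(nodes)
          normals = np.zeros_like(nodes)
          for i in range(n):
              t = nodes[(i + 1) % n] - nodes[i]            # edge i -> i+1
              en = np.array([t[1], -t[0]])                 # outward normal for CCW ordering
              en /= np.linalg.norm(en)
              normals[i] += en                             # accumulate onto both end nodes
              normals[(i + 1) % n] += en
          normals /= np.linalg.norm(normals, axis=1, keepdims=True)
          return nodes - dt * rates[:, None] * normals     # recession = move against the normal

      # unit square receding uniformly; corners move diagonally inward
      square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
      print(recede_boundary(square, np.full(4, 1.0), 0.1))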

  16. Creating Interoperable Meshing and Discretization Software: The Terascale Simulation Tools and Technology Center

    SciTech Connect

    Brown, D.; Freitag, L.; Glimm, J.

    2002-03-28

    We present an overview of the technical objectives of the Terascale Simulation Tools and Technologies center. The primary goal of this multi-institution collaboration is to develop technologies that enable application scientists to easily use multiple mesh and discretization strategies within a single simulation on terascale computers. The discussion focuses on our efforts to create interoperable mesh generation tools, high-order discretization techniques, and adaptive meshing strategies.

  17. Host response to synthetic mesh in women with mesh complications

    PubMed Central

    Nolfi, Alexis L.; Brown, Bryan N.; Liang, Rui; Palcsey, Stacy L.; Bonidie, Michael J.; Abramowitch, Steven D.; Moalli, Pamela A.

    2016-01-01

    BACKGROUND Despite good anatomic and functional outcomes, urogynecologic polypropylene meshes that are used to treat pelvic organ prolapse and stress urinary incontinence are associated with significant complications, most commonly mesh exposure and pain. Few studies have been performed that specifically focus on the host response to urogynecologic meshes. The macrophage has long been known to be the key cell type that mediates the foreign body response. Conceptually, macrophages that respond to a foreign body can be dichotomized broadly into M1 proinflammatory and M2 proremodeling subtypes. A prolonged M1 response is thought to result in chronic inflammation and the formation of foreign body giant cells with potential for ongoing tissue damage and destruction. Although a limited M2 predominant response is favorable for tissue integration and ingrowth, excessive M2 activity can lead to accelerated fibrillar matrix deposition and result in fibrosis and encapsulation of the mesh. OBJECTIVE The purpose of this study was to define and compare the macrophage response in patients who undergo mesh excision surgery for the indication of pain vs a mesh exposure. STUDY DESIGN Patients who were scheduled to undergo a surgical excision of mesh for pain or exposure at Magee-Womens Hospital were offered enrollment. Twenty-seven mesh-vagina complexes that were removed for the primary complaint of a mesh exposure (n = 15) vs pain in the absence of an exposure (n = 12) were compared with 30 full-thickness vaginal biopsy specimens from women who underwent benign gynecologic surgery without mesh. Macrophage M1 proinflammatory vs M2 proremodeling phenotypes were examined via immunofluorescent labeling for cell surface markers CD86 (M1) vs CD206 (M2) and M1 vs M2 cytokines via enzyme-linked immunosorbent assay. The amount of matrix metalloproteinase-2 (MMP-2) and matrix metalloproteinase-9 (MMP-9) proteolytic enzymes were quantified by zymography and substrate degradation assays, as an

  18. Risk Factors for Mesh Exposure after Transvaginal Mesh Surgery

    PubMed Central

    Niu, Ke; Lu, Yong-Xian; Shen, Wen-Jie; Zhang, Ying-Hui; Wang, Wen-Ying

    2016-01-01

    Background: Mesh exposure after surgery continues to be a clinical challenge for urogynecological surgeons. The purpose of this study was to explore the risk factors for polypropylene (PP) mesh exposure after transvaginal mesh (TVM) surgery. Methods: This study included 195 patients with advanced pelvic organ prolapse (POP), who underwent TVM from January 2004 to December 2012 at the First Affiliated Hospital of Chinese PLA General Hospital. Clinical data were evaluated including patient's demography, TVM type, concomitant procedures, operation time, blood loss, postoperative morbidity, and mesh exposure. Mesh exposure was identified through postoperative vaginal examination. Statistical analysis was performed to identify risk factors for mesh exposure. Results: Two-hundred and nine transvaginal PP meshes were placed, including 194 in the anterior wall and 15 in the posterior wall. Concomitant tension-free vaginal tape was performed in 61 cases. The mean follow-up time was 35.1 ± 23.6 months. PP mesh exposure was identified in 32 cases (16.4%), with 31 in the anterior wall and 1 in the posterior wall. Significant difference was found in operating time and concomitant procedures between exposed and nonexposed groups (F = 7.443, P = 0.007; F = 4.307, P = 0.039, respectively). Binary logistic regression revealed that the number of concomitant procedures and operation time were risk factors for mesh exposure (P = 0.001, P = 0.043). Conclusion: Concomitant procedures and increased operating time increase the risk for postoperative mesh exposure in patients undergoing TVM surgery for POP. PMID:27453227

  19. SU-E-J-127: Real-Time Dosimetric Assessment for Adaptive Head-And-Neck Treatment Via A GPU-Based Deformable Image Registration Framework

    SciTech Connect

    Qi, S; Neylon, J; Chen, A; Low, D; Kupelian, P; Steinberg, M; Santhanam, A

    2014-06-01

    Purposes: To systematically monitor anatomic variations and their dosimetric consequences during head-and-neck (H&N) radiation therapy using a GPU-based deformable image registration (DIR) framework. Methods: Eleven H&N IMRT patients comprised the subject population. Daily megavoltage CT and weekly kVCT scans were acquired for each patient. The pre-treatment CTs were automatically registered with their corresponding planning CT through an in-house GPU-based DIR framework. The deformation of each contoured structure was computed to account for non-rigid changes in the patient setup. The Jacobian determinant for the PTVs and critical structures was used to quantify anatomical volume changes. Dose accumulation was performed to determine the actual delivered dose. A landmark tool was developed to determine the uncertainty in the dose distribution due to registration error. Results: Dramatic interfraction anatomic changes leading to dosimetric variations were observed. During the treatment courses of 6–7 weeks, the parotid gland volumes changed by up to 34.7%, and the center-of-mass displacement of the two parotids varied in the range of 0.9–8.8 mm. Mean doses were within 5% and 3% of the planned mean doses for all PTVs and CTVs, respectively. The cumulative minimum/mean/EUD doses were lower than the planned doses by 18%, 2%, and 7%, respectively, for PTV1. The ratio of the averaged cumulative cord maximum doses to the plan was 1.06±0.15. The cumulative mean doses assessed by the weekly kVCTs were significantly higher than the planned dose for the left-parotid (p=0.03) and right-parotid gland (p=0.006). The computation time was nearly real-time (∼45 seconds) for registering each pre-treatment CT to the planning CT and performing dose accumulation, with registration accuracy (for kVCT) at the sub-voxel level (<1.5mm). Conclusions: Real-time assessment of anatomic and dosimetric variations is feasible using the GPU-based DIR framework. Clinical implementation
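
    The Jacobian-determinant measure of local volume change used here is straightforward to compute from a displacement field. The sketch below is a generic NumPy version, not the in-house GPU code, with an illustrative synthetic field whose determinant is 1.1 everywhere.

      import numpy as np

      def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
          """Voxel-wise Jacobian determinant of the deformation x -> x + u(x).
          disp: displacement field of shape (3, nz, ny, nx) in the same units as
          `spacing`.  det J > 1 means local expansion, < 1 local shrinkage."""
          J = np.empty(disp.shape[1:] + (3, 3))
          for i in range(3):                               # component u_i
              grads = np.gradient(disp[i], *spacing)       # d u_i / d x_j along axes z, y, x
              for j in range(3):
                  J[..., i, j] = grads[j] + (1.0 if i == j else 0.0)  # add the identity
          return np.linalg.det(J)

      # toy field: uniform 10% stretch along z only -> det J = 1.1 everywhere
      z = np.arange(8, dtype=float)
      u = np.zeros((3, 8, 8, 8))
      u[0] = 0.1 * z[:, None, None]
      print(jacobian_determinant(u).mean())                # ~1.1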

  20. Deformable image registration based automatic CT-to-CT contour propagation for head and neck adaptive radiotherapy in the routine clinical setting

    SciTech Connect

    Kumarasiri, Akila; Siddiqui, Farzan; Liu, Chang; Yechieli, Raphael; Shah, Mira; Pradhan, Deepak; Zhong, Hualiang; Chetty, Indrin J.; Kim, Jinkoo

    2014-12-15

    Purpose: To evaluate the clinical potential of deformable image registration (DIR)-based automatic propagation of physician-drawn contours from a planning CT to midtreatment CT images for head and neck (H and N) adaptive radiotherapy. Methods: Ten H and N patients, each with a planning CT (CT1) and a subsequent CT (CT2) taken approximately 3–4 weeks into treatment, were considered retrospectively. Clinically relevant organs and targets were manually delineated by a radiation oncologist on both sets of images. Four commercial DIR algorithms, two B-spline-based and two Demons-based, were used to deform CT1 and the relevant contour sets onto the corresponding CT2 images. Agreement of the propagated contours with the manually drawn contours on CT2 was visually rated by four radiation oncologists on a scale from 1 to 5, the volume overlap was quantified using Dice coefficients, and a distance analysis was done using center of mass (CoM) displacements and Hausdorff distances (HDs). Performance of these four commercial algorithms was validated using a parameter-optimized Elastix DIR algorithm. Results: All algorithms attained Dice coefficients of >0.85 for organs with clear boundaries and those with volumes >9 cm³. Organs with volumes <3 cm³ and/or those with poorly defined boundaries showed Dice coefficients of ∼0.5–0.6. For the propagation of small organs (<3 cm³), the B-spline-based algorithms showed higher mean Dice values (Dice = 0.60) than the Demons-based algorithms (Dice = 0.54). For the gross and planning target volumes, the respective mean Dice coefficients were 0.8 and 0.9. There was no statistically significant difference in the Dice coefficients, CoM, or HD among the investigated DIR algorithms. The mean radiation oncologist visual scores of the four algorithms ranged from 3.2 to 3.8, which indicated that the quality of the transferred contours was “clinically acceptable with minor modification or major modification in a small number of contours
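
    The three agreement measures used in this comparison are easy to reproduce with SciPy. The snippet below is a minimal illustration, not the study's evaluation code; the overlapping-sphere masks and unit voxel spacing are made-up inputs.

      import numpy as np
      from scipy.ndimage import center_of_mass
      from scipy.spatial.distance import directed_hausdorff

      def dice(a, b):
          """Dice coefficient of two binary masks (propagated vs manual contour)."""
          a, b = a.astype(bool), b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def com_displacement(a, b, spacing):
          """Center-of-mass displacement in physical units (e.g. mm)."""
          ca, cb = np.array(center_of_mass(a)), np.array(center_of_mass(b))
          return np.linalg.norm((ca - cb) * np.asarray(spacing))

      def hausdorff(points_a, points_b):
          """Symmetric Hausdorff distance between two point sets of shape (n, 3)."""
          return max(directed_hausdorff(points_a, points_b)[0],
                     directed_hausdorff(points_b, points_a)[0])

      # toy example: two overlapping spheres on a 3D grid, shifted by 2 voxels
      zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
      m1 = (zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 10 ** 2
      m2 = (zz - 22) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2 < 10 ** 2
      print(dice(m1, m2), com_displacement(m1, m2, (1.0, 1.0, 1.0)))  # ~0.85, ~2.0 mm
      print(hausdorff(np.argwhere(m1), np.argwhere(m2)))              # ~2 voxels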

  1. Invisible metallic mesh

    PubMed Central

    Ye, Dexin; Lu, Ling; Joannopoulos, John D.; Soljačić, Marin; Ran, Lixin

    2016-01-01

    A solid material possessing identical electromagnetic properties as air has yet to be found in nature. Such a medium of arbitrary shape would neither reflect nor refract light at any angle of incidence in free space. Here, we introduce nonscattering corrugated metallic wires to construct such a medium. This was accomplished by aligning the dark-state frequencies in multiple scattering channels of a single wire. Analytical solutions, full-wave simulations, and microwave measurement results on 3D printed samples show omnidirectional invisibility in any configuration. This invisible metallic mesh can improve mechanical stability, electrical conduction, and heat dissipation of a system, without disturbing the electromagnetic design. Our approach is simple, robust, and scalable to higher frequencies. PMID:26884208

  2. Quadrilateral finite element mesh coarsening

    SciTech Connect

    Staten, Matthew L; Dewey, Mark W; Benzley, Steven E

    2012-10-16

    Techniques for coarsening a quadrilateral mesh are described. These techniques include identifying a coarsening region within the quadrilateral mesh to be coarsened. Quadrilateral elements along a path through the coarsening region are removed. Node pairs along opposite sides of the path are identified. The node pairs along the path are then merged to collapse the path.

  3. Particle Mesh Hydrodynamics for Astrophysics Simulations

    NASA Astrophysics Data System (ADS)

    Chatelain, Philippe; Cottet, Georges-Henri; Koumoutsakos, Petros

    We present a particle method for the simulation of three-dimensional compressible hydrodynamics based on a hybrid particle-mesh discretization of the governing equations. The method is rooted in the regularization of particle locations as in remeshed Smoothed Particle Hydrodynamics (rSPH). The rSPH method was recently introduced to remedy problems associated with the distortion of computational elements in SPH, by periodically re-initializing the particle positions and by using high-order interpolation kernels. In the PMH formulation, the particles solely handle the convective part of the compressible Euler equations. The particle quantities are then interpolated onto a mesh, where the pressure terms are computed. PMH, like SPH, is free of the convection CFL condition while at the same time being more efficient, as derivatives are computed on a mesh rather than through particle-particle interactions. PMH does not detract from the adaptive character of SPH and allows for control of its accuracy. We present simulations of a benchmark astrophysics problem demonstrating the capabilities of this approach.
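
    The particle-to-mesh step at the heart of such a scheme can be illustrated with a one-dimensional deposition kernel. The sketch below uses simple linear (cloud-in-cell) weights on a periodic grid purely for illustration; the PMH method itself relies on the high-order remeshing kernels of rSPH, which are not reproduced here.

      import numpy as np

      def deposit_cic_1d(x_p, q_p, n_cells, dx):
          """Deposit particle quantities q_p at positions x_p onto a periodic 1D
          mesh with linear (cloud-in-cell) weights: each particle shares its
          quantity between the two nearest grid nodes."""
          grid = np.zeros(n_cells)
          s = x_p / dx                               # position in cell units
          i = np.floor(s).astype(int) % n_cells      # left node index
          w = s - np.floor(s)                        # fractional distance to the left node
          np.add.at(grid, i, (1.0 - w) * q_p)
          np.add.at(grid, (i + 1) % n_cells, w * q_p)
          return grid

      # four particles carrying unit "mass" on a periodic 8-cell mesh
      print(deposit_cic_1d(np.array([0.25, 1.5, 1.5, 6.9]), np.ones(4), 8, 1.0))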

  4. Micromachined, Electrostatically Deformable Reflectors

    NASA Technical Reports Server (NTRS)

    Bartman, Randall K.; Wang, Paul K. C.; Miller, Linda M.; Kenny, Thomas W.; Kaiser, William J.; Hadaegh, Fred Y.; Agronin, Michael L.

    1995-01-01

    Micromachined, closed-loop, electrostatically actuated reflectors (microCLEARs) provide relatively simple and inexpensive alternatives to the large, complex, expensive adaptive optics used to control the wavefronts of beams of light in astronomy and in experimental laser weapons. Micromachining is used to make the deformable mirror, supporting structure, and actuation circuitry. Development of microCLEARs may not only overcome some of the disadvantages and limitations of older adaptive optics but may also satisfy the demands of a potential market for small, inexpensive deformable mirrors in electronically controlled film cameras, video cameras, and other commercial optoelectronic instruments.

  5. Which mesh for hernia repair?

    PubMed Central

    Brown, CN; Finch, JG

    2010-01-01

    INTRODUCTION The concept of using a mesh to repair hernias was introduced over 50 years ago. Mesh repair is now standard in most countries and widely accepted as superior to primary suture repair. As a result, there has been a rapid growth in the variety of meshes available, and choosing the appropriate one can be difficult. This article outlines the general properties of meshes and factors to be considered when selecting one. MATERIALS AND METHODS We performed a search of the medical literature from 1950 to 1 May 2009, as indexed by Medline, using the PubMed search engine. To capture all potentially relevant articles with the highest degree of sensitivity, the search terms were intentionally broad. We used the following terms: ‘mesh, pore size, strength, recurrence, complications, lightweight, properties’. We also hand-searched the bibliographies of relevant articles and product literature to identify additional pertinent reports. RESULTS AND CONCLUSIONS The most important properties of meshes were found to be the type of filament, tensile strength and porosity. These determine the weight of the mesh and its biocompatibility. The tensile strength required is much less than originally presumed, and lightweight meshes are thought to be superior due to their increased flexibility and reduction in discomfort. Large pores are also associated with a reduced risk of infection and shrinkage. For meshes placed in the peritoneal cavity, consideration should also be given to the risk of adhesion formation. A variety of composite meshes have been promoted to address this, but none appears superior to the others. Finally, biomaterials such as acellular dermis have a place for use in infected fields but have yet to prove their worth in routine hernia repair. PMID:20501011

  6. On a Moving Mesh Method Applied to the Shallow Water Equations

    NASA Astrophysics Data System (ADS)

    Felcman, J.; Kadrnka, L.

    2010-09-01

    The moving mesh method is applied to the numerical solution of the shallow water equations. The original numerical flux of the Vijayasundaram type is used in the finite volume method. The mesh adaptation procedure is described. The relevant numerical examples are presented.

  7. Deformable Image Registration for Adaptive Radiation Therapy of Head and Neck Cancer: Accuracy and Precision in the Presence of Tumor Changes

    SciTech Connect

    Mencarelli, Angelo; Kranen, Simon Robert van; Hamming-Vrieze, Olga; Beek, Suzanne van; Nico Rasch, Coenraad Robert; Herk, Marcel van; Sonke, Jan-Jakob

    2014-11-01

    Purpose: To compare deformable image registration (DIR) accuracy and precision for normal and tumor tissues in head and neck cancer patients during the course of radiation therapy (RT). Methods and Materials: Thirteen patients with oropharyngeal tumors, who underwent submucosal implantation of small gold markers (average 6, range 4-10) around the tumor and were treated with RT were retrospectively selected. Two observers identified 15 anatomical features (landmarks) representative of normal tissues in the planning computed tomography (pCT) scan and in weekly cone beam CTs (CBCTs). Gold markers were digitally removed after semiautomatic identification in pCTs and CBCTs. Subsequently, landmarks and gold markers on pCT were propagated to CBCTs, using a b-spline-based DIR and, for comparison, rigid registration (RR). To account for observer variability, the pair-wise difference analysis of variance method was applied. DIR accuracy (systematic error) and precision (random error) for landmarks and gold markers were quantified. Time trend of the precisions for RR and DIR over the weekly CBCTs were evaluated. Results: DIR accuracies were submillimeter and similar for normal and tumor tissue. DIR precision (1 SD) on the other hand was significantly different (P<.01), with 2.2 mm vector length in normal tissue versus 3.3 mm in tumor tissue. No significant time trend in DIR precision was found for normal tissue, whereas in tumor, DIR precision was significantly (P<.009) degraded during the course of treatment by 0.21 mm/week. Conclusions: DIR for tumor registration proved to be less precise than that for normal tissues due to limited contrast and complex non-elastic tumor response. Caution should therefore be exercised when applying DIR for tumor changes in adaptive procedures.
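
    As a rough illustration of how the reported accuracy (systematic error) and precision (random error) can be obtained from landmark residuals, the snippet below takes per-landmark 3D error vectors and reports the vector length of their mean and of their standard deviation. It is a plain mean/SD split on synthetic residuals; the paper's pair-wise difference analysis-of-variance correction for observer variability is not reproduced.

      import numpy as np

      def accuracy_precision(errors_mm):
          """errors_mm: (n, 3) per-landmark residual vectors (mm) after registration."""
          accuracy = np.linalg.norm(errors_mm.mean(axis=0))          # systematic error
          precision = np.linalg.norm(errors_mm.std(axis=0, ddof=1))  # random error, 1 SD
          return accuracy, precision

      # synthetic residuals with a 0.2 mm bias per axis and a 1 mm spread
      rng = np.random.default_rng(0)
      print(accuracy_precision(rng.normal(0.2, 1.0, size=(50, 3))))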

  8. Feature-driven deformation for dense correspondence

    NASA Astrophysics Data System (ADS)

    Ghosh, Deboshmita; Sharf, Andrei; Amenta, Nina

    2009-02-01

    Establishing reliable correspondences between object surfaces is a fundamental operation, required in many contexts such as cleaning up and completing imperfect captured data, texture and deformation transfer, shape-space analysis and exploration, and the automatic generation of realistic distributions of objects. We present a method for matching a template to a collection of target meshes. Our method uses a very small number of user-placed landmarks, which we augment with automatically detected feature correspondences found using spin images. We deform the template onto the data using an ICP-like framework, smoothing the noisy correspondences at each step so as to produce an averaged motion. The deformation uses a differential representation of the mesh, with which the deformation can be computed at each iteration by solving a sparse linear system. We have applied our algorithm to a variety of data sets. Using only 11 landmarks between a template and one of the scans from the CAESAR data set, we are able to deform the template and correctly identify and transfer distinctive features which are not identified by the user-supplied landmarks. We have also successfully established correspondences between several scans of monkey skulls, which have dangling triangles, non-manifold vertices, and self-intersections. Our algorithm does not require a clean target mesh, and can even generate correspondences without trimming extraneous pieces from the target mesh, such as scans of teeth.
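
    The "differential representation plus sparse linear solve" step can be sketched in a few lines. The code below is the generic Laplacian-editing idea under simplifying assumptions (uniform weights, soft positional constraints, one least-squares solve per axis), not the paper's ICP-driven pipeline; the chain mesh, handle indices and weight w are illustrative.

      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import lsqr

      def laplacian_deform(verts, edges, handles, targets, w=10.0):
          """Keep each vertex's uniform Laplacian (delta coordinates) while softly
          pinning handle vertices to target positions, solved as one sparse
          least-squares system per coordinate axis."""
          n = len(verts)
          deg = np.zeros(n)
          for a, b in edges:
              deg[a] += 1
              deg[b] += 1
          rows, cols, vals = list(range(n)), list(range(n)), [1.0] * n   # identity part
          for a, b in edges:                                             # -1/deg off-diagonals
              rows.extend([a, b])
              cols.extend([b, a])
              vals.extend([-1.0 / deg[a], -1.0 / deg[b]])
          L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
          delta = L @ verts                                              # differential coordinates
          C = sp.csr_matrix((np.full(len(handles), w),
                             (range(len(handles)), handles)), shape=(len(handles), n))
          A = sp.vstack([L, C]).tocsr()
          out = np.empty_like(verts)
          for k in range(verts.shape[1]):                                # solve per axis
              b = np.concatenate([delta[:, k], w * targets[:, k]])
              out[:, k] = lsqr(A, b)[0]
          return out

      # tiny chain of 4 vertices; drag the last vertex upward while fixing the first
      v = np.array([[0., 0.], [1., 0.], [2., 0.], [3., 0.]])
      e = [(0, 1), (1, 2), (2, 3)]
      print(laplacian_deform(v, e, handles=[0, 3], targets=np.array([[0., 0.], [3., 1.]])))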

  9. 21 CFR 878.3300 - Surgical mesh.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... GENERAL AND PLASTIC SURGERY DEVICES Prosthetic Devices § 878.3300 Surgical mesh. (a) Identification... acetabular and cement restrictor mesh used during orthopedic surgery. (b) Classification. Class II....

  10. 21 CFR 878.3300 - Surgical mesh.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... GENERAL AND PLASTIC SURGERY DEVICES Prosthetic Devices § 878.3300 Surgical mesh. (a) Identification... acetabular and cement restrictor mesh used during orthopedic surgery. (b) Classification. Class II....

  11. 21 CFR 878.3300 - Surgical mesh.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... GENERAL AND PLASTIC SURGERY DEVICES Prosthetic Devices § 878.3300 Surgical mesh. (a) Identification... acetabular and cement restrictor mesh used during orthopedic surgery. (b) Classification. Class II....

  12. Streaming Compression of Hexahedral Meshes

    SciTech Connect

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder holds only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  13. Adaptive remeshing method in 2D based on refinement and coarsening techniques

    NASA Astrophysics Data System (ADS)

    Giraud-Moreau, L.; Borouchaki, H.; Cherouat, A.

    2007-04-01

    The analysis of mechanical structures using the Finite Element Method, in the framework of large elastoplastic strains, needs frequent remeshing of the deformed domain during computation. Remeshing is necessary for two main reasons, the large geometric distortion of finite elements and the adaptation of the mesh size to the physical behavior of the solution. This paper presents an adaptive remeshing method to remesh a mechanical structure in two dimensions subjected to large elastoplastic deformations with damage. The proposed remeshing technique includes adaptive refinement and coarsening procedures, based on geometrical and physical criteria. The proposed method has been integrated in a computational environment using the ABAQUS solver. Numerical examples show the efficiency of the proposed approach.

  14. Advances in the development of wire mesh reactor for coal gasification studies.

    PubMed

    Zeng, Cai; Chen, Lei; Liu, Gang; Li, Wenhua; Huang, Baoming; Zhu, Hongdong; Zhang, Bing; Zamansky, Vladimir

    2008-08-01

    In an effort to further understand the coal gasification behavior in entrained-flow gasifiers, a high-pressure, high-temperature wire mesh reactor with new features was recently built. An advanced LABVIEW-based temperature measurement and control system was adapted. Molybdenum wire mesh with an aperture smaller than 70 μm and a type D thermocouple were used to enable high carbon conversion (>90%) at temperatures >1000 °C. Gaseous species from the wire mesh reactor were quantified using high-sensitivity gas chromatography. The material balance of coal pyrolysis in the wire mesh reactor was demonstrated for the first time by improving the volatiles' quantification techniques.

  15. Advances in the development of wire mesh reactor for coal gasification studies - article no. 084102

    SciTech Connect

    Zeng, C.; Chen, L.; Liu, G.; Li, W.H.; Huang, B.M.; Zhu, H.D.; Zhang, B.; Zamansky, V.

    2008-08-15

    In an effort to further understand the coal gasification behavior in entrained-flow gasifiers, a high-pressure, high-temperature wire mesh reactor with new features was recently built. An advanced LABVIEW-based temperature measurement and control system was adapted. Molybdenum wire mesh with an aperture smaller than 70 μm and a type D thermocouple were used to enable high carbon conversion (>90%) at temperatures >1000 °C. Gaseous species from the wire mesh reactor were quantified using high-sensitivity gas chromatography. The material balance of coal pyrolysis in the wire mesh reactor was demonstrated for the first time by improving the volatiles' quantification techniques.

  16. Automatic processing of an orientation map into a finite element mesh that conforms to grain boundaries

    NASA Astrophysics Data System (ADS)

    Dancette, S.; Browet, A.; Martin, G.; Willemet, M.; Delannay, L.

    2016-06-01

    A new procedure for microstructure-based finite element modeling of polycrystalline aggregates is presented. The proposed method relies (i) on an efficient graph-based community detection algorithm for crystallographic data segmentation and feature contour extraction and (ii) on the generation of selectively refined meshes conforming to grain boundaries. It constitutes a versatile and close to automatic environment for meshing complex microstructures. The procedure is illustrated with polycrystal microstructures characterized by orientation imaging microscopy. Hot deformation of a Duplex stainless steel is investigated based on ex-situ EBSD measurements performed on the same region of interest before and after deformation. A finite element mesh representing the initial microstructure is generated and then used in a crystal plasticity simulation of the plane strain compression. Simulation results and experiments are in relatively good agreement, confirming a large potential for such directly coupled experimental and modeling analyses, which is facilitated by the present image-based meshing procedure.

  17. Recent Enhancements To The FUN3D Flow Solver For Moving-Mesh Applications

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Thomas, James L.

    2009-01-01

    An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids has been extended to handle general mesh movement involving rigid, deforming, and overset meshes. Mesh deformation is achieved through analogy to elastic media by solving the linear elasticity equations. A general method for specifying the motion of moving bodies within the mesh has been implemented that allows for inherited motion through parent-child relationships, enabling simulations involving multiple moving bodies. Several example calculations are shown to illustrate the range of potential applications. For problems in which an isolated body is rotating with a fixed rate, a noninertial reference-frame formulation is available. An example calculation for a tilt-wing rotor is used to demonstrate that the time-dependent moving grid and noninertial formulations produce the same results in the limit of zero time-step size.

  18. Nanowire mesh solar fuels generator

    DOEpatents

    Yang, Peidong; Chan, Candace; Sun, Jianwei; Liu, Bin

    2016-05-24

    This disclosure provides systems, methods, and apparatus related to a nanowire mesh solar fuels generator. In one aspect, a nanowire mesh solar fuels generator includes (1) a photoanode configured to perform water oxidation and (2) a photocathode configured to perform water reduction. The photocathode is in electrical contact with the photoanode. The photoanode may include a high surface area network of photoanode nanowires. The photocathode may include a high surface area network of photocathode nanowires. In some embodiments, the nanowire mesh solar fuels generator may include an ion conductive polymer infiltrating the photoanode and the photocathode in the region where the photocathode is in electrical contact with the photoanode.

  19. An unstructured-mesh atmospheric model for nonhydrostatic dynamics: Towards optimal mesh resolution

    NASA Astrophysics Data System (ADS)

    Szmelter, Joanna; Zhang, Zhao; Smolarkiewicz, Piotr K.

    2015-08-01

    The paper advances the limited-area anelastic model (Smolarkiewicz et al. (2013) [45]) for investigation of nonhydrostatic dynamics in mesoscale atmospheric flows. New developments include the extension to a tetrahedral-based median-dual option for unstructured meshes and a static mesh adaptivity technique using an error indicator based on inherent properties of the Multidimensional Positive Definite Advection Transport Algorithm (MPDATA). The model employs semi-implicit nonoscillatory forward-in-time integrators for soundproof PDEs, built on MPDATA and a robust non-symmetric Krylov-subspace elliptic solver. Finite-volume spatial discretisation adopts an edge-based data structure. Simulations of stratified orographic flows and the associated gravity-wave phenomena in media with uniform and variable dispersive properties verify the advancement and demonstrate the potential of heterogeneous anisotropic discretisation with large variation in spatial resolution for study of complex stratified flows that can be computationally unattainable with regular grids.

  20. Toward Interoperable Mesh, Geometry and Field Components for PDE Simulation Development

    SciTech Connect

    Chand, K K; Diachin, L F; Li, X; Ollivier-Gooch, C; Seol, E S; Shephard, M; Tautges, T; Trease, H

    2005-07-11

    Mesh-based PDE simulation codes are becoming increasingly sophisticated and rely on advanced meshing and discretization tools. Unfortunately, it is still difficult to interchange or interoperate tools developed by different communities to experiment with various technologies or to develop new capabilities. To address these difficulties, we have developed component interfaces designed to support the information flow of mesh-based PDE simulations. We describe this information flow and discuss typical roles and services provided by the geometry, mesh, and field components of the simulation. Based on this delineation for the roles of each component, we give a high-level description of the abstract data model and set of interfaces developed by the Department of Energy's Interoperable Tools for Advanced Petascale Simulation (ITAPS) center. These common interfaces are critical to our interoperability goal, and we give examples of several services based upon these interfaces including mesh adaptation and mesh improvement.

  1. Managing chronic pelvic pain following reconstructive pelvic surgery with transvaginal mesh.

    PubMed

    Gyang, Anthony N; Feranec, Jessica B; Patel, Rakesh C; Lamvu, Georgine M

    2014-03-01

    In 2001, the US Food and Drug Administration (FDA) approved the first transvaginal mesh kit to treat pelvic organ prolapse (POP). Since the introduction of vaginal mesh kits, some vaginal meshes have been associated with chronic pelvic pain after reconstructive pelvic floor surgery. Pelvic pain occurs in between 0% and 30% of patients following transvaginal mesh placement. Common causes of chronic pelvic pain include pelvic floor muscle spasm, pudendal neuralgia, and infection. A paucity of data exists on the effective management of chronic pelvic pain after pelvic reconstructive surgery with mesh. We outline the management of chronic pelvic pain after transvaginal mesh placement for reconstructive pelvic floor repair based on our clinical experience and an adaptation of data used in other aspects of managing chronic pelvic pain conditions.

  2. Exact mesh shape design of large cable-network antenna reflectors with flexible ring truss supports

    NASA Astrophysics Data System (ADS)

    Liu, Wang; Li, Dong-Xu; Yu, Xin-Zhan; Jiang, Jian-Ping

    2014-04-01

    An exactly designed mesh shape with favorable surface accuracy is of practical significance to the performance of large cable-network antenna reflectors. In this study, a novel design approach is proposed that can guide the generation of exact spatial parabolic mesh configurations for such reflectors. By incorporating the traditional force density method with the standard finite element method, the proposed approach takes the deformation effects of flexible ring truss supports into consideration and searches for the desired mesh shapes that satisfy the requirement that all free nodes lie exactly on the objective paraboloid. Compared with the conventional design method, a remarkable improvement of surface accuracy in the obtained mesh shapes is demonstrated by numerical examples. The present work provides a helpful technical reference for the mesh shape design of such cable-network antenna reflectors in engineering practice.

  3. Gradient scaling for nonuniform meshes

    SciTech Connect

    Margolin, L.G.; Ruppel, H.M.; Demuth, R.B.

    1985-01-01

    This paper is concerned with the effect of nonuniform meshes on the accuracy of finite-difference calculations of fluid flow. In particular, when a simple shock propagates through a nonuniform mesh, one may fail to model the jump conditions across the shock even when the equations are differenced in manifestly conservative fashion. We develop an approximate dispersion analysis of the numerical equations and identify the source of the mesh dependency with the form of the artificial viscosity. We then derive an algebraic correction to the numerical equations - a scaling factor for the pressure gradient - to essentially eliminate the mesh dependency. We present several calculations to illustrate our theory. We conclude with an alternate interpretation of our results. 14 refs., 5 figs.

  4. Unstructured mesh methods for CFD

    NASA Technical Reports Server (NTRS)

    Peraire, J.; Morgan, K.; Peiro, J.

    1990-01-01

    Mesh generation methods for Computational Fluid Dynamics (CFD) are outlined. Geometric modeling is discussed. An advancing front method is described. Flow past a two-engine Falcon aeroplane is studied. An algorithm and associated data structure called the alternating digital tree, which efficiently solves the geometric searching problem, is described. The computation of an initial approximation to the steady state solution of a given problem is described. Mesh generation for transient flows is described.

  5. Mesh refinement in finite element analysis by minimization of the stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1989-01-01

    Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the mesh generated is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value such that the error is below a predetermined tolerance. A posteriori methods use error indicators, developed using interpolation and approximation theory, for mesh refinement. Some use other criteria, such as strain energy density variation and stress contours, to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a priori methods available until now use geometrical parameters, for example the element aspect ratio, and are therefore not adaptive by nature. An adaptive a priori method is developed here. The criterion is that minimization of the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a posteriori methods of grid refinement, it is shown that fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The mesh obtained is shown to have a uniform distribution of stiffness among the nodes and elements which, as a consequence, leads to a uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.
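
    A tiny worked example makes the criterion concrete. For a two-element 1D bar on [0, 1], the trace of the assembled stiffness matrix is 2EA/L1 + 2EA/L2, and minimizing it over the interior node position recovers the uniform mesh. The snippet below (illustrative values E = A = 1, SciPy for the minimization, not code from the paper) simply carries out that minimization numerically.

      import numpy as np
      from scipy.optimize import minimize_scalar

      E, A = 1.0, 1.0   # illustrative material and section properties

      def stiffness_trace(x_mid):
          """Trace of the assembled stiffness matrix of a two-element 1D bar on
          [0, 1] whose single interior node sits at x_mid.  Each element of
          length L contributes EA/L to two diagonal entries."""
          L1, L2 = x_mid, 1.0 - x_mid
          return 2 * E * A / L1 + 2 * E * A / L2

      res = minimize_scalar(stiffness_trace, bounds=(0.05, 0.95), method="bounded")
      print(res.x)   # ~0.5: the trace criterion reproduces the uniform mesh here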

  6. Method and system for mesh network embedded devices

    NASA Technical Reports Server (NTRS)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for managing mesh network devices. A mesh network device with integrated features creates an N-way mesh network with a full mesh network topology or a partial mesh network topology.

  7. User Manual for the PROTEUS Mesh Tools

    SciTech Connect

    Smith, Micheal A.; Shemon, Emily R.

    2015-06-01

    This report describes the various mesh tools that are provided with the PROTEUS code giving both descriptions of the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS while the second allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific for a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.

  8. The finite cell method for polygonal meshes: poly-FCM

    NASA Astrophysics Data System (ADS)

    Duczek, Sascha; Gabbert, Ulrich

    2016-10-01

    In the current article, we extend the two-dimensional version of the finite cell method (FCM), which has so far only been used for structured quadrilateral meshes, to unstructured polygonal discretizations. Therefore, the adaptive quadtree-based numerical integration technique is reformulated and the notion of generalized barycentric coordinates is introduced. We show that the resulting polygonal (poly-)FCM approach retains the optimal rates of convergence if and only if the geometry of the structure is adequately resolved. The main advantage of the proposed method is that it inherits the ability of polygonal finite elements for local mesh refinement and for the construction of transition elements (e.g. conforming quadtree meshes without hanging nodes). These properties along with the performance of the poly-FCM are illustrated by means of several benchmark problems for both static and dynamic cases.
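
    The adaptive quadtree integration that the FCM family relies on for cells cut by the geometry can be sketched as a short recursion. The version below is a plain midpoint-rule subdivision with a heuristic corner test, written for illustration only and not taken from the poly-FCM implementation; integrating the constant 1 over the quarter disc inside the unit square approximates π/4.

      def quadtree_integrate(f, inside, x0, y0, w, h, depth=0, max_depth=6):
          """Adaptive quadtree quadrature of f over the part of the rectangle
          [x0, x0+w] x [y0, y0+h] lying inside the domain given by inside(x, y)."""
          corners = [(x0, y0), (x0 + w, y0), (x0, y0 + h), (x0 + w, y0 + h)]
          flags = [inside(x, y) for x, y in corners]
          if all(flags):                                    # fully inside: midpoint rule
              return f(x0 + w / 2, y0 + h / 2) * w * h
          if not any(flags) and depth > 0:                  # (heuristically) fully outside
              return 0.0
          if depth == max_depth:                            # cut cell at the finest level
              mid_in = inside(x0 + w / 2, y0 + h / 2)
              return f(x0 + w / 2, y0 + h / 2) * w * h if mid_in else 0.0
          hw, hh = w / 2, h / 2                             # otherwise subdivide into 4
          return sum(quadtree_integrate(f, inside, x0 + i * hw, y0 + j * hh,
                                        hw, hh, depth + 1, max_depth)
                     for i in (0, 1) for j in (0, 1))

      # area of the quarter disc of radius 1 inside the unit square: ~pi/4
      print(quadtree_integrate(lambda x, y: 1.0,
                               lambda x, y: x * x + y * y <= 1.0,
                               0.0, 0.0, 1.0, 1.0))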

  9. Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Baker, John G.; Van Meter, James R.

    2005-01-01

    A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem by presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.

  10. Single port laparoscopic mesh rectopexy

    PubMed Central

    2016-01-01

    Introduction: Traditionally, laparoscopic mesh rectopexy is performed with four ports. In an attempt to improve cosmetic results, a new operative technique, single-port laparoscopic mesh rectopexy, has been introduced. Aim: To evaluate the single-port laparoscopic mesh rectopexy technique with respect to control of rectal prolapse and the cosmesis and body image issues of this technique. Material and methods: The study was conducted in El Fayoum University Hospital between July 2013 and November 2014 on 10 patients undergoing elective single-port laparoscopic mesh rectopexy for symptomatic rectal prolapse. Results: The study included 10 patients: 3 (30%) males and 7 (70%) females. Their ages ranged between 19 years and 60 years (mean: 40.3 ±6 years), and they all underwent laparoscopic mesh rectopexy. There were no conversions to the open technique, no injuries to the rectum or bowel, and no mortalities. Mean operative time was 120 min (range: 90–150 min), and mean hospital stay was 2 days (range: 1–3 days). Preoperatively, incontinence was seen in 5 (50%) patients and constipation in 4 (40%). Postoperatively, improvement in these symptoms was seen in 3 (60%) patients for incontinence and in 3 (75%) for constipation. Follow-up was done for 6 months and no recurrence was found, with better cosmetic appearance for all patients. Conclusions: Single-port laparoscopic mesh rectopexy is a safe procedure with good results as regards operative time, improvement in bowel function, morbidity, cost, and recurrence, and with better cosmetic appearance. PMID:27350840

  11. Robust and efficient overset grid assembly for partitioned unstructured meshes

    SciTech Connect

    Roget, Beatrice; Sitaraman, Jayanarayanan

    2014-03-01

    This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning. Another challenge arises because of the large variation in the type of mesh-block overlap and the resulting large load imbalance on multiple processors. Desirable traits for the grid assembly method are efficiency (requiring only a small fraction of the solver time), robustness (correct identification of all point types), and full automation (no user input required other than the mesh system). Additionally, the method should be scalable, which is an important challenge due to the inherent load imbalance. This paper describes a fully-automated grid assembly method, which can use two different donor search algorithms. One is based on the use of auxiliary grids and Exact Inverse Maps (EIM), and the other is based on the use of Alternating Digital Trees (ADT). The EIM method is demonstrated to be more efficient than the ADT method, while retaining robustness. An adaptive load re-balance algorithm is also designed and implemented, which considerably improves the scalability of the method.
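
    The elementary operation underneath any donor search is a point-in-cell containment test. The snippet below shows the 2D barycentric version purely as an illustration of that building block; the assembler described above works on 3D unstructured cells and accelerates the search with EIM or ADT lookups, which are not reproduced here.

      import numpy as np

      def contains(tri, p, eps=1e-12):
          """Barycentric point-in-triangle test: p lies in the triangle with
          vertices a, b, c iff its barycentric coordinates w.r.t. b and c are
          non-negative and sum to at most one."""
          a, b, c = (np.asarray(v, dtype=float) for v in tri)
          T = np.column_stack([b - a, c - a])
          try:
              l1, l2 = np.linalg.solve(T, np.asarray(p, dtype=float) - a)
          except np.linalg.LinAlgError:          # degenerate (zero-area) cell
              return False
          return l1 >= -eps and l2 >= -eps and (l1 + l2) <= 1.0 + eps

      print(contains([(0, 0), (1, 0), (0, 1)], (0.25, 0.25)))   # True
      print(contains([(0, 0), (1, 0), (0, 1)], (0.8, 0.8)))     # False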

  12. Cache-oblivious mesh layouts

    SciTech Connect

    Yoon, Sung-Eui; Lindstrom, Peter; Pascucci, Valerio; Manocha, Dinesh

    2005-07-01

    We present a novel method for computing cache-oblivious layouts of large meshes that improve the performance of interactive visualization and geometric processing algorithms. Given that the mesh is accessed in a reasonably coherent manner, we assume no particular data access patterns or cache parameters of the memory hierarchy involved in the computation. Furthermore, our formulation extends directly to computing layouts of multi-resolution and bounding volume hierarchies of large meshes. We develop a simple and practical cache-oblivious metric for estimating cache misses. Computing a coherent mesh layout is reduced to a combinatorial optimization problem. We designed and implemented an out-of-core multilevel minimization algorithm and tested its performance on unstructured meshes composed of tens to hundreds of millions of triangles. Our layouts can significantly reduce the number of cache misses. We have observed 2-20 times speedups in view-dependent rendering, collision detection, and isocontour extraction without any modification of the algorithms or runtime applications.

  13. Design, fabrication and characterization of high-stroke high-aspect ratio micro electro mechanical systems deformable mirrors for adaptive optics

    NASA Astrophysics Data System (ADS)

    Fernandez Rocha, Bautista

    Adaptive optics (AO) systems for the next generation of extremely large telescopes (30-50 meter diameter primary mirrors) require high-stroke (10 micron), high-order (100x100) deformable mirrors at lower cost than current technology. The required specifications are achievable with Micro Electro Mechanical Systems (MEMS) devices fabricated with high-aspect-ratio processing techniques. This dissertation reviews simulation results compared with displacement measurements of actuators obtained using a white-light interferometer. It also reviews different actuator designs, materials and post-processing procedures fabricated in three different high-aspect-ratio processes: Microfabrica's Electrochemical Fabrication (EFAB(TM)), HT-Micro's Precision Fabrication Technology (HTPF(TM)), and the Innovative Micro Technologies (IMT) fabrication process. These manufacturing processes allow high-precision multilayer fabrication, and their sacrificial layer thicknesses can be specified by the designer rather than by constraints of the fabrication process. Various types of high-stroke gold actuators for AO, consisting of folded springs with rectangular and circular membranes as well as X-beam actuators supported diagonally by beams, were designed, simulated, fabricated, and tested individually and as part of a continuous-facesheet DM system. The design, modeling and simulation of these actuators are compared to experimental measurements of their pull-in voltages, which characterize their stiffness and maximum stroke. Vertical parallel-plate ganged actuators fabricated with the EFAB(TM) process have a calculated pull-in voltage of 95 V for a 600 μm device. In contrast, the pull-in voltages for the comb-drive actuators ranged from 55 V for the largest actuator to 203 V for the smallest actuator. Simulations and interferometer scans of actuator designs fabricated with HT-Micro's Precision Fabrication (HTPF(TM)) two-wafer bonded process with different spring supports have shown the ability of

  14. Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS.

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko; Han, Min; Yeom, Yeon; Kim, Chan; Brown, Justin; Bolch, Wesley

    2017-04-04

    A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed of the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with a varying number of meshes, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of the region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also an improvement in computational efficiency for radiation transport. Owing to the adaptability of tetrahedra in both size and shape, dosimetrically equivalent objects can be represented by far fewer tetrahedral elements than in a voxelized representation. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, by about a factor of 4, was confirmed when the tetrahedral mesh was adopted in place of the traditional voxel geometry.

  15. Omental Lipid-Coated Mesh

    DTIC Science & Technology

    2011-06-16

    infection. If benefit is proven, this method will be a cost-effective way to prepare biologic and possibly synthetic meshes for use in hernia repair...omental coating is encouraging...Omentum, Mesh, Hernia...abdominal wall hernia repair. If cheap and effective promotion of neovascularization could be initiated, we might be able to improve upon current

  16. Physics-based deformable organisms for medical image analysis

    NASA Astrophysics Data System (ADS)

    Hamarneh, Ghassan; McIntosh, Chris

    2005-04-01

    Previously, "Deformable organisms" were introduced as a novel paradigm for medical image analysis that uses artificial life modelling concepts. Deformable organisms were designed to complement the classical bottom-up deformable models methodologies (geometrical and physical layers), with top-down intelligent deformation control mechanisms (behavioral and cognitive layers). However, a true physical layer was absent and in order to complete medical image segmentation tasks, deformable organisms relied on pure geometry-based shape deformations guided by sensory data, prior structural knowledge, and expert-generated schedules of behaviors. In this paper we introduce the use of physics-based shape deformations within the deformable organisms framework yielding additional robustness by allowing intuitive real-time user guidance and interaction when necessary. We present the results of applying our physics-based deformable organisms, with an underlying dynamic spring-mass mesh model, to segmenting and labelling the corpus callosum in 2D midsagittal magnetic resonance images.

  17. Multiscale mesh generation on the sphere

    NASA Astrophysics Data System (ADS)

    Lambrechts, Jonathan; Comblen, Richard; Legat, Vincent; Geuzaine, Christophe; Remacle, Jean-François

    2008-12-01

    A method for generating computational meshes for applications in ocean modeling is presented. The method uses a standard engineering approach for describing the geometry of the domain that requires meshing. The underlying sphere is parametrized using stereographic coordinates. Coastlines are then described with cubic splines drawn in the stereographic parametric space. The mesh generation algorithm builds the mesh in the parametric plane using available techniques. The method makes it possible to import coastlines from different data sets and, consequently, to build meshes of domains with highly variable length scales. The results include meshes together with numerical simulations of various kinds.
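    For reference, the stereographic parametrization of the sphere used as the flat meshing plane can be written as the following pair of maps. This is a minimal sketch assuming a unit sphere and projection from the north pole; the paper's convention and Earth radius may differ.

```python
import numpy as np


def to_stereographic(xyz):
    """Project points on the unit sphere (excluding the north pole) onto the plane z = 0."""
    x, y, z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    return np.stack([x / (1.0 - z), y / (1.0 - z)], axis=-1)


def from_stereographic(uv):
    """Inverse map: plane coordinates back to the unit sphere."""
    u, v = uv[..., 0], uv[..., 1]
    s = u**2 + v**2
    return np.stack([2.0 * u, 2.0 * v, s - 1.0], axis=-1) / (s + 1.0)[..., None]
```

    Meshing in the (u, v) plane and mapping the nodes back with the inverse projection preserves the shapes of small features (the projection is conformal), which is why coastline splines can be drawn directly in the parametric space.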

  18. Influence of cell shape on mechanical properties of Ti-6Al-4V meshes fabricated by electron beam melting method.

    PubMed

    Li, S J; Xu, Q S; Wang, Z; Hou, W T; Hao, Y L; Yang, R; Murr, L E

    2014-10-01

    Ti-6Al-4V reticulated meshes with different cell elements (cubic, G7 and rhombic dodecahedron) available in Materialise software were fabricated by additive manufacturing using the electron beam melting (EBM) method, and the effects of cell shape on the mechanical properties of these samples were studied. The results showed that these cellular structures, with porosities of 88-58%, had compressive strengths and elastic moduli in the ranges 10-300 MPa and 0.5-15 GPa, respectively. The compressive strength and deformation behavior of these meshes were determined by the coupling of buckling and bending deformation of the struts. Meshes dominated by buckling deformation showed relatively high collapse strength and were prone to exhibit brittle characteristics in their stress-strain curves. For meshes dominated by bending deformation, the elastic deformation corresponded well to the Gibson-Ashby model. By enhancing the effect of bending deformation, the stress-strain curve characteristics can change from brittle to ductile (a smooth plateau region). Therefore, Ti-6Al-4V cellular solids with high strength, low modulus and desirable deformation behavior can be fabricated through cell shape design using the EBM technique.
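    The Gibson-Ashby scaling laws mentioned above relate the relative density of a bending-dominated cellular solid to its modulus and plateau strength. The snippet below evaluates those textbook relations with assumed constants and nominal solid Ti-6Al-4V properties; it is illustrative only, not a reproduction of the paper's data.

```python
# Approximate solid Ti-6Al-4V properties (assumed, not from the paper).
E_SOLID = 110.0e3   # MPa
SIGMA_YS = 900.0    # MPa


def gibson_ashby(relative_density, c1=1.0, c2=0.3):
    """Relative modulus and plateau strength of a bending-dominated lattice.

    E/E_s          ~ C1 * (rho/rho_s)^2
    sigma/sigma_ys ~ C2 * (rho/rho_s)^1.5
    """
    e = c1 * relative_density**2 * E_SOLID
    s = c2 * relative_density**1.5 * SIGMA_YS
    return e, s


if __name__ == "__main__":
    for porosity in (0.88, 0.70, 0.58):
        E, sigma = gibson_ashby(1.0 - porosity)
        print(f"porosity {porosity:.2f}: E ~ {E / 1e3:.1f} GPa, plateau strength ~ {sigma:.0f} MPa")
```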

  19. Hybrid Mesh for Nasal Airflow Studies

    PubMed Central

    Zubair, Mohammed; Abdullah, Mohammed Zulkifly; Ahmad, Kamarul Arifin

    2013-01-01

    The accuracy of a numerical result is closely related to mesh density as well as its distribution; the mesh plays a very significant role in the outcome of a numerical simulation. Many nasal airflow studies have employed unstructured meshes, and more recently hybrid mesh schemes have been utilized in view of the complexity of the anatomical architecture. The objective of this study is to compare the results of a hybrid mesh with those of an unstructured mesh and to study its effect on the flow parameters inside the nasal cavity. A three-dimensional nasal cavity model is reconstructed based on computed tomographic images of a healthy Malaysian adult nose. The Navier-Stokes equations for steady airflow are solved numerically to examine inspiratory nasal flow. The pressure drop obtained using the unstructured computational grid is about 22.6 Pa for a flow rate of 20 L/min, whereas the hybrid mesh resulted in 17.8 Pa for the same flow rate. The maximum velocity obtained at the nasal valve using the unstructured grid is 4.18 m/s and that with the hybrid mesh is around 4.76 m/s. The hybrid mesh yielded a lower grid convergence index (GCI) than the unstructured mesh. Significant differences between the unstructured and hybrid meshes were found, highlighting the usefulness of hybrid meshes for nasal airflow studies. PMID:23983811
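    The grid convergence index used for the mesh comparison is commonly computed with Roache's formula; a minimal sketch follows, with a default safety factor and an assumed order of accuracy rather than values from the study.

```python
def grid_convergence_index(f_fine, f_coarse, r, p=2.0, fs=1.25):
    """GCI of the fine-grid solution (Roache's formula).

    f_fine, f_coarse : solution values (e.g. pressure drop) on fine/coarse grids
    r                : grid refinement ratio (> 1)
    p                : observed or assumed order of accuracy
    fs               : safety factor (1.25 for three-grid studies, 3.0 for two)
    """
    rel_err = abs((f_coarse - f_fine) / f_fine)
    return fs * rel_err / (r**p - 1.0)


if __name__ == "__main__":
    # Hypothetical pressure drops on two grids with refinement ratio 1.5.
    print(f"GCI = {grid_convergence_index(17.8, 18.9, r=1.5):.3%}")
```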

  20. Multigrid for refined triangle meshes

    SciTech Connect

    Shapira, Yair

    1997-02-01

    A two-level preconditioning method for the solution of (locally) refined finite element schemes using triangle meshes is introduced. In the isotropic SPD case, it is shown that the condition number of the preconditioned stiffness matrix is bounded uniformly for all sufficiently regular triangulations. This is also verified numerically for an isotropic diffusion problem with highly discontinuous coefficients.
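    The abstract does not specify how the preconditioner is constructed, so the sketch below only illustrates the general structure of two-level methods: an additive combination of a simple (Jacobi) smoother with a Galerkin coarse-grid correction, applied inside conjugate gradients. The prolongation operator and test problem are invented for illustration and are not the construction analyzed in the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla


def two_level_preconditioner(A, P):
    """LinearOperator applying M^{-1} = D^{-1} + P A_c^{-1} P^T.

    A : sparse SPD stiffness matrix on the fine mesh
    P : prolongation matrix mapping coarse dofs to fine dofs
    """
    d_inv = 1.0 / A.diagonal()            # Jacobi (diagonal) smoother
    A_c = (P.T @ A @ P).tocsc()           # Galerkin coarse-level operator
    coarse_solve = spla.factorized(A_c)   # direct solve on the small coarse level

    def apply(r):
        return d_inv * r + P @ coarse_solve(P.T @ r)

    return spla.LinearOperator(A.shape, matvec=apply)


if __name__ == "__main__":
    # Tiny 1D Laplacian example; coarse space = every other interior node.
    n = 15
    A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
    P = sp.lil_matrix((n, (n - 1) // 2))
    for j in range(P.shape[1]):
        i = 2 * j + 1
        P[i, j] = 1.0
        P[i - 1, j] = 0.5
        P[i + 1, j] = 0.5
    M = two_level_preconditioner(A, P.tocsr())
    x, info = spla.cg(A, np.ones(n), M=M)
    print("CG converged:", info == 0)
```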