Parallel Adaptive Mesh Refinement
Diachin, L; Hornung, R; Plassmann, P; Wissink, A
2005-03-04
As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of the fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35].
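The core idea in the abstract above, focusing resolution where the solution varies rapidly, can be sketched in a few lines. The following is an illustrative one-dimensional sketch (not code from any package surveyed here): cells whose undivided gradient exceeds a threshold are flagged and bisected, while smooth regions keep the coarse spacing.

```python
import numpy as np

def flag_cells(u, threshold):
    """Flag cells whose undivided gradient exceeds a threshold."""
    return np.abs(np.diff(u)) > threshold

def refine_1d(x, flags):
    """Insert a midpoint in every flagged cell, halving the local spacing."""
    new_x = []
    for i in range(len(x) - 1):
        new_x.append(x[i])
        if flags[i]:
            new_x.append(0.5 * (x[i] + x[i + 1]))
    new_x.append(x[-1])
    return np.array(new_x)

# A steep front near x = 0.5 on a coarse uniform grid.
x = np.linspace(0.0, 1.0, 11)
u = np.tanh(50.0 * (x - 0.5))
flags = flag_cells(u, threshold=0.5)
fine_x = refine_1d(x, flags)
# Only the two cells straddling the front are refined; the rest stay coarse.
```

In a real AMR cycle this flag-refine step runs repeatedly as the front moves, which is the dynamic behavior the abstract emphasizes.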
Issues in adaptive mesh refinement
Dai, William Wenlong
2009-01-01
In this paper, we present an approach to patch-based adaptive mesh refinement (AMR) for multi-physics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, and management of patches. Among the special features of this patch-based AMR are symmetry preserving, efficiency of refinement, special implementation of flux correction, and patch management in parallel computing environments. Here, higher efficiency of refinement means fewer unnecessarily refined cells for a given set of cells to be refined. To demonstrate the capability of the AMR framework, hydrodynamics simulations with many levels of refinement are shown in both two and three dimensions.
Adaptive Mesh Refinement in CTH
Crawford, David
1999-05-04
This paper reports progress on implementing a new capability of adaptive mesh refinement into the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable resolution non-adaptive calculations has been demonstrated for a number of problems.
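The 2:1 isotropic refinement mentioned above means each refined cell is replaced by two child cells per dimension. A minimal sketch of the two transfer operators this implies, simple injection for refinement and conservative averaging for unrefinement (CTH's actual operators may differ):

```python
import numpy as np

def refine_2to1(coarse):
    """Refine a 2-D block by a 2:1 ratio: copy each coarse value into its 2x2 children."""
    return np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)

def unrefine_2to1(fine):
    """Unrefine: conservatively average each 2x2 group of children back to one cell."""
    nx, ny = fine.shape
    return fine.reshape(nx // 2, 2, ny // 2, 2).mean(axis=(1, 3))

coarse = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
fine = refine_2to1(coarse)    # shape (4, 4): each value appears in a 2x2 patch
back = unrefine_2to1(fine)    # averaging the children recovers the coarse block
```

Because the averaging is the exact inverse of injection here, refine-then-unrefine is lossless; in a real solver the fine values evolve independently between the two steps.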
Parallel Adaptive Mesh Refinement Library
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin
2005-01-01
Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
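The block hierarchy described above can be pictured as a tree whose nodes are whole grid blocks, each carrying its own small logically Cartesian mesh. A hypothetical 2-D (quad-tree) sketch, with class names and block size chosen for illustration rather than taken from PARAMESH:

```python
import numpy as np

class Block:
    """A node of the quad-tree: a fixed-size logically Cartesian mesh patch."""
    def __init__(self, x0, y0, size, level, nx=8):
        self.x0, self.y0, self.size, self.level = x0, y0, size, level
        self.data = np.zeros((nx, nx))   # every block carries the same local mesh
        self.children = []

    def refine(self):
        """Split into 4 children, each covering one quadrant at half the spacing."""
        h = self.size / 2
        self.children = [Block(self.x0 + i * h, self.y0 + j * h, h, self.level + 1)
                         for j in (0, 1) for i in (0, 1)]

    def leaves(self):
        """Leaf blocks are the ones actually holding the solution."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

root = Block(0.0, 0.0, 1.0, level=0)
root.refine()                  # 4 blocks at level 1
root.children[0].refine()      # refine one quadrant again: 4 blocks at level 2
# 3 level-1 leaves + 4 level-2 leaves = 7 leaf blocks tile the domain
```

Because every block has the same local mesh shape, a serial single-block solver can be reused unchanged on each leaf, which is the "easy route" the abstract describes.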
Adaptive mesh refinement in titanium
Colella, Phillip; Wen, Tong
2005-01-21
In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study, where we implement a subset of Chombo's functionality in Titanium. Chombo is a software package applying the Adaptive Mesh Refinement methodology to numerical Partial Differential Equations at the production level. In Chombo, the library approach to parallel programming is used (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. Also provided are the counts of lines of code from both sides.
Adaptive Mesh Refinement for Microelectronic Device Design
NASA Technical Reports Server (NTRS)
Cwik, Tom; Lou, John; Norton, Charles
1999-01-01
Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchical elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smooth out and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations.
Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement
Koniges, A.; Eder, D.; Masters, N.; Fisher, A.; Anderson, R.; Gunney, B.; Wang, P.; Benson, D.; Dixit, P.
2009-09-29
This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).
Adaptive Hybrid Mesh Refinement for Multiphysics Applications
Khamayseh, Ahmed K; de Almeida, Valmor F
2007-01-01
The accuracy and convergence of computational solutions of mesh-based methods are strongly dependent on the quality of the mesh used. We have developed methods for optimizing meshes that are composed of elements of arbitrary polygonal and polyhedral type. We present in this research the development of r-h hybrid adaptive meshing technology tailored to application areas relevant to multi-physics modeling and simulation. Solution-based adaptation methods are used to reposition mesh nodes (r-adaptation) or to refine the mesh cells (h-adaptation) to minimize solution error. The numerical methods perform either the r-adaptive mesh optimization or the h-adaptive mesh refinement method on the initial isotropic or anisotropic meshes to maximize the equidistribution of a weighted geometric and/or solution function. We have successfully introduced r-h adaptivity to a least-squares method with spherical harmonics basis functions for the solution of the spherical shallow atmosphere model used in climate forecasting. In addition, application of this technology also covers a wide range of disciplines in computational sciences, most notably, time-dependent multi-physics, multi-scale modeling and simulation.
Adaptive mesh refinement for storm surge
NASA Astrophysics Data System (ADS)
Mandli, Kyle T.; Dawson, Clint N.
2014-03-01
An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Currently numerical models exist that can resolve the details of coastal regions but are often too costly to be run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GEOCLAW framework and compared to ADCIRC for Hurricane Ike along with observed tide gauge data and the computational cost of each model run.
Parallel object-oriented adaptive mesh refinement
Balsara, D.; Quinlan, D.J.
1997-04-01
In this paper we study adaptive mesh refinement (AMR) for elliptic and hyperbolic systems. We use the Asynchronous Fast Adaptive Composite Grid Method (AFACX), a parallel algorithm based upon the Fast Adaptive Composite Grid Method (FAC), as a test case of an adaptive elliptic solver. For our hyperbolic system example we use TVD and ENO schemes for solving the Euler and MHD equations. We use the structured grid load balancer MLB as a tool for obtaining a load balanced distribution in a parallel environment. Parallel adaptive mesh refinement poses difficulties in expressing the basic single grid solver, whether elliptic or hyperbolic, in a fashion that parallelizes seamlessly. It also requires that these basic solvers work together within the adaptive mesh refinement algorithm which uses the single grid solvers as one part of its adaptive solution process. We show that use of AMR++, an object-oriented library within the OVERTURE Framework, simplifies the development of AMR applications. Parallel support is provided and abstracted through the use of the P++ parallel array class.
GRChombo: Numerical relativity with adaptive mesh refinement
NASA Astrophysics Data System (ADS)
Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran
2015-12-01
In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block structured Berger-Rigoutsos grid generation. The code supports non-trivial ‘many-boxes-in-many-boxes’ mesh hierarchies and massive parallelism through the Message Passing Interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.
Fully implicit adaptive mesh refinement MHD algorithm
NASA Astrophysics Data System (ADS)
Philip, Bobby
2005-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations will be presented on a variety of problems.
Efficiency considerations in triangular adaptive mesh refinement.
Behrens, Jörn; Bader, Michael
2009-11-28
Locally or adaptively refined meshes have been successfully applied to simulation applications involving multi-scale phenomena in the geosciences. In particular, for situations with complex geometries or domain boundaries, meshes with triangular or tetrahedral cells demonstrate their superior ability to accurately represent relevant realistic features. On the other hand, these methods require more complex data structures and are therefore less easily implemented, maintained and optimized. Acceptance in the Earth-system modelling community is still low. One of the major drawbacks is posed by indirect addressing due to unstructured or dynamically changing data structures and correspondingly lower efficiency of the related computations. In this paper, we will derive several strategies to circumvent the mentioned efficiency constraint. In particular, we will apply recent computational sciences methods in combination with results of classical mathematics (space-filling curves) in order to linearize the complex data and access structure.
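The space-filling-curve idea is to linearize the hierarchy so that cells nearby in space land nearby in memory, replacing indirect addressing with mostly sequential access. A minimal illustration using the Morton (Z-order) curve on a quadrilateral grid; the paper's own construction uses Sierpinski-type curves adapted to triangular meshes, so this is an analogy, not their algorithm:

```python
def morton_index(i, j, bits=16):
    """Interleave the bits of (i, j) to get the cell's position on the Z-order curve."""
    code = 0
    for b in range(bits):
        code |= ((i >> b) & 1) << (2 * b)
        code |= ((j >> b) & 1) << (2 * b + 1)
    return code

# Sort the cells of a 4x4 grid along the curve.
cells = [(i, j) for j in range(4) for i in range(4)]
order = sorted(cells, key=lambda c: morton_index(*c))
# The curve visits each 2x2 quadrant completely before moving on, so the four
# cells of any quadrant stay contiguous in the linearized storage.
```

The same locality property holds recursively at every level of a quadtree, which is why cells that share data in a multi-scale computation tend to be stored close together.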
Visualization of adaptive mesh refinement data
NASA Astrophysics Data System (ADS)
Weber, Gunther H.; Hagen, Hans; Hamann, Bernd; Joy, Kenneth I.; Ligocki, Terry J.; Ma, Kwan-Liu; Shalf, John M.
2001-05-01
The complexity of physical phenomena often varies substantially over space and time. There can be regions where a physical phenomenon/quantity varies very little over a large extent. At the same time, there can be small regions where the same quantity exhibits highly complex variations. Adaptive mesh refinement (AMR) is a technique used in computational fluid dynamics to simulate phenomena with drastically varying scales concerning the complexity of the simulated variables. Using multiple nested grids of different resolutions, AMR combines the topological simplicity of structured-rectilinear grids, permitting efficient computation and storage, with the possibility to adapt grid resolutions in regions of complex behavior. We present methods for direct volume rendering of AMR data. Our methods utilize AMR grids directly for efficiency of the visualization process. We apply a hardware-accelerated rendering method to AMR data supporting interactive manipulation of color-transfer functions and viewing parameters. We also present a cell-projection-based rendering technique for AMR data.
Adaptive mesh refinement techniques for electrical impedance tomography.
Molinari, M; Cox, S J; Blott, B H; Daniell, G J
2001-02-01
Adaptive mesh refinement techniques can be applied to increase the efficiency of electrical impedance tomography reconstruction algorithms by reducing computational and storage cost as well as providing problem-dependent solution structures. A self-adaptive refinement algorithm based on an a posteriori error estimate has been developed and its results are shown in comparison with uniform mesh refinement for a simple head model.
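A self-adaptive loop of the kind described, estimate local error a posteriori, refine the offending cells, repeat, can be sketched in one dimension with a crude midpoint-deviation estimator (illustrative only; the estimator used for the EIT forward problem in the paper is different):

```python
import numpy as np

def local_error_estimate(f, x):
    """Crude a posteriori estimate per cell: deviation of f at the cell midpoint
    from linear interpolation between the cell's endpoints."""
    mid = 0.5 * (x[:-1] + x[1:])
    return np.abs(f(mid) - 0.5 * (f(x[:-1]) + f(x[1:])))

def adapt(f, x, tol, max_iter=20):
    """Bisect cells whose estimate exceeds tol, until every cell passes."""
    for _ in range(max_iter):
        err = local_error_estimate(f, x)
        bad = err > tol
        if not bad.any():
            break
        mids = 0.5 * (x[:-1] + x[1:])
        x = np.sort(np.concatenate([x, mids[bad]]))
    return x

f = lambda x: np.exp(-200.0 * (x - 0.3) ** 2)   # sharp feature at x = 0.3
x = adapt(f, np.linspace(0.0, 1.0, 5), tol=1e-2)
# Nodes cluster around the feature; the flat regions keep the original spacing.
```

This is the cost saving the abstract refers to: the final node count is far below what a uniform grid meeting the same tolerance would need.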
COSMOLOGICAL ADAPTIVE MESH REFINEMENT MAGNETOHYDRODYNAMICS WITH ENZO
Collins, David C.; Xu Hao; Norman, Michael L.; Li Hui; Li Shengtai
2010-02-01
In this work, we present EnzoMHD, the extension of the cosmological code Enzo to include the effects of magnetic fields through the ideal magnetohydrodynamics approximation. We use a higher order Godunov method for the computation of interface fluxes. We use two constrained transport methods to compute the electric field from those interface fluxes, which simultaneously advances the induction equation and maintains a divergence-free magnetic field. A second-order divergence-free reconstruction technique is used to interpolate the magnetic fields in the block-structured adaptive mesh refinement framework already extant in Enzo. This reconstruction also preserves the divergence of the magnetic field to machine precision. We use operator splitting to include gravity and cosmological expansion. We then present a series of cosmological and non-cosmological test problems to demonstrate the quality of solution resulting from this combination of solvers.
Visualization of Scalar Adaptive Mesh Refinement Data
VACET; Weber, Gunther; Weber, Gunther H.; Beckner, Vince E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes
2007-12-06
Adaptive Mesh Refinement (AMR) is a highly effective computation method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations, which must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR grids as a first class data type and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, which is an open source visualization tool that accommodates AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR scalar data visualization research.
Visualization Tools for Adaptive Mesh Refinement Data
Weber, Gunther H.; Beckner, Vincent E.; Childs, Hank; Ligocki,Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes
2007-05-09
Adaptive Mesh Refinement (AMR) is a highly effective method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first class data type and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, which is an open source visualization tool that accommodates AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR visualization research and tools and describe how VisIt currently handles AMR data.
Adaptive Mesh Refinement Simulations of Relativistic Binaries
NASA Astrophysics Data System (ADS)
Motl, Patrick M.; Anderson, M.; Lehner, L.; Olabarrieta, I.; Tohline, J. E.; Liebling, S. L.; Rahman, T.; Hirschman, E.; Neilsen, D.
2006-09-01
We present recent results from our efforts to evolve relativistic binaries composed of compact objects. We simultaneously solve the general relativistic hydrodynamics equations to evolve the material components of the binary and Einstein's equations to evolve the space-time. These two codes are coupled through an adaptive mesh refinement driver (had). One of the ultimate goals of this project is to address the merger of a neutron star and black hole and assess the possible observational signature of such systems as gamma ray bursts. This work has been supported in part by NSF grants AST 04-07070 and PHY 03-26311 and in part through NASA's ATP program grant NAG5-13430. The computations were performed primarily at NCSA through grant MCA98N043 and at LSU's Center for Computation & Technology.
Elliptic Solvers for Adaptive Mesh Refinement Grids
Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.
1999-06-03
We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and other, more AMR-specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.
A parallel adaptive mesh refinement algorithm
NASA Technical Reports Server (NTRS)
Quirk, James J.; Hanebutte, Ulf R.
1993-01-01
Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.
Parallel adaptive mesh refinement for electronic structure calculations
Kohn, S.; Weare, J.; Ong, E.; Baden, S.
1996-12-01
We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradients with multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.
Structured Adaptive Mesh Refinement Application Infrastructure
2010-07-15
SAMRAI is an object-oriented support library for structured adaptive mesh refinement (SAMR) simulation of computational science problems, modeled by systems of partial differential equations (PDEs). SAMRAI is developed and maintained in the Center for Applied Scientific Computing (CASC) under ASCI ITS and PSE support. SAMRAI is used in a variety of application research efforts at LLNL and in academia. These applications are developed in collaboration with SAMRAI development team members.
Adaptive mesh refinement for stochastic reaction-diffusion processes
Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros
2011-01-01
We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.
PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles
1999-01-01
In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically Cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically Cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later, if they so desire, adding adaptivity.
Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes
Parsons, I D; Solberg, J M
2006-02-03
This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.
Adaptive mesh refinement techniques for 3-D skin electrode modeling.
Sawicki, Bartosz; Okoniewski, Michal
2010-03-01
In this paper, we develop a 3-D adaptive mesh refinement technique. The algorithm is constructed with an electric impedance tomography forward problem and the finite-element method in mind, but is applicable to a much wider class of problems. We use the method to evaluate the distribution of currents injected into a model of a human body through skin contact electrodes. We demonstrate that the technique leads to a significantly improved solution, particularly near the electrodes. We discuss error estimation, efficiency, and quality of the refinement algorithm and methods that allow for preserving mesh attributes in the refinement process.
Adaptive mesh refinement for shocks and material interfaces
Dai, William Wenlong
2010-01-01
There are three kinds of adaptive mesh refinement (AMR) in structured meshes. Block-based AMR sometimes over-refines meshes. Cell-based AMR treats cells individually and thus loses the advantage of the nature of structured meshes. Patch-based AMR is intended to combine advantages of block- and cell-based AMR, i.e., the nature of structured meshes and sharp regions of refinement. But patch-based AMR has its own difficulties. For example, patch-based AMR typically cannot preserve symmetries of physics problems. In this paper, we will present an approach to patch-based AMR for hydrodynamics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, management of patches, and load balance. The special features of this patch-based AMR include symmetry preserving, efficiency of refinement across shock fronts and material interfaces, special implementation of flux correction, and patch management in parallel computing environments. To demonstrate the capability of the AMR framework, we will show both two- and three-dimensional hydrodynamics simulations with many levels of refinement.
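Flux correction (refluxing) is what keeps a conserved quantity conserved across a coarse-fine boundary: the coarse cell adjacent to the interface must be updated with the time-averaged fine-level fluxes rather than the flux its own solver computed. A one-dimensional sketch under simple assumptions (refinement ratio 2 in space and time, fluxes given as plain numbers, and a sign convention for a coarse cell sitting to the left of the interface; the paper's actual implementation is not shown):

```python
def reflux_correction(coarse_flux, fine_fluxes, dt_coarse, dx_coarse, r=2):
    """Return the correction added to the coarse cell touching the interface.

    The coarse cell was advanced with coarse_flux over dt_coarse; conservation
    requires it to have seen the time average of the r fine-level fluxes
    (one per fine substep of length dt_coarse / r) instead.
    """
    fine_time_avg = sum(fine_fluxes) / r
    return dt_coarse / dx_coarse * (coarse_flux - fine_time_avg)

# The coarse solver used flux 1.0 at the interface, but the two fine substeps
# computed fluxes 1.2 and 1.4 there; the coarse cell must be corrected.
delta = reflux_correction(coarse_flux=1.0, fine_fluxes=[1.2, 1.4],
                          dt_coarse=0.1, dx_coarse=0.5)
# delta is negative here: the fine levels carried more flux out than assumed.
```

Without this step, mass or energy is silently created or destroyed at every coarse-fine interface, which is why flux correction appears as a core component in the abstract's list.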
Projection of Discontinuous Galerkin Variable Distributions During Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Ballesteros, Carlos; Herrmann, Marcus
2012-11-01
Adaptive mesh refinement (AMR) methods decrease the computational expense of CFD simulations by increasing the density of solution cells only in areas of the computational domain that are of interest in that particular simulation. In particular, unstructured Cartesian AMR has several advantages over other AMR approaches, as it does not require the creation of numerous guard-cell blocks, neighboring cell lookups become straightforward, and the hexahedral nature of the mesh cells greatly simplifies the refinement and coarsening operations. The h-refinement from this AMR approach can be leveraged by making use of highly-accurate, but computationally costly methods, such as the Discontinuous Galerkin (DG) numerical method. DG methods are capable of high orders of accuracy while retaining stencil locality--a property critical to AMR using unstructured meshes. However, the use of DG methods with AMR requires the use of special flux and projection operators during refinement and coarsening operations in order to retain the high order of accuracy. The flux and projection operators needed for refinement and coarsening of unstructured Cartesian adaptive meshes using Legendre polynomial test functions will be discussed, and their performance will be shown using standard test cases.
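The projection operator for h-refinement can be written as an L2 projection of the parent element's modal expansion onto each child element, which is exact for polynomial data. A sketch for a one-dimensional Legendre basis (the operators discussed above are for hexahedral cells in 3-D; this reduces the idea to 1-D, and the function names are illustrative):

```python
import numpy as np
from numpy.polynomial import legendre

def project_to_child(coeffs, child):
    """L2-project a parent Legendre expansion onto a half-size child element.

    child = 0 is the left half of the parent, child = 1 the right half.
    Gauss-Legendre quadrature makes the projection exact for polynomial data.
    """
    p = len(coeffs)
    xi, w = legendre.leggauss(p + 1)                 # quadrature on the child
    # Location of the child's quadrature points in the parent's coordinate.
    x_parent = 0.5 * xi + (-0.5 if child == 0 else 0.5)
    u = legendre.legval(x_parent, coeffs)            # parent solution there
    child_coeffs = np.empty(p)
    for k in range(p):
        pk = legendre.legval(xi, [0] * k + [1])      # P_k on the child
        # Legendre normalization: ||P_k||^2 = 2 / (2k + 1) on [-1, 1].
        child_coeffs[k] = (2 * k + 1) / 2 * np.sum(w * u * pk)
    return child_coeffs

# Parent carries u(x) = 1 + 2 P_1(x) = 1 + 2x.
left = project_to_child(np.array([1.0, 2.0]), child=0)
right = project_to_child(np.array([1.0, 2.0]), child=1)
# Left child: u((xi - 1) / 2) = xi, i.e. modal coefficients [0, 1].
```

Coarsening uses the transpose operation, projecting the two children's expansions back onto the parent basis; both directions preserve the order of accuracy, which is the point the abstract makes about retaining high order under AMR.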
Elliptic Solvers with Adaptive Mesh Refinement on Complex Geometries
Phillip, B.
2000-07-24
Adaptive Mesh Refinement (AMR) is a numerical technique for locally tailoring the resolution of computational grids. Multilevel algorithms for solving elliptic problems on adaptive grids include the Fast Adaptive Composite grid method (FAC) and its parallel variants (AFAC and AFACx). Theory confirming that the convergence rates of FAC and AFAC are independent of the number of refinement levels exists under certain ellipticity and approximation-property conditions; similar theory still needs to be developed for AFACx. The effectiveness of multigrid-based elliptic solvers such as FAC, AFAC, and AFACx on adaptively refined overlapping grids is not clearly understood. Finally, a non-trivial eye model problem will be solved by combining the power of overlapping grids for complex moving geometries, AMR, and multilevel elliptic solvers.
AMR++: Object-Oriented Parallel Adaptive Mesh Refinement
Quinlan, D.; Philip, B.
2000-02-02
Adaptive mesh refinement (AMR) computations are complicated by their dynamic nature. The development of solvers for realistic applications is complicated both by the complexity of the AMR and by the geometry of realistic problem domains. The additional complexity of distributed memory parallelism within such AMR applications most commonly exceeds the level of complexity that can be reasonably maintained with traditional approaches to software development. This paper presents the details of our object-oriented work on simplifying the use of adaptive mesh refinement in applications with complex geometries, for both serial and distributed memory parallel computation. We present an independent set of object-oriented abstractions (C++ libraries) well suited to the development of such seemingly intractable scientific computations. As an example of this object-oriented approach, we present recent results of an application modeling fluid flow in the eye. Within this example, the geometry is too complicated for a single curvilinear coordinate grid, so a set of overlapping curvilinear coordinate grids is used. Adaptive mesh refinement and the grid generation work required to support the refinement process are coupled together in the solution of essentially elliptic equations within this domain. The paper focuses on the management of complexity in the development of the AMR++ library, which forms part of the Overture object-oriented framework for the solution of partial differential equations in scientific computing.
Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units
Beckingsale, D. A.; Gaudin, W. P.; Hornung, R. D.; Gunney, B. T.; Gamblin, T.; Herdman, J. A.; Jarvis, S. A.
2014-11-17
Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.
Adaptive mesh refinement for 1-dimensional gas dynamics
Hedstrom, G.; Rodrigue, G.; Berger, M.; Oliger, J.
1982-01-01
We consider the solution of the one-dimensional equations of gas dynamics. Accurate numerical solutions are difficult to obtain on a fixed spatial mesh because of the existence of physical regions where components of the exact solution are either discontinuous or have large gradients. Numerical methods treat these phenomena in a variety of ways. In this paper, the method of adaptive mesh refinement is used. A thorough description of this method for general hyperbolic systems is given elsewhere; here only the properties of the method pertinent to this system are elaborated.
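The first step of any such refinement method is deciding which cells need a finer mesh. A minimal 1D flagging criterion (a common gradient-style heuristic, not the paper's specific estimator; the tolerance and buffer width are illustrative) looks like this:

```python
import numpy as np

def flag_cells(u, tol=0.1, buffer=1):
    """Flag 1D cells for refinement where the solution jump across a
    cell face is large relative to the solution magnitude."""
    jump = np.abs(np.diff(u))                  # |u[i+1] - u[i]| per face
    big = jump > tol * np.max(np.abs(u))
    flags = np.zeros(u.size, dtype=bool)
    flags[:-1] |= big                          # cell left of a steep face
    flags[1:] |= big                           # cell right of a steep face
    for _ in range(buffer):                    # pad flagged regions so moving
        grown = flags.copy()                   # features stay on the fine mesh
        grown[:-1] |= flags[1:]
        grown[1:] |= flags[:-1]
        flags = grown
    return flags

# A step profile: only cells near the discontinuity get flagged
flags = flag_cells(np.where(np.arange(100) < 50, 0.0, 1.0))
```

Flagged cells would then be clustered into refined grids; smooth regions keep the coarse spacing, which is the source of AMR's savings.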
A fourth order accurate adaptive mesh refinement method for Poisson's equation
Barad, Michael; Colella, Phillip
2004-08-20
We present a block-structured adaptive mesh refinement (AMR) method for computing solutions to Poisson's equation in two and three dimensions. It is based on a conservative, finite-volume formulation of the classical Mehrstellen methods. This is combined with finite volume AMR discretizations to obtain a method that is fourth-order accurate in solution error, and with easily verifiable solvability conditions for Neumann and periodic boundary conditions.
Optimal imaging with adaptive mesh refinement in electrical impedance tomography.
Molinari, Marc; Blott, Barry H; Cox, Simon J; Daniell, Geoffrey J
2002-02-01
In non-linear electrical impedance tomography the goodness of fit of the trial images is assessed by the well-established statistical χ² criterion applied to the measured and predicted datasets. Further selection from the range of images that fit the data is effected by imposing an explicit constraint on the form of the image, such as the minimization of the image gradients. In particular, the logarithm of the image gradients is chosen so that conductive and resistive deviations are treated in the same way. In this paper we introduce the idea of adaptive mesh refinement to the 2D problem so that the local scale of the mesh is always matched to the scale of the image structures. This improves the reconstruction resolution so that the image constraint adopted dominates and is not perturbed by the mesh discretization. The avoidance of unnecessary mesh elements optimizes the speed of reconstruction without degrading the resulting images. Starting with a mesh scale length of the order of the electrode separation, it is shown that, for data obtained at presently achievable signal-to-noise ratios of 60 to 80 dB, one or two refinement stages are sufficient to generate high quality images.
A Diffusion Synthetic Acceleration Method for Block Adaptive Mesh Refinement.
Ward, R. C.; Baker, R. S.; Morel, J. E.
2005-01-01
A prototype two-dimensional Diffusion Synthetic Acceleration (DSA) method on a Block-based Adaptive Mesh Refinement (BAMR) transport mesh has been developed. The Block-Adaptive Mesh Refinement Diffusion Synthetic Acceleration (BAMR-DSA) method was tested in the PARallel TIme-Dependent SN (PARTISN) deterministic transport code. The BAMR-DSA equations are derived by differencing the DSA equation using a vertex-centered diffusion discretization that is diamond-like and may be characterized as 'partially' consistent. The derivation of a diffusion discretization that is fully consistent with diamond transport differencing on a BAMR mesh does not appear to be possible. However, despite being only partially consistent, the BAMR-DSA method is effective for many applications. The BAMR-DSA solver was implemented and tested in two dimensions for rectangular (XY) and cylindrical (RZ) geometries. Testing confirms that a partially consistent BAMR-DSA method will introduce instabilities for extreme cases, e.g., scattering ratios approaching 1.0 with optically thick cells, but for most realistic problems it provides effective acceleration. The initial implementation, which stored the BAMR-DSA equations in a full matrix and solved them by LU decomposition, has been extended to include Compressed Sparse Row (CSR) storage and a Conjugate Gradient (CG) solver, which provide significantly more efficient storage and faster solution.
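The CSR + CG combination mentioned above is standard sparse-solver machinery; the following self-contained sketch (with a 1D Laplacian standing in for the BAMR-DSA diffusion matrix, purely for illustration) shows both pieces:

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x with A stored in Compressed Sparse Row form."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        lo, hi = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y

def cg(matvec, b, tol=1e-10, maxiter=200):
    """Unpreconditioned conjugate gradients for SPD systems."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = np.dot(r, r)
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / np.dot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.dot(r, r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Build the 1D Laplacian tridiag(-1, 2, -1) in CSR form as a stand-in matrix
n = 5
data, indices, indptr = [], [], [0]
for i in range(n):
    for j, v in ((i - 1, -1.0), (i, 2.0), (i + 1, -1.0)):
        if 0 <= j < n:
            data.append(v); indices.append(j)
    indptr.append(len(data))
data, indices = np.array(data), np.array(indices)
x = cg(lambda v: csr_matvec(data, indices, indptr, v), np.ones(n))
```

CSR stores only the nonzeros plus row pointers, so memory scales with the stencil size rather than n², which is the efficiency gain the abstract refers to.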
Block-structured adaptive mesh refinement - theory, implementation and application
Deiterding, Ralf
2011-01-01
Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.
N-Body Code with Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Yahagi, Hideki; Yoshii, Yuzuru
2001-09-01
We have developed a simulation code with techniques that enhance both the spatial and time resolution of the particle-mesh (PM) method, whose spatial resolution is otherwise restricted by the spacing of the structured mesh. The adaptive mesh refinement (AMR) technique recursively subdivides the cells that satisfy the refinement criterion. The hierarchical meshes are maintained by a special data structure and are modified in accordance with changes in the particle distribution. In general, as the resolution of a simulation increases, its time step must be shortened, and more computational time is required to complete the simulation. Since AMR enhances the spatial resolution locally, we also reduce the time step locally, instead of shortening it globally. For this purpose, we use the technique of hierarchical time steps (HTS), which varies the time step from particle to particle depending on the size of the cell in which each particle resides. Test calculations show that our implementation of AMR and HTS is successful. We have performed cosmological simulation runs based on our code and found that many halo objects have density profiles that are well fitted, over their entire radial range, by the universal profile proposed in 1996 by Navarro, Frenk, & White.
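The hierarchical time-step idea can be sketched as recursive subcycling: with a refinement ratio of 2, each finer level takes two steps of half the size per coarse step (a generic illustration; the bookkeeping names are hypothetical, not this code's API):

```python
def advance(level, finest, dt, log):
    """Subcycle an AMR hierarchy with refinement ratio 2: each finer
    level takes two steps of dt/2 per step of its parent level."""
    log.append((level, dt))                 # advance this level by dt
    if level < finest:
        advance(level + 1, finest, dt / 2, log)
        advance(level + 1, finest, dt / 2, log)
        # after the fine substeps, one would synchronize coarse and
        # fine data here (e.g. averaging down, flux refluxing)

log = []
advance(0, 2, 1.0, log)
# level 0 takes 1 step of 1.0, level 1 takes 2 of 0.5, level 2 takes 4 of 0.25
```

The payoff is that the small time step forced by the finest cells is paid only on the small refined region, not over the whole domain.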
Divergence-Free Adaptive Mesh Refinement for Magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Balsara, Dinshaw S.
2001-12-01
Several physical systems, such as nonrelativistic and relativistic magnetohydrodynamics (MHD), radiation MHD, electromagnetics, and incompressible hydrodynamics, satisfy Stokes law-type equations for the divergence-free evolution of vector fields. In this paper we present a full-fledged scheme for the second-order accurate, divergence-free evolution of vector fields on an adaptive mesh refinement (AMR) hierarchy. We focus here on adaptive mesh MHD; however, the scheme is applicable to the other systems of equations mentioned above. The scheme is based on a significant advance in the divergence-free reconstruction of vector fields. In that sense, it complements the earlier work of D. S. Balsara and D. S. Spicer (1999, J. Comput. Phys. 7, 270), where we discussed the divergence-free time-update of vector fields which satisfy Stokes law-type evolution equations. Our advance in divergence-free reconstruction of vector fields reduces to the total variation diminishing (TVD) property for one-dimensional evolution yet goes beyond it in multiple dimensions. For that reason, it is extremely suitable for the construction of higher order Godunov schemes for MHD. Both two-dimensional and three-dimensional reconstruction strategies are developed. A slight extension of the divergence-free reconstruction procedure yields a divergence-free prolongation strategy for prolonging magnetic fields on AMR hierarchies. Divergence-free restriction is also discussed. Because our work is based on an integral formulation, divergence-free restriction and prolongation can be carried out on AMR meshes with any integral refinement ratio, though we specialize the expressions for the most popular situation, where the refinement ratio is two. Furthermore, we pay attention to the fact that in order to evolve the MHD equations efficiently on AMR hierarchies, the refined meshes must evolve in time with time steps that are a fraction of their parent mesh's time step.
Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Leng, W.; Zhong, S.
2008-12-01
In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. A newer technique, adaptive mesh refinement (AMR), allows local mesh refinement wherever high resolution is needed, while leaving other regions at relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (e.g., 7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (e.g., 14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results of van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (e.g., 7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to trace their evolution precisely. Our AMR code is thus well suited to thermal-chemical convection problems that require high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid --FAC-- algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)]
AMRA: An Adaptive Mesh Refinement hydrodynamic code for astrophysics
NASA Astrophysics Data System (ADS)
Plewa, T.; Müller, E.
2001-08-01
Implementation details and test cases of a newly developed hydrodynamic code, amra, are presented. The numerical scheme exploits the adaptive mesh refinement technique coupled to modern high-resolution schemes which are suitable for relativistic and non-relativistic flows. Various physical processes are incorporated using the operator splitting approach, and include self-gravity, nuclear burning, physical viscosity, implicit and explicit schemes for conductive transport, simplified photoionization, and radiative losses from an optically thin plasma. Several aspects related to the accuracy and stability of the scheme are discussed in the context of hydrodynamic and astrophysical flows.
Computational relativistic astrophysics with adaptive mesh refinement: Testbeds
Evans, Edwin; Iyer, Sai; Tao Jian; Wolfmeyer, Randy; Zhang Huimin; Schnetter, Erik; Suen, Wai-Mo
2005-04-15
We have carried out numerical simulations of strongly gravitating systems based on the Einstein equations coupled to the relativistic hydrodynamic equations using adaptive mesh refinement (AMR) techniques. We show AMR simulations of NS binary inspiral and coalescence carried out on a workstation having an accuracy equivalent to that of a 1025³ regular unigrid simulation, which is, to the best of our knowledge, larger than all previous simulations of similar NS systems on supercomputers. We believe the capability opens new possibilities in general relativistic simulations.
Advances in Patch-Based Adaptive Mesh Refinement Scalability
Gunney, Brian T.N.; Anderson, Robert W.
2015-12-18
Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress toward SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.
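The core idea of tile clustering can be sketched in one dimension: cover the flagged cells with fixed-size tiles, and promote every tile that contains a flag to a patch. This is a toy illustration of the concept, not SAMRAI's actual algorithm (which also merges tiles and balances load across tasks):

```python
import numpy as np

def tile_cluster(flags, tile=4):
    """Cover flagged cells with fixed-size tiles; any tile containing
    at least one flag becomes a refinement patch (1D sketch)."""
    patches = []
    for start in range(0, flags.size, tile):
        if flags[start:start + tile].any():
            patches.append((start, min(start + tile, flags.size)))
    return patches

flags = np.zeros(16, dtype=bool)
flags[5] = flags[6] = True
patches = tile_cluster(flags)   # one tile-aligned patch covering cells 4..7
```

Because tile boundaries are fixed and computable locally, each MPI task can cluster its own cells without the global communication that older clustering algorithms required.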
Structured adaptive mesh refinement on the connection machine
Berger, M.J. (Courant Inst. of Mathematical Sciences); Saltzman, J.S.
1993-01-01
Adaptive mesh refinement has proven itself to be a useful tool in a large collection of applications. By refining only a small portion of the computational domain, computational savings of up to a factor of 80 have been obtained in three-dimensional calculations on serial machines. A natural question is: can this algorithm be used on massively parallel machines and still achieve the same efficiencies? We have designed a data layout scheme for mapping grid points to processors that preserves locality and minimizes global communication for the CM-200. The effect of the data layout scheme is that, at the finest level, nearby grid points from adjacent grids in physical space are in adjacent memory locations. Furthermore, coarse grid points are arranged in memory to be near their associated fine grid points. We show applications of the algorithm to inviscid compressible fluid flow in two space dimensions.
An adaptive mesh refinement algorithm for the discrete ordinates method
Jessee, J.P.; Fiveland, W.A.; Howell, L.H.; Colella, P.; Pember, R.B.
1996-03-01
The discrete ordinates form of the radiative transport equation (RTE) is spatially discretized and solved using an adaptive mesh refinement (AMR) algorithm. This technique permits local grid refinement to minimize the spatial discretization error of the RTE. An error estimator is applied to define regions for local grid refinement; overlapping refined grids are recursively placed in these regions; and the RTE is then solved over the entire domain. The procedure continues until the spatial discretization error has been reduced to a sufficient level. The following aspects of the algorithm are discussed: error estimation, grid generation, communication between refined levels, and solution sequencing. This initial formulation employs the step scheme and is valid for absorbing and isotropically scattering media in two-dimensional enclosures. The utility of the algorithm is tested by comparing its convergence characteristics and accuracy to those of the standard single-grid algorithm for several benchmark cases. The AMR algorithm provides a reduction in memory requirements and maintains the convergence characteristics of the standard single-grid algorithm; however, the cases illustrate that the efficiency gains of the AMR algorithm will not be fully realized until three-dimensional geometries are considered.
Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Philip, Bobby; Chacón, Luis; Pernice, Michael
2008-10-01
An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.
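For reference, the robustly damping second-order BDF2 scheme mentioned above advances a system $u_t = f(u)$ with constant time step $\Delta t$ via the standard formula

```latex
\frac{3\,u^{n+1} - 4\,u^{n} + u^{n-1}}{2\,\Delta t} = f\!\left(u^{n+1}\right)
```

It is second-order accurate and A-stable, and it strongly damps poorly resolved high-frequency modes, which is why it suppresses the grid errors generated at coarse-fine interfaces better than a non-damping second-order scheme would.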
An adaptive grid-based all hexahedral meshing algorithm based on 2-refinement.
Edgel, Jared; Benzley, Steven E.; Owen, Steven James
2010-08-01
Most adaptive mesh generation algorithms employ a 3-refinement method. This method, although easy to employ, produces a mesh that is often too coarse in some areas and over-refined in others: because it generates 27 new hexes in place of a single hex, it offers little control over mesh density. This paper presents an adaptive all-hexahedral grid-based meshing algorithm that employs a 2-refinement method, which divides each hex to be refined into eight new hexes and thus allows much finer control of mesh density. The resulting meshes are efficient for analysis, providing high element density in specific locations and reduced density elsewhere. In addition, the tool can be used effectively for inside-out hexahedral grid-based schemes that use Cartesian structured grids for the base mesh, which have shown great promise in accommodating automatic all-hexahedral algorithms. A two-layer transition zone increases element quality and keeps transitions from lower to higher mesh densities smooth, and templates allow both convex and concave refinement.
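The geometric heart of the 2-refinement step, splitting one hex into its eight octants, is simple to sketch (an illustration of the 1-to-8 subdivision only; the paper's templates and transition layers are not shown):

```python
def refine_hex(origin, size):
    """Split an axis-aligned hex cell, given by its minimum corner and
    edge length, into its eight half-size octants (2-refinement:
    1 hex -> 8 hexes, versus 1 -> 27 for 3-refinement)."""
    half = size / 2.0
    return [(origin[0] + i * half, origin[1] + j * half, origin[2] + k * half, half)
            for i in (0, 1) for j in (0, 1) for k in (0, 1)]

# Refine the unit cube: eight children of edge length 0.5
children = refine_hex((0.0, 0.0, 0.0), 1.0)
```

Each refinement pass multiplies element count by 8 rather than 27, which is the density-control advantage the abstract describes.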
CONSTRAINED-TRANSPORT MAGNETOHYDRODYNAMICS WITH ADAPTIVE MESH REFINEMENT IN CHARM
Miniati, Francesco; Martin, Daniel F. E-mail: DFMartin@lbl.gov
2011-07-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
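The constrained-transport property, that a face-centered field built as a discrete curl has exactly zero discrete divergence, can be verified in a minimal 2D sketch (the grid and names are illustrative, unrelated to CHARM's data structures):

```python
import numpy as np

def div_b(bx, by, dx, dy):
    """Discrete divergence of a face-centered field, per cell:
    (bx[i+1,j]-bx[i,j])/dx + (by[i,j+1]-by[i,j])/dy."""
    return (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy

# Build B as the discrete curl of a node-centered vector potential A z-hat:
# bx = +dA/dy on x-faces, by = -dA/dx on y-faces. The finite differences
# telescope, so div B cancels to machine precision in every cell.
nx, ny, dx, dy = 8, 8, 0.1, 0.1
xs, ys = np.meshgrid(np.arange(nx + 1) * dx, np.arange(ny + 1) * dy, indexing="ij")
pot = np.sin(xs) * np.cos(ys)            # arbitrary smooth potential at nodes
bx = (pot[:, 1:] - pot[:, :-1]) / dy     # shape (nx+1, ny): x-faces
by = -(pot[1:, :] - pot[:-1, :]) / dx    # shape (nx, ny+1): y-faces
max_div = np.max(np.abs(div_b(bx, by, dx, dy)))
```

The AMR synchronization steps in the paper (face-centered restriction/prolongation and the reflux-curl) exist precisely to preserve this cancellation across refinement boundaries.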
ENZO: AN ADAPTIVE MESH REFINEMENT CODE FOR ASTROPHYSICS
Bryan, Greg L.; Turk, Matthew J.; Norman, Michael L.; Bordner, James; Xu, Hao; Kritsuk, Alexei G.; O'Shea, Brian W.; Smith, Britton; Abel, Tom; Wang, Peng; Skillman, Samuel W.; Wise, John H.; Reynolds, Daniel R.; Collins, David C.; Harkness, Robert P.; Kim, Ji-hoon; Kuhlen, Michael; Goldbaum, Nathan; Hummels, Cameron; Collaboration: Enzo Collaboration; and others
2014-04-01
This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in one, two, and three dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically thin radiative cooling of primordial and metal-enriched plasmas (as well as some optically-thick cooling models), radiation transport, cosmological expansion, and models for star formation and feedback in a cosmological context. In addition to explaining the algorithms implemented, we present solutions for a wide range of test problems, demonstrate the code's parallel performance, and discuss the Enzo collaboration's code development methodology.
Adaptive Mesh Refinement in Computational Astrophysics -- Methods and Applications
NASA Astrophysics Data System (ADS)
Balsara, D.
2001-12-01
The advent of robust, reliable, and accurate higher order Godunov schemes for many of the systems of equations of interest in computational astrophysics has made it important to understand how to solve them in a multi-scale fashion. This is because the physics associated with astrophysical phenomena evolves on multiple scales, and we wish to arrive at a multi-scale simulation capability that represents this physics. Because astrophysical systems have magnetic fields, multi-scale magnetohydrodynamics (MHD) is of special interest. In this paper we first discuss general issues in adaptive mesh refinement (AMR). We then focus on the important issues in carrying out divergence-free AMR-MHD and catalogue the progress we have made in that area. We show that AMR methods lend themselves to easy parallelization. We then discuss applications of the RIEMANN framework for AMR-MHD to problems in computational astrophysics.
Production-quality Tools for Adaptive Mesh Refinement Visualization
Weber, Gunther H.; Childs, Hank; Bonnell, Kathleen; Meredith,Jeremy; Miller, Mark; Whitlock, Brad; Bethel, E. Wes
2007-10-25
Adaptive Mesh Refinement (AMR) is a highly effective simulation method for spanning a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is extending and deploying VisIt, an open source visualization tool that accommodates AMR as a first-class data type, for use as production-quality, parallel-capable AMR visual data analysis infrastructure. This effort will help science teams that use AMR-based simulations and develop their own AMR visual data analysis software to realize cost and labor savings.
A Spectral Adaptive Mesh Refinement Method for the Burgers equation
NASA Astrophysics Data System (ADS)
Nasr Azadani, Leila; Staples, Anne
2013-03-01
Adaptive mesh refinement (AMR) is a powerful technique in computational fluid dynamics (CFD). Many CFD problems have a wide range of scales that vary in time and space. In order to resolve all the scales numerically, high grid resolutions are required: the smaller the scales, the higher the resolution must be. However, small scales are usually formed in a small portion of the domain or during a particular period of time. AMR is an efficient method for solving these types of problems, allowing high grid resolution where and when it is needed and minimizing memory and CPU time. Here we formulate a spectral version of AMR in order to accelerate simulations of a 1D model for isotropic homogeneous turbulence, the Burgers equation, as a first test of this method. Using pseudospectral methods, we apply AMR in Fourier space. The spectral AMR (SAMR) method we present here is applied to the Burgers equation, and the results are compared with those obtained using standard solution methods on a fine mesh.
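A minimal pseudospectral treatment of the viscous Burgers equation, the baseline the SAMR method builds on, evaluates the nonlinear term in physical space and the derivatives in Fourier space. This sketch uses plain explicit Euler on a fixed set of modes (no adaptivity, no dealiasing); the parameters are illustrative:

```python
import numpy as np

def burgers_spectral_step(u_hat, k, nu, dt):
    """One explicit Euler step of u_t + u u_x = nu u_xx in Fourier space.
    The nonlinearity is formed pseudospectrally: transform to physical
    space, square, transform back, then differentiate by multiplying ik."""
    u = np.fft.irfft(u_hat)
    nonlinear_hat = np.fft.rfft(0.5 * u * u)       # (u^2/2) in Fourier space
    return u_hat + dt * (-1j * k * nonlinear_hat - nu * k**2 * u_hat)

n = 64
x = 2 * np.pi * np.arange(n) / n
k = np.fft.rfftfreq(n, d=1.0 / n)                  # integer wavenumbers 0..n/2
u_hat = np.fft.rfft(np.sin(x))                     # smooth initial condition
for _ in range(100):
    u_hat = burgers_spectral_step(u_hat, k, nu=0.1, dt=1e-3)
u = np.fft.irfft(u_hat)                            # slightly steepened, decayed sine
```

A spectral AMR variant would track which wavenumber range is active and enlarge the retained mode set only where and when the steepening front demands it.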
3D Compressible Melt Transport with Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Dannberg, Juliane; Heister, Timo
2015-04-01
Melt generation and migration have been the subject of numerous investigations, but their typical time and length scales are vastly different from those of mantle convection, which makes it difficult to study these processes in a unified framework. The equations that describe coupled Stokes-Darcy flow were derived long ago and have been successfully implemented and applied in numerical models (Keller et al., 2013). However, modelling magma dynamics poses the challenge of highly non-linear and spatially variable material properties, in particular the viscosity. Applying adaptive mesh refinement to this type of problem is particularly advantageous, as the resolution can be increased in mesh cells where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. In addition, previous models neglect the compressibility of both the solid and the fluid phase. However, experiments have shown that the melt density change from the depth of melt generation to the surface leads to a volume increase of up to 20%. Considering these volume changes in both phases also ensures self-consistency of models that strive to link melt generation to processes in the deeper mantle, where the compressibility of the solid phase becomes more important. We describe our extension of the finite-element mantle convection code ASPECT (Kronbichler et al., 2012) that allows for solving additional equations describing the behaviour of silicate melt percolating through and interacting with a viscously deforming host rock. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. This approach includes both melt migration and melt generation with the accompanying latent heat effects. We evaluate the functionality and potential of this method using a series of simple model setups and benchmarks, comparing results of the compressible and incompressible formulations.
An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.
1993-01-01
We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations, and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than is possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
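A refinement indicator based on Richardson's extrapolation, as mentioned above, can be sketched generically: the difference between one step of size dt and two steps of size dt/2 estimates the local error of a scheme of order p. This is an illustrative construction with hypothetical names, not the code described in the abstract.

```python
import numpy as np

def richardson_flags(u, step, dt, p=2, tol=1e-3):
    """Flag cells whose Richardson error estimate exceeds tol.

    step(u, dt) advances the discrete solution one time step; p is the
    scheme's order of accuracy. One full step and two half steps differ
    by approximately (2**p - 1) times the local error.
    """
    big = step(u, dt)                       # one step of size dt
    small = step(step(u, dt / 2), dt / 2)   # two steps of size dt/2
    err = np.abs(small - big) / (2**p - 1)  # per-cell error estimate
    return err > tol                        # boolean refinement flags
```

Applied to, say, a first-order upwind advection step on piecewise-constant data, the flags concentrate near the discontinuities, which is exactly where local refinement is wanted.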
Adaptive mesh refinement and adjoint methods in geophysics simulations
NASA Astrophysics Data System (ADS)
Burstedde, Carsten
2013-04-01
It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear which criteria are most suitable for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy for making finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times.
Using Adaptive Mesh Refinement to Simulate Storm Surge
NASA Astrophysics Data System (ADS)
Mandli, K. T.; Dawson, C.
2012-12-01
Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the rise in sea level during these storms. Unfortunately, these simulations require high resolution in regions of interest to capture the relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in infrastructure design or forecasting with ensembles of probable storms. One solution to the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions that may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for the placement of computational resolution independent of user interaction, expectations about the dynamics of the flow, and particular regions of interest such as harbors. Many applications have only been made feasible by AMR-type algorithms, which have allowed otherwise impractical simulations to be performed at much lower computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues concerning refinement criteria, optimal resolution and refinement ratios, and inundation.
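A refinement criterion of the kind discussed above, combining a dynamic gradient-based flag with forced refinement in fixed regions of interest such as harbors, might be sketched as follows. This is illustrative only, not GeoClaw's actual flagging code, and all names are hypothetical; the gradient is measured per grid cell.

```python
import numpy as np

def flag_cells(eta, x, y, grad_tol=0.05, regions=()):
    """Flag cells for refinement by the sea-surface gradient magnitude,
    plus forced refinement inside fixed regions of interest.

    eta: 2D array of sea-surface elevation, indexed [y, x];
    regions: iterable of (xmin, xmax, ymin, ymax) boxes always refined.
    """
    gy, gx = np.gradient(eta)                    # per-cell finite differences
    flags = np.hypot(gx, gy) > grad_tol          # dynamic criterion
    X, Y = np.meshgrid(x, y)
    for xmin, xmax, ymin, ymax in regions:       # static regions of interest
        flags |= (X >= xmin) & (X <= xmax) & (Y >= ymin) & (Y <= ymax)
    return flags
```

In a driver loop, the flags would be regenerated each coarse time step so that refinement follows the moving surge front while the harbor boxes stay refined throughout.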
Adaptive Mesh Refinement in Reactive Transport Modeling of Subsurface Environments
NASA Astrophysics Data System (ADS)
Molins, S.; Day, M.; Trebotich, D.; Graves, D. T.
2015-12-01
Adaptive mesh refinement (AMR) is a numerical technique for locally adjusting the resolution of computational grids. AMR makes it possible to superimpose levels of finer grids on the global computational grid in an adaptive manner, allowing for more accurate calculations locally. AMR codes rely on the fundamental concept that the solution can be computed in different regions of the domain with different spatial resolutions. AMR codes have been applied to a wide range of problems, including (but not limited to) fully compressible hydrodynamics, astrophysical flows, cosmological applications, combustion, blood flow, heat transfer in nuclear reactors, and land ice and atmospheric models for climate. In subsurface applications, in particular reactive transport modeling, AMR may be particularly useful in accurately capturing concentration gradients (and hence reaction rates) that develop in localized areas of the simulation domain. Accurate evaluation of reaction rates is critical in many subsurface applications. In this contribution, we will discuss recent applications that bring AMR capabilities to bear on reactive transport problems from the pore scale to the flood plain scale.
Visualizing Geophysical Flow Problems with Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Sevre, E. O.; Yuen, D. A.; George, D. L.; Lee, S.
2011-12-01
Adaptive Mesh Refinement (AMR) is a technique used in software to decompose a computational domain based on the level of refinement necessary for spatial and temporal calculations. Compared to uniform grids, AMR runs can achieve large savings in computational time. In this paper we examine techniques for visualizing tsunami simulations that were run with AMR using the GeoClaw software [Berger2011-1, Berger2011-2]. Because of the computational efficiency of AMR, we have investigated techniques for visualizing AMR data. With good visualization tools, geoscientists can spend more time interpreting results and analyzing data. Good visualization tools can also be adapted easily to work with a variety of output formats, and the goal of this work is to provide a foundation for geoscientists to build on. In the past year GeoClaw has been used to model the 2011 Tohoku tsunami, which originated off the coast of Sendai, Japan, and delivered catastrophic damage to the Fukushima power plant. The aftermath of this single geologic event was still making headlines four months after the fact [Fackler2011]. GeoClaw uses the shallow water equations to model a variety of flows that range from tsunamis to floods to landslides and debris flows [George2011]. With the advanced computations provided by AMR, it is important for researchers to visualize and understand the results in ways that are meaningful to both scientists and the civilians affected by the potential outcomes of the computation. Special visualization techniques can be used to examine data generated with AMR. By incorporating these techniques into their software, geoscientists will be able to harness powerful computational tools, such as GeoClaw, while also maintaining an informative view of their data.
Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics
Lomov, I; Pember, R; Greenough, J; Liu, B
2005-10-18
We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors introduced during the separate advances. The methodology presented here is based on a single-grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single-grid algorithm uses a second-order Godunov scheme with an approximate single-fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified because highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion is concerned solely with hydrodynamics modeling. Results from a number of simulations of flows with and without strength will be presented.
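The recursive coarse-then-fine advance with synchronization described above can be sketched generically; the hierarchy object and its methods here are hypothetical, not the Geodyn/Raptor API.

```python
def advance(hierarchy, level, dt):
    """Recursive Berger-Oliger style advance: step one level by dt, then take
    `ratio` sub-steps on the next finer level to reach the same time, and
    finally synchronize the two levels to remove conservation errors.

    `hierarchy` is assumed to expose num_levels, step(level, dt),
    refinement_ratio(level), and synchronize(level) (hypothetical API).
    """
    hierarchy.step(level, dt)                   # advance this level by dt
    if level + 1 < hierarchy.num_levels:
        ratio = hierarchy.refinement_ratio(level)
        for _ in range(ratio):                  # fine level takes ratio sub-steps
            advance(hierarchy, level + 1, dt / ratio)
        hierarchy.synchronize(level)            # coarse/fine flux fix-up
```

With three levels and a refinement ratio of 2, one call at level 0 produces one coarse step, two steps at level 1, and four at level 2, with a synchronization after each group of fine sub-steps.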
Parallel adaptive mesh refinement techniques for plasticity problems
NASA Technical Reports Server (NTRS)
Barry, W. J.; Jones, M. T.; Plassmann, P. E.
1997-01-01
The accurate modeling of the nonlinear properties of materials can be computationally expensive. Parallel computing offers an attractive way of solving such problems; however, the efficient use of these systems requires the vertical integration of a number of very different software components. In this paper, we explore the solution of two- and three-dimensional, small-strain plasticity problems. We consider a finite-element formulation of the problem with adaptive refinement of an unstructured mesh to accurately model plastic transition zones. We present a framework for the parallel implementation of such complex algorithms. This framework, using libraries from the SUMAA3d project, allows a user to build a parallel finite-element application without writing any parallel code. To demonstrate the effectiveness of this approach on widely varying parallel architectures, we present experimental results from an IBM SP parallel computer and an ATM-connected network of Sun UltraSparc workstations. The results detail the parallel performance of the computational phases of the application as the material is incrementally loaded.
An object-oriented approach for parallel self adaptive mesh refinement on block structured grids
NASA Technical Reports Server (NTRS)
Lemke, Max; Witsch, Kristian; Quinlan, Daniel
1993-01-01
Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.
RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code
Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study
2005-06-06
The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods, and physics modules. In addition to WENO, they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results, and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two, and three dimensions and in Cartesian, cylindrical, and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
Star formation with adaptive mesh refinement and magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Collins, David C.
2009-01-01
In this thesis, we develop an adaptive mesh refinement (AMR) code including magnetic fields, and use it to perform high-resolution simulations of magnetized molecular clouds. The purpose of these simulations is to study present-day star formation in the presence of turbulence and magnetic fields. We first present MHDEnzo, the extension of the cosmology and astrophysics code Enzo to include the effects of magnetic fields. We use a higher-order Godunov Riemann solver for the computation of interface fluxes; constrained transport to compute the electric field from those interface fluxes, which advances the induction equation in a divergence-free manner; a divergence-free reconstruction technique to interpolate the magnetic fields to fine grids; and operator splitting to include gravity and cosmological expansion. We present a series of test problems to demonstrate the quality of the solutions achieved. Additionally, we present several other solvers that were developed along the way. Finally, we present the results from several AMR simulations that study isothermal turbulence in the presence of magnetic fields and self-gravity. Ten simulations with initial Mach number 8.9 were studied, varying several parameters: the virial parameter α, from 0.52 to 3.1; whether they were continuously stirred or allowed to decay; and the number of refinement levels (4 or 6). Measurements of the density probability density function (PDF) were made, showing both the expected log-normal distribution and an additional power law. Measurements of the line-of-sight magnetic field vs. column density give excellent agreement with recent observations. The line width vs. size relationship is measured and compares well with observations, reproducing both turbulent and collapse signatures. The core mass distribution is measured and agrees well with observations of Serpens and Perseus core samples, but the power-law distribution in Ophiuchus is not reproduced by our simulations.
Adaptive Multiresolution or Adaptive Mesh Refinement? A Case Study for 2D Euler Equations
Deiterding, Ralf; Domingues, Margarete O.; Gomes, Sonia M.; Roussel, Olivier; Schneider, Kai
2009-01-01
We present adaptive multiresolution (MR) computations of the two-dimensional compressible Euler equations for a classical Riemann problem. The results are then compared with respect to accuracy and computational efficiency, in terms of CPU time and memory requirements, with the corresponding finite volume scheme on a regular grid. For the same test-case, we also perform computations using adaptive mesh refinement (AMR) imposing similar accuracy requirements. The results thus obtained are compared in terms of computational overhead and compression of the computational grid, using in addition either local or global time stepping strategies. We preliminarily conclude that the multiresolution techniques yield improved memory compression and gain in CPU time with respect to the adaptive mesh refinement method.
A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.
1999-01-01
The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.
FLY: a Tree Code for Adaptive Mesh Refinement
NASA Astrophysics Data System (ADS)
Becciani, U.; Antonuccio-Delogu, V.; Costa, A.; Ferro, D.
FLY is a public domain parallel treecode, which makes heavy use of the one-sided communication paradigm to handle the management of the tree structure. It implements the equations for cosmological evolution and can be run for different cosmological models. This paper shows an example of the integration of a tree N-body code with an adaptive mesh, following the PARAMESH scheme. This new implementation will allow the FLY output, and more generally any binary output, to be used with any hydrodynamics code that adopts the PARAMESH data structure, to study compressible flow problems.
Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems
NASA Technical Reports Server (NTRS)
Steinthorsson, Erlendur; Modiano, David; Colella, Phillip
1995-01-01
To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and the reflection of a shock over a double wedge.
Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions
Chen, Xiaodong; Yang, Vigor
2014-07-15
In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales. A notable example is liquid-jet atomization and the subsequent droplet dynamics. It is essential to characterize the detailed flow physics across variable length scales with high fidelity, in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed based on distance- and topology-oriented criteria, for thin regions bounded by a confining wall or plane of symmetry and for general situations, respectively. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares the ratio of an interfacial cell's size to the distance between the mass center of the cell and a reference plane against a critical value. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The need for interfacial mesh refinement can be detected swiftly, without thickness information, equation solving, variable averaging, or mesh repair. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness. These include the dynamics of colliding droplets, droplet motion in a microchannel, and the atomization of liquid impinging jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.
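The distance-oriented criterion, as described, reduces to a simple per-cell test. The sketch below is a plain reading of that description, not the authors' implementation; the function and parameter names are our own, and the plane normal is assumed to be a unit vector.

```python
def needs_refinement(cell_size, cell_center, plane_point, plane_normal,
                     critical_ratio=0.5):
    """Distance-oriented thin-region test (illustrative): refine an
    interfacial cell when the ratio of its size to its distance from a
    reference plane (wall or symmetry plane) exceeds a critical value.

    cell_center, plane_point, plane_normal: 3-tuples; plane_normal is unit.
    """
    # signed distance from the cell's mass center to the reference plane
    d = abs(sum((c - p) * n
                for c, p, n in zip(cell_center, plane_point, plane_normal)))
    # a cell sitting on the plane is always refined
    return d == 0.0 or cell_size / d > critical_ratio
```

The appeal of such a test is that it requires no thickness reconstruction, equation solving, or averaging: each interfacial cell is checked independently from readily available geometric data.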
An Adaptive Mesh Refinement Strategy for Immersed Boundary/Interface Methods.
Li, Zhilin; Song, Peng
2012-01-01
An adaptive mesh refinement strategy is proposed in this paper for the Immersed Boundary and Immersed Interface methods for two-dimensional elliptic interface problems involving singular sources. The interface is represented by the zero level set of a Lipschitz function φ(x,y). Our adaptive mesh refinement is done within a small tube |φ(x,y)| ≤ δ with finer Cartesian meshes. The discrete linear system of equations is solved by a multigrid solver. The AMR methods can obtain solutions with accuracy similar to that on a uniform fine grid while distributing the mesh more economically, thereby reducing the size of the linear system of equations. Numerical examples are presented to show the efficiency of the grid refinement strategy.
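The tube-based flagging |φ(x,y)| ≤ δ is straightforward to illustrate. The following sketch builds a signed-distance level set for a circular interface and marks the narrow band that would be covered by the finer Cartesian meshes; it is an illustration, not the authors' code, and the radius and δ are arbitrary.

```python
import numpy as np

def tube_flags(phi, delta):
    """Cells with |phi| <= delta lie in the refinement tube around the
    interface (the zero level set of phi)."""
    return np.abs(phi) <= delta

# Level set of a circular interface of radius 0.5 on the domain [-1, 1]^2
n = 101
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
phi = np.hypot(X, Y) - 0.5           # signed distance to the circle
flags = tube_flags(phi, delta=0.05)  # narrow band to be refined
```

Only a small fraction of the domain is flagged, which is exactly the economy the abstract describes: fine resolution near the interface, the coarse mesh everywhere else.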
Adaptive Mesh Refinement for High Accuracy Wall Loss Determination in Accelerating Cavity Design
Ge, L
2004-06-14
This paper presents the improvement in wall loss determination when adaptive mesh refinement (AMR) methods are used with the parallel finite element eigensolver Omega3P. We show that significant reduction in the number of degrees of freedom (DOFs) as well as a faster rate of convergence can be achieved as compared with results from uniform mesh refinement in determining cavity wall loss to a desired accuracy. Test cases for which measurements are available will be examined, and comparison with uniform refinement results will be discussed.
Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)
2002-01-01
This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature of its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that, in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, so local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the solution restricted from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures that grid-converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module are demonstrated using both simple model problems and complex three-dimensional examples using meshes with from 10^6 to 10^7 cells.
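The tau-extrapolation idea, evaluating the coarse-grid residual of the restricted fine-grid solution, can be illustrated on a one-dimensional model problem -u'' = f. This is a sketch of the general principle only, not the multilevel Cartesian flow solver of the paper.

```python
import numpy as np

def tau_estimate(u_fine, f_fine, h):
    """Tau-based truncation error estimate for -u'' = f in 1D: restrict the
    fine-grid solution by injection onto the 2h grid, then evaluate the
    coarse-grid residual, which approximates the coarse grid's local
    truncation error when u_fine is (nearly) the exact solution."""
    u_c, f_c = u_fine[::2], f_fine[::2]   # injection onto the coarse (2h) grid
    H = 2.0 * h
    tau = np.zeros_like(u_c)
    # residual of the coarse 3-point Laplacian applied to the restricted solution
    tau[1:-1] = f_c[1:-1] + (u_c[:-2] - 2.0 * u_c[1:-1] + u_c[2:]) / H**2
    return tau
```

For a smooth solution such as u = sin(πx), the estimate behaves like (H²/12)·u'''', i.e. it scales with the true second-order truncation error, which is what makes it usable as a refinement indicator.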
Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis
2005-12-01
This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.
Adaptive mesh refinement techniques for the immersed interface method applied to flow problems.
Li, Zhilin; Song, Peng
2013-06-01
In this paper, we develop an adaptive mesh refinement strategy for the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in [12] (CiCP, 12(2012), 515-527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by surface tension, and classical bubble deformation problems. A new, simple area-preserving strategy for the level set method is also proposed in this paper.
Hornung, R.D.
1996-12-31
An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.
NASA Astrophysics Data System (ADS)
Anderson, Robert; Pember, Richard; Elliott, Noah
2001-11-01
We present a method, ALE-AMR, for modeling unsteady compressible flow that combines a staggered grid arbitrary Lagrangian-Eulerian (ALE) scheme with structured local adaptive mesh refinement (AMR). The ALE method is a three step scheme on a staggered grid of quadrilateral cells: Lagrangian advance, mesh relaxation, and remap. The AMR scheme uses a mesh hierarchy that is dynamic in time and is composed of nested structured grids of varying resolution. The integration algorithm on the hierarchy is a recursive procedure in which the coarse grids are advanced a single time step, the fine grids are advanced to the same time, and the coarse and fine grid solutions are synchronized. The novel details of ALE-AMR are primarily motivated by the need to reconcile and extend AMR techniques typically employed for stationary rectangular meshes with cell-centered quantities to the moving quadrilateral meshes with staggered quantities used in the ALE scheme. Solutions of several test problems are discussed.
Integration over two-dimensional Brillouin zones by adaptive mesh refinement
NASA Astrophysics Data System (ADS)
Henk, J.
2001-07-01
Adaptive mesh-refinement (AMR) schemes for integration over two-dimensional Brillouin zones are presented and their properties are investigated in detail. A salient feature of these integration techniques is that the grid of sampling points is automatically adapted to the integrand in such a way that regions with high accuracy demand are sampled with high density, while the other regions are sampled with low density. This adaptation may save a sizable amount of computation time in comparison with those integration methods without mesh refinement. Several AMR schemes for one- and two-dimensional integration are introduced. As an application, the spin-dependent conductance of electronic tunneling through planar junctions is investigated and discussed with regard to Brillouin zone integration.
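The refinement idea, recursively subdividing integration cells wherever a coarse and a refined estimate disagree, can be sketched as follows. The midpoint rule, tolerance, and Gaussian integrand are illustrative assumptions, not the specific schemes of the paper:

```python
import math

def adapt_integrate(f, x0, x1, y0, y1, tol, depth=0, max_depth=10):
    """Midpoint-rule integration over [x0,x1]x[y0,y1] that recursively
    subdivides a cell into four wherever the refined estimate disagrees
    with the coarse one by more than tol per unit area."""
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    area = (x1 - x0) * (y1 - y0)
    coarse = f(xm, ym) * area
    quads = [(x0, xm, y0, ym), (xm, x1, y0, ym),
             (x0, xm, ym, y1), (xm, x1, ym, y1)]
    fine = sum(f(0.5 * (a + b), 0.5 * (c + d)) * 0.25 * area
               for a, b, c, d in quads)
    if depth >= max_depth or abs(fine - coarse) <= tol * area:
        return fine
    return sum(adapt_integrate(f, a, b, c, d, tol, depth + 1, max_depth)
               for a, b, c, d in quads)

# sharply peaked integrand: sampling concentrates automatically at the peak
val = adapt_integrate(lambda kx, ky: math.exp(-50.0 * (kx**2 + ky**2)),
                      -1.0, 1.0, -1.0, 1.0, tol=1e-4)
print(val)  # close to pi/50 ~ 0.06283 (the Gaussian's full-plane integral)
```

Regions where the integrand is flat terminate at the coarsest level, which is the source of the time savings the abstract mentions.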
NASA Astrophysics Data System (ADS)
Leng, Wei; Zhong, Shijie
2011-04-01
Numerical modeling of mantle convection is challenging. Owing to the multiscale nature of mantle dynamics, high resolution is often required in localized regions, with coarser resolution being sufficient elsewhere. When investigating thermochemical mantle convection, high resolution is required to resolve sharp and often discontinuous boundaries between distinct chemical components. In this paper, we present a 2-D finite element code with adaptive mesh refinement techniques for simulating compressible thermochemical mantle convection. By comparing model predictions with a range of analytical and previously published benchmark solutions, we demonstrate the accuracy of our code. By refining and coarsening the mesh according to certain criteria and dynamically adjusting the number of particles in each element, our code can simulate such problems efficiently, dramatically reducing the computational requirements (in terms of memory and CPU time) when compared to a fixed, uniform mesh simulation. The resolving capabilities of the technique are further highlighted by examining plume-induced entrainment in a thermochemical mantle convection simulation.
Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.
2006-01-01
Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.
ENZO+MORAY: radiation hydrodynamics adaptive mesh refinement simulations with adaptive ray tracing
NASA Astrophysics Data System (ADS)
Wise, John H.; Abel, Tom
2011-07-01
We describe a photon-conserving radiative transfer algorithm, using a spatially-adaptive ray-tracing scheme, and its parallel implementation into the adaptive mesh refinement cosmological hydrodynamics code ENZO. By coupling the solver with the energy equation and non-equilibrium chemistry network, our radiation hydrodynamics framework can be utilized to study a broad range of astrophysical problems, such as stellar and black hole feedback. Inaccuracies can arise from large time-steps and poor sampling; therefore, we devised an adaptive time-stepping scheme and a fast approximation of the optically-thin radiation field with multiple sources. We test the method with several radiative transfer and radiation hydrodynamics tests that are given in Iliev et al. We further test our method with more dynamical situations, for example, the propagation of an ionization front through a Rayleigh-Taylor instability, time-varying luminosities and collimated radiation. The test suite also includes an expanding H II region in a magnetized medium, utilizing the newly implemented magnetohydrodynamics module in ENZO. This method linearly scales with the number of point sources and number of grid cells. Our implementation is scalable to 512 processors on distributed memory machines and can include the radiation pressure and secondary ionizations from X-ray radiation. It is included in the newest public release of ENZO.
Fukuda, Jun-ichi; Yoneya, Makoto; Yokoyama, Hiroshi
2002-04-01
We investigate numerically the structure of topological defects close to a spherical particle immersed in a uniformly aligned nematic liquid crystal. To this end we have implemented an adaptive mesh refinement scheme in an axi-symmetric three-dimensional system, which makes it feasible to take into account properly the large length scale difference between the particle and the topological defects. The adaptive mesh refinement scheme proves to be quite efficient and useful in the investigation of not only the macroscopic properties such as the defect position but also the fine structure of defects. It can be shown that a hyperbolic hedgehog that accompanies a particle with strong homeotropic anchoring takes the structure of a ring.
Implementation of Implicit Adaptive Mesh Refinement in an Unstructured Finite-Volume Flow Solver
NASA Technical Reports Server (NTRS)
Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.
2013-01-01
This paper explores the implementation of adaptive mesh refinement in an unstructured, finite-volume solver. Unsteady and steady problems are considered. The effect on the recovery of high-order numerics is explored and the results are favorable. Important to this work is the ability to provide a path for efficient, implicit time advancement. A method using a simple refinement sensor based on undivided differences is discussed and applied to a practical problem: a shock-shock interaction on a hypersonic, inviscid double-wedge. Cases are compared to uniform grids without the use of adapted meshes in order to assess error and computational expense. Discussion of difficulties, advances, and future work prepare this method for additional research. The potential for this method in more complicated flows is described.
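A sensor of the undivided-difference kind mentioned above can be sketched as below (the tolerance, grid, and test profile are illustrative assumptions). Because the difference is not divided by h, it is O(h^2) in smooth regions but O(1) at a discontinuity:

```python
import numpy as np

def undivided_difference_sensor(u, tol):
    """Flag cells where |u[i+1] - 2u[i] + u[i-1]| > tol; the undivided
    second difference singles out jumps while ignoring smooth variation."""
    flags = np.zeros(u.shape, dtype=bool)
    flags[1:-1] = np.abs(u[2:] - 2.0 * u[1:-1] + u[:-2]) > tol
    return flags

# smooth wave plus a unit step near x = 0.5
x = np.linspace(0.0, 1.0, 201)
u = np.sin(2.0 * np.pi * x) + np.where(x > 0.5, 1.0, 0.0)
flags = undivided_difference_sensor(u, tol=0.1)
print(np.where(flags)[0])  # two adjacent indices straddling the jump
```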
A 3-D adaptive mesh refinement algorithm for multimaterial gas dynamics
Puckett, E.G.; Saltzman, J.S.
1991-08-12
Adaptive Mesh Refinement (AMR) in conjunction with high order upwind finite difference methods has been used effectively on a variety of problems. In this paper we discuss an implementation of an AMR finite difference method that solves the equations of gas dynamics with two material species in three dimensions. An equation for the evolution of volume fractions augments the gas dynamics system. The material interface is preserved and tracked from the volume fractions using a piecewise linear reconstruction technique. 14 refs., 4 figs.
Adaptation of Block-Structured Adaptive Mesh Refinement to Particle-In-Cell simulations
NASA Astrophysics Data System (ADS)
Vay, Jean-Luc; Colella, Phillip; McCorquodale, Peter; Friedman, Alex; Grote, Dave
2001-10-01
Particle-In-Cell (PIC) methods which solve the Maxwell equations (or a simplification) on a regular Cartesian grid are routinely used to simulate plasma and particle beam systems. Several techniques have been developed to accommodate irregular boundaries and scale variations. We describe here an ongoing effort to adapt the block-structured Adaptive Mesh Refinement (AMR) algorithm (http://seesar.lbl.gov/AMR/) to the Particle-In-Cell method. The AMR technique connects grids having different resolutions, using interpolation. Special care has to be taken to avoid the introduction of spurious forces close to the boundary of the inner, high-resolution grid, or at least to reduce such forces to an acceptable level. The Berkeley AMR library CHOMBO has been modified and coupled to WARP3d (D.P. Grote et al., Fusion Engineering and Design 32-33 (1996), 193-200), a PIC code which is used for the development of high current accelerators for Heavy Ion Fusion. The methods and preliminary results will be presented.
A GPU implementation of adaptive mesh refinement to simulate tsunamis generated by landslides
NASA Astrophysics Data System (ADS)
de la Asunción, Marc; Castro, Manuel J.
2016-04-01
In this work we propose a CUDA implementation for the simulation of landslide-generated tsunamis using a two-layer Savage-Hutter type model and adaptive mesh refinement (AMR). The AMR method consists of dynamically increasing the spatial resolution of the regions of interest of the domain while keeping the rest of the domain at low resolution, thus obtaining better runtimes and similar results compared to increasing the spatial resolution of the entire domain. Our AMR implementation uses a patch-based approach, it supports up to three levels, power-of-two ratios of refinement, different refinement criteria and also several user parameters to control the refinement and clustering behaviour. A strategy based on the variation of the cell values during the simulation is used to interpolate and propagate the values of the fine cells. Several numerical experiments using artificial and realistic scenarios are presented.
Simulating Multi-scale Fluid Flows Using Adaptive Mesh Refinement Methods
NASA Astrophysics Data System (ADS)
Rowe, Kristopher; Lamb, Kevin
2015-11-01
When modelling flows with disparate length scales one must use a computational mesh that is fine enough to capture the smallest phenomena of interest. Traditional computational fluid dynamics models apply a mesh of uniform resolution to the entire computational domain; however, if the smallest scales of interest are isolated much of the computational resources used in these simulations will be wasted in regions where they are not needed. Adaptive mesh refinement methods seek to only apply resolution where it is needed. Beginning with a single coarse grid, a nested hierarchy of block structured grids is built in regions of the fluid flow where more resolution is necessary. As the fluid flow varies in time this hierarchy of grids is dynamically rebuilt to follow the phenomena of interest. Through the modelling of the interaction of vortices with wall boundary layers, it will be demonstrated that adaptive mesh refinement methods will produce equivalent results to traditional single resolution codes while using less processors, memory, and wall-clock time. Additionally, it is possible to model such flows to higher Reynolds numbers than have been feasible previously. This work was supported by NSERC and SHARCNET.
Error estimation and adaptive mesh refinement for parallel analysis of shell structures
NASA Technical Reports Server (NTRS)
Keating, Scott C.; Felippa, Carlos A.; Park, K. C.
1994-01-01
The formulation and application of element-level, element-independent error indicators are investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness, and they are used to drive adaptive mesh refinement, which we demonstrate for two-dimensional plane stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.
Practical improvements of multi-grid iteration for adaptive mesh refinement method
NASA Astrophysics Data System (ADS)
Miyashita, Hisashi; Yamada, Yoshiyuki
2005-03-01
Adaptive mesh refinement (AMR) is a powerful tool to efficiently solve multiscale problems. However, the vanilla AMR method has a well-known critical limitation: it cannot be applied to non-local problems. Although multi-grid iteration (MGI) can be regarded as a good remedy for a non-local problem such as the Poisson equation, we observed fundamental difficulties in applying the MGI technique in AMR to realistic problems under complicated mesh layouts, because it does not converge or it requires too many iterations even if it does converge. To cope with the problem, when updating the next approximation in the MGI process, we calculate precise total corrections that are relatively accurate to the current residual by introducing a new iteration for such a total correction. This procedure greatly accelerates the MGI convergence speed, especially under complicated mesh layouts.
NASA Astrophysics Data System (ADS)
Commerçon, B.; Debout, V.; Teyssier, R.
2014-03-01
Context. Implicit solvers present strong limitations when used on supercomputing facilities and in particular for adaptive mesh-refinement codes. Aims: We present a new method for implicit adaptive time-stepping on adaptive mesh-refinement grids. We implement it in the radiation-hydrodynamics solver we designed for the RAMSES code for astrophysical purposes and, more particularly, for protostellar collapse. Methods: We briefly recall the radiation-hydrodynamics equations and the adaptive time-stepping methodology used for hydrodynamical solvers. We then introduce the different types of boundary conditions (Dirichlet, Neumann, and Robin) that are used at the interface between levels and present our implementation of the new method in the RAMSES code. The method is tested against classical diffusion and radiation-hydrodynamics tests, after which we present an application for protostellar collapse. Results: We show that using Dirichlet boundary conditions at level interfaces is a good compromise between robustness and accuracy and that it can be used in structure formation calculations. The gain in computational time over our former unique time step method ranges from factors of 5 to 50, depending on the level of adaptive time-stepping and on the problem. We successfully compare the old and new methods for protostellar collapse calculations that involve highly nonlinear physics. Conclusions: We have developed a simple but robust method for adaptive time-stepping of implicit schemes on adaptive mesh-refinement grids. It can be applied to a wide variety of physical problems that involve diffusion processes.
Anderson, R W; Pember, R B; Elliott, N S
2001-10-22
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required through dynamic adaption. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.
Yaqi Wang; Jean C. Ragusa
2011-02-01
Standard and goal-oriented adaptive mesh refinement (AMR) techniques are presented for the linear Boltzmann transport equation. A posteriori error estimates are employed to drive the AMR process and are based on angular-moment information rather than on directional information, leading to direction-independent adapted meshes. An error estimate based on a two-mesh approach and a jump-based error indicator are compared for various test problems. In addition to the standard AMR approach, where the global error in the solution is diminished, a goal-oriented AMR procedure is devised and aims at reducing the error in user-specified quantities of interest. The quantities of interest are functionals of the solution and may include, for instance, point-wise flux values or average reaction rates in a subdomain. A high-order (up to order 4) Discontinuous Galerkin technique with standard upwinding is employed for the spatial discretization; the discrete ordinates method is used to treat the angular variable.
A Parallel Ocean Model With Adaptive Mesh Refinement Capability For Global Ocean Prediction
Herrnstein, Aaron R.
2005-12-01
An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO₂ concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No
NASA Astrophysics Data System (ADS)
den, M.; Yamashita, K.; Ogawa, T.
Three-dimensional (3D) hydrodynamic (HD) and magnetohydrodynamic (MHD) simulation codes using an adaptive mesh refinement (AMR) scheme are developed. This method places fine grids over areas of interest, such as shock waves, in order to obtain high resolution, and places uniform grids with lower resolution elsewhere. The AMR scheme can thus provide a combination of high solution accuracy and computational robustness. We demonstrate numerical results for a simplified model of shock propagation, which strongly indicate that the AMR techniques have the ability to resolve disturbances in interplanetary space. We also present simulation results for the MHD code.
Conformal refinement of unstructured quadrilateral meshes
Garimella, Rao
2009-01-01
We present a multilevel adaptive refinement technique for unstructured quadrilateral meshes in which the mesh is kept conformal at all times. This means that the refined mesh, like the original, is formed of only quadrilateral elements that intersect strictly along edges or at vertices, i.e., vertices of one quadrilateral element do not lie in an edge of another quadrilateral. Elements are refined using templates based on 1:3 refinement of edges. We demonstrate that by careful design of the refinement and coarsening strategy, we can maintain high quality elements in the refined mesh. We demonstrate the method on a number of examples with dynamically changing refinement regions.
A new adaptive mesh refinement data structure with an application to detonation
NASA Astrophysics Data System (ADS)
Ji, Hua; Lien, Fue-Sang; Yee, Eugene
2010-11-01
A new Cell-based Structured Adaptive Mesh Refinement (CSAMR) data structure is developed. In our CSAMR data structure, Cartesian-like indices are used to identify each cell. With these stored indices, the information on the parent, children and neighbors of a given cell can be accessed simply and efficiently. Owing to the usage of these indices, the computer memory required for storage of the proposed AMR data structure is only 5/8 word per cell, in contrast to the conventional oct-tree [P. MacNeice, K.M. Olson, C. Mobarry, R. deFainchtein, C. Packer, PARAMESH: a parallel adaptive mesh refinement community toolkit, Comput. Phys. Commun. 126 (2000) 330] and the fully threaded tree (FTT) [A.M. Khokhlov, Fully threaded tree algorithms for adaptive mesh fluid dynamics simulations, J. Comput. Phys. 143 (1998) 519] data structures which require, respectively, 19 and 2 3/8 words per cell for storage of the connectivity information. Because the connectivity information (e.g., parent, children and neighbors) of a cell in our proposed AMR data structure can be accessed using only the cell indices, a tree structure which was required in previous approaches for the organization of the AMR data is no longer needed for this new data structure. Instead, a much simpler hash table structure is used to maintain the AMR data, with the entry keys in the hash table obtained directly from the explicitly stored cell indices. The proposed AMR data structure simplifies the implementation and parallelization of an AMR code. Two three-dimensional test cases are used to illustrate and evaluate the computational performance of the new CSAMR data structure.
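The index arithmetic that replaces tree traversal can be sketched as follows, shown in 2-D for brevity; the 2:1 refinement ratio and the dictionary-as-hash-table are illustrative assumptions, not the paper's implementation:

```python
def parent(level, i, j):
    # integer halving of the indices yields the parent cell directly
    return (level - 1, i // 2, j // 2)

def children(level, i, j):
    # doubling (plus offsets) yields the four children under 2:1 refinement
    return [(level + 1, 2 * i + di, 2 * j + dj)
            for di in (0, 1) for dj in (0, 1)]

def neighbors(level, i, j):
    return [(level, i - 1, j), (level, i + 1, j),
            (level, i, j - 1), (level, i, j + 1)]

# a hash table keyed by the stored indices replaces any tree structure
cells = {(0, 0, 0): "coarse"}
for key in children(0, 0, 0):
    cells[key] = "child"

assert all(parent(*c) == (0, 0, 0) for c in children(0, 0, 0))
print(sorted(cells))  # connectivity recovered from the indices alone
```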
GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS
Schive, H.-Y.; Tsai, Y.-C.; Chiueh, Tzihong
2010-02-01
We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.
Etienne, Zachariah B.; Liu, Yuk Tung; Shapiro, Stuart L.
2010-10-15
We have written and tested a new general relativistic magnetohydrodynamics code, capable of evolving magnetohydrodynamics (MHD) fluids in dynamical spacetimes with adaptive-mesh refinement (AMR). Our code solves the Einstein-Maxwell-MHD system of coupled equations in full 3+1 dimensions, evolving the metric via the Baumgarte-Shapiro-Shibata-Nakamura formalism and the MHD and magnetic induction equations via a conservative, high-resolution shock-capturing scheme. The induction equations are recast as an evolution equation for the magnetic vector potential, which exists on a grid that is staggered with respect to the hydrodynamic and metric variables. The divergenceless constraint ∇·B = 0 is enforced by construction, since B is computed as the curl of the vector potential. Our MHD scheme is fully compatible with AMR, so that fluids at AMR refinement boundaries maintain ∇·B = 0. In simulations with uniform grid spacing, our MHD scheme is numerically equivalent to a commonly used, staggered-mesh constrained-transport scheme. We present code validation test results, both in Minkowski and curved spacetimes. They include magnetized shocks, nonlinear Alfvén waves, cylindrical explosions, cylindrical rotating disks, magnetized Bondi tests, and the collapse of a magnetized rotating star. Some of the more stringent tests involve black holes. We find good agreement between analytic and numerical solutions in these tests, and achieve convergence at the expected order.
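Why evolving the vector potential keeps B divergence-free can be illustrated on a 2-D staggered grid (a generic sketch, not the paper's GRMHD scheme): the discrete divergence of a discrete curl telescopes to zero for any Az.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx, dy = 32, 0.1, 0.1
Az = rng.standard_normal((n + 1, n + 1))   # z-potential on cell corners

# face-centered fields from the discrete curl: Bx = dAz/dy, By = -dAz/dx
Bx = (Az[:, 1:] - Az[:, :-1]) / dy
By = -(Az[1:, :] - Az[:-1, :]) / dx

# discrete divergence at cell centers: the corner values cancel in pairs
divB = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
print(np.abs(divB).max())  # zero to rounding error for ANY choice of Az
```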
Cherry, Elizabeth M; Greenside, Henry S; Henriquez, Craig S
2003-09-01
A recently developed space-time adaptive mesh refinement algorithm (AMRA) for simulating isotropic one- and two-dimensional excitable media is generalized to simulate three-dimensional anisotropic media. The accuracy and efficiency of the algorithm is investigated for anisotropic and inhomogeneous 2D and 3D domains using the Luo-Rudy 1 (LR1) and FitzHugh-Nagumo models. For a propagating wave in a 3D slab of tissue with LR1 membrane kinetics and rotational anisotropy comparable to that found in the human heart, factors of 50 and 30 are found, respectively, for the speedup and for the savings in memory compared to an algorithm using a uniform space-time mesh at the finest resolution of the AMRA method. For anisotropic 2D and 3D media, we find no reduction in accuracy compared to a uniform space-time mesh. These results suggest that the AMRA will be able to simulate the 3D electrical dynamics of canine ventricles quantitatively for 1 s using 32 1-GHz Alpha processors in approximately 9 h.
Compact integration factor methods for complex domains and adaptive mesh refinement.
Liu, Xinfeng; Nie, Qing
2010-08-10
The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates through examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has computational efficiency and stability properties similar to those of cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed.
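The integration factor idea described above can be sketched in one dimension (this is only a first-order illustration under assumed Dirichlet boundaries, not the authors' cIIF scheme): the stiff diffusion term is advanced exactly through the matrix exponential, while only the nonstiff reaction is stepped explicitly, so the time step is not limited by the diffusive stability constraint.

```python
import numpy as np

# Sketch of a first-order integration factor step for u_t = D u_xx + f(u):
# u^{n+1} = exp(dt*D*L) (u^n + dt f(u^n)), with L the discrete Laplacian.
def laplacian(n, dx):
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    return L / dx**2                        # 1D Dirichlet Laplacian

def if_step(u, L, D, dt, f):
    w, V = np.linalg.eigh(D * L)            # L symmetric: exact matrix exponential
    expL = V @ np.diag(np.exp(dt * w)) @ V.T
    return expL @ (u + dt * f(u))

n, dx, dt = 64, 1.0 / 65, 0.05              # dt far above the explicit limit dx^2/(2D)
x = np.linspace(dx, 1 - dx, n)
u = np.sin(np.pi * x)
L = laplacian(n, dx)
for _ in range(20):
    u = if_step(u, L, 1.0, dt, lambda v: v * (1 - v))   # logistic reaction
print(u.max())                              # solution remains bounded despite the large step
```

With dt ≈ 400 times the explicit diffusion limit, an explicit scheme would blow up immediately; the integration factor step stays stable, which is the property the abstract exploits to allow large time steps on fine AMR levels.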
A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model
Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A
2009-03-03
Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR, which relaxes the requirement that material boundaries lie along mesh boundaries.
Cell-based Adaptive Mesh Refinement on the GPU with Applications to Exascale Supercomputing
NASA Astrophysics Data System (ADS)
Trujillo, Dennis; Robey, Robert; Davis, Neal; Nicholaeff, David
2011-10-01
We present an OpenCL implementation of a cell-based adaptive mesh refinement (AMR) scheme for the shallow water equations. The challenges associated with ensuring locality in the algorithm architecture to fully exploit the massive number of parallel threads on the GPU are discussed. This includes a proof of concept that a cell-based AMR code can be effectively implemented, even on a small scale, in the memory and threading model provided by OpenCL. Additionally, the program requires dynamic memory in order to properly implement the mesh; as this is not supported in the OpenCL 1.1 standard, a combination of CPU memory management and GPU computation effectively implements a dynamic memory allocation scheme. Load balancing is achieved through a new stencil-based implementation of a space-filling curve, eliminating the need for a complete recalculation of the indexing on the mesh. A Cartesian-grid hash table scheme that allows fast parallel neighbor accesses is also discussed. Finally, the relative speedup of the GPU-enabled AMR code is compared to the original serial version. We conclude that parallelization using the GPU provides significant speedup for typical numerical applications and is feasible for scientific applications in the next generation of supercomputing.
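The space-filling-curve load balancing mentioned above can be illustrated with a standard Morton (Z-order) key (the paper's stencil-based variant differs in how the curve is updated; this sketch only shows the underlying idea): interleaving the bits of a cell's (i, j) indices gives a 1D ordering in which contiguous key ranges correspond to spatially compact groups of cells, so splitting the sorted list evenly balances work across processors.

```python
# Hypothetical illustration of Z-order (Morton) indexing for AMR load balancing.
def morton2d(i, j, bits=16):
    key = 0
    for b in range(bits):                     # interleave bits of i and j
        key |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
    return key

cells = [(i, j) for i in range(4) for j in range(4)]
order = sorted(cells, key=lambda c: morton2d(*c))
print(order[:4])   # [(0, 0), (1, 0), (0, 1), (1, 1)]: one spatially compact quadrant
```

Note that the first quarter of the curve covers exactly the lower-left 2x2 block, which is why consecutive curve segments make good per-processor work units.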
Single-pass GPU-raycasting for structured adaptive mesh refinement data
NASA Astrophysics Data System (ADS)
Kaehler, Ralf; Abel, Tom
2013-01-01
Structured Adaptive Mesh Refinement (SAMR) is a popular numerical technique to study processes with high spatial and temporal dynamic range. It reduces computational requirements by adapting the lattice on which the underlying differential equations are solved to most efficiently represent the solution. Particularly in astrophysics and cosmology, such simulations can now capture spatial scales ten or more orders of magnitude apart. The irregular locations and extents of the refined regions in the SAMR scheme, and the fact that different resolution levels partially overlap, pose a challenge for GPU-based direct volume rendering methods. kD-trees have proven advantageous for subdividing the data domain into non-overlapping blocks of equally sized cells, optimal for the texture units of current graphics hardware, but previous GPU-supported raycasting approaches for SAMR data using this data structure required a separate rendering pass for each node, preventing the application of many advanced lighting schemes that require simultaneous access to more than one block of cells. In this paper we present the first single-pass GPU-raycasting algorithm for SAMR data that is based on a kD-tree. The tree is efficiently encoded by a set of 3D textures, which allows complete rays to be sampled adaptively, entirely on the GPU, without any CPU interaction. We discuss two different data storage strategies for accessing the grid data on the GPU and apply them to several datasets to demonstrate the benefits of the proposed method.
Anderson, R W; Pember, R B; Elliot, N S
2000-09-26
A new method for the solution of the unsteady Euler equations has been developed. The method combines staggered grid Lagrangian techniques with structured local adaptive mesh refinement (AMR). This method is a precursor to a more general adaptive arbitrary Lagrangian Eulerian (ALE-AMR) algorithm under development, which will facilitate the solution of problems currently at and beyond the reach of traditional ALE methods by focusing computational resources where they are required. Many of the core issues involved in the development of the ALE-AMR method hinge upon the integration of AMR with a Lagrange step, which is the focus of the work described here. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. These new algorithmic components are first developed in one dimension and are then generalized to two dimensions. Solutions of several model problems involving shock hydrodynamics are presented and discussed.
Galaxy Mergers with Adaptive Mesh Refinement: Star Formation and Hot Gas Outflow
Kim, Ji-hoon; Wise, John H.; Abel, Tom; /KIPAC, Menlo Park /Stanford U., Phys. Dept.
2011-06-22
In hierarchical structure formation, merging of galaxies is frequent and known to dramatically affect their properties. To comprehend these interactions, high-resolution simulations are indispensable because of the nonlinear coupling between pc and Mpc scales. To this end, we present the first adaptive mesh refinement (AMR) simulation of two merging, low mass, initially gas-rich galaxies (1.8 × 10^10 M_⊙ each), including star formation and feedback. With the galaxies resolved by ≈ 2 × 10^7 total computational elements, we achieve unprecedented resolution of the multiphase interstellar medium, finding a widespread starburst in the merging galaxies via shock-induced star formation. The high dynamic range of AMR also allows us to follow the interplay between the galaxies and their embedding medium, depicting how galactic outflows and a hot metal-rich halo form. These results demonstrate that AMR provides a powerful tool for understanding interacting galaxies.
Detached Eddy Simulation of the UH-60 Rotor Wake Using Adaptive Mesh Refinement
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Ahmad, Jasim U.
2012-01-01
Time-dependent Navier-Stokes flow simulations have been carried out for a UH-60 rotor with simplified hub in forward flight and hover flight conditions. Flexible rotor blades and flight trim conditions are modeled and established by loosely coupling the OVERFLOW Computational Fluid Dynamics (CFD) code with the CAMRAD II helicopter comprehensive code. High order spatial differences, Adaptive Mesh Refinement (AMR), and Detached Eddy Simulation (DES) are used to obtain highly resolved vortex wakes, where the largest turbulent structures are captured. Special attention is directed towards ensuring the dual time accuracy is within the asymptotic range, and verifying the loose coupling convergence process using AMR. The AMR/DES simulation produced vortical worms for forward flight and hover conditions, similar to previous results obtained for the TRAM rotor in hover. AMR proved to be an efficient means to capture a rotor wake without a priori knowledge of the wake shape.
NASA Astrophysics Data System (ADS)
Zanotti, O.; Dumbser, M.; Fambri, F.
2016-05-01
We describe a new method for the solution of the ideal MHD equations in special relativity which adopts the following strategy: (i) the main scheme is based on Discontinuous Galerkin (DG) methods, allowing for an arbitrary accuracy of order N+1, where N is the degree of the basis polynomials; (ii) in order to cope with oscillations at discontinuities, an "a posteriori" sub-cell limiter is activated, which scatters the DG polynomials of the previous time-step onto a set of 2N+1 sub-cells, over which the solution is recomputed by means of a robust finite volume scheme; (iii) a local spacetime discontinuous Galerkin predictor is applied both on the main grid of the DG scheme and on the sub-grid of the finite volume scheme; (iv) adaptive mesh refinement (AMR) with local time-stepping is used. We validate the new scheme and comment on its potential applications in high energy astrophysics.
3D Adaptive Mesh Refinement Simulations of Pellet Injection in Tokamaks
R. Samtaney; S.C. Jardin; P. Colella; D.F. Martin
2003-10-20
We present results of Adaptive Mesh Refinement (AMR) simulations of the pellet injection process, a proven method of refueling tokamaks. AMR is a computationally efficient way to provide the resolution required to simulate realistic pellet sizes relative to device dimensions. The mathematical model comprises the single-fluid MHD equations with source terms in the continuity equation, along with a pellet ablation rate model. The numerical method developed is an explicit unsplit upwind treatment of the 8-wave formulation, coupled with a MAC projection method to enforce the solenoidal property of the magnetic field. The Chombo framework is used for AMR. The role of the E x B drift in mass redistribution during inside and outside pellet injections is emphasized.
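The projection idea named in the abstract above can be sketched as follows (illustrative only: the paper uses a MAC projection on staggered AMR grids, while this is a periodic, cell-centered spectral version): solve a Poisson equation for a scalar potential whose gradient carries the divergent part of B, then subtract it.

```python
import numpy as np

# Divergence cleaning by projection: solve lap(phi) = div(B), set B <- B - grad(phi).
N, L = 64, 2 * np.pi
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi
kx, ky = np.meshgrid(k, k, indexing="ij")

rng = np.random.default_rng(1)
Bx, By = rng.standard_normal((2, N, N))          # arbitrary, non-solenoidal field

Bxh, Byh = np.fft.fft2(Bx), np.fft.fft2(By)
div_h = 1j * kx * Bxh + 1j * ky * Byh            # spectral divergence
div_before = np.fft.ifft2(div_h).real
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                                   # avoid 0/0; the mean of phi is free
phi_h = -div_h / k2                              # lap(phi) = div(B)  =>  -k^2 phi = div
Bxh -= 1j * kx * phi_h                           # B <- B - grad(phi)
Byh -= 1j * ky * phi_h

div_after = np.fft.ifft2(1j * kx * Bxh + 1j * ky * Byh).real
print(np.max(np.abs(div_before)), np.max(np.abs(div_after)))   # large -> round-off level
```

The projected field is solenoidal to machine precision; a MAC projection does the same thing with face-centered differences and a discrete Poisson solve instead of FFTs.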
Dynamic Implicit 3D Adaptive Mesh Refinement for Non-Equilibrium Radiation Diffusion
Philip, Bobby; Wang, Zhen; Berrill, Mark A; Rodriguez Rodriguez, Manuel; Pernice, Michael
2014-01-01
The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multiphysics systems: implicit time integration for efficient long term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent linear solver convergence.
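The Jacobian-free Newton-Krylov approach mentioned above can be sketched on a toy nonlinear system (this is not the paper's AMR implementation; the problem and tolerances here are assumptions for illustration): the Krylov solver only needs Jacobian-vector products, which are approximated by a finite difference of the residual, so the Jacobian is never formed or stored.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy nonlinear residual: F(u) = lap(u) - exp(u) + 1 on a periodic 1D grid.
n = 32
def F(u):
    lap = (-2.0 * u + np.roll(u, 1) + np.roll(u, -1)) * (n / (2 * np.pi)) ** 2
    return lap - np.exp(u) + 1.0

u = np.full(n, 0.5)                         # initial guess
for _ in range(20):
    r = F(u)
    if np.linalg.norm(r) < 1e-10:
        break
    eps = 1e-7                              # finite-difference Jacobian-vector product
    J = LinearOperator((n, n), matvec=lambda v: (F(u + eps * v) - r) / eps,
                       dtype=float)
    du, _ = gmres(J, -r)                    # Krylov solve of J du = -F(u)
    u = u + du
print(np.linalg.norm(F(u)))                 # residual norm after Newton convergence
```

For this smooth problem Newton converges in a handful of iterations to the constant solution u = 0; on AMR grids the same structure is kept, with the preconditioner (as the abstract notes) doing the level-independent heavy lifting.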
On the Computation of Integral Curves in Adaptive Mesh Refinement Vector Fields
Deines, Eduard; Weber, Gunther H.; Garth, Christoph; Van Straalen, Brian; Borovikov, Sergey; Martin, Daniel F.; Joy, Kenneth I.
2011-06-27
Integral curves, such as streamlines, streaklines, pathlines, and timelines, are an essential tool in the analysis of vector field structures, offering straightforward and intuitive interpretation of visualization results. While such curves have a long-standing tradition in vector field visualization, their application to Adaptive Mesh Refinement (AMR) simulation results poses unique problems. AMR is a highly effective discretization method for a variety of physical simulation problems and has recently been applied to the study of vector fields in flow and magnetohydrodynamic applications. The cell-centered nature of AMR data and discontinuities in the vector field representation arising from AMR level boundaries complicate the application of numerical integration methods to compute integral curves. In this paper, we propose a novel approach to alleviate these problems and show its application to streamline visualization in an AMR model of the magnetic field of the solar system as well as to a simulation of two incompressible viscous vortex rings merging.
Performance Characteristics of an Adaptive Mesh Refinement Calculation on Scalar and Vector Platforms
Welcome, Michael; Rendleman, Charles; Oliker, Leonid; Biswas, Rupak
2006-01-31
Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and dynamically refines it in regions requiring higher resolution. However, AMR codes tend to be far more complicated than their uniform grid counterparts due to the software infrastructure necessary to dynamically manage the hierarchical grid framework. Despite this complexity, it is generally believed that future multi-scale applications will increasingly rely on adaptive methods to study problems at unprecedented scale and resolution. Recently, a new generation of parallel-vector architectures has become available that promises to achieve extremely high sustained performance for a wide range of applications, and these are the foundation of many leadership-class computing systems worldwide. It is therefore imperative to understand the tradeoffs between conventional scalar and parallel-vector platforms for solving AMR-based calculations. In this paper, we examine the HyperCLaw AMR framework to compare and contrast performance on the Cray X1E, IBM Power3 and Power5, and SGI Altix. To the best of our knowledge, this is the first work that investigates and characterizes the performance of an AMR calculation on modern parallel-vector systems.
Fakhari, Abbas; Lee, Taehun
2014-03-01
An adaptive-mesh-refinement (AMR) algorithm for the finite-difference lattice Boltzmann method (FDLBM) is presented in this study. The idea behind the proposed AMR is to remove the need for a tree-type data structure. Instead, pointer attributes are used to determine the neighbors of a certain block via appropriate adjustment of its children identifications. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with an efficient algorithm that is easier to implement and use on parallel machines. To allow different mesh sizes at separate parts of the computational domain, the Eulerian formulation of the streaming process is invoked. As a result, there is no need for rescaling the distribution functions or using a temporal interpolation at the fine-coarse grid boundaries. The accuracy and efficiency of the proposed FDLBM AMR are extensively assessed by investigating a variety of vorticity-dominated flow fields, including Taylor-Green vortex flow, lid-driven cavity flow, thin shear layer flow, and the flow past a square cylinder.
Henshaw, W; Schwendeman, D
2007-11-15
This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible, so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed among the processors. All components of the AMR algorithm, such as error estimation, regridding, and interpolation, are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.
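The bin-packing partitioning described above can be illustrated with the classic greedy heuristic (the paper's modified algorithm differs in its details; the grid names and weights below are made up): sort the grids by work, largest first, and repeatedly assign the next grid to the currently least-loaded processor.

```python
import heapq

# Illustrative greedy load balancer: assign grids, largest first, to the
# least-loaded processor, tracked with a min-heap keyed on current load.
def partition(work, nprocs):
    heap = [(0.0, p, []) for p in range(nprocs)]     # (load, proc id, grids)
    heapq.heapify(heap)
    for grid, w in sorted(work.items(), key=lambda kv: -kv[1]):
        load, p, grids = heapq.heappop(heap)         # least-loaded processor
        grids.append(grid)
        heapq.heappush(heap, (load + w, p, grids))
    return {p: (load, grids) for load, p, grids in heap}

work = {"g0": 9.0, "g1": 7.0, "g2": 6.0, "g3": 5.0, "g4": 4.0, "g5": 3.0}
for p, (load, grids) in sorted(partition(work, 2).items()):
    print(p, load, grids)
```

For the six grids above, the greedy pass balances both processors at a load of 17.0; the per-grid independent partitioning the paper describes applies the same idea to each base grid and its refinement grids separately.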
Development of a scalable gas-dynamics solver with adaptive mesh refinement
NASA Astrophysics Data System (ADS)
Korkut, Burak
There are various computational physics areas in which Direct Simulation Monte Carlo (DSMC) and Particle in Cell (PIC) methods are employed. The accuracy of results from such simulations depends on the fidelity of the physical models being used. The computationally demanding nature of these problems makes them ideal candidates for modern supercomputers. The software developed to run such simulations also needs special attention so that maintainability and extensibility keep pace with recent numerical methods and programming paradigms. Suited for gas-dynamics problems, a software framework called SUGAR (Scalable Unstructured Gas dynamics with Adaptive mesh Refinement), written in C++ and MPI, has recently been developed. Physical and numerical models were added to this framework to simulate ion thruster plumes. SUGAR is used to model the charge-exchange (CEX) reactions occurring between the neutral and ion species as well as the induced electric field effect due to ions. Multiple adaptive mesh refinement (AMR) meshes were used in order to capture the different physical length scales present in the flow. A multiple-thruster configuration was run to extend the studies to cases without axial or radial symmetry, which could only be modeled with a three-dimensional simulation capability. The combined plume structure showed interactions between individual thrusters, which the AMR capability captured in an automated way. The back flow of ions was found to occur when CEX and momentum-exchange (MEX) collisions are present, and is strongly enhanced when the induced electric field is considered. The ion energy distributions in the back flow region were obtained, and it was found that the inclusion of the electric field modeling is the most important factor in determining their shape. The plume back flow structure was also examined for a triple-thruster, 3-D geometry case and it was found that the ion velocity in the back flow region appears to be
NASA Astrophysics Data System (ADS)
Hatori, Tomoharu; Ito, Atsushi M.; Nunami, Masanori; Usui, Hideyuki; Miura, Hideaki
2016-08-01
We propose a numerical method to determine the artificial viscosity in magnetohydrodynamics (MHD) simulations with the adaptive mesh refinement (AMR) method, in which the artificial viscosity is adapted to the resolution level of the AMR hierarchy. Although the suitable value of the artificial viscosity depends on the governing equations and the target problem, it can be determined by von Neumann stability analysis. By means of the new method, the "level-by-level artificial viscosity method," MHD simulations of Rayleigh-Taylor instability (RTI) are carried out with the AMR method. The validity of the level-by-level artificial viscosity method is confirmed by comparing the linear growth rates of RTI between the AMR simulations and simple simulations with a uniform grid and uniform artificial viscosity whose resolution matches that of the highest level of the AMR simulation. Moreover, in the nonlinear phase of RTI, the secondary instability is clearly observed, and the hierarchical data structure of the AMR calculation is visualized as high-resolution regions float up like terraced fields. In applications of the method to general fluid simulations, the growth of small structures can be sufficiently reproduced, while the divergence of numerical solutions can be suppressed.
Dynamically adaptive mesh refinement technique for image reconstruction in optical tomography.
Soloviev, Vadim Y; Krasnosselskaia, Lada V
2006-04-20
A novel adaptive mesh technique is introduced for problems of image reconstruction in luminescence optical tomography. A dynamical adaptation of the three-dimensional scheme based on the finite-volume formulation reduces computational time and balances the ill-posed nature of the inverse problem. The arbitrary shape of the bounding surface is handled by an additional refinement of computational cells on the boundary. Dynamical shrinking of the search volume is introduced to improve computational performance and accuracy while locating the luminescence target. Light propagation in the medium is modeled by the telegraph equation, and the image-reconstruction algorithm is derived from the Fredholm integral equation of the first kind. Stability and computational efficiency of the introduced method are demonstrated for image reconstruction of one and two spherical luminescent objects embedded within a breastlike tissue phantom. Experimental measurements are simulated by the solution of the forward problem on a grid of 5x5 light guides attached to the surface of the phantom.
Relativistic Flows Using Spatial And Temporal Adaptive Structured Mesh Refinement. I. Hydrodynamics
Wang, Peng; Abel, Tom; Zhang, Weiqun; /KIPAC, Menlo Park
2007-04-02
Astrophysical relativistic flow problems require high resolution three-dimensional numerical simulations. In this paper, we describe a new parallel three-dimensional code for simulations of special relativistic hydrodynamics (SRHD) using both spatially and temporally structured adaptive mesh refinement (AMR). We use the method of lines to discretize the SRHD equations spatially and a total variation diminishing (TVD) Runge-Kutta scheme for time integration. For spatial reconstruction, we have implemented the piecewise linear method (PLM), the piecewise parabolic method (PPM), third-order convex essentially non-oscillatory (CENO), and third- and fifth-order weighted essentially non-oscillatory (WENO) schemes. Flux is computed using either direct flux reconstruction or approximate Riemann solvers including HLL, modified Marquina flux, local Lax-Friedrichs flux formulas, and HLLC. The AMR part of the code is built on top of the cosmological Eulerian AMR code enzo, which uses the Berger-Colella AMR algorithm and is parallelized with dynamic load balancing using the widely available Message Passing Interface library. We discuss the coupling of the AMR framework with the relativistic solvers and show its performance on eleven test problems.
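The TVD Runge-Kutta time integrator named above is the standard three-stage Shu-Osher scheme, sketched here for simple linear advection with first-order upwind fluxes (the paper's PLM/PPM/WENO reconstructions and relativistic fluxes are omitted): each stage is a convex combination of forward Euler steps, so the total variation of the solution never increases.

```python
import numpy as np

def rhs(u, dx):
    return -(u - np.roll(u, 1)) / dx        # upwind difference for u_t + u_x = 0

def tvd_rk3(u, dt, dx):                      # Shu-Osher SSP RK3
    u1 = u + dt * rhs(u, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, dx))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2, dx))

N = 100
dx = 1.0 / N
x = np.linspace(0, 1, N, endpoint=False)
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)   # square pulse (TV = 2)
tv0 = np.sum(np.abs(np.diff(u)))
for _ in range(200):
    u = tvd_rk3(u, 0.5 * dx, dx)            # CFL number 0.5
tv = np.sum(np.abs(np.diff(u)))
print(tv0, tv)                               # total variation does not increase
```

Because each stage is a convex combination of TVD forward Euler steps, the square pulse is advected without spurious oscillations, which is the property that makes this integrator a safe pairing for the shock-capturing reconstructions listed in the abstract.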
Collins, David C.; Norman, Michael L.; Padoan, Paolo; Xu Hao
2011-04-10
In this work, we present the mass and magnetic distributions found in a recent adaptive mesh refinement magnetohydrodynamic simulation of supersonic, super-Alfvenic, self-gravitating turbulence. Power-law tails are found in both the mass density and magnetic field probability density functions, with P(ρ) ∝ ρ^-1.6 and P(B) ∝ B^-2.7. A power-law relationship is also found between magnetic field strength and density, with B ∝ ρ^0.5, throughout the collapsing gas. The mass distribution of gravitationally bound cores is shown to be in excellent agreement with recent observations of prestellar cores. The mass-to-flux distribution of cores is also found to be in excellent agreement with recent Zeeman splitting measurements. We also compare the relationship between velocity dispersion and density in the same cores, and find an increasing relationship between the two, with σ ∝ n^0.25, also in agreement with the observations. We then estimate the potential effects of ambipolar diffusion in our cores and find that, due to the weakness of the magnetic field in our simulation, the inclusion of ambipolar diffusion will not cause significant alterations of the flow dynamics.
Effenberger, Frederic; Thust, Kay; Grauer, Rainer; Dreher, Juergen; Arnold, Lukas
2011-03-15
The formation of a thin current sheet in a magnetic quasiseparatrix layer (QSL) is investigated by means of numerical simulation using a simplified ideal, low-β, MHD model. The initial configuration and driving boundary conditions are relevant to phenomena observed in the solar corona and were studied earlier by Aulanier et al. [Astron. Astrophys. 444, 961 (2005)]. In extension to that work, we use the technique of adaptive mesh refinement (AMR) to significantly enhance the local spatial resolution of the current sheet during its formation, which enables us to follow the evolution into a later stage. Our simulations are in good agreement with the results of Aulanier et al. up to the calculated time in that work. In a later phase, we observe a basically unarrested collapse of the sheet to length scales that are more than one order of magnitude smaller than those reported earlier. The current density attains correspondingly larger maximum values within the sheet. During this thinning process, which is finally limited by lack of resolution even in the AMR studies, the current sheet moves upward, following a global expansion of the magnetic structure during the quasistatic evolution. The sheet is locally one-dimensional and the plasma flow in its vicinity, when transformed into a comoving frame, qualitatively resembles a stagnation point flow. In conclusion, our simulations support the idea that extremely high current densities are generated in the vicinities of QSLs as a response to external perturbations, with no sign of saturation.
Hummels, Cameron B.; Bryan, Greg L.
2012-04-20
We carry out adaptive mesh refinement cosmological simulations of Milky Way mass halos in order to investigate the formation of disk-like galaxies in a Λ-dominated cold dark matter model. We evolve a suite of five halos to z = 0 and find gas disk formation in each; however, in agreement with previous smoothed particle hydrodynamics simulations (that did not include a subgrid feedback model), the rotation curves of all halos are centrally peaked due to a massive spheroidal component. Our standard model includes radiative cooling and star formation, but no feedback. We further investigate this angular momentum problem by systematically modifying various simulation parameters including: (1) spatial resolution, ranging from 1700 to 212 pc; (2) an additional pressure component to ensure that the Jeans length is always resolved; (3) low star formation efficiency, going down to 0.1%; (4) fixed physical resolution as opposed to comoving resolution; (5) a supernova feedback model that injects thermal energy to the local cell; and (6) a subgrid feedback model which suppresses cooling in the immediate vicinity of a star formation event. Of all of these, we find that only the last (cooling suppression) has any impact on the massive spheroidal component. In particular, a simulation with cooling suppression and feedback results in a rotation curve that, while still peaked, is considerably reduced from our standard runs.
Nonaka, A.; Aspden, A. J.; Almgren, A. S.; Bell, J. B.; Zingale, M.; Woosley, S. E.
2012-01-20
We extend our previous three-dimensional, full-star simulations of the final hours of convection preceding ignition in Type Ia supernovae to higher resolution using the adaptive mesh refinement capability of our low Mach number code, MAESTRO. We report the statistics of the ignition of the first flame at an effective 4.34 km resolution and general flow field properties at an effective 2.17 km resolution. We find that off-center ignition is likely, with a radius of 50 km most favored and a likely range of 40-75 km. This is consistent with our previous coarser (8.68 km resolution) simulations, implying that we have achieved sufficient resolution in our determination of likely ignition radii. The dynamics of the last few hot spots preceding ignition suggest that a multiple ignition scenario is not likely. With improved resolution, we can more clearly see the general flow pattern in the convective region, characterized by a strong outward plume with a lower speed recirculation. We show that the convective core is turbulent with a Kolmogorov spectrum and has a lower turbulent intensity and larger integral length scale than previously thought (on the order of 16 km s^-1 and 200 km, respectively), and we discuss the potential consequences for the first flames.
Skillman, Samuel W.; Hallman, Eric J.; Burns, Jack O.; Smith, Britton D.; O'Shea, Brian W.; Turk, Matthew J.
2011-07-10
Cosmological shocks are a critical part of large-scale structure formation, and are responsible for heating the intracluster medium in galaxy clusters. In addition, they are capable of accelerating non-thermal electrons and protons. In this work, we focus on the acceleration of electrons at shock fronts, which is thought to be responsible for radio relics: extended radio features in the vicinity of merging galaxy clusters. By combining high-resolution adaptive mesh refinement/N-body cosmological simulations with an accurate shock-finding algorithm and a model for electron acceleration, we calculate the expected synchrotron emission resulting from cosmological structure formation. We produce synthetic radio maps of a large sample of galaxy clusters and present luminosity functions and scaling relationships. With upcoming long-wavelength radio telescopes, we expect to see an abundance of radio emission associated with merger shocks in the intracluster medium. By producing observationally motivated statistics, we provide predictions that can be compared with observations to further improve our understanding of magnetic fields and electron shock acceleration.
Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion
B. Philip; Z. Wang; M.A. Berrill; M. Birke; M. Pernice
2014-04-01
The time-dependent non-equilibrium radiation diffusion equations describe the transport of energy by radiation in optically thick regimes and find applications in several fields, including astrophysics and inertial confinement fusion. The associated initial boundary value problems often exhibit a wide range of scales in space and time and are extremely challenging to solve. To simulate these systems efficiently and accurately, we describe our research on combining techniques that will also find use more broadly in the long-term time integration of nonlinear multi-physics systems: implicit time integration for stiff multi-physics systems, step-size control based on local control theory to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian-free Newton–Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level-independent solver convergence.
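A minimal sketch of the Jacobian-free Newton–Krylov idea mentioned in this abstract: the Krylov solver only needs Jacobian-vector products, which can be approximated by finite differences of the nonlinear residual, so the Jacobian is never assembled. This is an illustrative toy (SciPy GMRES on a small algebraic system), not the authors' solver:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_solve(residual, u0, tol=1e-10, max_newton=20):
    """Newton iteration in which each linear solve uses GMRES with a
    finite-difference approximation of the Jacobian-vector product,
    so the Jacobian matrix is never formed or stored."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_newton):
        F = residual(u)
        if np.linalg.norm(F) < tol:
            break
        eps = 1e-7 * (1.0 + np.linalg.norm(u))
        # Matrix-free Jacobian action: J v ~ (F(u + eps*v) - F(u)) / eps
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v: (residual(u + eps * v) - F) / eps)
        du, _ = gmres(J, -F)          # Krylov solve for the Newton update
        u = u + du
    return u

# Small nonlinear test system: u_i^3 + u_i - 1 = 0 for every component
root = jfnk_solve(lambda u: u**3 + u - 1.0, np.zeros(4))
```

The same structure carries over to a discretized PDE on an AMR hierarchy: only `residual` changes, and a preconditioner (omitted here) does the heavy lifting.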
GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid
NASA Astrophysics Data System (ADS)
Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua
2016-10-01
A GPU-accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, cell-based adaptive mesh refinement (AMR) is fully implemented on the GPU for an unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by GPUs agree very well with the exact or experimental results in the literature. An acceleration ratio of 4 is obtained between the parallel code running on the older GT9800 GPU and the serial code running on an E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting shared-memory-based atomic operations on the newer C2050 GPU, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes have achieved a 2x speedup on the GT9800 and 18x on the Tesla C2050, which demonstrates that parallel execution of the cell-based AMR method on the GPU is feasible and efficient. Our results also indicate that new developments in GPU architecture benefit fluid dynamics computing significantly.
Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
2012-01-01
Time-dependent Navier-Stokes simulations have been carried out for a rigid V22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V22. The normal force and pitching moment coefficients were in good agreement with flight-test data.
Application of adaptive mesh refinement to particle-in-cell simulations of plasmas and beams
Vay, J.-L.; Colella, P.; Kwan, J.W.; McCorquodale, P.; Serafini, D.B.; Friedman, A.; Grote, D.P.; Westenskow, G.; Adam, J.-C.; Heron, A.; Haber, I.
2003-11-04
Plasma simulations are often rendered challenging by the disparity of scales in time and in space which must be resolved. When these disparities are in distinctive zones of the simulation domain, a method which has proven to be effective in other areas (e.g. fluid dynamics simulations) is the mesh refinement technique. We briefly discuss the challenges posed by coupling this technique with plasma Particle-In-Cell simulations, and present examples of application in Heavy Ion Fusion and related fields which illustrate the effectiveness of the approach. We also report on the status of a collaboration under way at Lawrence Berkeley National Laboratory between the Applied Numerical Algorithms Group (ANAG) and the Heavy Ion Fusion group to upgrade ANAG's mesh refinement library Chombo to include the tools needed by Particle-In-Cell simulation codes.
Parallelization of Unsteady Adaptive Mesh Refinement for Unstructured Navier-Stokes Solvers
NASA Technical Reports Server (NTRS)
Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.
2014-01-01
This paper explores the implementation of MPI parallelization in a Navier-Stokes solver using adaptive mesh refinement. Viscous and inviscid test problems are considered for the purpose of benchmarking, as are implicit and explicit time advancement methods. The main test problem for comparison includes effects from boundary layers and other viscous features and requires a large number of grid points for accurate computation. Experimental validation against double cone experiments in hypersonic flow is shown. The adaptive mesh refinement shows promise for a staple test problem in the hypersonic community. Extension to more advanced techniques for more complicated flows is described.
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.
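The flag-and-cluster step that turns individually flagged cells into a small number of rectangular patches, as in the Berger-Colella approach described above, can be sketched in one dimension. The splitting rule below is a much simplified stand-in for Berger-Rigoutsos clustering, and the threshold and efficiency values are illustrative:

```python
import numpy as np

def flag_cells(u, thresh):
    """Flag cells whose gradient magnitude exceeds a threshold --
    a typical refinement criterion for block-structured AMR."""
    return np.abs(np.gradient(u)) > thresh

def cluster(idx, min_eff=0.7):
    """Cover a sorted array of flagged cell indices with 1-D boxes.
    If the bounding box is mostly full, accept it; otherwise split at
    the largest run of unflagged cells and recurse."""
    lo, hi = idx[0], idx[-1]
    if idx.size / (hi - lo + 1) >= min_eff:
        return [(int(lo), int(hi))]
    cut = int(np.argmax(np.diff(idx)))        # position of the biggest hole
    return cluster(idx[:cut + 1], min_eff) + cluster(idx[cut + 1:], min_eff)

# Two sharp features produce two refinement patches
u = np.zeros(64)
u[10:13] = 1.0
u[40:44] = 1.0
flagged = np.flatnonzero(flag_cells(u, 0.4))
boxes = cluster(flagged, min_eff=0.6)
```

In the methodology of the abstract, each accepted box would become a topologically rectangular block holding its own cells on the refined level.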
NASA Astrophysics Data System (ADS)
Rastigejev, Y.; Semakin, A. N.
2013-12-01
Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and a large number of reacting species. In our previous work we have shown that in order to achieve an adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above, we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid-spacing throughout the entire domain. The method uses a multi-grid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order-of-magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. We have applied the WAMR method for numerical
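The wavelet-based refinement criterion behind a method like WAMR can be illustrated with interpolatory wavelets on a dyadic 1-D grid: a fine-level point is retained only where the interpolation (detail) error relative to the coarser level is large. This is a generic sketch of the idea, not the WAMR code; the threshold is illustrative:

```python
import numpy as np

def detail_coefficients(u):
    """Interpolatory-wavelet detail coefficients on a dyadic 1-D grid:
    the error made when predicting each odd-indexed sample by linear
    interpolation from its even-indexed neighbors. Large details mark
    points that must be kept on the finer level."""
    prediction = 0.5 * (u[:-2:2] + u[2::2])
    return np.abs(u[1:-1:2] - prediction)

# A sharp front: details are significant only near x = 0.5
x = np.linspace(0.0, 1.0, 129)
u = np.tanh((x - 0.5) / 0.02)
details = detail_coefficients(u)
keep = details > 1e-3 * np.abs(u).max()   # threshold relative to field size
fine_points = x[1:-1:2][keep]             # fine-level points actually kept
```

Applied level by level, this thresholding yields exactly the kind of highly non-uniform grid the abstract describes: fine spacing over the plume, coarse spacing elsewhere.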
De Colle, Fabio; Ramirez-Ruiz, Enrico; Granot, Jonathan; Lopez-Camara, Diego
2012-02-20
We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(-k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.
Laser ray tracing in a parallel arbitrary Lagrangian-Eulerian adaptive mesh refinement hydrocode
NASA Astrophysics Data System (ADS)
Masters, N. D.; Kaiser, T. B.; Anderson, R. W.; Eder, D. C.; Fisher, A. C.; Koniges, A. E.
2010-08-01
ALE-AMR is a new hydrocode that we are developing as a predictive modeling tool for debris and shrapnel formation in high-energy laser experiments. In this paper we present our approach to implementing laser ray tracing in ALE-AMR. We present the basic concepts of laser ray tracing and our approach to efficiently traverse the adaptive mesh hierarchy.
Laser Ray Tracing in a Parallel Arbitrary Lagrangian-Eulerian Adaptive Mesh Refinement Hydrocode
Masters, N D; Kaiser, T B; Anderson, R W; Eder, D C; Fisher, A C; Koniges, A E
2009-09-28
ALE-AMR is a new hydrocode that we are developing as a predictive modeling tool for debris and shrapnel formation in high-energy laser experiments. In this paper we present our approach to implementing laser ray-tracing in ALE-AMR. We present the equations of laser ray tracing, our approach to efficient traversal of the adaptive mesh hierarchy in which we propagate computational rays through a virtual composite mesh consisting of the finest resolution representation of the modeled space, and anticipate simulations that will be compared to experiments for code validation.
NASA Astrophysics Data System (ADS)
Pantano, Carlos
2005-11-01
We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical-dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented, including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and, as such, it utilizes refinement to computational advantage. The numerical method for the resolved-scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997).
Ying, Wenjun; Henriquez, Craig S
2015-01-01
An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local changes of membrane properties. Numerical examples demonstrating the efficiency and accuracy of the adaptive algorithm are presented.
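The implicit operator splitting described in this abstract can be sketched on a single 1-D cable with Nagumo-type excitable kinetics. This is a simplified stand-in (the reaction model, parameters, and boundary handling are illustrative, not the paper's Purkinje-tree solver): each step does a backward-Euler diffusion solve, then a backward-Euler reaction update by per-node Newton iteration.

```python
import numpy as np

def split_step(v, dt, dx, D=0.01, a=0.1):
    """One operator-split step for v_t = D v_xx + f(v):
    (1) backward-Euler diffusion via a linear solve,
    (2) backward-Euler reaction via per-node Newton iteration."""
    n = v.size
    r = dt * D / dx**2
    # (I - dt*D*L) v* = v, with L the 1-D Laplacian and zero-flux ends
    A = (1 + 2 * r) * np.eye(n) - r * np.eye(n, k=1) - r * np.eye(n, k=-1)
    A[0, 0] = A[-1, -1] = 1 + r
    v = np.linalg.solve(A, v)
    # Nagumo-type excitable kinetics: f(v) = v (1 - v)(v - a)
    f = lambda s: s * (1 - s) * (s - a)
    df = lambda s: -3 * s**2 + 2 * (1 + a) * s - a
    u = v.copy()
    for _ in range(8):                 # Newton on u - dt*f(u) - v = 0
        u -= (u - dt * f(u) - v) / (1 - dt * df(u))
    return u

# A front initiated on the left propagates to the right
x = np.arange(100) * 0.05
v = (x < 1.0).astype(float)
for _ in range(100):
    v = split_step(v, dt=0.1, dx=0.05)
```

Because both half-steps are implicit, the time step is limited by accuracy rather than by the stiff diffusion or reaction stability limits, which is the point the abstract makes.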
NASA Astrophysics Data System (ADS)
Calder, A. C.; Curtis, B. C.; Dursi, L. J.; Fryxell, B.; Henry, G.; MacNeice, P.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F. X.; Tufo, H. M.; Truran, J. W.; Zingale, M.
We present simulations and performance results of nuclear burning fronts in supernovae on the largest domain and at the finest spatial resolution studied to date. These simulations were performed on the Intel ASCI-Red machine at Sandia National Laboratories using FLASH, a code developed at the Center for Astrophysical Thermonuclear Flashes at the University of Chicago. FLASH is a modular, adaptive mesh, parallel simulation code capable of handling compressible, reactive fluid flows in astrophysical environments. FLASH is written primarily in Fortran 90, uses the Message-Passing Interface library for inter-processor communication and portability, and employs the PARAMESH package to manage a block-structured adaptive mesh that places blocks only where the resolution is required and tracks rapidly changing flow features, such as detonation fronts, with ease. We describe the key algorithms and their implementation as well as the optimizations required to achieve sustained performance of 238 GFLOPS on 6420 processors of ASCI-Red in 64-bit arithmetic.
Adaptive Mesh Refinement Cosmological Simulations of Cosmic Rays in Galaxy Clusters
NASA Astrophysics Data System (ADS)
Skillman, Samuel William
2013-12-01
Galaxy clusters are unique astrophysical laboratories that contain many thermal and non-thermal phenomena. In particular, they are hosts to cosmic shocks, which propagate through the intracluster medium as a by-product of structure formation. It is believed that at these shock fronts, magnetic field inhomogeneities in a compressing flow may lead to the acceleration of cosmic ray electrons and ions. These relativistic particles decay and radiate through a variety of mechanisms, and have observational signatures in radio, hard X-ray, and Gamma-ray wavelengths. We begin this dissertation by developing a method to find shocks in cosmological adaptive mesh refinement simulations of structure formation. After describing the evolution of shock properties through cosmic time, we make estimates for the amount of kinetic energy processed and the total number of cosmic ray protons that could be accelerated at these shocks. We then use this method of shock finding and a model for the acceleration of and radio synchrotron emission from cosmic ray electrons to estimate the radio emission properties in large scale structures. By examining the time-evolution of the radio emission with respect to the X-ray emission during a galaxy cluster merger, we find that the relative timing of the enhancements in each are important consequences of the shock dynamics. By calculating the radio emission expected from a given mass galaxy cluster, we make estimates for future large-area radio surveys. Next, we use a state-of-the-art magnetohydrodynamic simulation to follow the electron acceleration in a massive merging galaxy cluster. We use the magnetic field information to calculate not only the total radio emission, but also create radio polarization maps that are compared to recent observations. We find that we can naturally reproduce Mpc-scale radio emission that resemble many of the known double radio relic systems. Finally, motivated by our previous studies, we develop and introduce a
Patch-based methods for adaptive mesh refinement solutions of partial differential equations
Saltzman, J.
1997-09-02
This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundaries and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods and elliptically constrained flow solution methods with diffusion.
Masters, N D; Anderson, R W; Elliott, N S; Fisher, A C; Gunney, B T; Koniges, A E
2007-08-28
Modeling of high power laser and ignition facilities requires new techniques because of the higher energies and higher operational costs. We report on the development and application of a new interface reconstruction algorithm for chamber modeling code that combines ALE (Arbitrary Lagrangian Eulerian) techniques with AMR (Adaptive Mesh Refinement). The code is used for the simulation of complex target elements in the National Ignition Facility (NIF) and other similar facilities. The interface reconstruction scheme is required to adequately describe the debris/shrapnel (including fragments or droplets) resulting from energized materials that could affect optics or diagnostic sensors. Traditional ICF modeling codes that choose to implement ALE + AMR techniques will also benefit from this new scheme. The ALE formulation requires material interfaces (including those of generated particles or droplets) to be tracked. We present the interface reconstruction scheme developed for NIF's ALE-AMR and discuss how it is affected by adaptive mesh refinement and the ALE mesh. Results of the code are shown for NIF and OMEGA target configurations.
NASA Astrophysics Data System (ADS)
Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang
2015-10-01
For numerical simulation of detonation, the computational cost of uniform meshes is large due to the vast separation of scales in both time and space. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for numerical investigation of multi-dimensional detonation. A well-designed AMR method based on a finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that AMR&WENO is accurate and has a high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide us further insight into the high performance of the parallel AMR&WENO method.
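Hilbert space-filling-curve load balancing, as used here, rests on two pieces: an ordering of blocks along the curve, and a cut of that ordering into contiguous per-process chunks. A small sketch (using a recursive curve generator rather than the usual bit-twiddling index formula; the block-as-coordinate-tuple layout is invented for illustration):

```python
def hilbert(order):
    """Yield the cells of a 2^order x 2^order grid in Hilbert-curve order,
    built from the classic four-quadrant recursion."""
    if order == 0:
        yield (0, 0)
        return
    h = 1 << (order - 1)
    for x, y in hilbert(order - 1):
        yield (y, x)                        # lower-left quadrant (transposed)
    for x, y in hilbert(order - 1):
        yield (x, y + h)                    # upper-left quadrant
    for x, y in hilbert(order - 1):
        yield (x + h, y + h)                # upper-right quadrant
    for x, y in hilbert(order - 1):
        yield (2 * h - 1 - y, h - 1 - x)    # lower-right (rotated/reflected)

def partition(blocks, nprocs, order):
    """Sort blocks by Hilbert index and cut the ordered list into nprocs
    contiguous, nearly equal chunks: blocks adjacent on the curve are
    usually adjacent in space, so each chunk stays spatially compact."""
    rank = {cell: i for i, cell in enumerate(hilbert(order))}
    ordered = sorted(blocks, key=lambda b: rank[b])
    chunk = -(-len(ordered) // nprocs)      # ceiling division
    return [ordered[i * chunk:(i + 1) * chunk] for i in range(nprocs)]

curve = list(hilbert(3))
parts = partition([(x, y) for x in range(4) for y in range(4)], 4, order=2)
```

The spatial compactness of each chunk is what keeps ghost-cell communication volume low after the cut, which is why space-filling curves are a standard choice for AMR load balancing.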
NASA Astrophysics Data System (ADS)
Ferguson, J. O.; Jablonowski, C.; Johansen, H.; McCorquodale, P.; Ullrich, P. A.
2015-12-01
Complex multi-scale atmospheric phenomena such as tropical cyclones challenge the coarse uniform grids of conventional climate models. Adaptive mesh refinement (AMR) techniques seek to mitigate these problems by providing sufficiently high-resolution grid patches only over features of interest while limiting the computational burden of requiring such resolution globally. One such model is the non-hydrostatic, finite-volume Chombo-AMR general circulation model (GCM), which implements refinement in both space and time on a cubed-sphere grid. The 2D shallow-water equations exhibit many of the complexities of 3D GCM dynamical cores and serve as an effective method for testing the dynamical core and the refinement strategies of adaptive atmospheric models. We implement a shallow-water test case consisting of a pair of interacting tropical cyclone-like vortices. Small changes in the initial conditions can lead to a variety of interactions that develop fine-scale spiral band structures and large-scale wave trains. We investigate the accuracy and efficiency of AMR's ability to capture and effectively follow the evolution of the vortices in time. These simulations serve to test the effectiveness of refinement for both static and dynamic grid configurations as well as the sensitivity of the model results to the refinement criteria.
Lombardini, Manuel; Deiterding, Ralf
2010-01-01
This paper presents the use of a dynamically adaptive mesh refinement strategy for simulations of shock-driven turbulent mixing. Large-eddy simulations are necessary due to the high-Reynolds-number turbulent regime. In this approach, the large scales are simulated directly and the small scales, at which viscous dissipation occurs, are modeled. A low-numerical-dissipation centered finite-difference scheme is used in turbulent flow regions while a shock-capturing method is employed to capture shocks. Three-dimensional parallel simulations of the Richtmyer-Meshkov instability performed in plane and converging geometries are described.
NASA Astrophysics Data System (ADS)
Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.
2007-01-01
We present a methodology for the large-eddy simulation of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). A description of a conservative, flux-based hybrid numerical method that uses both centered finite-difference and a weighted essentially non-oscillatory (WENO) scheme is given, encompassing the cases of scheme alternation and internal mesh interfaces resulting from SAMR. In this method, the centered scheme is used in turbulent flow regions while WENO is employed to capture shocks. One-, two- and three-dimensional numerical experiments and example simulations are presented including homogeneous shock-free turbulence, a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability.
NASA Astrophysics Data System (ADS)
Guo, Z.; Xiong, S. M.
2015-05-01
An algorithm comprising adaptive mesh refinement (AMR) and parallel (Para-) computing capabilities was developed to efficiently solve the coupled phase field equations in 3-D. The AMR was achieved based on a gradient criterion and the point clustering algorithm introduced by Berger (1991). To reduce the time for mesh generation, a dynamic regridding approach was developed based on the magnitude of the maximum phase-advancing velocity. Local data at each computing process were then constructed, and parallel computation was realized based on the hierarchical grid structure created during the AMR. Numerical tests and simulations of single- and multi-dendrite growth were performed, and the results show that the proposed algorithm shortens the computing time of 3-D phase field simulations by about two orders of magnitude and enables one to gain much more insight into the underlying physics of dendrite growth during solidification.
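The combination of a gradient refinement criterion with velocity-driven regridding can be sketched as follows; the buffer-dilated flags and the regrid-interval formula are one plausible reading of the abstract, not the authors' exact scheme, and all thresholds are illustrative:

```python
import numpy as np

def flag_with_buffer(phi, thresh, buffer_cells=4):
    """Gradient-criterion refinement flags, dilated by a safety buffer so
    the interface cannot leave the fine region between regrids."""
    gx, gy = np.gradient(phi)
    flags = np.hypot(gx, gy) > thresh
    for _ in range(buffer_cells):            # dilate flags by one cell
        f = flags.copy()
        f[1:, :] |= flags[:-1, :]
        f[:-1, :] |= flags[1:, :]
        f[:, 1:] |= flags[:, :-1]
        f[:, :-1] |= flags[:, 1:]
        flags = f
    return flags

def regrid_interval(v_max, dx, dt, buffer_cells=4):
    """Number of time steps before the fastest point of the interface
    could cross the safety buffer -- a plausible reading of the
    velocity-based dynamic regridding rule."""
    if v_max * dt <= 0.0:
        return 1
    return max(1, int(buffer_cells * dx / (v_max * dt)))

# Circular interface: only a ring of cells (plus buffer) gets refined
n = 64
yy, xx = np.mgrid[0:n, 0:n] / (n - 1.0)
phi = np.tanh((np.hypot(xx - 0.5, yy - 0.5) - 0.3) / 0.05)
flags = flag_with_buffer(phi, thresh=0.1)
```

Tying the regrid frequency to the maximum front velocity is what keeps the expensive clustering step rare: a slow dendrite tip permits many solver steps between regrids.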
Fukuda, Jun-Ichi; Yoneya, Makoto; Yokoyama, Hiroshi
2004-01-01
We investigate the orientation profile and the structure of topological defects of a nematic liquid crystal around a spherical particle using an adaptive mesh refinement scheme developed by us previously. The previous work [J. Fukuda et al., Phys. Rev. E 65, 041709 (2002)] was devoted to the investigation of the fine structure of a hyperbolic hedgehog defect that the particle accompanies, and in this paper we present the equilibrium profile of the Saturn ring configuration. The radius of the Saturn ring r_d in units of the particle radius R_0 increases weakly with the increase of ε, the ratio of the nematic coherence length to R_0. Next we discuss the energetic stability of a hedgehog and a Saturn ring. The use of the adaptive mesh refinement scheme together with a tensor orientational order parameter Q_αβ allows us to calculate the elastic energy of a nematic liquid crystal without any assumption about the structure and the energy of the defect core, as was made in previous similar studies. The reduced free energy of a nematic liquid crystal, F/(L_1 R_0), with L_1 being the elastic constant, is almost independent of ε in the hedgehog configuration, while it shows a logarithmic dependence in the Saturn ring configuration. This result clearly indicates that the energetic stability of a hedgehog relative to a Saturn ring for a large particle is definitely attributed to the large defect energy of the Saturn ring with a large radius.
NASA Astrophysics Data System (ADS)
Moura, R. C.; Silva, A. F. C.; Bigarella, E. D. V.; Fazenda, A. L.; Ortega, M. A.
2016-08-01
This paper proposes two important improvements to shock-capturing strategies using a discontinuous Galerkin scheme, namely, accurate shock identification via finite-time Lyapunov exponent (FTLE) operators and efficient shock treatment through a point-implicit discretization of a PDE-based artificial viscosity technique. The advocated approach is based on the FTLE operator, originally developed in the context of dynamical systems theory to identify certain types of coherent structures in a flow. We propose the application of FTLEs in the detection of shock waves and demonstrate the operator's ability to identify strong and weak shocks equally well. The detection algorithm is coupled with a mesh refinement procedure and applied to transonic and supersonic flows. While the proposed strategy can be used potentially with any numerical method, a high-order discontinuous Galerkin solver is used in this study. In this context, two artificial viscosity approaches are employed to regularize the solution near shocks: an element-wise constant viscosity technique and a PDE-based smooth viscosity model. As the latter approach is more sophisticated and preferable for complex problems, a point-implicit discretization in time is proposed to reduce the extra stiffness introduced by the PDE-based technique, making it more competitive in terms of computational cost.
Vertical Scan (V-SCAN) for 3-D Grid Adaptive Mesh Refinement for an atmospheric Model Dynamical Core
NASA Astrophysics Data System (ADS)
Andronova, N. G.; Vandenberg, D.; Oehmke, R.; Stout, Q. F.; Penner, J. E.
2009-12-01
One of the major building blocks of a rigorous representation of cloud evolution in global atmospheric models is a parallel adaptive grid MPI-based communication library (an Adaptive Blocks for Locally Cartesian Topologies library -- ABLCarT), which manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits a block as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells - the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. A major performance concern with adaptive mesh refinement is how to represent calculations that need to be sequenced in a particular order along a direction, such as calculating integrals along a specific path (e.g. atmospheric pressure or geopotential in the vertical dimension). This concern is compounded if the blocks have varying levels of refinement, or are scattered across different processors, as can be the case in parallel computing. In this paper we describe an implementation in ABLCarT of a vertical scan operation, which allows computing along vertical paths in the correct order across blocks, transparent to their resolution and processor location. We test this functionality on a 2D and a 3D advection problem, which tests the performance of the model's dynamics (transport) and physics (sources and sinks) for different model resolutions needed for inclusion of cloud formation.
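The vertical scan described above amounts to a running integral carried across blocks in the correct top-to-bottom order. A minimal serial sketch for one column of blocks, assuming blocks may be refined to different resolutions (names are illustrative; ABLCarT's actual interface is not given in the abstract):

```python
import numpy as np

def vertical_scan(blocks, dz_list, top_value=0.0):
    """Accumulate a running integral downward through a stack of blocks.

    blocks: per-block 1-D arrays (e.g. density samples along a vertical
    column), ordered top to bottom; dz_list: each block's cell spacing,
    which may differ when blocks sit at different refinement levels.
    Returns per-block arrays of the running integral (a stand-in for
    hydrostatic pressure), carried correctly across block boundaries.
    """
    out, running = [], top_value
    for rho, dz in zip(blocks, dz_list):
        partial = running + np.cumsum(rho) * dz   # scan within this block
        out.append(partial)
        running = partial[-1]                     # carry into the next block
    return out
```

In the parallel setting, the carry value is the quantity that must be communicated between processors owning consecutive blocks along the column.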
Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.
2009-06-15
A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code is demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.
Multigrid for refined triangle meshes
Shapira, Yair
1997-02-01
A two-level preconditioning method for the solution of (locally) refined finite element schemes using triangle meshes is introduced. In the isotropic SPD case, it is shown that the condition number of the preconditioned stiffness matrix is bounded uniformly for all sufficiently regular triangulations. This is also verified numerically for an isotropic diffusion problem with highly discontinuous coefficients.
Lopez-Camara, D.; Lazzati, Davide; Morsony, Brian J.; Begelman, Mitchell C.
2013-04-10
We present the results of special relativistic, adaptive mesh refinement, 3D simulations of gamma-ray burst jets expanding inside a realistic stellar progenitor. Our simulations confirm that relativistic jets can propagate and break out of the progenitor star while remaining relativistic. This result is independent of the resolution, even though the amount of turbulence and variability observed in the simulations is greater at higher resolutions. We find that the propagation of the jet head inside the progenitor star is slightly faster in 3D simulations compared to 2D ones at the same resolution. This behavior seems to be due to the fact that the jet head in 3D simulations can wobble around the jet axis, finding the spot of least resistance to proceed. Most of the average jet properties, such as density, pressure, and Lorentz factor, are only marginally affected by the dimensionality of the simulations and therefore results from 2D simulations can be considered reliable.
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Sotiropoulos, Fotis
2015-11-01
The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high-fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations past wind farms located over complex terrains. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes and the fractional step method has been employed. The overall performance and robustness of the second-order accurate, parallel, unstructured solver is evaluated by comparing the numerical simulations against conforming-grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field, including the geometry of the tower, the nacelle and especially the rotor blades of a wind-tunnel-scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.
Moving Overlapping Grids with Adaptive Mesh Refinement for High-Speed Reactive and Non-reactive Flow
Henshaw, W D; Schwendeman, D W
2005-08-30
We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid, which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows in order to demonstrate the use and accuracy of the numerical approach.
NASA Astrophysics Data System (ADS)
Huang, Rongzong; Wu, Huiying
2016-06-01
A total enthalpy-based lattice Boltzmann (LB) method with adaptive mesh refinement (AMR) is developed in this paper to efficiently simulate solid-liquid phase change problem where variables vary significantly near the phase interface and thus finer grid is required. For the total enthalpy-based LB method, the velocity field is solved by an incompressible LB model with multiple-relaxation-time (MRT) collision scheme, and the temperature field is solved by a total enthalpy-based MRT LB model with the phase interface effects considered and the deviation term eliminated. With a kinetic assumption that the density distribution function for solid phase is at equilibrium state, a volumetric LB scheme is proposed to accurately realize the nonslip velocity condition on the diffusive phase interface and in the solid phase. As compared with the previous schemes, this scheme can avoid nonphysical flow in the solid phase. As for the AMR approach, it is developed based on multiblock grids. An indicator function is introduced to control the adaptive generation of multiblock grids, which can guarantee the existence of overlap area between adjacent blocks for information exchange. Since MRT collision schemes are used, the information exchange is directly carried out in the moment space. Numerical tests are firstly performed to validate the strict satisfaction of the nonslip velocity condition, and then melting problems in a square cavity with different Prandtl numbers and Rayleigh numbers are simulated, which demonstrate that the present method can handle solid-liquid phase change problem with high efficiency and accuracy.
Li, Pak Shing; Klein, Richard I.; Martin, Daniel F.; McKee, Christopher F.
2012-02-01
Performing a stable, long-duration simulation of driven MHD turbulence with a high thermal Mach number and a strong initial magnetic field is a challenge to high-order Godunov ideal MHD schemes because of the difficulty in guaranteeing positivity of the density and pressure. We have implemented a robust combination of reconstruction schemes, Riemann solvers, limiters, and constrained transport electromotive force averaging schemes that can meet this challenge, and using this strategy, we have developed a new adaptive mesh refinement (AMR) MHD module of the ORION2 code. We investigate the effects of AMR on several statistical properties of a turbulent ideal MHD system with a thermal Mach number of 10 and a plasma β_0 of 0.1 as initial conditions; our code is shown to be stable for simulations with higher Mach numbers (M_rms = 17.3) and smaller plasma beta (β_0 = 0.0067) as well. Our results show that the quality of the turbulence simulation is generally related to the volume-averaged refinement. Our AMR simulations show that the turbulent dissipation coefficient for supersonic MHD turbulence is about 0.5, in agreement with unigrid simulations.
Constrained-Transport Magnetohydrodynamics with Adaptive-Mesh-Refinement in CHARM
Miniati, Francesco; Martin, Daniel
2011-05-24
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit Corner-Transport-Upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the Piecewise-Parabolic Method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a Constrained-Transport (CT) method. The so-called "multidimensional MHD source terms" required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout. Subject headings: cosmology: theory - methods: numerical
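The solenoidal property that the reflux-curl and CT machinery maintain can be monitored with the standard discrete divergence of a face-centered field. A minimal 2-D sketch (not the CHARM implementation): any field derived from a node-centered vector potential has identically zero discrete divergence under this operator.

```python
import numpy as np

def cell_divergence(bx, by, dx, dy):
    """Discrete divergence of a face-centered 2-D magnetic field.

    bx lives on x-faces, shape (ny, nx + 1); by on y-faces, shape (ny + 1, nx).
    Constrained-transport schemes keep this quantity at machine zero.
    """
    return (bx[:, 1:] - bx[:, :-1]) / dx + (by[1:, :] - by[:-1, :]) / dy
```

Restriction and prolongation operators at refinement boundaries must be built so that this same discrete quantity is preserved across levels.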
Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.
2011-01-01
We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating the near-shore tsunami waves from Tohoku 2011, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11, the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of each of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, in which the finest grid distance of 20 meters is achieved through a 4-level AMR. This simulation yields quite good predictions of the wave heights and travel times of the tsunami waves. © 2011 IEEE.
NASA Astrophysics Data System (ADS)
Power, C.; Read, J. I.; Hobbs, A.
2014-06-01
We simulate cosmological galaxy cluster formation using three different approaches to solving the equations of non-radiative hydrodynamics - classic smoothed particle hydrodynamics (SPH), novel SPH with a higher order dissipation switch (SPHS), and an adaptive mesh refinement (AMR) method. Comparing spherically averaged entropy profiles, we find that the SPHS and AMR approaches result in a well-defined entropy core that converges rapidly with increasing mass and force resolution. In contrast, the central entropy profile in the SPH approach is sensitive to the cluster's assembly history and shows poor numerical convergence. We trace this disagreement to the known artificial surface tension in SPH that appears at phase boundaries. Systematically varying the numerical dissipation in SPHS, we study the contributions of numerical and physical dissipation to the entropy core and argue that numerical dissipation is required to ensure single-valued fluid quantities in converging flows. However, provided it occurs only at the resolution limit and does not propagate errors to larger scales, its effect is benign - there is no requirement to build `sub-grid' models of unresolved turbulence for galaxy cluster simulations. We conclude that entropy cores in non-radiative galaxy cluster simulations are physical, resulting from entropy generation in shocked gas during cluster assembly.
NASA Astrophysics Data System (ADS)
Rasia, Elena; Lau, Erwin T.; Borgani, Stefano; Nagai, Daisuke; Dolag, Klaus; Avestruz, Camille; Granato, Gian Luigi; Mazzotta, Pasquale; Murante, Giuseppe; Nelson, Kaylea; Ragone-Figueroa, Cinthia
2014-08-01
Analyses of cosmological hydrodynamic simulations of galaxy clusters suggest that X-ray masses can be underestimated by 10%-30%. The largest bias originates from both violation of hydrostatic equilibrium (HE) and an additional temperature bias caused by inhomogeneities in the X-ray-emitting intracluster medium (ICM). To elucidate this large dispersion among theoretical predictions, we evaluate the degree of temperature structures in cluster sets simulated either with smoothed-particle hydrodynamics (SPH) or adaptive-mesh refinement (AMR) codes. We find that the SPH simulations produce larger temperature variations connected to the persistence of both substructures and their stripped cold gas. This difference is more evident in nonradiative simulations, whereas it is reduced in the presence of radiative cooling. We also find that the temperature variation in radiative cluster simulations is generally in agreement with that observed in the central regions of clusters. Around R_500 the temperature inhomogeneities of the SPH simulations can generate twice the typical HE mass bias of the AMR sample. We emphasize that a detailed understanding of the physical processes responsible for the complex thermal structure in the ICM requires improved resolution and high-sensitivity observations in order to extend the analysis to higher temperature systems and larger cluster-centric radii.
NASA Technical Reports Server (NTRS)
Combi, M. R.; Kabin, K.; Gombosi, T. I.; DeZeeuw, D. L.; Powell, K. G.
1998-01-01
The first results of applying a three-dimensional ideal MHD model for the mass-loaded flow of Jupiter's corotating magnetospheric plasma past Io are presented. The model is able to consider simultaneously physically realistic conditions for ion mass loading, ion-neutral drag, and an intrinsic magnetic field in a full global calculation without imposing artificial dissipation. Io is modeled with an extended neutral atmosphere which loads the corotating plasma torus flow with mass, momentum, and energy. The governing equations are solved using adaptive mesh refinement on an unstructured Cartesian grid using an upwind scheme for MHD. For the work described in this paper we explored a range of models without an intrinsic magnetic field for Io. We compare our results with particle and field measurements made during the December 7, 1995, flyby of Io, as published by the Galileo Orbiter experiment teams. For two extreme cases of lower boundary conditions at Io, our model can quantitatively explain the variation of density along the spacecraft trajectory and can reproduce the general appearance of the variations of magnetic field and ion pressure and temperature. The net fresh ion mass-loading rates are in the range of approximately 300-650 kg/s, and the equivalent charge-exchange mass-loading rates are in the range of approximately 540-1150 kg/s in the vicinity of Io.
woptic: Optical conductivity with Wannier functions and adaptive k-mesh refinement
NASA Astrophysics Data System (ADS)
Assmann, E.; Wissgott, P.; Kuneš, J.; Toschi, A.; Blaha, P.; Held, K.
2016-05-01
We present an algorithm for adaptive tetrahedral integration over the Brillouin zone of crystalline materials, and apply it to compute the optical conductivity, dc conductivity, and thermopower. For these quantities, whose contributions are often localized in small portions of the Brillouin zone, adaptive integration is especially relevant. Our implementation, the woptic package, is tied into the WIEN2WANNIER framework and allows including a local many-body self-energy, e.g. from dynamical mean-field theory (DMFT). Wannier functions and dipole matrix elements are computed with the DFT package WIEN2k and Wannier90. For illustration, we show DFT results for fcc-Al and DMFT results for the correlated metal SrVO3.
NASA Astrophysics Data System (ADS)
Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.
2006-09-01
This paper describes a hybrid finite-difference method for the large-eddy simulation of compressible flows with low-numerical dissipation and structured adaptive mesh refinement (SAMR). A conservative flux-based approach is described with an explicit centered scheme used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. Three-dimensional numerical simulations of a Richtmyer-Meshkov instability are presented.
Curved mesh generation and mesh refinement using Lagrangian solid mechanics
Persson, P.-O.; Peraire, J.
2008-12-31
We propose a method for generating well-shaped curved unstructured meshes using a nonlinear elasticity analogy. The geometry of the domain to be meshed is represented as an elastic solid. The undeformed geometry is the initial mesh of linear triangular or tetrahedral elements. The external loading results from prescribing a boundary displacement to be that of the curved geometry, and the final configuration is determined by solving for the equilibrium configuration. The deformations are represented using piecewise polynomials within each element of the original mesh. When the mesh is sufficiently fine to resolve the solid deformation, this method guarantees non-intersecting elements even for highly distorted or anisotropic initial meshes. We describe the method and the solution procedures, and we show a number of examples of two and three dimensional simplex meshes with curved boundaries. We also demonstrate how to use the technique for local refinement of non-curved meshes in the presence of curved boundaries.
Auto-adaptive finite element meshes
NASA Technical Reports Server (NTRS)
Richter, Roland; Leyland, Penelope
1995-01-01
Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural optimization as well as geometrical optimization. The methods described are applied to a number of the ICASE test cases that are particularly interesting for unsteady flow simulations.
Advanced numerical methods in mesh generation and mesh adaptation
Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A
2010-01-01
Numerical solution of partial differential equations requires appropriate meshes, efficient solvers, and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to efficiently build simplicial meshes. First, the advancing front technique (AFT) is combined with incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an inaccessible CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology, and it significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize the computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge
Guzik, S; McCorquodale, P; Colella, P
2011-12-16
A fourth-order accurate finite-volume method is presented for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Novel considerations for formulating the semi-discrete system of equations in computational space combined with detailed mechanisms for accommodating the adapting grids ensure that conservation is maintained and that the divergence of a constant vector field is always zero (freestream-preservation property). Advancement in time is achieved with a fourth-order Runge-Kutta method.
Klein, R.I.; Bell, J.; Pember, R.; Kelleher, T.
1993-04-01
The authors present results for high-resolution hydrodynamic calculations of the growth and development of instabilities in shock-driven imploding spherical geometries in both 2D and 3D. They solve the Eulerian equations of hydrodynamics with a high-order Godunov approach using local adaptive mesh refinement to study the temporal and spatial development of the turbulent mixing layer resulting from both Richtmyer-Meshkov and Rayleigh-Taylor instabilities. The use of a high-resolution Eulerian discretization with adaptive mesh refinement permits them to study the detailed three-dimensional growth of multi-mode perturbations far into the non-linear regime for converging geometries. They discuss convergence properties of the simulations by calculating global properties of the flow. They discuss the time evolution of the turbulent mixing layer and compare its development to a simple theory for a turbulent mix model in spherical geometry based on Plesset's equation. Their 3D calculations show that the constant found in the planar incompressible experiments of Read and Youngs may not be universal for converging compressible flow. They show the 3D time trace of transitional onset to a mixing state using the temporal evolution of volume-rendered imaging. Their preliminary results suggest that the turbulent mixing layer loses memory of its initial perturbations for classical Richtmyer-Meshkov and Rayleigh-Taylor instabilities in spherically imploding shells. They discuss the time evolution of the mixed volume fraction and the role of vorticity in converging 3D flows in enhancing the growth of a turbulent mixing layer.
Controlling Reflections from Mesh Refinement Interfaces in Numerical Relativity
NASA Technical Reports Server (NTRS)
Baker, John G.; Van Meter, James R.
2005-01-01
A leading approach to improving the accuracy of numerical relativity simulations of black hole systems is through fixed or adaptive mesh refinement techniques. We describe a generic numerical error which manifests as slowly converging, artificial reflections from refinement boundaries in a broad class of mesh-refinement implementations, potentially limiting the effectiveness of mesh-refinement techniques for some numerical relativity applications. We elucidate this numerical effect by presenting a model problem which exhibits the phenomenon, but which is simple enough that its numerical error can be understood analytically. Our analysis shows that the effect is caused by variations in finite differencing error generated across low and high resolution regions, and that its slow convergence is caused by the presence of dramatic speed differences among propagation modes typical of 3+1 relativity. Lastly, we resolve the problem, presenting a class of finite-differencing stencil modifications which eliminate this pathology in both our model problem and in numerical relativity examples.
Unstructured mesh generation and adaptivity
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1995-01-01
An overview of current unstructured mesh generation and adaptivity techniques is given. Basic building blocks taken from the field of computational geometry are first described. Various practical mesh generation techniques based on these algorithms are then constructed and illustrated with examples. Issues of adaptive meshing and stretched mesh generation for anisotropic problems are treated in subsequent sections. The presentation is organized in an educational manner, for readers familiar with computational fluid dynamics who wish to learn more about current unstructured mesh techniques.
Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.; Rendleman, Charles A.; Skinner, David; Beckner, Vince; Lijewski, Mike; Bell, John; Sexton, James C.
2005-04-25
We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k-node IBM BlueGene/L computer. This machine represents a significant step toward petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single-level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.
Parallel tetrahedral mesh refinement with MOAB.
Thompson, David C.; Pebay, Philippe Pierre
2008-12-01
In this report, we present the novel functionality of parallel tetrahedral mesh refinement which we have implemented in MOAB. This report details work done to implement parallel, edge-based, tetrahedral refinement into MOAB. The theoretical basis for this work is contained in [PT04, PT05, TP06] while information on design, performance, and operation specific to MOAB are contained herein. As MOAB is intended mainly for use in pre-processing and simulation (as opposed to the post-processing bent of previous papers), the primary use case is different: rather than refining elements with non-linear basis functions, the goal is to increase the number of degrees of freedom in some region in order to more accurately represent the solution to some system of equations that cannot be solved analytically. Also, MOAB has a unique mesh representation which impacts the algorithm. This introduction contains a brief review of streaming edge-based tetrahedral refinement. The remainder of the report is broken into three sections: design and implementation, performance, and conclusions. Appendix A contains instructions for end users (simulation authors) on how to employ the refiner.
Performance of a streaming mesh refinement algorithm.
Thompson, David C.; Pebay, Philippe Pierre
2004-08-01
In SAND report 2004-1617, we outline a method for edge-based tetrahedral subdivision that does not rely on saving state or communication to produce compatible tetrahedralizations. This report analyzes the performance of the technique by characterizing (a) mesh quality, (b) execution time, and (c) traits of the algorithm that could affect quality or execution time differently for different meshes. It also details the method used to debug the several hundred subdivision templates that the algorithm relies upon. Mesh quality is on par with other similar refinement schemes and throughput on modern hardware can exceed 600,000 output tetrahedra per second. But if you want to understand the traits of the algorithm, you have to read the report!
Hybrid Surface Mesh Adaptation for Climate Modeling
Ahmed Khamayseh; Valmor de Almeida; Glen Hansen
2008-10-01
Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called “mesh motion” (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.
Hybrid Surface Mesh Adaptation for Climate Modeling
Khamayseh, Ahmed K; de Almeida, Valmor F; Hansen, Glen
2008-01-01
Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called "mesh motion" (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.
NASA Astrophysics Data System (ADS)
Heard, Gary Wayne
A new approach to solution-adaptive grid refinement using the finite element method and Flowfield-Dependent Variation (FDV) theory applied to the Navier-Stokes system of equations is discussed. Flowfield-Dependent Variation (FDV) parameters are introduced into a modified Taylor series expansion of the conservation variables, with the Navier-Stokes system of equations substituted into the Taylor series. The FDV parameters are calculated from the current flowfield conditions, and automatically adjust the resulting equations from elliptic to parabolic to hyperbolic in type to assure solution accuracy in evolving fluid flowfields that may consist of interactions between regions of compressible and incompressible flow, viscous and inviscid flow, and turbulent and laminar flow. The system of equations is solved using an element-by-element iterative GMRES solver with the elements grouped together to allow the element operations to be performed in parallel. The FDV parameters play many roles in the numerical scheme. One of these roles is to control the formation of shock wave discontinuities at high speeds and pressure oscillations at low speeds. To demonstrate these abilities, various example problems are shown, including supersonic flows over a flat plate and a compression corner, and flows involving triple shock waves generated on fin geometries for high speed compressible flows. Furthermore, analysis of low speed incompressible flows is presented in the form of flow in a lid-driven cavity at various Reynolds numbers. Another role of the FDV parameters is their use as error indicators for a solution-adaptive mesh. The finite element grid is refined as dictated by the magnitude of the FDV parameters. Examples of adaptive grids generated using the FDV parameters as error indicators are presented for supersonic flow over flat plate/compression ramp combinations in both two and three dimensions. Grids refined using the FDV parameters as error indicators are comparable to ones
Application of local mesh refinement in the DSMC method
NASA Astrophysics Data System (ADS)
Wu, J.-S.; Tseng, K.-C.; Kuo, C.-H.
2001-08-01
The implementation of an adaptive mesh embedding (h-refinement) scheme using unstructured grids in the two-dimensional Direct Simulation Monte Carlo (DSMC) method is reported. In this technique, local isotropic refinement is used to introduce new meshes where the local cell Knudsen number is less than some preset value. This simple scheme, however, has several severe consequences affecting the performance of the DSMC method. Thus, we have applied a technique to remove the hanging nodes by introducing anisotropic refinement in the interfacial cells. This is accomplished by simply connecting the hanging node(s) with the other non-hanging node(s) in the non-refined, interfacial cells. This remedy adds a negligible amount of work, yet it removes all the difficulties presented in the first scheme with hanging nodes. We have tested the proposed scheme for argon gas, using different types of mesh such as triangular, quadrilateral, or mixed, on a high-speed driven cavity flow. The results show an improved flow resolution as compared with that of the unadapted mesh. Finally, we have used triangular adaptive meshes to compute two near-continuum gas flows, including a supersonic flow over a cylinder and a supersonic flow over a 35° compression ramp. The results show fairly good agreement with previous studies. In summary, the computational penalties of the proposed adaptive schemes are found to be small as compared with the DSMC computation itself. We conclude that the proposed scheme is superior to the original unadapted scheme considering the accuracy of the solution.
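The hanging-node removal described in this abstract, connecting a hanging node to the non-hanging vertex of the unrefined interfacial cell, can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' code; the vertex-index mesh representation and the function name are assumptions.

```python
def split_at_hanging_node(tri, hanging_edge, mid):
    """Anisotropically split an unrefined interfacial triangle whose
    edge `hanging_edge` carries a hanging node `mid`, by connecting
    `mid` to the opposite vertex. The two children conform to the
    refined neighbor, so no hanging node remains."""
    a, b = hanging_edge
    (c,) = (v for v in tri if v not in hanging_edge)  # vertex opposite the edge
    return [(a, mid, c), (mid, b, c)]

# Triangle (0, 1, 2) with a hanging node 3 on edge (0, 1)
# becomes the two conforming children (0, 3, 2) and (3, 1, 2):
children = split_at_hanging_node((0, 1, 2), (0, 1), 3)
```

Because only the interfacial cell is touched, the extra work is proportional to the number of cells on the refinement interface, consistent with the abstract's claim of a negligible cost.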
Adaptive Meshing Techniques for Viscous Flow Calculations on Mixed Element Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1997-01-01
An adaptive refinement strategy based on hierarchical element subdivision is formulated and implemented for meshes containing arbitrary mixtures of tetrahedra, hexahedra, prisms and pyramids. Special attention is given to keeping memory overheads as low as possible. This procedure is coupled with an algebraic multigrid flow solver which operates on mixed-element meshes. Inviscid flows as well as viscous flows are computed on adaptively refined tetrahedral, hexahedral, and hybrid meshes. The efficiency of the method is demonstrated by generating an adapted hexahedral mesh containing 3 million vertices on a relatively inexpensive workstation.
Efficient triangular adaptive meshes for tsunami simulations
NASA Astrophysics Data System (ADS)
Behrens, J.
2012-04-01
With improving technology and increased sensor density for accurate determination of tsunamigenic earthquake source parameters and, consequently, the uplift distribution, real-time simulations of even near-field tsunami hazard appear feasible in the near future. In order to support such efforts, a new generation of tsunami models is currently under development. These models comprise adaptively refined meshes, in order to save computational resources (in areas of low wave activity) and still represent the inherently multi-scale behavior of a tsunami approaching coastal waters. So far, these methods have been based on oct-tree quadrilateral refinement. The method introduced here is based on binary tree refinement on triangular grids. By utilizing the structure stemming from the refinement strategy, a very efficient method can be achieved with a triangular mesh that is able to accurately represent complex boundaries.
Tetrahedral and Hexahedral Mesh Adaptation for CFD Problems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger C.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
This paper presents two unstructured mesh adaptation schemes for problems in computational fluid dynamics. The procedures allow localized grid refinement and coarsening to efficiently capture aerodynamic flow features of interest. The first procedure is for purely tetrahedral grids; unfortunately, repeated anisotropic adaptation may significantly deteriorate the quality of the mesh. Hexahedral elements, on the other hand, can be subdivided anisotropically without mesh quality problems. Furthermore, hexahedral meshes yield more accurate solutions than their tetrahedral counterparts for the same number of edges. Both the tetrahedral and hexahedral mesh adaptation procedures use edge-based data structures that facilitate efficient subdivision by allowing individual edges to be marked for refinement or coarsening. However, for hexahedral adaptation, pyramids, prisms, and tetrahedra are used as buffer elements between refined and unrefined regions to eliminate hanging vertices. Computational results indicate that the hexahedral adaptation procedure is a viable alternative to adaptive tetrahedral schemes.
NASA Astrophysics Data System (ADS)
Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander
2012-02-01
Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than most of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
Cubit Adaptive Meshing Algorithm Library
2004-09-01
CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.
Electrostatic PIC with adaptive Cartesian mesh
NASA Astrophysics Data System (ADS)
Kolobov, Vladimir; Arslanbekov, Robert
2016-05-01
We describe an initial implementation of an electrostatic Particle-in-Cell (ES-PIC) module with adaptive Cartesian mesh in our Unified Flow Solver framework. Challenges of the PIC method with cell-based adaptive mesh refinement (AMR) are related to a decrease in the particle-per-cell number in refined cells, with a corresponding increase in numerical noise. The developed ES-PIC solver is validated for capacitively coupled plasma, and its AMR capabilities are demonstrated for simulations of streamer development during high-pressure gas breakdown. It is shown that cell-based AMR provides a convenient particle management algorithm for the exponential multiplication of electrons and ions in ionization events.
Schartmann, M.; Ballone, A.; Burkert, A.; Gillessen, S.; Genzel, R.; Pfuhl, O.; Eisenhauer, F.; Plewa, P. M.; Ott, T.; George, E. M.; Habibi, M.
2015-10-01
The dusty, ionized gas cloud G2 is currently passing the massive black hole in the Galactic Center at a distance of roughly 2400 Schwarzschild radii. We explore the possibility of a starting point of the cloud within the disks of young stars. We make use of the large amount of new observations in order to put constraints on G2's origin. Interpreting the observations as a diffuse cloud of gas, we employ three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations with the PLUTO code and do a detailed comparison with observational data. The simulations presented in this work update our previously obtained results in multiple ways: (1) high resolution three-dimensional hydrodynamical AMR simulations are used, (2) the cloud follows the updated orbit based on the Brackett-γ data, (3) a detailed comparison to the observed high-quality position–velocity (PV) diagrams and the evolution of the total Brackett-γ luminosity is done. We concentrate on two unsolved problems of the diffuse cloud scenario: the unphysical formation epoch only shortly before the first detection and the too steep Brackett-γ light curve obtained in simulations, whereas the observations indicate a constant Brackett-γ luminosity between 2004 and 2013. For a given atmosphere and cloud mass, we find a consistent model that can explain both the observed Brackett-γ light curve and the PV diagrams of all epochs. Assuming initial pressure equilibrium with the atmosphere, this can be reached for a starting date earlier than roughly 1900, which is close to apo-center and well within the disks of young stars.
Adaptive and Unstructured Mesh Cleaving
Bronson, Jonathan R.; Sastry, Shankar P.; Levine, Joshua A.; Whitaker, Ross T.
2015-01-01
We propose a new strategy for boundary conforming meshing that decouples the problem of building tetrahedra of proper size and shape from the problem of conforming to complex, non-manifold boundaries. This approach is motivated by the observation that while several methods exist for adaptive tetrahedral meshing, they typically have difficulty at geometric boundaries. The proposed strategy avoids this conflict by extracting the boundary conforming constraint into a secondary step. We first build a background mesh having a desired set of tetrahedral properties, and then use a generalized stenciling method to divide, or “cleave”, these elements to get a set of conforming tetrahedra, while limiting the impacts cleaving has on element quality. In developing this new framework, we make several technical contributions including a new method for building graded tetrahedral meshes as well as a generalization of the isosurface stuffing and lattice cleaving algorithms to unstructured background meshes. PMID:26137171
Details of tetrahedral anisotropic mesh adaptation
NASA Astrophysics Data System (ADS)
Jensen, Kristian Ejlebjerg; Gorman, Gerard
2016-04-01
We have implemented tetrahedral anisotropic mesh adaptation using the local operations of coarsening, swapping, refinement and smoothing in MATLAB without the use of any for-loops, i.e. the script is fully vectorised. In the process of doing so, we have made three observations related to details of the implementation: 1. restricting refinement to a single edge split per element not only simplifies the code, it also improves mesh quality, 2. face-to-edge swapping is unnecessary, and 3. optimising for the Vassilevski functional tends to give a slightly higher value for the mean condition number functional than optimising for the condition number functional directly. These observations have been made for a uniform and a radial shock metric field, both starting from a structured mesh in a cube. Finally, we compare two coarsening techniques and demonstrate the importance of applying smoothing in the mesh adaptation loop. The results pertain to a unit cube geometry, but we also show the effect of corners and edges by applying the implementation in a spherical geometry.
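The first observation in this abstract, restricting refinement to a single edge split per element, has a compact core operation. The sketch below is a hypothetical 2D (triangle) analogue of the tetrahedral operation, chosen for brevity; the function name and the index/coordinate representation are assumptions, not the paper's MATLAB implementation.

```python
import math

def longest_edge_bisect(tri, coords):
    """Refine one element by splitting only its longest edge:
    append the edge midpoint to `coords` and return the two child
    triangles. `tri` is a tuple of three indices into `coords`."""
    edges = [(tri[i], tri[(i + 1) % 3]) for i in range(3)]
    a, b = max(edges, key=lambda e: math.dist(coords[e[0]], coords[e[1]]))
    (c,) = (v for v in tri if v not in (a, b))      # vertex off the split edge
    coords.append(tuple((p + q) / 2.0 for p, q in zip(coords[a], coords[b])))
    m = len(coords) - 1                             # index of the new midpoint
    return (a, m, c), (m, b, c)

coords = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
children = longest_edge_bisect((0, 1, 2), coords)   # splits edge (1, 2)
```

Splitting only one edge per pass keeps every child similar in shape to its parent, which is one plausible reading of why the single-split restriction improves mesh quality.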
NASA Astrophysics Data System (ADS)
Valdivia, Valeska; Hennebelle, Patrick
2014-11-01
Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled and their interplay influences the physical and chemical properties of the gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of the UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on clump formation. Results: We find that the accuracy for the extinction of the tree-based method is better than 10%, while the relative error for the column density can be much larger. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds we present the probability distribution function of gas and the associated temperature per density bin and the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations since no communication is needed between CPUs when using a fully threaded tree. It is thus suitable for parallel computing. We show that the screening of far-UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect. We
Adaptive triangular mesh generation
NASA Technical Reports Server (NTRS)
Erlebacher, G.; Eiseman, P. R.
1984-01-01
A general adaptive grid algorithm is developed on triangular grids. The adaptivity is provided by a combination of node addition, dynamic node connectivity and a simple node movement strategy. While the local restructuring process and the node addition mechanism take place in the physical plane, the nodes are displaced on a monitor surface, constructed from the salient features of the physical problem. An approximation to mean curvature detects changes in the direction of the monitor surface, and provides the pulling force on the nodes. Solutions to the axisymmetric Grad-Shafranov equation demonstrate the capturing, by triangles, of the plasma-vacuum interface in a free-boundary equilibrium configuration.
Serial and parallel dynamic adaptation of general hybrid meshes
NASA Astrophysics Data System (ADS)
Kavouklis, Christos
The Navier-Stokes equations are a standard mathematical representation of viscous fluid flow. Their numerical solution in three dimensions remains a computationally intensive and challenging task, despite recent advances in computer speed and memory. A strategy to increase accuracy of Navier-Stokes simulations, while maintaining computing resources to a minimum, is local refinement of the associated computational mesh in regions of large solution gradients and coarsening in regions where the solution does not vary appreciably. In this work we consider adaptation of general hybrid meshes for Computational Fluid Dynamics (CFD) applications. Hybrid meshes are composed of four types of elements; hexahedra, prisms, pyramids and tetrahedra, and have been proven a promising technology in accurately resolving fluid flow for complex geometries. The first part of this dissertation is concerned with the design and implementation of a serial scheme for the adaptation of general three dimensional hybrid meshes. We have defined 29 refinement types, for all four kinds of elements. The core of the present adaptation scheme is an iterative algorithm that flags mesh edges for refinement, so that the adapted mesh is conformal. Of primary importance is considered the design of a suitable dynamic data structure that facilitates refinement and coarsening operations and furthermore minimizes memory requirements. A special dynamic list is defined for mesh elements, in contrast with the usual tree structures. It contains only elements of the current adaptation step and minimal information that is utilized to reconstruct parent elements when the mesh is coarsened. In the second part of this work, a new parallel dynamic mesh adaptation and load balancing algorithm for general hybrid meshes is presented. Partitioning of a hybrid mesh reduces to partitioning of the corresponding dual graph. Communication among processors is based on the faces of the interpartition boundary. The distributed
Toward a consistent framework for high order mesh refinement schemes in numerical relativity
NASA Astrophysics Data System (ADS)
Mongwane, Bishop
2015-05-01
It has now become customary in the field of numerical relativity to couple high order finite difference schemes to mesh refinement algorithms. To this end, different modifications to the standard Berger-Oliger adaptive mesh refinement algorithm have been proposed. In this work we present a fourth order stable mesh refinement scheme with sub-cycling in time for numerical relativity. We do not use buffer zones to deal with refinement boundaries but explicitly specify boundary data for refined grids. We argue that the incompatibility of the standard mesh refinement algorithm with higher order Runge Kutta methods is a manifestation of order reduction phenomena, caused by inconsistent application of boundary data in the refined grids. Our scheme also addresses the problem of spurious reflections that are generated when propagating waves cross mesh refinement boundaries. We introduce a transition zone on refined levels within which the phase velocity of propagating modes is allowed to decelerate in order to smoothly match the phase velocity of coarser grids. We apply the method to test problems involving propagating waves and show a significant reduction in spurious reflections.
Parallel tetrahedral mesh adaptation with dynamic load balancing
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
2000-06-28
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D-TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D-TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1999-01-01
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
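PLUM's key idea in the two records above, rebalancing on predicted post-refinement workloads after the edge-marking phase but before subdivision, can be mimicked with a simple greedy (LPT) partitioner. This sketch only illustrates the principle; PLUM itself uses a proper parallel mesh repartitioner, and the 8-children-per-refined-cell weight is an assumed value.

```python
def balance_before_subdivision(marked, n_parts, children_per_split=8):
    """Assign cells to partitions by their *predicted* post-refinement
    weight: a cell marked for refinement counts as the number of
    children it will produce, an unmarked cell counts as 1.
    Returns (partition id per cell, load per partition)."""
    weights = sorted(((children_per_split if m else 1, i)
                      for i, m in enumerate(marked)), reverse=True)
    loads = [0] * n_parts
    assign = [None] * len(marked)
    for w, i in weights:                 # LPT greedy: heaviest cell first,
        p = loads.index(min(loads))      # onto the least-loaded partition
        loads[p] += w
        assign[i] = p
    return assign, loads

# Nine cells, two marked for refinement, balanced across two partitions:
assign, loads = balance_before_subdivision([True] + [False] * 7 + [True], 2)
```

Balancing on predicted weights means the expensive subdivision (and the subsequent solve) starts from an already even distribution, instead of refining in place and migrating the newly created children afterwards.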
Interface Conditions for Wave Propagation Through Mesh Refinement Boundaries
NASA Technical Reports Server (NTRS)
Choi, Dae-II; Brown, J. David; Imbiriba, Breno; Centrella, Joan; MacNeice, Peter
2002-01-01
We study the propagation of waves across fixed mesh refinement boundaries in linear and nonlinear model equations in 1-D and 2-D, and in the 3-D Einstein equations of general relativity. We demonstrate that using linear interpolation to set the data in guard cells leads to the production of reflected waves at the refinement boundaries. Implementing quadratic interpolation to fill the guard cells eliminates these spurious signals.
Interface conditions for wave propagation through mesh refinement boundaries
NASA Astrophysics Data System (ADS)
Choi, Dae-Il; David Brown, J.; Imbiriba, Breno; Centrella, Joan; MacNeice, Peter
2004-01-01
We study the propagation of waves across fixed mesh refinement boundaries in linear and nonlinear model equations in 1-D and 2-D, and in the 3-D Einstein equations of general relativity. We demonstrate that using linear interpolation to set the data in guard cells leads to the production of reflected waves at the refinement boundaries. Implementing quadratic interpolation to fill the guard cells suppresses these spurious signals.
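The finding in the two records above, that quadratic guard-cell filling suppresses the reflections linear filling produces, is at heart an interpolation-order effect: linear interpolation commits O(h²) error at the refinement boundary while quadratic commits O(h³), so the coarse-grid mismatch that seeds reflected waves shrinks much faster. The sketch below checks these orders numerically; the function names are assumptions, and sin serves as a stand-in for a smooth propagating wave profile.

```python
import math

def linear_fill(f, x0, h, x):
    """Two-point linear interpolation from coarse samples at x0, x0+h."""
    t = (x - x0) / h
    return (1 - t) * f(x0) + t * f(x0 + h)

def quadratic_fill(f, x0, h, x):
    """Three-point Lagrange interpolation from samples at x0, x0+h, x0+2h."""
    t = (x - x0) / h
    return (0.5 * (t - 1) * (t - 2) * f(x0)
            - t * (t - 2) * f(x0 + h)
            + 0.5 * t * (t - 1) * f(x0 + 2 * h))

def guard_cell_errors(h):
    """Error of each fill at a fine-grid point midway between coarse samples,
    as when populating a guard cell at a refinement boundary."""
    x0, x = 1.0, 1.0 + 0.5 * h
    return (abs(linear_fill(math.sin, x0, h, x) - math.sin(x)),
            abs(quadratic_fill(math.sin, x0, h, x) - math.sin(x)))

e_coarse = guard_cell_errors(0.1)
e_fine = guard_cell_errors(0.05)  # halving h: linear drops ~4x, quadratic ~8x
```

The ratio between the two resolutions confirms the orders: the linear error falls by about a factor of 4 and the quadratic error by about a factor of 8 when h is halved.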
Adaptive mesh strategies for the spectral element method
NASA Technical Reports Server (NTRS)
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. Unfortunately, an efficient parallel implementation is difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. First, we present an efficient parallel implementation of a tetrahedral mesh adaption scheme. Extremely promising parallel performance is achieved for various refinement and coarsening strategies on a realistic-sized domain. Next we describe PLUM, a novel method for dynamically balancing the processor workloads in adaptive grid computations. This research includes interfacing the parallel mesh adaption procedure based on actual flow solutions to a data remapping module, and incorporating an efficient parallel mesh repartitioner. A significant runtime improvement is achieved by observing that data movement for a refinement step should be performed after the edge-marking phase but before the actual subdivision. We also present optimal and heuristic remapping cost metrics that can accurately predict the total overhead for data redistribution. Several experiments are performed to verify the effectiveness of PLUM on sequences of dynamically adapted unstructured grids. Portability is demonstrated by presenting results on the two vastly different architectures of the SP2 and the Origin2000. Additionally, we evaluate the performance of five state-of-the-art partitioning algorithms that can be used within PLUM. It is shown that for certain classes of unsteady adaption, globally repartitioning the computational mesh produces higher quality results than diffusive repartitioning schemes. We also demonstrate that a coarse starting mesh produces high quality load balancing, at
Parallel adaptation of general three-dimensional hybrid meshes
Kavouklis, Christos; Kallinderis, Yannis
2010-05-01
A new parallel dynamic mesh adaptation and load balancing algorithm for general hybrid grids has been developed. The meshes considered in this work are composed of four kinds of elements: tetrahedra, prisms, hexahedra and pyramids, which poses a challenge to parallel mesh adaptation. The additional complexity imposed by the presence of multiple element types especially affects data migration, updates of local data structures and interpartition data structures. Efficient partitioning of hybrid meshes has been accomplished by transforming them to suitable graphs and using serial graph partitioning algorithms. Communication among processors is based on the faces of the interpartition boundary, and the termination detection algorithm of Dijkstra is employed to ensure proper flagging of edges for refinement. An inexpensive dynamic load balancing strategy is introduced to redistribute the workload among processors after adaptation. In particular, only the initial coarse mesh, with proper weighting, is balanced, which yields savings in computation time and a relatively simple implementation of mesh quality preservation rules, while facilitating coarsening of refined elements. Special algorithms are employed for (i) data migration and dynamic updates of the local data structures, (ii) determination of the resulting interpartition boundary and (iii) identification of the communication pattern of processors. Several representative applications are included to evaluate the method.
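The graph-based partitioning step described in the abstract above can be illustrated with a minimal sketch: mesh elements become graph vertices, and two elements are connected by an edge when they share a face. The function names are illustrative, and the greedy BFS bisection is only a toy stand-in for the serial graph partitioners (METIS-style tools) such work typically relies on.

```python
from collections import defaultdict, deque

def mesh_to_graph(elements):
    """Build element adjacency: two elements are neighbors if they share a face.
    `elements` maps element id -> list of faces (each face a frozenset of node ids)."""
    face_owners = defaultdict(list)
    for eid, faces in elements.items():
        for f in faces:
            face_owners[f].append(eid)
    adj = defaultdict(set)
    for owners in face_owners.values():
        if len(owners) == 2:          # interior face shared by exactly two elements
            a, b = owners
            adj[a].add(b)
            adj[b].add(a)
    return adj

def bfs_bisect(adj, elems):
    """Toy stand-in for a serial graph partitioner: grow one part by BFS
    until it holds half the elements; the remainder forms the second part."""
    target = len(elems) // 2
    start = next(iter(elems))
    part0, queue = {start}, deque([start])
    while queue and len(part0) < target:
        for nbr in adj[queue.popleft()]:
            if nbr not in part0 and len(part0) < target:
                part0.add(nbr)
                queue.append(nbr)
    return part0, set(elems) - part0
```

Because all element types are reduced to graph vertices, the same machinery covers tetrahedra, prisms, hexahedra and pyramids alike.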
Kohn, S.; Weare, J.; Ong, E.; Baden, S.
1997-05-01
We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradient with FAC multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.
h-Refinement for simple corner balance scheme of SN transport equation on distorted meshes
NASA Astrophysics Data System (ADS)
Yang, Rong; Yuan, Guangwei
2016-11-01
The transport sweep algorithm is a common method for solving the discrete ordinates transport equation, but it breaks down once a concave cell appears in the spatial mesh. To deal with this issue, a local h-refinement for the simple corner balance (SCB) scheme of the SN transport equation on arbitrary quadrilateral meshes is presented in this paper, based on a new subcell partition. A hybrid mesh with both triangular and quadrilateral cells is generated, and the geometric quality of these cells improves; in particular, all cells are guaranteed to become convex. Combined with the original SCB scheme, an adaptive transfer algorithm based on the hybrid mesh is constructed. Numerical experiments are presented to verify the utility and accuracy of the new algorithm, especially for application problems such as radiation transport coupled with Lagrangian hydrodynamic flow. The results show that it performs well on extremely distorted meshes with concave cells, on which the original SCB scheme does not work.
Procedures and computer programs for telescopic mesh refinement using MODFLOW
Leake, Stanley A.; Claar, David V.
1999-01-01
Ground-water models are commonly used to evaluate flow systems in areas that are small relative to entire aquifer systems. In many of these analyses, simulation of the entire flow system is not desirable or will not allow sufficient detail in the area of interest. The procedure of telescopic mesh refinement allows use of a small, detailed model in the area of interest by taking boundary conditions from a larger model that encompasses the model in the area of interest. Some previous studies have used telescopic mesh refinement; however, better procedures are needed in carrying out telescopic mesh refinement using the U.S. Geological Survey ground-water flow model, referred to as MODFLOW. This report presents general procedures and three computer programs for use in telescopic mesh refinement with MODFLOW. The first computer program, MODTMR, constructs MODFLOW data sets for a local or embedded model using MODFLOW data sets and simulation results from a regional or encompassing model. The second computer program, TMRDIFF, provides a means of comparing head or drawdown in the local model with head or drawdown in the corresponding area of the regional model. The third program, RIVGRID, provides a means of constructing data sets for the River Package, Drain Package, General-Head Boundary Package, and Stream Package for regional and local models using grid-independent data specifying locations of these features. RIVGRID may be needed in some applications of telescopic mesh refinement because regional-model data sets do not contain enough information on locations of head-dependent flow features to properly locate the features in local models. The program is a general utility program that can be used in constructing data sets for head-dependent flow packages for any MODFLOW model under construction.
An adaptive level set segmentation on a triangulated mesh.
Xu, Meihe; Thompson, Paul M; Toga, Arthur W
2004-02-01
Level set methods offer highly robust and accurate methods for detecting interfaces of complex structures. Efficient techniques are required to transform an interface to a globally defined level set function. In this paper, a novel level set method based on an adaptive triangular mesh is proposed for segmentation of medical images. Special attention is paid to an adaptive mesh refinement and redistancing technique for level set propagation, in order to achieve higher resolution at the interface with minimum expense. First, a narrow band around the interface is built in an upwind fashion. An active square technique is used to determine the shortest distance correspondence (SDC) for each grid vertex. Simultaneously, we also give an efficient approach for signing the distance field. Then, an adaptive improvement algorithm is proposed, which essentially combines two basic techniques: a long-edge-based vertex insertion strategy, and a local improvement. These guarantee that the refined triangulation is related to features along the front and has elements with appropriate size and shape, which fit the front well. We propose a short-edge elimination scheme to coarsen the refined triangular mesh, in order to reduce the extra storage. Finally, we reformulate the general evolution equation by updating 1) the velocities and 2) the gradient of level sets on the triangulated mesh. We give an approach for tracing contours from the level set on the triangulated mesh. Given a two-dimensional image with N grids along a side, the proposed algorithms run in O(kN) time at each iteration. Quantitative analysis shows that our algorithm is of first order accuracy; and when the interface-fitted property is involved in the mesh refinement, both the convergence speed and numerical accuracy are greatly improved. We also analyze the effect of redistancing frequency upon convergence speed and accuracy. Numerical examples include the extraction of inner and outer surfaces of the cerebral cortex
Spherical Harmonic Decomposition of Gravitational Waves Across Mesh Refinement Boundaries
NASA Technical Reports Server (NTRS)
Fiske, David R.; Baker, John; vanMeter, James R.; Centrella, Joan M.
2005-01-01
We evolve a linearized (Teukolsky) solution of the Einstein equations with a non-linear Einstein solver. Using this testbed, we are able to show that such gravitational waves, defined by the Weyl scalars in the Newman-Penrose formalism, propagate faithfully across mesh refinement boundaries, and use, for the first time to our knowledge, a novel algorithm due to Misner to compute spherical harmonic components of our waveforms. We show that the algorithm performs extremely well, even when the extraction sphere intersects refinement boundaries.
Evolving a Puncture Black Hole with Fixed Mesh Refinement
NASA Technical Reports Server (NTRS)
Imbiriba, Breno; Baker, John; Choi, Dae-Il; Centrella, Joan; Fiske, David R.; Brown, J. David; vanMeter, James R.; Olson, Kevin
2004-01-01
We present a detailed study of the effects of mesh refinement boundaries on the convergence and stability of simulations of black hole spacetimes. We find no technical problems. In our applications of this technique to the evolution of puncture initial data, we demonstrate that it is possible to simultaneously maintain second order convergence near the puncture and extend the outer boundary beyond 100M, thereby approaching the asymptotically flat region in which boundary condition problems are less difficult.
Optimal fully adaptive wormhole routing for meshes
Schwiebert, L.; Jayasimha, D.N.
1993-12-31
A deadlock-free fully adaptive routing algorithm for 2D meshes, optimal in the number of virtual channels required and in the number of restrictions placed on the use of these virtual channels, is presented. The routing algorithm imposes less than half as many routing restrictions as any previous fully adaptive routing algorithm. It is also proved that, ignoring symmetry, this routing algorithm is the only fully adaptive routing algorithm that achieves both of these goals. The implementation of the routing algorithm requires relatively simple router control logic. The new algorithm is extended, in a straightforward manner, to arbitrary-dimension meshes. It needs only 4n-2 virtual channels, the minimum number for an n-dimensional mesh; all previous algorithms require a number of virtual channels that grows exponentially with the dimension of the mesh.
Unstructured and adaptive mesh generation for high Reynolds number viscous flows
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1991-01-01
A method for generating and adaptively refining a highly stretched unstructured mesh suitable for the computation of high-Reynolds-number viscous flows about arbitrary two-dimensional geometries was developed. The method is based on the Delaunay triangulation of a predetermined set of points and employs a local mapping in order to achieve the high stretching rates required in the boundary-layer and wake regions. The initial mesh-point distribution is determined in a geometry-adaptive manner which clusters points in regions of high curvature and sharp corners. Adaptive mesh refinement is achieved by adding new points in regions of large flow gradients, and locally retriangulating; thus, obviating the need for global mesh regeneration. Initial and adapted meshes about complex multi-element airfoil geometries are shown and compressible flow solutions are computed on these meshes.
Curved Mesh Correction And Adaptation Tool to Improve COMPASS Electromagnetic Analyses
Luo, X.; Shephard, M.; Lee, L.Q.; Ng, C.; Ge, L.; /SLAC
2011-11-14
SLAC performs large-scale simulations for next-generation accelerator design using higher-order finite elements. This method requires valid curved meshes and adaptive mesh refinement in complex 3D curved domains to achieve its fast rate of convergence. ITAPS has developed a procedure to address these mesh requirements and enable petascale electromagnetic accelerator simulations by SLAC. The results demonstrate that valid curvilinear meshes not only make the simulation more reliable but also improve computational efficiency by up to 30%. This paper presents a procedure to track moving adaptive mesh refinement in curved domains. The procedure is capable of generating suitable curvilinear meshes to enable large-scale accelerator simulations, and can generate valid curved meshes with substantially fewer elements to improve the computational efficiency and reliability of the COMPASS electromagnetic analyses. Future work will focus on the scalable parallelization of all steps for petascale simulations.
Grid adaptation using chimera composite overlapping meshes
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1994-01-01
The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between these separated grids are carried out using trilinear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
Grid adaptation using Chimera composite overlapping meshes
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1993-01-01
The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
Mesh refinement for uncertainty quantification through model reduction
Li, Jing; Stinis, Panos
2015-01-01
We present a novel way of deciding when and where to refine a mesh in probability space in order to facilitate uncertainty quantification in the presence of discontinuities in random space. A discontinuity in random space makes the application of generalized polynomial chaos expansion techniques prohibitively expensive. The reason is that for discontinuous problems, the expansion converges very slowly. An alternative to using higher terms in the expansion is to divide the random space in smaller elements where a lower degree polynomial is adequate to describe the randomness. In general, the partition of the random space is a dynamic process since some areas of the random space, particularly around the discontinuity, need more refinement than others as time evolves. In the current work we propose a way to decide when and where to refine the random space mesh based on the use of a reduced model. The idea is that a good reduced model can monitor accurately, within a random space element, the cascade of activity to higher degree terms in the chaos expansion. In turn, this facilitates the efficient allocation of computational sources to the areas of random space where they are more needed. For the Kraichnan–Orszag system, the prototypical system to study discontinuities in random space, we present theoretical results which show why the proposed method is sound and numerical results which corroborate the theory.
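The refine-where-the-expansion-converges-slowly idea in the abstract above can be sketched in one random dimension: fit a local Legendre (polynomial chaos) expansion on each element of random space and bisect the element whenever the tail coefficients carry too much energy. The degree, sample count and tail threshold below are illustrative choices, and the raw coefficient tail is a crude stand-in for the paper's reduced-model monitor.

```python
import numpy as np
from numpy.polynomial import legendre

def needs_refinement(f, a, b, degree=6, tail_frac=1e-3):
    """Fit a local Legendre expansion of f on [a, b] and flag the element
    for splitting if the last two coefficients hold too much of the energy."""
    x = np.linspace(a, b, 64)
    t = 2.0 * (x - a) / (b - a) - 1.0          # map element to [-1, 1]
    c = legendre.legfit(t, f(x), degree)
    return np.sum(c[-2:] ** 2) > tail_frac * np.sum(c ** 2)

def adapt(f, a, b, depth=0, max_depth=6):
    """Recursively bisect the random-space interval wherever the local
    expansion converges slowly (e.g. near a discontinuity)."""
    if depth < max_depth and needs_refinement(f, a, b):
        m = 0.5 * (a + b)
        return adapt(f, a, m, depth + 1, max_depth) + adapt(f, m, b, depth + 1, max_depth)
    return [(a, b)]
```

For a smooth function the coefficients decay rapidly and the whole interval stays one element, while a step function triggers repeated bisection around its discontinuity only.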
Fully Threaded Tree for Adaptive Refinement Fluid Dynamics Simulations
NASA Technical Reports Server (NTRS)
Khokhlov, A. M.
1997-01-01
A fully threaded tree (FTT) for adaptive refinement of regular meshes is described. By using a tree threaded at all levels, tree traversals for finding nearest neighbors are avoided. All operations on the tree, including tree modifications, are O(N), where N is the number of cells, and are performed in parallel. An efficient implementation of the tree is described that requires 2N words of memory. A filtering algorithm for removing high frequency noise during mesh refinement is described. An FTT can be used in various numerical applications. In this paper, it is applied to the integration of the Euler equations of fluid dynamics. An adaptive mesh time stepping algorithm is described in which different time steps are used at different levels of the tree. Time stepping and mesh refinement are interleaved to avoid the extensive buffer layers of fine mesh that were otherwise required ahead of moving shocks. Test examples are presented, and the FTT performance is evaluated. A three-dimensional simulation of the interaction of a shock wave and a spherical bubble is carried out, showing the development of azimuthal perturbations on the bubble surface.
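The threading idea behind the FTT — every cell stores direct neighbor links, so a nearest-neighbor lookup is O(1) rather than a tree walk — can be sketched in one dimension. A production FTT threads 3D octs, so the class below is only an illustrative reduction with invented names.

```python
class Cell:
    """1D fully-threaded-tree cell: children plus direct left/right neighbor
    links, so neighbor lookup is O(1) instead of an O(log N) tree traversal."""
    __slots__ = ("level", "left", "right", "children")

    def __init__(self, level=0, left=None, right=None):
        self.level, self.left, self.right = level, left, right
        self.children = None

    def refine(self):
        """Split into two children and re-thread neighbor links at the new level."""
        a = Cell(self.level + 1)
        b = Cell(self.level + 1)
        a.right, b.left = b, a
        # Thread the new cells to the finest available outside neighbors.
        ln, rn = self.left, self.right
        a.left = ln.children[1] if (ln and ln.children) else ln
        b.right = rn.children[0] if (rn and rn.children) else rn
        if a.left:
            a.left.right = a
        if b.right:
            b.right.left = b
        self.children = (a, b)
```

After refining adjacent cells, a child's neighbor pointer already targets the neighboring cell at the finest matching level, with no traversal required.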
Evolving a puncture black hole with fixed mesh refinement
Imbiriba, Breno; Baker, John; Centrella, Joan; Meter, James R. van; Choi, Dae-Il; Fiske, David R.; Brown, J. David; Olson, Kevin
2004-12-15
We present an algorithm for treating mesh refinement interfaces in numerical relativity. We discuss the behavior of the solution near such interfaces located in the strong-field regions of dynamical black hole spacetimes, with particular attention to the convergence properties of the simulations. In our applications of this technique to the evolution of puncture initial data with vanishing shift, we demonstrate that it is possible to simultaneously maintain second order convergence near the puncture and extend the outer boundary beyond 100M, thereby approaching the asymptotically flat region in which boundary condition problems are less difficult and wave extraction is meaningful.
NASA Astrophysics Data System (ADS)
Todarello, Giovanni; Vonck, Floris; Bourasseau, Sébastien; Peter, Jacques; Désidéri, Jean-Antoine
2016-05-01
A new goal-oriented mesh adaptation method for finite volume/finite difference schemes is extended from the structured mesh framework to a more suitable setting for adaptation of unstructured meshes. The method is based on the total derivative of the goal with respect to volume mesh nodes, which is computable after the solution of the goal's discrete adjoint equation. The asymptotic behaviour of this derivative is assessed on regularly refined unstructured meshes. A local refinement criterion is derived from the requirement of limiting the first order change in the goal that an admissible node displacement may cause. Mesh adaptations are then carried out for classical test cases of 2D Euler flows. Efficiency and local density of the adapted meshes are presented, and compared with those obtained with a more classical mesh adaptation method in the framework of finite volume/finite difference schemes [46]. Results are very close although the present method only makes use of the current grid.
An adaptive embedded mesh procedure for leading-edge vortex flows
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.
1989-01-01
A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
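The threshold-selection step described above — plot the number of flagged cells against the refinement threshold and pick the point of maximum curvature — can be sketched as follows. Normalizing both axes before measuring curvature is an illustrative implementation detail of this sketch, not taken from the paper.

```python
import numpy as np

def knee_threshold(indicator, n_samples=50):
    """Pick a refinement threshold at the 'knee' of the curve
    N(tau) = number of cells with indicator >= tau, using the point of
    maximum discrete curvature of the (tau, N) curve."""
    taus = np.linspace(indicator.min(), indicator.max(), n_samples)
    counts = np.array([(indicator >= t).sum() for t in taus], dtype=float)
    # normalize both axes so the curvature measure is scale-independent
    x = (taus - taus[0]) / (taus[-1] - taus[0])
    y = (counts - counts.min()) / (counts.max() - counts.min())
    d1 = np.gradient(y, x)
    d2 = np.gradient(d1, x)
    curv = np.abs(d2) / (1.0 + d1 ** 2) ** 1.5
    return taus[np.argmax(curv)]
```

With many cells carrying a small indicator and a few carrying a large one, the knee lands just above the background level, flagging only the genuinely under-resolved cells.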
Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement
Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; Alder, Berni J.
2013-12-10
A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.
Variational Mesh Adaptation: Isotropy and Equidistribution
NASA Astrophysics Data System (ADS)
Huang, Weizhang
2001-12-01
We present a new approach for developing more robust and error-oriented mesh adaptation methods. Specifically, assuming that a regular (in cell shape) and uniform (in cell size) computational mesh is used (as is commonly done in computation), we develop a criterion for mesh adaptation based on an error function whose definition is motivated by the analysis of function variation and local error behavior for linear interpolation. The criterion is then decomposed into two aspects, the isotropy (or conformity) and uniformity (or equidistribution) requirements, each of which can be easier to deal with. The functionals that satisfy these conditions approximately are constructed using discrete and continuous inequalities. A new functional is finally formulated by combining the functionals corresponding to the isotropy and uniformity requirements. The features of the functional are analyzed and demonstrated by numerical results. In particular, unlike the existing mesh adaptation functionals, the new functional has clear geometric meanings of minimization. A mesh that has the desired properties of isotropy and equidistribution can be obtained by properly choosing the values of two parameters. The analysis presented in this article also provides a better understanding of the increasingly popular method of harmonic mapping in two dimensions.
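The equidistribution requirement discussed above can be illustrated in one dimension: place the mesh nodes so that every cell carries an equal share of the integral of a monitor function. The monitor, node count and sampling resolution below are illustrative choices for this sketch.

```python
import numpy as np

def equidistribute(monitor, a, b, n_cells, resolution=2000):
    """Place n_cells+1 mesh nodes on [a, b] so each cell holds an equal
    share of the integral of the monitor function M(x) > 0
    (the 1D equidistribution principle)."""
    x = np.linspace(a, b, resolution)
    M = monitor(x)
    # cumulative monitor via the trapezoid rule, normalized to [0, 1]
    cdf = np.concatenate([[0.0], np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(x))])
    cdf /= cdf[-1]
    # invert the normalized cumulative monitor at equally spaced levels
    return np.interp(np.linspace(0.0, 1.0, n_cells + 1), cdf, x)
```

Where the monitor is large (e.g. a sharp layer), cells shrink; where it is near its background value, cells stretch, exactly the uniformity-in-measure behavior the functional is built to enforce.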
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting' a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
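The core splitting operation in the abstract above — replacing one basis vector with vectors of disjoint support whose sum recovers it — is easy to sketch. A single median split on state-variable coordinates stands in for the paper's recursive k-means tree; the names are illustrative.

```python
import numpy as np

def split_basis_vector(v, coords):
    """h-refinement-style split of a reduced-basis vector into two vectors
    with disjoint support, dividing the state indices by a median split of
    their coordinates (a stand-in for a k-means clustering tree)."""
    mask = coords <= np.median(coords)
    left = np.where(mask, v, 0.0)
    right = np.where(~mask, v, 0.0)
    return left, right
```

Since the original vector equals the sum of the two pieces, the enriched space contains the old one, which is why repeated splitting can only reduce the best-approximation error.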
Multigrid solution strategies for adaptive meshing problems
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1995-01-01
This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.
Local mesh refinement for incompressible fluid flow with free surfaces
Terasaka, H.; Kajiwara, H.; Ogura, K.
1995-09-01
A new local mesh refinement (LMR) technique has been developed and applied to incompressible fluid flows with free surface boundaries. The LMR method embeds patches of fine grid in arbitrary regions of interest. Hence, more accurate solutions can be obtained with a lower number of computational cells. This method is very suitable for the simulation of free surface movements because free surface flow problems generally require a finer computational grid to obtain adequate results. By using this technique, one can place finer grids only near the surfaces, and therefore greatly reduce the total number of cells and computational costs. This paper introduces LMR3D, a three-dimensional incompressible flow analysis code. Numerical examples calculated with the code demonstrate well the advantages of the LMR method.
NASA Astrophysics Data System (ADS)
Papadakis, A. P.; Georghiou, G. E.; Metaxas, A. C.
2008-12-01
A new adaptive mesh generator has been developed and used in the analysis of high-pressure gas discharges, such as avalanches and streamers, reducing computational times and computer memory needs significantly. The new adaptive mesh generator uses normalized error indicators, varying from 0 to 1, to guarantee optimal mesh resolution for all carriers involved in the analysis. Furthermore, it uses h- and r-refinement techniques such as mesh jiggling, edge swapping and node addition/removal in an element quality improvement algorithm that improves the mesh quality significantly, together with a fast and accurate algorithm for interpolation between meshes. Finally, the mesh generator is applied in the characterization of the transition from a single electron to the avalanche and streamer discharges in high-voltage, high-pressure gas discharges for dc 1 mm gaps, RF 1 cm point-plane gaps and parallel-plate 40 MHz configurations, in ambient atmospheric air.
Anisotropic mesh adaptation on Lagrangian Coherent Structures
NASA Astrophysics Data System (ADS)
Miron, Philippe; Vétel, Jérôme; Garon, André; Delfour, Michel; Hassan, Mouhammad El
2012-08-01
The finite-time Lyapunov exponent (FTLE) is extensively used as a criterion to reveal fluid flow structures, including unsteady separation/attachment surfaces and vortices, in laminar and turbulent flows. However, for large and complex problems, flow structure identification demands computational methodologies that are more accurate and effective. With this objective in mind, we propose a new set of ordinary differential equations to compute the flow map, along with its first (gradient) and second order (Hessian) spatial derivatives. We show empirically that the gradient of the flow map computed in this way improves the pointwise accuracy of the FTLE field. Furthermore, the Hessian allows for simple interpolation error estimation of the flow map, and the construction of a continuous optimal and multiscale Lp metric. The Lagrangian particles, or nodes, are then iteratively adapted on the flow structures revealed by this metric. Typically, the L1 norm provides meshes best suited to capturing small scale structures, while the L∞ norm provides meshes optimized to capture large scale structures. This means that the mesh density near large scale structures will be greater with the L∞ norm than with the L1 norm for the same mesh complexity, which is why we chose this technique for this paper. We use it to optimize the mesh in the vicinity of LCS. It is found that Lagrangian Coherent Structures are best revealed with the minimum number of vertices with the L∞ metric.
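The FTLE criterion used in the abstract above has a compact definition: sigma = ln sqrt(lambda_max(C)) / |T|, where C = (grad F)^T (grad F) is the Cauchy-Green tensor of the flow map F over time T. A sketch on a regular grid of trajectories follows, checked against a linear saddle flow whose FTLE is known analytically; the function names are illustrative.

```python
import numpy as np

def ftle(flow_map, xs, ys, T):
    """Finite-time Lyapunov exponent from a flow map sampled on a grid:
    sigma = ln(sqrt(lambda_max(C))) / |T|, where C = J^T J is the
    Cauchy-Green tensor built from finite-difference flow-map gradients."""
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    FX, FY = flow_map(X, Y, T)
    dFXdx = np.gradient(FX, xs, axis=0)
    dFXdy = np.gradient(FX, ys, axis=1)
    dFYdx = np.gradient(FY, xs, axis=0)
    dFYdy = np.gradient(FY, ys, axis=1)
    sigma = np.empty_like(X)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            J = np.array([[dFXdx[i, j], dFXdy[i, j]],
                          [dFYdx[i, j], dFYdy[i, j]]])
            C = J.T @ J                      # Cauchy-Green deformation tensor
            sigma[i, j] = np.log(np.sqrt(np.linalg.eigvalsh(C)[-1])) / abs(T)
    return sigma
```

For the saddle flow x' = x, y' = -y the flow map is (x e^T, y e^(-T)) and the FTLE equals 1 everywhere, which the finite-difference sketch reproduces exactly since the map is linear.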
NASA Astrophysics Data System (ADS)
Keating, E. H.; Vesselinov, V. V.
2001-12-01
We are evaluating several alternative approaches to the general problem of simulating site-scale flow and transport using fine grid resolution while maintaining consistency with a regional-scale, coarse-grid flow model. In this paper, we use the example of modeling capture zones for water supply wells on the Pajarito Plateau in Northern New Mexico, using the finite-element heat and mass simulator FEHM. We compare two different models: 1) a basin-scale model (~6400 km2) using adaptive mesh refinement to increase grid resolution in the vicinity of the water supply well fields, and 2) a site-scale model (~560km2) which is coupled to the basin-scale model via specified fluxes along lateral site-scale boundaries. The goals of this study are to estimate capture zones and to determine the robustness of these estimates given uncertainty in the model parameter estimates and fluxes along site-scale boundaries. There are two primary advantages of the site-scale-model approach. It allows us to increase the vertical grid resolution and hence better represent site-scale heterogeneity, and with it we are able to apply on the water table a more spatially-detailed distribution of recharge. The primary disadvantages of this approach are difficulties related to 1) transferring basin-model fluxes to lateral site-scale-model boundaries and 2) parameter estimation within the coupled-model framework. Using the parameter estimation code (PEST), we calibrated the basin model against the head and flux datasets, estimated fluxes to the lateral boundaries of the site-scale model, and determined their uncertainty. We used these predicted fluxes as lateral boundary conditions in the site-scale model calibration runs. Sensitivity analyses demonstrated that predictions of capture zones using either modeling approach are sensitive to permeability values for a few key hydrostratigraphic units. The uncertainty in some of these key parameters was lower for the basin model than for the site
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
NASA Technical Reports Server (NTRS)
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation is demonstrated by the reduction of the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks.
The direct simulation Monte Carlo method using unstructured adaptive mesh and its application
NASA Astrophysics Data System (ADS)
Wu, J.-S.; Tseng, K.-C.; Kuo, C.-H.
2002-02-01
The implementation of an adaptive mesh-embedding (h-refinement) scheme using unstructured grids in the two-dimensional direct simulation Monte Carlo (DSMC) method is reported. In this technique, local isotropic refinement is used to introduce new cells where the local cell Knudsen number is less than some preset value. This simple scheme, however, has several severe consequences affecting the performance of the DSMC method. Thus, we have applied a technique to remove hanging nodes by introducing anisotropic refinement in the interfacial cells between refined and non-refined cells. Not only does this remedy add a negligible amount of work, but it also removes all the difficulties present in the original scheme. We have tested the proposed scheme for argon gas in a high-speed driven cavity flow. The results show an improved flow resolution as compared with that of the unadapted mesh. Finally, we have used a triangular adaptive mesh to compute a near-continuum gas flow, a hypersonic flow over a cylinder. The results show fairly good agreement with previous studies. In summary, the proposed simple mesh adaptation is very useful in computing rarefied gas flows, which involve both complicated geometry and highly non-uniform density variations throughout the flow field.
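The refinement criterion in the abstract above can be sketched in a few lines: a cell is flagged for refinement when its local Knudsen number (mean free path over cell size) falls below a preset value, i.e. when the cell is too coarse to resolve the local mean free path. The function and data layout below are illustrative, not taken from the paper's code.

```python
# Sketch of the DSMC cell-flagging rule: refine any cell whose local cell
# Knudsen number Kn_c = lambda / h is below a preset threshold.
# All names here are illustrative assumptions, not the paper's API.

def local_knudsen(mean_free_path, cell_size):
    """Local cell Knudsen number Kn_c = lambda / h."""
    return mean_free_path / cell_size

def flag_cells_for_refinement(cells, kn_threshold=1.0):
    """Return indices of cells to refine: those with Kn_c < threshold,
    i.e. cells too coarse to resolve the local mean free path."""
    flagged = []
    for i, (lam, h) in enumerate(cells):
        if local_knudsen(lam, h) < kn_threshold:
            flagged.append(i)
    return flagged

# Each cell: (local mean free path, characteristic cell size).
cells = [(1.0e-3, 5.0e-4),   # Kn = 2.0  -> fine enough
         (1.0e-3, 4.0e-3),   # Kn = 0.25 -> refine
         (5.0e-4, 2.0e-3)]   # Kn = 0.25 -> refine
print(flag_cells_for_refinement(cells))  # -> [1, 2]
```

In the paper's scheme, the flagged cells then receive local isotropic refinement, with anisotropic refinement in interfacial cells to remove hanging nodes.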
Adaptive superposition of finite element meshes in linear and nonlinear dynamic analysis
NASA Astrophysics Data System (ADS)
Yue, Zhihua
2005-11-01
The numerical analysis of transient phenomena in solids, for instance, wave propagation and structural dynamics, is a very important and active area of study in engineering. Despite the current evolutionary state of modern computer hardware, practical analysis of large scale, nonlinear transient problems requires the use of adaptive methods where computational resources are locally allocated according to the interpolation requirements of the solution form. Adaptive analysis of transient problems involves obtaining solutions at many different time steps, each of which requires a sequence of adaptive meshes. Therefore, the execution speed of the adaptive algorithm is of paramount importance. In addition, transient problems require that the solution must be passed from one adaptive mesh to the next adaptive mesh with a bare minimum of solution-transfer error since this form of error compromises the initial conditions used for the next time step. A new adaptive finite element procedure (s-adaptive) is developed in this study for modeling transient phenomena in both linear elastic solids and nonlinear elastic solids caused by progressive damage. The adaptive procedure automatically updates the time step size and the spatial mesh discretization in transient analysis, achieving the accuracy and the efficiency requirements simultaneously. The novel feature of the s-adaptive procedure is the original use of finite element mesh superposition to produce spatial refinement in transient problems. The use of mesh superposition enables the s-adaptive procedure to completely avoid the need for cumbersome multipoint constraint algorithms and mesh generators, which makes the s-adaptive procedure extremely fast. Moreover, the use of mesh superposition enables the s-adaptive procedure to minimize the solution-transfer error. In a series of different solid mechanics problem types including 2-D and 3-D linear elastic quasi-static problems, 2-D material nonlinear quasi-static problems
Automatic off-body overset adaptive Cartesian mesh method based on an octree approach
Peron, Stephanie; Benoit, Christophe
2013-01-01
This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, assuming each octree leaf node defines a structured Cartesian block. This enables one to take into account the large discrepancies in resolution between the different bodies involved in the simulation, with minimum memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first generates Adaptive Mesh Refinement (AMR) type grid systems, and the second generates abutting or minimally overlapping Cartesian grid sets. We also introduce an algorithm to control the number of points at each adaptation, which automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing tip vortex computation assesses the capability of the method to capture flow features accurately.
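The core data structure above, an octree whose leaf nodes each carry a structured Cartesian block, can be sketched minimally as follows. The class, its fields, and the fixed cells-per-side convention are assumptions for illustration; the paper's actual implementation is not reproduced here.

```python
# Minimal octree sketch: each leaf node defines a Cartesian block
# (origin, edge length, fixed cells per side). Refining a leaf splits it
# into 8 octants; each child keeps the same cell count, halving the spacing.

class OctreeNode:
    def __init__(self, origin, size, cells_per_side=8):
        self.origin = origin          # (x, y, z) corner of the block
        self.size = size              # cube edge length
        self.cells_per_side = cells_per_side
        self.children = []            # empty -> leaf

    def refine(self):
        """Split a leaf into 8 octants of half the edge length."""
        h = self.size / 2.0
        x0, y0, z0 = self.origin
        self.children = [
            OctreeNode((x0 + i * h, y0 + j * h, z0 + k * h), h,
                       self.cells_per_side)
            for i in (0, 1) for j in (0, 1) for k in (0, 1)
        ]

    def leaves(self):
        """Collect all leaf nodes; these define the Cartesian grid set."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

root = OctreeNode((0.0, 0.0, 0.0), 1.0)
root.refine()                 # 8 leaves
root.children[0].refine()     # one octant refined again -> 7 + 8 = 15 leaves
print(len(root.leaves()))     # -> 15
print(root.leaves()[0].size)  # -> 0.25
```

The leaf list is what the two conversions in the abstract consume: either as an AMR-type hierarchy of blocks, or flattened into an abutting/minimally overlapping Cartesian grid set.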
Higher-order schemes with CIP method and adaptive Soroban grid towards mesh-free scheme
NASA Astrophysics Data System (ADS)
Yabe, Takashi; Mizoe, Hiroki; Takizawa, Kenji; Moriki, Hiroshi; Im, Hyo-Nam; Ogata, Youichi
2004-02-01
A new class of body-fitted grid system that can keep third-order accuracy in time and space is proposed with the help of the CIP (constrained interpolation profile/cubic interpolated propagation) method. The grid system consists of straight lines and grid points moving along these lines like the beads of an abacus (Soroban in Japanese). The length of each line and the number of grid points on each line can be different. The CIP scheme is well suited to this mesh system, and calculations at large CFL numbers (>10) on locally refined meshes are easily performed. Mesh generation and the search for the upstream departure point are very simple, and an almost mesh-free treatment is possible. Adaptive grid movement and local mesh refinement are demonstrated.
Parallel Processing of Adaptive Meshes with Load Balancing
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2001-01-01
Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than PLUM by overlapping processing and data migration.
Dynamic Load Balancing for Adaptive Meshes using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Many scientific applications involve grids that lack a uniform underlying structure. These applications are often dynamic in the sense that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing inter-processor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view across processors. In this paper, we compare a novel load balancer that utilizes symmetric broadcast networks (SBN) to a successful global load balancing environment (PLUM) created to handle adaptive unstructured applications. Our experimental results on the IBM SP2 demonstrate that performance of the proposed SBN load balancer is comparable to results achieved under PLUM.
Applications of automatic mesh generation and adaptive methods in computational medicine
Schmidt, J.A.; Macleod, R.S.; Johnson, C.R.; Eason, J.C.
1995-12-31
Important problems in computational medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state-of-the-art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications, we present a general-purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.
Adaptive meshing technique applied to an orthopaedic finite element contact problem.
Roarty, Colleen M; Grosland, Nicole M
2004-01-01
Finite element methods have been applied extensively and with much success in the analysis of orthopaedic implants. Recently, a growing interest has developed in the orthopaedic biomechanics community in how numerical models can be constructed for the optimal solution of problems in contact mechanics. New developments in this area are of paramount importance in the design of improved implants for orthopaedic surgery. Finite element and other computational techniques are widely applied in the analysis and design of hip and knee implants, with additional joints (ankle, shoulder, wrist) attracting increased attention. The objective of this investigation was to develop a simplified adaptive meshing scheme to facilitate the finite element analysis of a dual-curvature total wrist implant. Using currently available software, the analyst has great flexibility in mesh generation, but must prescribe element sizes and refinement schemes throughout the domain of interest. Unfortunately, it is often difficult to predict in advance a mesh spacing that will give acceptable results. Adaptive finite-element mesh capabilities operate to continuously refine the mesh to improve accuracy where it is required, with minimal intervention by the analyst. Such mesh adaptation generally means that in certain areas of the analysis domain, the size of the elements is decreased (or increased) and/or the order of the elements may be increased (or decreased). In concept, mesh adaptation is very appealing. Although there have been several previous applications of adaptive meshing for in-house FE codes, we have coupled an adaptive mesh formulation with the pre-existing commercial programs PATRAN (MacNeal-Schwendler Corp., USA) and ABAQUS (Hibbitt, Karlsson & Sorensen, Pawtucket, RI). In doing so, we have retained several attributes of the commercial software, which are very attractive for orthopaedic implant applications.
An adaptive mesh finite volume method for the Euler equations of gas dynamics
NASA Astrophysics Data System (ADS)
Mungkasi, Sudi
2016-06-01
The Euler equations have been used to model gas dynamics for decades. They consist of mathematical equations for the conservation of mass, momentum, and energy of the gas. At large times, the solution may contain discontinuities, even when the initial condition is smooth. A standard finite volume numerical method is not able to give accurate solutions to the Euler equations around discontinuities. Therefore we solve the Euler equations using an adaptive mesh finite volume method. In this paper, we present a new construction of the adaptive mesh finite volume method with an efficient computation of the refinement indicator. The adaptive method acts automatically in regions where the solution is inaccurate. Inaccurate solutions are reconstructed to reduce the error by refining the mesh locally up to a certain level. On the other hand, if the solution is already accurate, then the mesh is coarsened up to another certain level to minimize computational effort. We implement the numerical entropy production as the mesh refinement indicator. As a test problem, we take the Sod shock tube problem. Numerical results show that the adaptive method is more promising than the standard one in solving the Euler equations of gas dynamics.
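The indicator-driven refinement loop described above can be sketched in one dimension: flag each cell whose indicator exceeds a tolerance and split it, subject to a maximum level. A simple density-jump indicator stands in here for the paper's numerical entropy production, and all names and the data layout are illustrative.

```python
# One refinement sweep in the spirit of the abstract: split 1-D cells whose
# indicator exceeds a tolerance. A density-jump indicator is used as a
# stand-in for the numerical entropy production; names are illustrative.

def adapt_1d(cells, refine_tol=0.1, max_level=3):
    """cells: list of dicts with 'x' (center), 'dx', 'level', 'rho'.
    Returns a new cell list after one refinement sweep."""
    def indicator(i):
        # Magnitude of the density jump to the right neighbour.
        if i + 1 >= len(cells):
            return 0.0
        return abs(cells[i + 1]['rho'] - cells[i]['rho'])

    new_cells = []
    for i, c in enumerate(cells):
        if indicator(i) > refine_tol and c['level'] < max_level:
            # Split into two half-width children carrying the same density.
            for s in (-0.25, 0.25):
                new_cells.append({'x': c['x'] + s * c['dx'],
                                  'dx': c['dx'] / 2.0,
                                  'level': c['level'] + 1,
                                  'rho': c['rho']})
        else:
            new_cells.append(dict(c))
    return new_cells

# Sod-like initial density jump at x = 0.5 on four uniform cells.
cells = [{'x': 0.125 + 0.25 * i, 'dx': 0.25, 'level': 0,
          'rho': 1.0 if i < 2 else 0.125} for i in range(4)]
adapted = adapt_1d(cells)
print(len(adapted))  # -> 5: only the cell touching the jump was split
```

A full adaptive solver would alternate such sweeps with finite volume updates, and additionally merge cells whose indicator drops below a coarsening tolerance, as the abstract describes.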
Lober, R.R.; Tautges, T.J.; Vaughan, C.T.
1997-03-01
Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm, demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.
NASA Technical Reports Server (NTRS)
Ramakrishnan, R.; Wieting, Allan R.; Thornton, Earl A.
1990-01-01
An adaptive mesh refinement procedure that uses nodeless variables and quadratic interpolation functions is presented for analyzing transient thermal problems. A temperature based finite element scheme with Crank-Nicolson time marching is used to obtain the thermal solution. The strategies used for mesh adaption, computing refinement indicators, and time marching are described. Examples in one and two dimensions are presented and comparisons are made with exact solutions. The effectiveness of this procedure for transient thermal analysis is reflected in good solution accuracy, reduction in number of elements used, and computational efficiency.
3D Compressible Melt Transport with Mesh Adaptivity
NASA Astrophysics Data System (ADS)
Dannberg, J.; Heister, T.
2015-12-01
Melt generation and migration have been the subject of numerous investigations. However, their typical time and length scales are vastly different from mantle convection, and the material properties are highly spatially variable and make the problem strongly non-linear. These challenges make it difficult to study these processes in a unified framework and in three dimensions. We present our extension of the mantle convection code ASPECT that allows for solving additional equations describing the behavior of melt percolating through and interacting with a viscously deforming host rock. One particular advantage is ASPECT's adaptive mesh refinement, as the resolution can be increased in areas where melt is present and viscosity gradients are steep, whereas a lower resolution is sufficient in regions without melt. Our approach includes both melt migration and melt generation, allowing for different melting parametrizations. In contrast to previous formulations, we consider the individual compressibilities of the solid and fluid phases in addition to compaction flow. This ensures self-consistency when linking melt generation to processes in the deeper mantle, where the compressibility of the solid phase becomes more important. We evaluate the functionality and potential of this method using a series of benchmarks and applications, including solitary waves, magmatic shear bands and melt generation and transport in a rising mantle plume. We compare results of the compressible and incompressible formulation and find melt volume differences of up to 15%. Moreover, we demonstrate that adaptive mesh refinement has the potential to reduce the runtime of a computation by more than one order of magnitude. Our model of magma dynamics provides a framework for investigating links between the deep mantle and melt generation and migration. This approach could prove particularly useful applied to modeling the generation of komatiites or other melts originating in greater depths.
Local Mesh Refinement in the Space-Time CE/SE Method
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Wu, Yuhui; Wang, Xiao-Yen; Yang, Vigor
2000-01-01
A local mesh refinement procedure for the CE/SE method which does not use an iterative procedure in the treatments of grid-to-grid communications is described. It is shown that a refinement ratio higher than ten can be applied successfully across a single coarse grid/fine grid interface.
A multiblock/multilevel mesh refinement procedure for CFD computations
NASA Astrophysics Data System (ADS)
Teigland, Rune; Eliassen, Inge K.
2001-07-01
A multiblock/multilevel algorithm with local refinement for general two- and three-dimensional fluid flow is presented. The patch-based local refinement procedure is presented in detail and algorithmic implementations are also described. The multiblock implementation is essentially block-unstructured, i.e. each block has its own local curvilinear co-ordinate system. Refined grid patches can be placed anywhere in the computational domain and can extend across block boundaries. To simplify the implementation, while still maintaining sufficient generality, the refinement is restricted to successively halving the grid size within a selected patch. The multiblock approach is implemented within the framework of the well-known SIMPLE solution strategy. Computational experiments showing the effect of using the multilevel solution procedure are presented for a sample elliptic problem and a few benchmark problems of computational fluid dynamics (CFD).
A unified framework for mesh refinement in random and physical space
NASA Astrophysics Data System (ADS)
Li, Jing; Stinis, Panos
2016-10-01
In recent work we have shown how an accurate reduced model can be utilized to perform mesh refinement in random space. That work relied on the explicit knowledge of an accurate reduced model which is used to monitor the transfer of activity from the large to the small scales of the solution. Since this is not always available, we present in the current work a framework which shares the merits and basic idea of the previous approach but does not require an explicit knowledge of a reduced model. Moreover, the current framework can be applied for refinement in both random and physical space. In this manuscript we focus on the application to random space mesh refinement. We study examples of increasing difficulty (from ordinary to partial differential equations) which demonstrate the efficiency and versatility of our approach. We also provide some results from the application of the new framework to physical space mesh refinement.
Parallel, Gradient-Based Anisotropic Mesh Adaptation for Re-entry Vehicle Configurations
NASA Technical Reports Server (NTRS)
Bibb, Karen L.; Gnoffo, Peter A.; Park, Michael A.; Jones, William T.
2006-01-01
Two gradient-based adaptation methodologies have been implemented into the Fun3d refine GridEx infrastructure. A spring-analogy adaptation which provides for nodal movement to cluster mesh nodes in the vicinity of strong shocks has been extended for general use within Fun3d, and is demonstrated for a 70° sphere-cone at Mach 2. A more general feature-based adaptation metric has been developed for use with the adaptation mechanics available in Fun3d, and is applicable to any unstructured, tetrahedral, flow solver. The basic functionality of general adaptation is explored through a case of flow over the forebody of a 70° sphere-cone at Mach 6. A practical application of Mach 10 flow over an Apollo capsule, computed with the Felisa flow solver, is given to compare the adaptive mesh refinement with uniform mesh refinement. The examples of the paper demonstrate that the gradient-based adaptation capability as implemented can give an improvement in solution quality.
NASA Astrophysics Data System (ADS)
Areias, P.; Rabczuk, T.; de Sá, J. César
2016-09-01
We propose an alternative crack propagation algorithm which effectively circumvents the variable transfer procedure adopted with classical mesh adaptation algorithms. The present alternative consists of two stages: a mesh-creation stage where a local damage model is employed with the objective of defining a crack-conforming mesh, and a subsequent analysis stage with a localization limiter in the form of a modified screened Poisson equation, which dispenses with crack path calculations. In the second stage, the crack naturally occurs within the refined region. A staggered scheme for the standard equilibrium and screened Poisson equations is used in this second stage. Element subdivision is based on edge split operations using a constitutive quantity (damage). To assess the robustness and accuracy of this algorithm, we use five quasi-brittle benchmarks, all successfully solved.
Ibrahim, Ahmad M; Wilson, P.; Sawan, M.; Mosher, Scott W; Peplow, Douglas E.; Grove, Robert E
2013-01-01
Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class super computer.
Anisotropic Mesh Adaptivity for Turbulent Flows with Boundary Layers
NASA Astrophysics Data System (ADS)
Chitale, Kedar C.
Turbulent flows are found everywhere in nature and are studied, analyzed and simulated using various experimental and numerical tools. For computational analysis, a variety of turbulence models are available and the accuracy of these models in capturing the phenomenon depends largely on the mesh spacings, especially near the walls, in the boundary layer region. Special semi-structured meshes called "mesh boundary layers" are widely used in the CFD community in simulations of turbulent flows, because of their graded and orthogonal layered structure. They provide an efficient way to achieve very fine and highly anisotropic mesh spacings without introducing poorly shaped elements. Since usually the required mesh spacings to accurately resolve the flow are not known a priori to the simulations, an adaptive approach based on a posteriori error indicators is used to achieve an appropriate mesh. In this study, we apply the adaptive meshing techniques to turbulent flows with a focus on boundary layers. We construct a framework to calculate the critical wall normal mesh spacings inside the boundary layers based on the flow physics and the knowledge of the turbulence model. This approach is combined with numerical error indicators to adapt the entire flow region. We illustrate the effectiveness of this hybrid approach by applying it to three aerodynamic flows and studying their superior performance in capturing the flow structures in detail. We also demonstrate the capabilities of the current developments in parallel boundary layer mesh adaptation by applying them to two internal flow problems. We also study the application of adaptive boundary layer meshes to complex geometries like multi element wings. We highlight the advantage of using such techniques for superior wake and tip region resolution by showcasing flow results. We also outline the future direction for the adaptive meshing techniques to be useful to the large scale flow computations.
Procedure for Adapting Direct Simulation Monte Carlo Meshes
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.
1992-01-01
A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.
Deiterding, Ralf
2011-01-01
Numerical simulation can be key to the understanding of the multi-dimensional nature of transient detonation waves. However, the accurate approximation of realistic detonations is demanding as a wide range of scales needs to be resolved. This paper describes a successful solution strategy that utilizes logically rectangular dynamically adaptive meshes. The hydrodynamic transport scheme and the treatment of the non-equilibrium reaction terms are sketched. A ghost fluid approach is integrated into the method to allow for embedded geometrically complex boundaries. Large-scale parallel simulations of unstable detonation structures of Chapman-Jouguet detonations in low-pressure hydrogen-oxygen-argon mixtures demonstrate the efficiency of the described techniques in practice. In particular, computations of regular cellular structures in two and three space dimensions and their development under transient conditions, i.e. under diffraction and for propagation through bends are presented. Some of the observed patterns are classified by shock polar analysis and a diagram of the transition boundaries between possible Mach reflection structures is constructed.
Adaptive particle refinement and derefinement applied to the smoothed particle hydrodynamics method
NASA Astrophysics Data System (ADS)
Barcarolo, D. A.; Le Touzé, D.; Oger, G.; de Vuyst, F.
2014-09-01
SPH simulations are usually performed with a uniform particle distribution. New techniques have been recently proposed to enable the use of spatially varying particle distributions, which encouraged the development of automatic adaptivity and particle refinement/derefinement algorithms. All these efforts resulted in very interesting and promising procedures leading to more efficient and faster SPH simulations. In this article, a family of particle refinement techniques is reviewed and a new derefinement technique is proposed and validated through several test cases involving both free-surface and viscous flows. Besides, this new procedure allows higher resolutions in the regions requiring increased accuracy. Moreover, several levels of refinement can be used with this new technique, as often encountered in adaptive mesh refinement techniques in mesh-based methods.
Numerical modeling of seismic waves using frequency-adaptive meshes
NASA Astrophysics Data System (ADS)
Hu, Jinyin; Jia, Xiaofeng
2016-08-01
An improved modeling algorithm using frequency-adaptive meshes is applied to meet the computational requirements of all seismic frequency components. It automatically adopts coarse meshes for low-frequency computations and fine meshes for high-frequency computations. The grid intervals are adaptively calculated based on a smooth inversely proportional function of grid size with respect to the frequency. In regular grid-based methods, the uniform mesh or non-uniform mesh is used for frequency-domain wave propagators and it is fixed for all frequencies. A too coarse mesh results in inaccurate high-frequency wavefields and unacceptable numerical dispersion; on the other hand, an overly fine mesh may cause storage and computational overburdens as well as invalid propagation angles of low-frequency wavefields. Experiments on the Padé generalized screen propagator indicate that the adaptive mesh effectively overcomes these drawbacks of regular fixed-mesh methods, thus accurately computing the wavefield and its propagation angle in a wide frequency band. Several synthetic examples also demonstrate its feasibility for seismic modeling and migration.
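The core idea, grid interval inversely proportional to frequency, can be sketched as follows. This is not the paper's smooth function: the minimum velocity `v_min`, points-per-wavelength target `ppw`, and the hard clamp bounds are all illustrative assumptions (the paper uses a smooth transition rather than a clamp).

```python
def adaptive_grid_interval(freq_hz, v_min=1500.0, ppw=4.0,
                           h_min=5.0, h_max=50.0):
    """Grid interval inversely proportional to frequency:
    h = v_min / (ppw * f) keeps roughly ppw samples per shortest
    wavelength at every frequency, clamped to [h_min, h_max] meters.
    All parameter values are illustrative."""
    h = v_min / (ppw * freq_hz)
    return max(h_min, min(h_max, h))

# Coarse meshes for low-frequency components, fine meshes for high ones.
intervals = {f: adaptive_grid_interval(f) for f in (5.0, 20.0, 80.0)}
```

Low frequencies hit the coarse clamp (saving storage), high frequencies hit the fine clamp (controlling dispersion), and mid-band frequencies follow the inverse law.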
Adaptive-mesh algorithms for computational fluid dynamics
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Roe, Philip L.; Quirk, James
1993-01-01
The basic goal of adaptive-mesh algorithms is to distribute computational resources wisely by increasing the resolution of 'important' regions of the flow and decreasing the resolution of regions that are less important. While this goal is one that is worthwhile, implementing schemes that have this degree of sophistication remains more of an art than a science. In this paper, the basic pieces of adaptive-mesh algorithms are described and some of the possible ways to implement them are discussed and compared. These basic pieces are the data structure to be used, the generation of an initial mesh, the criterion to be used to adapt the mesh to the solution, and the flow-solver algorithm on the resulting mesh. Each of these is discussed, with particular emphasis on methods suitable for the computation of compressible flows.
NASA Technical Reports Server (NTRS)
Brislawn, Kristi D.; Brown, David L.; Chesshire, Geoffrey S.; Saltzman, Jeffrey S.
1995-01-01
Adaptive mesh refinement (AMR) in conjunction with higher-order upwind finite-difference methods has been used effectively on a variety of problems in two and three dimensions. In this paper we introduce an approach for resolving problems that involve complex geometries in which resolution of boundary geometry is important. The complex geometry is represented by using the method of overlapping grids, while local resolution is obtained by refining each component grid with the AMR algorithm, appropriately generalized for this situation. The CMPGRD algorithm introduced by Chesshire and Henshaw is used to automatically generate the overlapping grid structure for the underlying mesh.
Mesh generation/refinement using fractal concepts and iterated function systems
NASA Technical Reports Server (NTRS)
Bova, S. W.; Carey, G. F.
1992-01-01
A novel method of mesh generation is proposed which is based on the use of fractal concepts to derive contractive, affine transformations. The transformations are constructed in such a manner that the attractors of the resulting maps are a union of the points, lines and surfaces in the domain. In particular, the mesh nodes may be generated recursively as a sequence of points which are obtained by applying the transformations to a coarse background mesh constructed from the given boundary data. A Delaunay triangulation or similar edge connection approach can then be performed on the resulting set of nodes in order to generate the mesh. Local refinement of an existing mesh can also be performed using the procedure. The method is easily extended to three dimensions, in which case the Delaunay triangulation is replaced by an analogous 3D tesselation.
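The recursive node-generation step described above can be sketched directly: apply a set of contractive affine maps to a coarse seed set and accumulate the images. The maps below are a deliberately simple assumed example (two maps whose attractor is the unit segment), not the boundary-derived transformations the paper constructs.

```python
def ifs_points(maps, seeds, depth):
    """Generate mesh nodes by recursively applying contractive affine
    maps (x, y) -> (a*x + b*y + e, c*x + d*y + f) to a coarse seed set;
    the accumulated points fill out the attractor of the iterated
    function system, and a Delaunay-type step would then connect them."""
    pts = set(seeds)
    frontier = list(seeds)
    for _ in range(depth):
        nxt = []
        for (x, y) in frontier:
            for (a, b, c, d, e, f) in maps:
                p = (a * x + b * y + e, c * x + d * y + f)
                if p not in pts:
                    pts.add(p)
                    nxt.append(p)
        frontier = nxt
    return pts

# Two maps contracting the unit segment onto its halves: the attractor is
# the segment itself, so the generated nodes form a uniformly refined mesh.
maps = [(0.5, 0.0, 0.0, 0.5, 0.0, 0.0),
        (0.5, 0.0, 0.0, 0.5, 0.5, 0.0)]
nodes = ifs_points(maps, [(0.0, 0.0), (1.0, 0.0)], depth=3)
```

Three levels of recursion turn the two endpoints into nine equally spaced nodes at multiples of 1/8; non-uniform maps would instead concentrate nodes toward the attractor's fine-scale features.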
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.
2000-01-01
Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
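Space-filling-curve partitioning, as used by the solver above, is commonly implemented by sorting cells along a Z-order (Morton) curve and cutting the ordered list into contiguous chunks. The sketch below shows that idea in its simplest form; the paper's actual partitioner is more elaborate (it runs on the fly and handles refined hierarchies), and equal-count chunking is an assumption standing in for proper load balancing.

```python
def morton_key(i, j, bits=16):
    """Interleave the bits of integer cell indices (i, j) into a
    Z-order (Morton) key; sorting by this key gives a
    locality-preserving 1-D ordering of the 2-D cells."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (2 * b)
        key |= ((j >> b) & 1) << (2 * b + 1)
    return key

def partition(cells, nproc):
    """Cut the Morton-ordered cell list into nproc contiguous chunks:
    a minimal sketch of domain decomposition along a space-filling
    curve. Cells that are close on the curve tend to be close in space,
    so each chunk is a reasonably compact subdomain."""
    ordered = sorted(cells, key=lambda c: morton_key(*c))
    n = len(ordered)
    return [ordered[p * n // nproc:(p + 1) * n // nproc]
            for p in range(nproc)]

cells = [(i, j) for i in range(4) for j in range(4)]
parts = partition(cells, 4)  # parts[0] is the lower-left 2x2 block
```

Because the curve ordering is global and cheap to compute, repartitioning after each adaptation step amounts to a parallel sort, which is what makes on-the-fly decomposition practical.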
Cosmos++: Relativistic Magnetohydrodynamics on Unstructured Grids with Local Adaptive Refinement
Anninos, P; Fragile, P C; Salmonson, J D
2005-05-06
A new code and methodology are introduced for solving the fully general relativistic magnetohydrodynamic (GRMHD) equations using time-explicit, finite-volume discretization. The code has options for solving the GRMHD equations using traditional artificial-viscosity (AV) or non-oscillatory central difference (NOCD) methods, or a new extended AV (eAV) scheme using artificial-viscosity together with a dual energy-flux-conserving formulation. The dual energy approach allows for accurate modeling of highly relativistic flows at boost factors well beyond what has been achieved to date by standard artificial viscosity methods. It provides the benefit of Godunov methods in capturing high Lorentz boosted flows but without complicated Riemann solvers, and the advantages of traditional artificial viscosity methods in their speed and flexibility. Additionally, the GRMHD equations are solved on an unstructured grid that supports local adaptive mesh refinement using a fully threaded oct-tree (in three dimensions) network to traverse the grid hierarchy across levels and immediate neighbors. A number of tests are presented to demonstrate robustness of the numerical algorithms and adaptive mesh framework over a wide spectrum of problems, boosts, and astrophysical applications, including relativistic shock tubes, shock collisions, magnetosonic shocks, Alfven wave propagation, blast waves, magnetized Bondi flow, and the magneto-rotational instability in Kerr black hole spacetimes.
An Efficient Dynamically Adaptive Mesh for Potentially Singular Solutions
NASA Astrophysics Data System (ADS)
Ceniceros, Hector D.; Hou, Thomas Y.
2001-09-01
We develop an efficient dynamically adaptive mesh generator for time-dependent problems in two or more dimensions. The mesh generator is motivated by the variational approach and is based on solving a new set of nonlinear elliptic PDEs for the mesh map. When coupled to a physical problem, the mesh map evolves with the underlying solution and maintains high adaptivity as the solution develops complicated structures and even singular behavior. The overall mesh strategy is simple to implement, avoids interpolation, and can be easily incorporated into a broad range of applications. The efficacy of the mesh is first demonstrated by two examples of blowing-up solutions to the 2-D semilinear heat equation. These examples show that the mesh can follow with high adaptivity a finite-time singularity process. The focus of applications presented here is however the baroclinic generation of vorticity in a strongly layered 2-D Boussinesq fluid, a challenging problem. The moving mesh follows effectively the flow resolving both its global features and the almost singular shear layers developed dynamically. The numerical results show the fast collapse to small scales and an exponential vorticity growth.
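In one dimension, the variational moving-mesh idea reduces to equidistribution: choose mesh points so that each cell carries an equal share of a monitor function that is large where the solution varies rapidly. The de Boor-style sketch below is a 1-D caricature only; the paper instead solves nonlinear elliptic PDEs for a 2-D mesh map, and the Gaussian monitor here is an assumed stand-in for a solution-derived one.

```python
import math

def equidistribute(w, a, b, n, samples=1000):
    """Place n+1 mesh points in [a, b] so each cell carries an equal
    share of the integral of the monitor w(x) > 0; cells shrink where
    w is large. Uses a midpoint-rule cumulative integral plus linear
    interpolation to locate the equidistributing points."""
    h = (b - a) / samples
    cum = [0.0]  # cumulative integral of w on the sampling grid
    for k in range(samples):
        cum.append(cum[-1] + w(a + (k + 0.5) * h) * h)
    total = cum[-1]
    mesh, k = [a], 0
    for i in range(1, n):
        target = total * i / n
        while cum[k + 1] < target:
            k += 1
        frac = (target - cum[k]) / (cum[k + 1] - cum[k])  # interpolate
        mesh.append(a + (k + frac) * h)
    mesh.append(b)
    return mesh

# A monitor concentrated near x = 0.5 pulls mesh points toward that region,
# mimicking how the adaptive map tracks a forming near-singularity.
mesh = equidistribute(lambda x: 1.0 + 50.0 * math.exp(-200.0 * (x - 0.5) ** 2),
                      0.0, 1.0, 10)
```

Because the mesh is defined through the monitor rather than by interpolation of an old mesh, the adapted points can follow a steepening feature from step to step, the property the abstract highlights for its blow-up examples.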
Turbulent flow calculations using unstructured and adaptive meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1990-01-01
A method of efficiently computing turbulent compressible flow over complex two dimensional configurations is presented. The method makes use of fully unstructured meshes throughout the entire flow-field, thus enabling the treatment of arbitrarily complex geometries and the use of adaptive meshing techniques throughout both viscous and inviscid regions of the flow-field. Mesh generation is based on a locally mapped Delaunay technique in order to generate unstructured meshes with highly-stretched elements in the viscous regions. The flow equations are discretized using a finite element Navier-Stokes solver, and rapid convergence to steady-state is achieved using an unstructured multigrid algorithm. Turbulence modeling is performed using an inexpensive algebraic model, implemented for use on unstructured and adaptive meshes. Compressible turbulent flow solutions about multiple-element airfoil geometries are computed and compared with experimental data.
Adaptive Mesh Enrichment for the Poisson-Boltzmann Equation
NASA Astrophysics Data System (ADS)
Dyshlovenko, Pavel
2001-09-01
An adaptive mesh enrichment procedure for a finite-element solution of the two-dimensional Poisson-Boltzmann equation is described. The mesh adaptation is performed by subdividing the cells using information obtained in the previous step of the solution and next rearranging the mesh to be a Delaunay triangulation. The procedure allows the gradual improvement of the quality of the solution and adjustment of the geometry of the problem. The performance of the proposed approach is illustrated by applying it to the problem of two identical colloidal particles in a symmetric electrolyte.
Adaptive refinement tools for tetrahedral unstructured grids
NASA Technical Reports Server (NTRS)
Pao, S. Paul (Inventor); Abdol-Hamid, Khaled S. (Inventor)
2011-01-01
An exemplary embodiment providing one or more improvements includes software which is robust, efficient, and has a very fast run time for user directed grid enrichment and flow solution adaptive grid refinement. All user selectable options (e.g., the choice of functions, the choice of thresholds, etc.), other than a pre-marked cell list, can be entered on the command line. The ease of application is an asset for flow physics research and preliminary design CFD analysis where fast grid modification is often needed to deal with unanticipated development of flow details.
Block-structured adaptive meshes and reduced grids for atmospheric general circulation models.
Jablonowski, Christiane; Oehmke, Robert C; Stout, Quentin F
2009-11-28
Adaptive mesh refinement techniques offer a flexible framework for future variable-resolution climate and weather models since they can focus their computational mesh on certain geographical areas or atmospheric events. Adaptive meshes can also be used to coarsen a latitude-longitude grid in polar regions. This allows for the so-called reduced grid setups. A spherical, block-structured adaptive grid technique is applied to the Lin-Rood finite-volume dynamical core for weather and climate research. This hydrostatic dynamics package is based on a conservative and monotonic finite-volume discretization in flux form with vertically floating Lagrangian layers. The adaptive dynamical core is built upon a flexible latitude-longitude computational grid and tested in two- and three-dimensional model configurations. The discussion is focused on static mesh adaptations and reduced grids. The two-dimensional shallow water setup serves as an ideal testbed and allows the use of shallow water test cases like the advection of a cosine bell, moving vortices, a steady-state flow, the Rossby-Haurwitz wave or cross-polar flows. It is shown that reduced grid configurations are viable candidates for pure advection applications but should be used moderately in nonlinear simulations. In addition, static grid adaptations can be successfully used to resolve three-dimensional baroclinic waves in the storm-track region.
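The reduced-grid idea, coarsening the longitude direction toward the poles so physical cell widths stay roughly uniform, can be sketched with a few lines. The power-of-two rounding below is one convention that keeps clean 2:1 coarsening ratios; both it and the parameter values are illustrative assumptions, not the Lin-Rood core's actual configuration.

```python
import math

def reduced_grid_nlon(lat_deg, nlon_equator=64, nlon_min=4):
    """Number of longitude cells in the latitude band at lat_deg so the
    physical cell width stays close to its equatorial value: scale the
    equatorial count by cos(lat), floor at nlon_min near the pole, and
    round down to a power of two for clean 2:1 block coarsening."""
    target = max(nlon_min, nlon_equator * math.cos(math.radians(lat_deg)))
    return 2 ** int(math.log2(target))

# 64 cells at the equator, 32 at 60 degrees, only 4 near the pole.
counts = {lat: reduced_grid_nlon(lat) for lat in (0, 60, 85)}
```

Halving the longitude count in successive bands also relaxes the CFL constraint imposed by the converging meridians, which is the practical motivation for reduced grids in latitude-longitude dynamical cores.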
Pillowing doublets: Refining a mesh to ensure that faces share at most one edge
Mitchell, S.A.; Tautges, T.J.
1995-11-01
Occasionally one may be confronted by a hexahedral or quadrilateral mesh containing doublets, two faces sharing two edges. In this case, no amount of smoothing will produce a mesh with agreeable element quality: in the planar case, one of these two faces will always have an angle of at least 180 degrees between the two edges. The authors describe a robust scheme for refining a hexahedral or quadrilateral mesh to separate such faces, so that any two faces share at most one edge. Note that this also ensures that two hexahedra share at most one face in the three dimensional case. The authors have implemented this algorithm and incorporated it into the CUBIT mesh generation environment developed at Sandia National Laboratories.
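Detecting the doublets that pillowing must separate is a simple edge-incidence scan: two faces form a doublet when they appear together in the face sets of two or more edges. The sketch below implements that detection for a quadrilateral mesh; it is a hypothetical helper, not code from the CUBIT implementation, and it stops short of the pillowing repair itself.

```python
from collections import defaultdict
from itertools import combinations

def find_doublets(quads):
    """Return pairs of quad faces that share two or more edges
    ('doublets', which no smoothing can repair). Each quad is a tuple
    of 4 vertex ids in cyclic order."""
    edge_faces = defaultdict(set)
    for f, quad in enumerate(quads):
        for k in range(4):
            e = tuple(sorted((quad[k], quad[(k + 1) % 4])))
            edge_faces[e].add(f)
    shared = defaultdict(int)  # face pair -> number of shared edges
    for faces in edge_faces.values():
        for pair in combinations(sorted(faces), 2):
            shared[pair] += 1
    return [pair for pair, n in shared.items() if n >= 2]

# Faces 0 and 1 share edges (1,2) and (2,3), so they form a doublet.
doublet_mesh = [(0, 1, 2, 3), (1, 4, 3, 2)]
```

Once located, each doublet would be broken by inserting a layer ("pillow") of new elements between the two offending faces so that every face pair shares at most one edge.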
Adaptive upscaling with the dual mesh method
Guerillot, D.; Verdiere, S.
1997-08-01
The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance the a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous medium and to an actual field case in South America.
Lee, Pilhwa; Griffith, Boyce E; Peskin, Charles S
2010-07-01
We describe an immersed boundary method for problems of fluid-solute-structure interaction. The numerical scheme employs linearly implicit timestepping, allowing for the stable use of timesteps that are substantially larger than those permitted by an explicit method, and local mesh refinement, making it feasible to resolve the steep gradients associated with the space charge layers as well as the chemical potential, which is used in our formulation to control the permeability of the membrane to the (possibly charged) solute. Low Reynolds number fluid dynamics are described by the time-dependent incompressible Stokes equations, which are solved by a cell-centered approximate projection method. The dynamics of the chemical species are governed by the advection-electrodiffusion equations, and our semi-implicit treatment of these equations results in a linear system which we solve by GMRES preconditioned via a fast adaptive composite-grid (FAC) solver. Numerical examples demonstrate the capabilities of this methodology, as well as its convergence properties.
Dimensional reduction as a tool for mesh refinement and tracking singularities of PDEs
Stinis, Panagiotis
2007-06-10
We present a collection of algorithms which utilize dimensional reduction to perform mesh refinement and study possibly singular solutions of time-dependent partial differential equations. The algorithms are inspired by constructions used in statistical mechanics to evaluate the properties of a system near a critical point. The first algorithm allows the accurate determination of the time of occurrence of a possible singularity. The second algorithm is an adaptive mesh refinement scheme which can be used to approach efficiently the possible singularity. Finally, the third algorithm uses the second algorithm until the available resolution is exhausted (as we approach the possible singularity) and then switches to a dimensionally reduced model which, when accurate, can follow faithfully the solution beyond the time of occurrence of the purported singularity. An accurate dimensionally reduced model should dissipate energy at the right rate. We construct two variants of each algorithm. The first variant assumes that we have actual knowledge of the reduced model. The second variant assumes that we know the form of the reduced model, i.e., the terms appearing in the reduced model, but not necessarily their coefficients. In this case, we also provide a way of determining the coefficients. We present numerical results for the Burgers equation with zero and nonzero viscosity to illustrate the use of the algorithms.
Effects of adaptive refinement on the inverse EEG solution
NASA Astrophysics Data System (ADS)
Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.
1995-10-01
One of the fundamental problems in electroencephalography can be characterized by an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
Adaptive mesh generation for viscous flows using Delaunay triangulation
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1990-01-01
A method for generating an unstructured triangular mesh in two dimensions, suitable for computing high Reynolds number flows over arbitrary configurations is presented. The method is based on a Delaunay triangulation, which is performed in a locally stretched space, in order to obtain very high aspect ratio triangles in the boundary layer and the wake regions. It is shown how the method can be coupled with an unstructured Navier-Stokes solver to produce a solution adaptive mesh generation procedure for viscous flows.
Adaptive mesh generation for viscous flows using Delaunay triangulation
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1988-01-01
A method for generating an unstructured triangular mesh in two dimensions, suitable for computing high Reynolds number flows over arbitrary configurations is presented. The method is based on a Delaunay triangulation, which is performed in a locally stretched space, in order to obtain very high aspect ratio triangles in the boundary layer and the wake regions. It is shown how the method can be coupled with an unstructured Navier-Stokes solver to produce a solution adaptive mesh generation procedure for viscous flows.
Tsunami modelling with adaptively refined finite volume methods
LeVeque, R.J.; George, D.L.; Berger, M.J.
2011-01-01
Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
Failure of Anisotropic Unstructured Mesh Adaption Based on Multidimensional Residual Minimization
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
2003-01-01
An automated anisotropic unstructured mesh adaptation strategy is proposed, implemented, and assessed for the discretization of viscous flows. The adaption criterion is based upon the minimization of the residual fluctuations of a multidimensional upwind viscous flow solver. For scalar advection, this adaption strategy has been shown to use fewer grid points than gradient-based adaption, naturally aligning mesh edges with discontinuities and characteristic lines. The adaption utilizes a compact stencil and is local in scope, with four fundamental operations: point insertion, point deletion, edge swapping, and nodal displacement. Evaluation of the solution-adaptive strategy is performed for a two-dimensional blunt body laminar wind tunnel case at Mach 10. The results demonstrate that the strategy suffers from a lack of robustness, particularly with regard to alignment of the bow shock in the vicinity of the stagnation streamline. In general, constraining the adaption to such a degree as to maintain robustness results in negligible improvement to the solution. Because the present method fails to consistently or significantly improve the flow solution, it is rejected in favor of simple uniform mesh refinement.
Automatic mesh adaptivity for CADIS and FW-CADIS neutronics modeling of difficult shielding problems
Ibrahim, A. M.; Peplow, D. E.; Mosher, S. W.; Wagner, J. C.; Evans, T. M.; Wilson, P. P.; Sawan, M. E.
2013-07-01
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macro-material approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm de-couples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, obviating the need for a world-class super computer. (authors)
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment
NASA Technical Reports Server (NTRS)
Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott
1995-01-01
The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaption of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Imbedded grid singular points are permitted, but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of and alignment with the solution. A mesh quality assessment criterion is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criterion is used to evaluate the grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criterion shows promise as a means of evaluating resolution.
Implementations of mesh refinement schemes for particle-in-cell plasma simulations
Vay, J.-L.; Colella, P.; Friedman, A.; Grote, D.P.; McCorquodale, P.; Serafini, D.B.
2003-10-20
Plasma simulations are often rendered challenging by the disparity of scales in time and in space which must be resolved. When these disparities are in distinctive zones of the simulation region, a method which has proven to be effective in other areas (e.g. fluid dynamics simulations) is the mesh refinement technique. We briefly discuss the challenges posed by coupling this technique with plasma Particle-In-Cell simulations and present two implementations in more detail, with examples.
Kinetic solvers with adaptive mesh in phase space.
Arslanbekov, Robert R; Kolobov, Vladimir I; Frolova, Anna A
2013-12-01
An adaptive mesh in phase space (AMPS) methodology has been developed for solving multidimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a "tree of trees" (ToT) data structure. The r mesh is automatically generated around embedded boundaries, and is dynamically adapted to local solution properties. The v mesh is created on-the-fly in each r cell. Mappings between neighboring v-space trees are implemented for the advection operator in r space. We have developed algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations were used to calculate the discrete Boltzmann collision integral with a dynamically adaptive v mesh: importance sampling, multipoint projection, and variance reduction methods. We have developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic collisions of hot light particles in a Lorentz gas. Our AMPS technique has been demonstrated for simulations of hypersonic rarefied gas flows, ion and electron kinetics in weakly ionized plasma, radiation and light-particle transport through thin films, and electron streaming in semiconductors. We have shown that AMPS allows minimizing the number of cells in phase space to reduce the computational cost and memory usage for solving challenging kinetic problems.
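The tree-based spatial refinement described above can be sketched in a few lines. The following is a minimal quadtree in Python (all names are hypothetical; this is not the AMPS code): a square cell splits into four children wherever a user-supplied indicator flags it, down to a maximum depth.

```python
class Cell:
    """A square quadtree cell; children are created on refinement."""
    def __init__(self, x, y, size, depth):
        self.x, self.y, self.size, self.depth = x, y, size, depth
        self.children = []

    def refine(self, needs_refinement, max_depth):
        if self.depth < max_depth and needs_refinement(self):
            h = self.size / 2.0
            self.children = [Cell(self.x + dx, self.y + dy, h, self.depth + 1)
                             for dx in (0.0, h) for dy in (0.0, h)]
            for child in self.children:
                child.refine(needs_refinement, max_depth)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Adapt to a feature lying along the diagonal x = y of the unit square.
def near_diagonal(cell):
    cx, cy = cell.x + cell.size / 2.0, cell.y + cell.size / 2.0
    return abs(cx - cy) < cell.size

root = Cell(0.0, 0.0, 1.0, 0)
root.refine(near_diagonal, max_depth=4)
```

Only cells straddling the feature are subdivided, so the leaf count grows roughly linearly with depth instead of quadratically, which is the point of adapting the mesh.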
Adaptive anisotropic meshing for steady convection-dominated problems
Nguyen, Hoa; Gunzburger, Max; Ju, Lili; Burkardt, John
2009-01-01
Obtaining accurate solutions for convection–diffusion equations is challenging due to the presence of layers when convection dominates the diffusion. To solve this problem, we design an adaptive meshing algorithm which optimizes the alignment of anisotropic meshes with the numerical solution. Three main ingredients are used. First, the streamline upwind Petrov–Galerkin method is used to produce a stabilized solution. Second, an adapted metric tensor is computed from the approximate solution. Third, optimized anisotropic meshes are generated from the computed metric tensor by an anisotropic centroidal Voronoi tessellation algorithm. Our algorithm is tested on a variety of two-dimensional examples and the results show that the algorithm is robust in detecting layers and efficient in avoiding non-physical oscillations in the numerical approximation.
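The second ingredient, computing a metric tensor from the approximate solution, is commonly based on a Hessian estimate. A minimal sketch of that step (illustrative only; the paper's construction via anisotropic centroidal Voronoi tessellation is more involved): the metric M = R diag(|l1|, |l2|) Rᵀ prescribes short edges across a layer and long edges along it.

```python
import math

def metric_from_hessian(hxx, hxy, hyy, eps=1e-3):
    """Metric M = R diag(|l1|,|l2|) R^T from a symmetric 2x2 Hessian."""
    tr, det = hxx + hyy, hxx * hyy - hxy * hxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc      # eigenvalues, l1 >= l2
    theta = math.atan2(l1 - hxx, hxy)              # angle of l1's eigenvector
    c, s = math.cos(theta), math.sin(theta)
    a1, a2 = max(abs(l1), eps), max(abs(l2), eps)  # keep M positive definite
    return (a1 * c * c + a2 * s * s,               # Mxx
            (a1 - a2) * c * s,                     # Mxy
            a1 * s * s + a2 * c * c)               # Myy

# A boundary layer curving strongly in y only: u_yy >> u_xx.
mxx, mxy, myy = metric_from_hessian(0.01, 0.0, 100.0)
# Target edge lengths scale like 1/sqrt(lambda): long in x, short in y.
```

The eigenvalue clamp `eps` is a common safeguard so the metric stays positive definite where the solution is locally flat.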
Local time-space mesh refinement for simulation of elastic wave propagation in multi-scale media
NASA Astrophysics Data System (ADS)
Kostin, Victor; Lisitsa, Vadim; Reshetova, Galina; Tcheverda, Vladimir
2015-01-01
This paper presents an original approach to local time-space grid refinement for the numerical simulation of wave propagation in models with localized clusters of micro-heterogeneities. The main features of the algorithm are the application of temporal and spatial refinement on two different surfaces; the use of the embedded-stencil technique for the refinement of grid step with respect to time; the use of the Fast Fourier Transform (FFT)-based interpolation to couple variables for spatial mesh refinement. The latter makes it possible to perform filtration of high spatial frequencies, which provides stability in the proposed finite-difference schemes. In the present work, the technique is implemented for the finite-difference simulation of seismic wave propagation and the interaction of such waves with fluid-filled fractures and cavities of carbonate reservoirs. However, this approach is easy to adapt and/or combine with other numerical techniques, such as finite elements, discontinuous Galerkin method, or finite volumes used for approximation of various types of linear and nonlinear hyperbolic equations.
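The FFT-based coupling above can be illustrated with trigonometric interpolation: periodic samples on N coarse points are moved to M fine points by zero-padding the spectrum, which by construction introduces no spatial frequencies beyond those the coarse grid resolves. This is a sketch under the assumption of periodic data; the paper applies the idea to finite-difference wavefields.

```python
import cmath, math

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def spectral_interpolate(x, m):
    """Interpolate periodic samples x (len n, n even) onto m >= n points."""
    n = len(x)
    X = dft(x)
    Y = [0j] * m
    for k in range(n // 2):                  # non-negative frequencies
        Y[k] = X[k] * m / n
    for k in range(1, n // 2):               # negative frequencies
        Y[m - k] = X[n - k] * m / n
    # Split the Nyquist bin symmetrically so the result stays real.
    Y[n // 2] += X[n // 2] * m / (2 * n)
    Y[m - n // 2] += X[n // 2] * m / (2 * n)
    return [y.real for y in idft(Y)]

coarse = [math.cos(2.0 * math.pi * j / 8) for j in range(8)]
fine = spectral_interpolate(coarse, 16)      # values on the refined grid
```

Because the fine-grid spectrum contains nothing above the coarse Nyquist frequency, the transfer acts as the low-pass filter the abstract credits for the stability of the coupled schemes.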
Logically rectangular finite volume methods with adaptive refinement on the sphere.
Berger, Marsha J; Calhoun, Donna A; Helzel, Christiane; LeVeque, Randall J
2009-11-28
The logically rectangular finite volume grids for two-dimensional partial differential equations on a sphere and for three-dimensional problems in a spherical shell introduced recently have nearly uniform cell size, avoiding severe Courant number restrictions. We present recent results with adaptive mesh refinement using the GeoClaw software and demonstrate well-balanced methods that exactly maintain equilibrium solutions, such as shallow water equations for an ocean at rest over arbitrary bathymetry.
Adaptively Refined Euler and Navier-Stokes Solutions with a Cartesian-Cell Based Scheme
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian-cell based scheme with adaptive mesh refinement for solving the Euler and Navier-Stokes equations in two dimensions has been developed and tested. Grids about geometrically complicated bodies were generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells were created using polygon-clipping algorithms. The grid was stored in a binary-tree data structure which provided a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations were solved on the resulting grids using an upwind, finite-volume formulation. The inviscid fluxes were found in an upwinded manner using a linear reconstruction of the cell primitives, providing the input states to an approximate Riemann solver. The viscous fluxes were formed using a Green-Gauss type of reconstruction upon a co-volume surrounding the cell interface. Data at the vertices of this co-volume were found in a linearly K-exact manner, which ensured linear K-exactness of the gradients. Adaptively-refined solutions for the inviscid flow about a four-element airfoil (test case 3) were compared to theory. Laminar, adaptively-refined solutions were compared to accepted computational, experimental and theoretical results.
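The recursive-subdivision step above can be sketched as follows (hypothetical code: the actual scheme creates N-sided cut cells via polygon clipping, which is omitted here; a cell is merely flagged as "cut" when the corners of the cell straddle a circular body).

```python
import math

R = 0.6   # radius of the embedded circular body

def is_cut(x, y, h):
    """True when the circle x^2 + y^2 = R^2 separates the cell corners.
    (Corner-only test: a crossing that misses every corner goes undetected;
    the real scheme clips the geometry exactly.)"""
    inside = {math.hypot(cx, cy) < R
              for cx, cy in ((x, y), (x + h, y), (x, y + h), (x + h, y + h))}
    return len(inside) == 2

def subdivide(x, y, h, depth, max_depth, leaves):
    if depth < max_depth and is_cut(x, y, h):
        q = h / 2.0
        for dx in (0.0, q):
            for dy in (0.0, q):
                subdivide(x + dx, y + dy, q, depth + 1, max_depth, leaves)
    else:
        leaves.append((x, y, h))

leaves = []
subdivide(0.0, 0.0, 1.0, 0, 4, leaves)   # unit cell in the first quadrant
cut_cells = [c for c in leaves if is_cut(*c)]
```

Every cell still cut at the deepest level hugs the body surface, while smooth-flow regions keep the coarse cells, mirroring the automatic grid generation the abstract describes.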
Unstructured adaptive mesh computations of rotorcraft high-speed impulsive noise
NASA Technical Reports Server (NTRS)
Strawn, Roger; Garceau, Michael; Biswas, Rupak
1993-01-01
A new method is developed for modeling helicopter high-speed impulsive (HSI) noise. The aerodynamics and acoustics near the rotor blade tip are computed by solving the Euler equations on an unstructured grid. A stationary Kirchhoff surface integral is then used to propagate these acoustic signals to the far field. The near-field Euler solver uses a solution-adaptive grid scheme to improve the resolution of the acoustic signal. Grid points are locally added and/or deleted from the mesh at each adaptive step. An important part of this procedure is the choice of an appropriate error indicator. The error indicator is computed from the flow field solution and determines the regions for mesh coarsening and refinement. Computed results for HSI noise compare favorably with experimental data for three different hovering rotor cases.
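An error indicator of the kind mentioned above, reduced to its simplest form, can be sketched on a 1D profile (hypothetical jump-based indicator and thresholds; the paper's indicator is derived from the flow-field solution).

```python
def mark_cells(values, t_refine=0.5, t_coarsen=0.05):
    """Mark each cell for refinement, coarsening, or no change based on
    the largest jump to a neighboring cell value."""
    marks = []
    for i in range(len(values)):
        left = abs(values[i] - values[i - 1]) if i > 0 else 0.0
        right = abs(values[i + 1] - values[i]) if i < len(values) - 1 else 0.0
        eta = max(left, right)                 # jump-based indicator
        if eta > t_refine:
            marks.append("refine")
        elif eta < t_coarsen:
            marks.append("coarsen")
        else:
            marks.append("keep")
    return marks

# A step profile: only cells adjacent to the jump should be refined.
marks = mark_cells([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
```

The two thresholds implement the add/delete behavior the abstract describes: points are inserted where the indicator is large and removed where the solution is smooth.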
Vay, J.L.; Colella, P.; McCorquodale, P.; Van Straalen, B.; Friedman, A.; Grote, D.P.
2002-05-24
The numerical simulation of the driving beams in a heavy ion fusion power plant is a challenging task, and simulation of the power plant as a whole, or even of the driver, is not yet possible. Despite the rapid progress in computer power, past and anticipated, one must consider the use of the most advanced numerical techniques if the goal is to be reached expeditiously. One of the difficulties of these simulations resides in the disparity of scales, in time and in space, which must be resolved. When these disparities are in distinctive zones of the simulation region, a method which has proven to be effective in other areas (e.g., fluid dynamics simulations) is the mesh refinement technique. They discuss the challenges posed by the implementation of this technique into plasma simulations (due to the presence of particles and electromagnetic waves). They present the prospects for and projected benefits of its application to heavy ion fusion, in particular to the simulation of the ion source and the final beam propagation in the chamber. A collaboration project is under way at LBNL between the Applied Numerical Algorithms Group (ANAG) and the HIF group to couple the Adaptive Mesh Refinement (AMR) library CHOMBO developed by the ANAG group to the Particle-In-Cell accelerator code (WARP) developed by the HIF-VNL. They describe their progress and present their initial findings.
Yaqi Wang; Jean C. Ragusa
2011-10-01
Diffusion synthetic acceleration (DSA) schemes compatible with adaptive mesh refinement (AMR) grids are derived for the SN transport equations discretized using high-order discontinuous finite elements. These schemes are directly obtained from the discretized transport equations by assuming a linear dependence in angle of the angular flux along with an exact Fick's law and, therefore, are categorized as partially consistent. These schemes are akin to the symmetric interior penalty technique applied to elliptic problems and are all based on a second-order discontinuous finite element discretization of a diffusion equation (as opposed to a mixed or P1 formulation). Therefore, they only have the scalar flux as unknowns. A Fourier analysis has been carried out to determine the convergence properties of the three proposed DSA schemes for various cell optical thicknesses and aspect ratios. Out of the three DSA schemes derived, the modified interior penalty (MIP) scheme is stable and effective for realistic problems, even with distorted elements, but loses effectiveness for some highly heterogeneous configurations. The MIP scheme is also symmetric positive definite and can be solved efficiently with a preconditioned conjugate gradient method. Its implementation in an AMR SN transport code has been performed for both source iteration and GMRes-based transport solves, with polynomial orders up to 4. Numerical results are provided and show good agreement with the Fourier analysis results. Results on AMR grids demonstrate that the cost of DSA can be kept low on locally refined meshes.
Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes
NASA Astrophysics Data System (ADS)
Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.
2015-12-01
Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume - finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.
PLUM: Parallel Load Balancing for Adaptive Unstructured Meshes
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method called PLUM to dynamically balance the processor workloads with a global view. This paper presents the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. A data redistribution model is also presented that predicts the remapping cost on the SP2. This model is required to determine whether the gain from a balanced workload distribution offsets the cost of data movement. Results presented in this paper demonstrate that PLUM is an effective dynamic load balancing strategy which remains viable on a large number of processors.
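The cost/benefit test in the data redistribution model can be caricatured in a few lines (hypothetical cost model and constants, not the SP2 model from the paper): remap only when the compute time saved by a balanced load over the remaining steps exceeds the cost of moving the data.

```python
def should_remap(max_load, avg_load, steps_left, bytes_moved,
                 sec_per_unit_load=1.0, sec_per_byte=1e-8):
    """Decide whether repartitioning pays off under a linear cost model."""
    # Time wasted per step is set by the most loaded processor.
    gain = (max_load - avg_load) * sec_per_unit_load * steps_left
    cost = bytes_moved * sec_per_byte
    return gain > cost

# Badly imbalanced run with many steps left: remapping pays off.
# Nearly balanced run with one step left: not worth moving a gigabyte.
```

The real model in the paper is machine-specific (it predicts SP2 remapping cost), but the decision structure is the same: compare predicted gain against predicted movement cost.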
Ragusa, Jean C.
2015-01-01
In this paper, we propose a piece-wise linear discontinuous (PWLD) finite element discretization of the diffusion equation for arbitrary polygonal meshes. It is based on the standard diffusion form and uses the symmetric interior penalty technique, which yields a symmetric positive definite linear system matrix. A preconditioned conjugate gradient algorithm is employed to solve the linear system. Piece-wise linear approximations also allow a straightforward implementation of local mesh adaptation by allowing unrefined cells to be interpreted as polygons with an increased number of vertices. Several test cases, taken from the literature on the discretization of the radiation diffusion equation, are presented: random, sinusoidal, Shestakov, and Z meshes are used. The last numerical example demonstrates the application of the PWLD discretization to adaptive mesh refinement.
Shape-model-based adaptation of 3D deformable meshes for segmentation of medical images
NASA Astrophysics Data System (ADS)
Pekar, Vladimir; Kaus, Michael R.; Lorenz, Cristian; Lobregt, Steven; Truyen, Roel; Weese, Juergen
2001-07-01
Segmentation methods based on adaptation of deformable models have found numerous applications in medical image analysis. Many efforts have been made in the recent years to improve their robustness and reliability. In particular, increasingly more methods use a priori information about the shape of the anatomical structure to be segmented. This reduces the risk of the model being attracted to false features in the image and, as a consequence, makes the need of close initialization, which remains the principal limitation of elastically deformable models, less crucial for the segmentation quality. In this paper, we present a novel segmentation approach which uses a 3D anatomical statistical shape model to initialize the adaptation process of a deformable model represented by a triangular mesh. As the first step, the anatomical shape model is parametrically fitted to the structure of interest in the image. The result of this global adaptation is used to initialize the local mesh refinement based on an energy minimization. We applied our approach to segment spine vertebrae in CT datasets. The segmentation quality was quantitatively assessed for 6 vertebrae, from 2 datasets, by computing the mean and maximum distance between the adapted mesh and a manually segmented reference shape. The results of the study show that the presented method is a promising approach for segmentation of complex anatomical structures in medical images.
Anisotropic norm-oriented mesh adaptation for a Poisson problem
NASA Astrophysics Data System (ADS)
Brèthes, Gautier; Dervieux, Alain
2016-10-01
We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem under the form of the optimization, in the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, the minimum of a norm of the approximation error. The norm is prescribed by the user and the method allows addressing the case of multi-objective adaptation, such as, in aerodynamics, adapting the mesh for drag, lift and moment in one shot. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.
Configurational forces and variational mesh adaptation in solid dynamics
NASA Astrophysics Data System (ADS)
Zielonka, Matias G.
This thesis is concerned with the exploration and development of a variational finite element mesh adaption framework for non-linear solid dynamics and its conceptual links with the theory of dynamic configurational forces. The distinctive attribute of this methodology is that the underlying variational principle of the problem under study is used to supply both the discretized fields and the mesh on which the discretization is supported. To this end a mixed-multifield version of Hamilton's principle of stationary action and the Lagrange-d'Alembert principle is proposed, a fresh perspective on the theory of dynamic configurational forces is presented, and a unifying variational formulation that generalizes the framework to systems with general dissipative behavior is developed. A mixed finite element formulation with independent spatial interpolations for deformations and velocities and a mixed variational integrator with independent time interpolations for the resulting nodal parameters is constructed. This discretization is supported on a continuously deforming mesh that is not prescribed at the outset but computed as part of the solution. The resulting space-time discretization satisfies exact discrete configurational force balance and exhibits excellent long-term global energy stability behavior. The robustness of the mesh adaption framework is assessed and demonstrated with a set of examples and convergence tests.
Adaptive-mesh-based algorithm for fluorescence molecular tomography using an analytical solution.
Wang, Daifa; Song, Xiaolei; Bai, Jing
2007-07-23
Fluorescence molecular tomography (FMT) has become an important method for in-vivo imaging of small animals. It has been widely used for tumor genesis, cancer detection, metastasis, drug discovery, and gene therapy. In this study, an algorithm for FMT is proposed to obtain accurate and fast reconstruction by combining an adaptive mesh refinement technique and an analytical solution of the diffusion equation. Numerical studies have been performed on a parallel plate FMT system with matching fluid. The reconstructions obtained show that the algorithm is computationally efficient while maintaining image quality.
AN ADAPTIVE PARTICLE-MESH GRAVITY SOLVER FOR ENZO
Passy, Jean-Claude; Bryan, Greg L.
2014-11-01
We describe and implement an adaptive particle-mesh algorithm to solve the Poisson equation for grid-based hydrodynamics codes with nested grids. The algorithm is implemented and extensively tested within the astrophysical code Enzo against the multigrid solver available by default. We find that while both algorithms show similar accuracy for smooth mass distributions, the adaptive particle-mesh algorithm is more accurate for the case of point masses, and is generally less noisy. We also demonstrate that the two-body problem can be solved accurately in a configuration with nested grids. In addition, we discuss the effect of subcycling, and demonstrate that evolving all the levels with the same timestep yields even greater precision.
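The deposition step at the heart of any particle-mesh solver can be sketched in one dimension with cloud-in-cell weights (illustrative only; the solver above works on nested 3D grids): each particle's mass is shared between its two nearest grid points with linear weights, so total mass is conserved exactly.

```python
import math

def deposit_cic(positions, masses, n_cells, box=1.0):
    """Linear (cloud-in-cell) mass deposition onto a periodic 1D grid."""
    rho = [0.0] * n_cells
    dx = box / n_cells
    for x, m in zip(positions, masses):
        s = x / dx - 0.5              # position in units of cell centers
        i = int(math.floor(s))
        w = s - i                     # fraction given to the right neighbor
        rho[i % n_cells] += m * (1.0 - w) / dx       # periodic wrap
        rho[(i + 1) % n_cells] += m * w / dx
    return rho

rho = deposit_cic([0.30, 0.72], [1.0, 2.0], n_cells=8)
```

The gravity solve then acts on `rho`; the adaptive variant repeats deposition on each refined level, which is where point masses benefit from the extra resolution the abstract reports.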
Boltzmann Solver with Adaptive Mesh in Velocity Space
Kolobov, Vladimir I.; Arslanbekov, Robert R.; Frolova, Anna A.
2011-05-20
We describe the implementation of a direct Boltzmann solver with an Adaptive Mesh in Velocity Space (AMVS) using a quad/octree data structure. The benefits of the AMVS technique are demonstrated for the charged particle transport in weakly ionized plasmas where the collision integral is linear. We also describe the implementation of AMVS for the nonlinear Boltzmann collision integral. Test computations demonstrate both advantages and deficiencies of the current method for calculations of narrow-kernel distributions.
NASA Astrophysics Data System (ADS)
Cardall, Christian Y.; Budiardja, Reuben D.; Endeve, Eirik; Mezzacappa, Anthony
2014-02-01
GenASiS (General Astrophysical Simulation System) is a new code being developed initially and primarily, though by no means exclusively, for the simulation of core-collapse supernovae on the world's leading capability supercomputers. This paper—the first in a series—demonstrates a centrally refined coordinate patch suitable for gravitational collapse and documents methods for compressible nonrelativistic hydrodynamics. We benchmark the hydrodynamics capabilities of GenASiS against many standard test problems; the results illustrate the basic competence of our implementation, demonstrate the strengths and limitations of the HLLC relative to the HLL Riemann solver in a number of interesting cases, and provide preliminary indications of the code's ability to scale and to function with cell-by-cell fixed-mesh refinement.
Adaptive radial basis function mesh deformation using data reduction
NASA Astrophysics Data System (ADS)
Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.
2016-09-01
Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction, results in an efficient method as shown in literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time and 2) how to use/automate the explicit boundary correction, while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method, which ensures the set of control points always represents the geometry/displacement up to a certain (user-specified) criteria, by keeping track of the boundary error throughout the simulation and re-selecting when needed. Opposed to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient, for the cases considered. Secondly, the analysis of a single high aspect ratio cell is used to formulate an equation for the correction radius needed, depending on the characteristics of the correction function used, maximum aspect ratio, minimum first cell height and boundary error. Based on the analysis two new radial basis correction functions are derived and proposed. This proposed automated procedure is verified while varying the correction function, Reynolds number (and thus first cell height and aspect ratio) and boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement for both the CPU as well as the memory formulation with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBF's), but the parallel efficiency reduces due to the limited
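The interpolation at the core of RBF mesh deformation is easy to sketch (Gaussian basis and a direct solve; the greedy data reduction and boundary correction discussed above are omitted): displacements prescribed at a few boundary control points are interpolated to any mesh node.

```python
import math

def phi(r, radius=1.0):
    """Gaussian radial basis function."""
    return math.exp(-(r / radius) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting, for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_deform(control_pts, control_disp, nodes):
    """Fit RBF weights at control points, evaluate displacement at nodes."""
    A = [[phi(math.dist(p, q)) for q in control_pts] for p in control_pts]
    w = solve(A, control_disp)
    return [sum(w[j] * phi(math.dist(x, control_pts[j]))
                for j in range(len(w))) for x in nodes]

ctrl = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dy = [0.1, 0.0, 0.0]                      # vertical displacement at controls
moved = rbf_deform(ctrl, dy, ctrl)        # interpolant must match at controls
```

The cost driver is exactly this dense solve and evaluation, which is why the paper's greedy selection of a small control-point subset matters for efficiency.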
Mesh adaptation technique for Fourier-domain fluorescence lifetime imaging
Soloviev, Vadim Y.
2006-11-15
A novel adaptive mesh technique in the Fourier domain is introduced for problems in fluorescence lifetime imaging. A dynamical adaptation of the three-dimensional scheme based on the finite volume formulation reduces computational time and balances the ill-posed nature of the inverse problem. Light propagation in the medium is modeled by the telegraph equation, while the lifetime reconstruction algorithm is derived from the Fredholm integral equation of the first kind. Stability and computational efficiency of the method are demonstrated by image reconstruction of two spherical fluorescent objects embedded in a tissue phantom.
Fluidity: A New Adaptive, Unstructured Mesh Geodynamics Model
NASA Astrophysics Data System (ADS)
Davies, D. R.; Wilson, C. R.; Kramer, S. C.; Piggott, M. D.; Le Voci, G.; Collins, G. S.
2010-05-01
Fluidity is a sophisticated fluid dynamics package, which has been developed by the Applied Modelling and Computation Group (AMCG) at Imperial College London. It has many environmental applications, from nuclear reactor safety to simulations of ocean circulation. Fluidity has state-of-the-art features that place it at the forefront of computational fluid dynamics. The code: Dynamically optimizes the mesh, providing increased resolution in areas of dynamic importance, thus allowing for accurate simulations across a range of length scales, within a single model. Uses an unstructured mesh, which enables the representation of complex geometries. It also enhances mesh optimization using anisotropic elements, which are particularly useful for resolving one-dimensional flow features and material interfaces. Uses implicit solvers thus allowing for large time-steps with minimal loss of accuracy. PETSc provides some of these, though multigrid preconditioning methods have been developed in-house. Is optimized to run on parallel processors and has the ability to perform parallel mesh adaptivity - the subdomains used in parallel computing automatically adjust themselves to balance the computational load on each processor, as the mesh evolves. Has a novel interface-preserving advection scheme for maintaining sharp interfaces between multiple materials / components. Has an automated test-bed for verification of model developments. Such attributes provide an extremely powerful base on which to build a new geodynamical model. Incorporating into Fluidity the necessary physics and numerical technology for geodynamical flows is an ongoing task, though progress, to date, includes: Development and implementation of parallel, scalable solvers for Stokes flow, which can handle sharp, orders of magnitude variations in viscosity and, significantly, an anisotropic viscosity tensor. Modification of the multi-material interface-preserving scheme to allow for tracking of chemical
Simulating the quartic Galileon gravity model on adaptively refined meshes
Li, Baojiu; Barreira, Alexandre; Baugh, Carlton M.; Hellwing, Wojciech A.; Koyama, Kazuya; Zhao, Gong-Bo; Pascoli, Silvia
2013-11-01
We develop a numerical algorithm to solve the high-order nonlinear derivative-coupling equation associated with the quartic Galileon model, and implement it in a modified version of the ramses N-body code to study the effect of the Galileon field on the large-scale matter clustering. The algorithm is tested for several matter field configurations with different symmetries, and works very well. This enables us to perform the first simulations for a quartic Galileon model which provides a good fit to the cosmic microwave background (CMB) anisotropy, supernovae and baryonic acoustic oscillations (BAO) data. Our result shows that the Vainshtein mechanism in this model is very efficient in suppressing the spatial variations of the scalar field. However, the time variation of the effective Newtonian constant caused by the curvature coupling of the Galileon field cannot be suppressed by the Vainshtein mechanism. This leads to a significant weakening of the strength of gravity in high-density regions at late times, and therefore a weaker matter clustering on small scales. We also find that without the Vainshtein mechanism the model would have behaved in a completely different way, which shows the crucial role played by nonlinearities in modified gravity theories and the importance of performing self-consistent N-body simulations for these theories.
Mesh-based enhancement schemes in diffuse optical tomography.
Gu, Xuejun; Xu, Yong; Jiang, Huabei
2003-05-01
Two mesh-based methods including dual meshing and adaptive meshing are developed to improve the finite element-based reconstruction of both absorption and scattering images of heterogeneous turbid media. The idea of the dual meshing scheme is to use a fine mesh for the solution of photon propagation and a coarse mesh for the inversion of optical property distributions. The adaptive meshing method is accomplished by the automatic mesh refinement in the region of heterogeneity during reconstruction. These schemes are validated using tissue-like phantom measurements. Our results demonstrate the capabilities of the dual meshing and adaptive meshing in both qualitative and quantitative improvement of optical image reconstruction.
Linking Local- and Aquifer-scale Groundwater Models Using Telescopic Mesh Refinement
NASA Astrophysics Data System (ADS)
Willson, C. S.; Rahman, A.; Milner, R.; Hanson, B.
2001-12-01
Groundwater modeling is a useful tool for evaluating and predicting whether a particular aquifer system is capable of supporting large volumes of groundwater withdrawals over long periods of time, and what effect, if any, such activity will have on specific community water supplies, local agricultural and industrial needs, and the regional aquifer or aquifer system as a whole. High-resolution or refined models are necessary for quantification of local processes and phenomena. However, stand-alone refined models do not provide information on regional flow dynamics. Telescopic mesh refinement (TMR) is one technique that can be used to develop high-resolution groundwater models within larger-scale aquifer models. The objective of this study is to utilize TMR to develop parish-level high-resolution models within an existing U.S. Geological Survey groundwater model of the Chicot Aquifer in Southwestern Louisiana. These parish-level models will be used to identify and assess critical groundwater areas. The regional aquifer model is used to identify possible long-term problems such as changes in recharge or salt-water encroachment. Issues that must be addressed when linking local and regional models include the incorporation of aquifer stratigraphy, recharge rates, and individual wells; boundary conditions; and model calibration.
NASA Technical Reports Server (NTRS)
Stapleton, Scott; Gries, Thomas; Waas, Anthony M.; Pineda, Evan J.
2014-01-01
Enhanced finite elements are elements with an embedded analytical solution that can capture detailed local fields, enabling more efficient, mesh independent finite element analysis. The shape functions are determined based on the analytical model rather than prescribed. This method was applied to adhesively bonded joints to model joint behavior with one element through the thickness. This study demonstrates two methods of maintaining the fidelity of such elements during adhesive non-linearity and cracking without increasing the mesh needed for an accurate solution. The first method uses adaptive shape functions, where the shape functions are recalculated at each load step based on the softening of the adhesive. The second method is internal mesh adaption, where cracking of the adhesive within an element is captured by further discretizing the element internally to represent the partially cracked geometry. By keeping mesh adaptations within an element, a finer mesh can be used during the analysis without affecting the global finite element model mesh. Examples are shown which highlight when each method is most effective in reducing the number of elements needed to capture adhesive nonlinearity and cracking. These methods are validated against analogous finite element models utilizing cohesive zone elements.
COMET-AR User's Manual: COmputational MEchanics Testbed with Adaptive Refinement
NASA Technical Reports Server (NTRS)
Moas, E. (Editor)
1997-01-01
The COMET-AR User's Manual provides a reference manual for the Computational Structural Mechanics Testbed with Adaptive Refinement (COMET-AR), a software system developed jointly by Lockheed Palo Alto Research Laboratory and NASA Langley Research Center under contract NAS1-18444. The COMET-AR system is an extended version of an earlier finite element based structural analysis system called COMET, also developed by Lockheed and NASA. The primary extensions are the adaptive mesh refinement capabilities and a new "object-like" database interface that makes COMET-AR easier to extend further. This User's Manual provides a detailed description of the user interface to COMET-AR from the viewpoint of a structural analyst.
3D Finite Element Trajectory Code with Adaptive Meshing
NASA Astrophysics Data System (ADS)
Ives, Lawrence; Bui, Thuc; Vogler, William; Bauer, Andy; Shephard, Mark; Beal, Mark; Tran, Hien
2004-11-01
Beam Optics Analysis, a new 3D charged particle program, is available and in use for the design of complex, 3D electron guns and charged particle devices. The code reads files directly from most CAD and solid modeling programs, and includes an intuitive Graphical User Interface (GUI) and a robust, fully automatic mesh generator. Complex problems can be set up and analysis initiated in minutes. The program includes a user-friendly post processor for displaying field and trajectory data using 3D plots and images. The electrostatic solver is based on the standard nodal finite element method. The magnetostatic field solver is based on the vector finite element method and is also called during the trajectory simulation process to solve for self magnetic fields. The user imports the geometry from essentially any commercial CAD program and uses the GUI to assign parameters (voltages, currents, dielectric constant) and designate emitters (including work function, emitter temperature, and number of trajectories). The mesh is then generated automatically and the analysis is performed, including mesh adaptation to improve accuracy and optimize computational resources. This presentation will provide information on the basic structure of the code, its operation, and its capabilities.
An Application of the Mesh Generation and Refinement Tool to Mobile Bay, Alabama, USA
NASA Astrophysics Data System (ADS)
Aziz, Wali; Alarcon, Vladimir J.; McAnally, William; Martin, James; Cartwright, John
2009-08-01
A grid generation tool, called the Mesh Generation and Refinement Tool (MGRT), has been developed using Qt4. Qt4 is a comprehensive C++ application framework which includes GUI and container class-libraries and tools for cross-platform development. MGRT is capable of using several types of algorithms for grid generation. This paper presents an application of the MGRT grid generation tool for creating an unstructured grid of Mobile Bay (Alabama, USA) that will be used for hydrodynamics modeling. The algorithm used in this particular application is the Advancing-Front/Local-Reconnection (AFLR) [1] [2]. This research shows results of grids created with MGRT and compares them to grids (for the same geographical domain) created using other grid generation tools. The superior quality of the grids generated by MGRT is shown.
A higher-order implicit IDO scheme and its CFD application to local mesh refinement method
NASA Astrophysics Data System (ADS)
Imai, Yohsuke; Aoki, Takayuki
2006-08-01
The Interpolated Differential Operator (IDO) scheme has been developed for the numerical solution of the fluid motion equations; it produces highly accurate results by introducing the spatial derivative of the physical value as an additional dependent variable. For incompressible flows, semi-implicit time integration is strongly restricted by Courant and diffusion number limitations. A high-order fully implicit IDO scheme is presented, whose two-stage implicit Runge-Kutta time integration maintains better than third-order accuracy. The application of the method to direct numerical simulation of turbulence demonstrates that the proposed scheme retains a resolution comparable to that of spectral methods even for relatively large Courant numbers. The scheme is further applied to the Local Mesh Refinement (LMR) method, where the size of the time step is often restricted by the dimension of the smallest meshes. In the computation of the Karman vortex street problem, the implicit IDO scheme with LMR is shown to allow a conspicuous saving of computational resources.
Protein structure refinement with adaptively restrained homologous replicas.
Della Corte, Dennis; Wildberg, André; Schröder, Gunnar F
2016-09-01
A novel protein refinement protocol is presented which utilizes molecular dynamics (MD) simulations of an ensemble of adaptively restrained homologous replicas. This approach adds evolutionary information to the force field and reduces random conformational fluctuations by coupling of several replicas. It is shown that this protocol refines the majority of models from the CASP11 refinement category and that larger conformational changes of the starting structure are possible than with current state of the art methods. The performance of this protocol in the CASP11 experiment is discussed. We found that the quality of the refined model is correlated with the structural variance of the coupled replicas, which therefore provides a good estimator of model quality. Furthermore, some remarkable refinement results are discussed in detail. Proteins 2016; 84(Suppl 1):302-313. © 2015 Wiley Periodicals, Inc. PMID:26441154
Adaptive h-refinement for reduced-order models
Carlberg, Kevin T.
2014-11-05
Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
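The splitting operation at the heart of this h-refinement analogue can be sketched compactly. The snippet below is an illustrative reconstruction, not code from the paper; the function name and data layout are assumptions.

```python
import numpy as np

def split_basis_vector(phi, labels):
    """Split one reduced-basis vector into child vectors with disjoint
    support, one per state-variable cluster (clusters would come from the
    offline recursive k-means step). The children sum back to the parent,
    so the refined space always contains the original one."""
    children = []
    for c in np.unique(labels):
        child = np.zeros_like(phi)
        mask = labels == c
        child[mask] = phi[mask]
        children.append(child)
    return children

# Tiny example: 6 state variables, 2 (hypothetical) k-means clusters
phi = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
labels = np.array([0, 0, 0, 1, 1, 1])
kids = split_basis_vector(phi, labels)
```

Splitting every vector down to singleton clusters recovers the full-order space, which is why a completely refined reduced-order model can meet any prescribed error tolerance.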
NASA Astrophysics Data System (ADS)
Shi, Lei; Wang, Z. J.
2015-08-01
Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction formulation (CPR) to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimates for the output are obtained with the CPR method. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.
Schnieders, Michael J; Fenn, Timothy D; Pande, Vijay S
2011-04-12
Refinement of macromolecular models from X-ray crystallography experiments benefits from prior chemical knowledge at all resolutions. As the quality of the prior chemical knowledge from quantum or classical molecular physics improves, in principle so will the resulting structural models. Due to limitations in computer performance and electrostatic algorithms, commonly used macromolecular X-ray crystallography refinement protocols have had limited support for rigorous molecular physics in the past. For example, electrostatics is often neglected in favor of nonbonded interactions based on a purely repulsive van der Waals potential. In this work we present advanced algorithms for desktop workstations that open the door to X-ray refinement of even the most challenging macromolecular data sets using state-of-the-art classical molecular physics. First we describe theory for particle mesh Ewald (PME) summation that consistently handles the symmetry of all 230 space groups; replicates of the unit cell, such that the minimum image convention can be used with a real-space cutoff of any size; and the combination of space group symmetry with replicates. An implementation of symmetry-accelerated PME for the polarizable atomic multipole optimized energetics for biomolecular applications (AMOEBA) force field is presented. Relative to a single CPU core performing calculations on a P1 unit cell, our AMOEBA engine called Force Field X (FFX) accelerates energy evaluations by more than a factor of 24 on an 8-core workstation with a Tesla GPU coprocessor for 30 structures that contain 240,000 atoms on average in the unit cell. The benefit of AMOEBA electrostatics evaluated with PME for macromolecular X-ray crystallography refinement is demonstrated via rerefinement of 10 crystallographic data sets that range in resolution from 1.7 to 4.5 Å. Beginning from structures obtained by local optimization without electrostatics, further optimization using AMOEBA with PME electrostatics improved
THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS
Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.
2012-01-01
We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance demonstrate the potential of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.
Adaptive mesh generation for edge-element finite element method
NASA Astrophysics Data System (ADS)
Tsuboi, Hajime; Gyimothy, Szabolcs
2001-06-01
An adaptive mesh generation method for two- and three-dimensional finite element methods using edge elements is proposed. Since edge elements preserve tangential component continuity, the strategy for creating new nodes is based on evaluating the normal component of the magnetic vector potential across element interfaces. The evaluation is performed at the midpoint of an edge of a triangular element for two-dimensional problems, or at the centroid of a triangular face of a tetrahedral element for three-dimensional problems. At the boundary between two elements, the error estimator is the ratio of the normal-component discontinuity to the maximum value of the potential in the same material. One or more nodes are inserted at edge midpoints according to the value of the estimator, and the elements containing the new nodes are subdivided. A final mesh is obtained after several iterations. Computational results for two- and three-dimensional problems using the proposed method are shown.
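A minimal sketch of the estimator logic just described; the paper gives no code, so the names and the threshold value here are invented for illustration.

```python
def edge_error_estimator(a_n_left, a_n_right, a_max):
    """Ratio of the normal-component discontinuity of the vector potential
    across an element interface to the maximum potential value in the same
    material, as described in the abstract."""
    return abs(a_n_left - a_n_right) / abs(a_max)

def edges_to_refine(normal_jumps, a_max, threshold=0.1):
    """Return indices of edges whose estimator exceeds a (hypothetical)
    threshold; a new node would be set at the midpoint of each."""
    return [i for i, (left, right) in enumerate(normal_jumps)
            if edge_error_estimator(left, right, a_max) > threshold]
```

Iterating mark-and-subdivide with such an estimator is what drives the mesh toward the final adapted form.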
Finite element adaptive mesh analysis using a cluster of workstations
NASA Astrophysics Data System (ADS)
Wang, K. P.; Bruch, J. C., Jr.
1998-01-01
Parallel computation on clusters of workstations is becoming one of the major trends in the study of parallel computations, because of their high computing speed, cost effectiveness and scalability. This paper presents studies of using a cluster of workstations for the finite element adaptive mesh analysis of a free surface seepage problem. A parallel algorithm proven to be simple to implement and efficient is used to perform the analysis. A network of workstations is used as the hardware of a parallel system. Two parallel software packages, P4 and PVM (parallel virtual machine), are used to handle communications among networked workstations. Computational issues to be discussed are domain decomposition, load balancing, and communication time.
Adaptive surface meshing and multiresolution terrain depiction for SVS
NASA Astrophysics Data System (ADS)
Wiesemann, Thorsten; Schiefele, Jens; Kubbat, Wolfgang
2001-08-01
Many of today's and tomorrow's aviation applications demand accurate and reliable digital terrain elevation databases. In particular, future Vertical Cut Displays and 3D Synthetic Vision Systems (SVS) require accurate, high-resolution data to offer a reliable terrain depiction. On the other hand, optimized or reduced terrain models are necessary to ensure real-time rendering and computing performance. In this paper a new method for adaptive terrain meshing and depiction for SVS is presented. The initial data set is decomposed using a wavelet transform. By examining the wavelet coefficients, an adaptive surface approximation is determined for various levels of detail. Additionally, the dyadic scaling of the wavelet transform is used to build a hierarchical quad-tree representation of the terrain data. This representation supports fast interactive computation and real-time rendering. The proposed terrain representation is integrated into a standard navigation display. Due to the multi-resolution data organization, the terrain depiction (e.g., its resolution) adapts to the selected zoom level or flight phase. Moreover, the wavelet decomposition helps to define local regions of interest: the depicted terrain resolution is finer near the current airplane position and becomes coarser with increasing distance from the aircraft. In addition, flight-critical regions can be depicted at higher resolution.
Pascucci, V
2004-02-18
This paper presents a simple approach for rendering isosurfaces of a scalar field. Using the vertex programming capability of commodity graphics cards, we transfer the cost of computing an isosurface from the Central Processing Unit (CPU), running the main application, to the Graphics Processing Unit (GPU), rendering the images. We consider a tetrahedral decomposition of the domain and draw one quadrangle (quad) primitive per tetrahedron. A vertex program transforms the quad into the piece of isosurface within the tetrahedron (see Figure 2). In this way, the main application is devoted only to streaming the vertices of the tetrahedra from main memory to the graphics card. For adaptively refined rectilinear grids, the optimization of this streaming process leads to the definition of a new 3D space-filling curve, which generalizes the 2D Sierpinski curve used for efficient rendering of triangulated terrains. We maintain the simplicity of the scheme when constructing view-dependent adaptive refinements of the domain mesh. In particular, we guarantee the absence of T-junctions by satisfying local bounds in our nested error basis. The expensive stage of fixing cracks in the mesh is completely avoided. We discuss practical tradeoffs in the distribution of the workload between the application and the graphics hardware. With current GPUs it is convenient to perform certain computations on the main CPU. Beyond the performance considerations, which will change with new generations of GPUs, this approach has the major advantage of completely avoiding the storage in memory of the isosurface vertices and triangles.
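The per-tetrahedron work that the vertex program performs on the GPU can be illustrated on the CPU with a generic marching-tetrahedra computation; this is a sketch under assumed conventions, not the paper's vertex program.

```python
from itertools import combinations

def tet_isosurface(verts, vals, iso):
    """Clip the isosurface {f = iso} against one tetrahedron by linear
    interpolation along its six edges. Returns 0, 3 or 4 points: the
    triangle or quad that the quad primitive would be transformed into.
    verts: four 3D points; vals: scalar value at each vertex."""
    pts = []
    for i, j in combinations(range(4), 2):
        a, b = vals[i], vals[j]
        if (a - iso) * (b - iso) < 0:  # edge crosses the isovalue
            t = (iso - a) / (b - a)
            pts.append(tuple(verts[i][k] + t * (verts[j][k] - verts[i][k])
                             for k in range(3)))
    return pts
```

Because each tetrahedron is processed independently from streamed vertex data, no isosurface vertices or triangles ever need to be stored in main memory.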
NASA Astrophysics Data System (ADS)
Combet, F.; Gelman, L.
2011-04-01
In this paper, a novel adaptive demodulation technique including a new diagnostic feature is proposed for gear diagnosis in conditions of variable amplitudes of the mesh harmonics. This vibration technique employs the time synchronous average (TSA) of vibration signals. The new adaptive diagnostic feature is defined as the ratio of the sum of the sideband components of the envelope spectrum of a mesh harmonic to the measured power of the mesh harmonic. The proposed adaptation of the technique is justified theoretically and experimentally by the high level of the positive covariance between amplitudes of the mesh harmonics and the sidebands in conditions of variable amplitudes of the mesh harmonics. It is shown that the adaptive demodulation technique preserves effectiveness of local fault detection of gears operating in conditions of variable mesh amplitudes.
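The feature itself reduces to a short computation once the envelope of a mesh harmonic is available. The sketch below is an illustrative reconstruction; the bin selection and normalization are assumptions, not the paper's exact definitions.

```python
import numpy as np

def envelope(x):
    """Magnitude of the analytic signal (FFT-based Hilbert transform),
    i.e. the amplitude envelope of a band-limited signal."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0  # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(spec * h))

def adaptive_feature(env_spectrum, sideband_bins, mesh_power):
    """Ratio of the summed sideband components of the envelope spectrum
    of a mesh harmonic to the measured power of that harmonic."""
    return sum(env_spectrum[k] for k in sideband_bins) / mesh_power
```

Normalizing by the measured mesh-harmonic power is what makes the feature adaptive: when the harmonic amplitude varies, the sidebands vary with it (the positive covariance noted above), so the ratio remains a stable fault indicator.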
NASA Astrophysics Data System (ADS)
de Zelicourt, Diane; Ge, Liang; Sotiropoulos, Fotis; Yoganathan, Ajit
2008-11-01
Image-guided computational fluid dynamics has recently gained attention as a tool for predicting the outcome of different surgical scenarios. Cartesian Immersed-Boundary methods constitute an attractive option to tackle the complexity of real-life anatomies. However, when such methods are applied to the branching, multi-vessel configurations typically encountered in cardiovascular anatomies the majority of the grid nodes of the background Cartesian mesh end up lying outside the computational domain, increasing the memory and computational overhead without enhancing the numerical resolution in the region of interest. To remedy this situation, the method presented here superimposes local mesh refinement onto an unstructured Cartesian grid formulation. A baseline unstructured Cartesian mesh is generated by eliminating all nodes that reside in the exterior of the flow domain from the grid structure, and is locally refined in the vicinity of the immersed-boundary. The potential of the method is demonstrated by carrying out systematic mesh refinement studies for internal flow problems ranging in complexity from a 90 deg pipe bend to an actual, patient-specific anatomy reconstructed from magnetic resonance.
Multifluid adaptive-mesh simulation of the solar wind interaction with the local interstellar medium
Kryukov, I. A.; Borovikov, S. N.; Pogorelov, N. V.; Zank, G. P.
2006-09-26
DOE's SciDAC adaptive mesh refinement code Chombo has been modified for the solution of compressible MHD flows with the application of high-resolution, shock-capturing numerical schemes. The code is further extended to involve multiple fluids and applied to the problem of the solar wind interaction with the local interstellar medium. For this purpose, a set of MHD equations is solved together with a few sets of the Euler gas dynamics equations, depending on the number of neutral fluids included in the model. We present our first results, obtained in the framework of an axially symmetric multifluid model applicable to magnetic-field-aligned flows. Details are shown of the generation and development of Rayleigh-Taylor and Kelvin-Helmholtz instabilities of the heliopause. A comparison is given of the results obtained with two- and four-fluid models.
Analysis of hypersonic aircraft inlets using flow adaptive mesh algorithms
NASA Astrophysics Data System (ADS)
Neaves, Michael Dean
The numerical investigation into the dynamics of unsteady inlet flowfields is applied to a three-dimensional scramjet inlet-isolator-diffuser geometry designed for hypersonic applications. The Reynolds-Averaged Navier-Stokes equations are integrated in time using a subiterating, time-accurate implicit algorithm. Inviscid fluxes are calculated using the Low Diffusion Flux Splitting Scheme of Edwards. A modified version of the dynamic solution-adaptive point movement algorithm of Benson and McRae is used in a coupled mode to dynamically resolve the features of the flow by enhancing the spatial accuracy of the simulations. The unsteady mesh terms are incorporated into the flow solver via the inviscid fluxes. The dynamic solution-adaptive grid algorithm of Benson and McRae is modified to improve orthogonality at the boundaries to ensure accurate application of boundary conditions and to properly resolve turbulent boundary layers. Shock tube simulations are performed to ascertain the effectiveness of the algorithm for unsteady flow situations on fixed and moving grids. Unstarts due to combustor and freestream angle-of-attack perturbations are simulated in a three-dimensional inlet-isolator-diffuser configuration.
Three-dimensional modeling and highly refined mesh generation of the aorta artery and its tunics
NASA Astrophysics Data System (ADS)
Cazotto, J. A.; Neves, L. A.; Machado, J. M.; Momente, J. C.; Shiyou, Y.; Godoy, M. F.; Zafalon, G. F. D.; Pinto, A. R.; Valêncio, C. R.
2013-02-01
This paper describes strategies and techniques for modeling and automatic mesh generation of the aorta artery and its tunics (adventitia, media and intima walls) using open source codes. The models were constructed in the Blender package, and Python scripts were used to export the data necessary for mesh generation in TetGen. The strategies proposed are able to provide meshes of complicated and irregular volumes with a large number of mesh elements (approximately 12,000,000 tetrahedra). These meshes can be used to perform computational simulations by the Finite Element Method (FEM).
White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification
NASA Astrophysics Data System (ADS)
Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun
2016-03-01
The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotational forces to the hydrodynamics. Standard techniques for coupling gravitational and rotational forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress, and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.
Xia, Kelin; Zhan, Meng; Wan, Decheng; Wei, Guo-Wei
2011-01-01
Mesh deformation methods are a versatile strategy for solving partial differential equations (PDEs) with a vast variety of practical applications. However, these methods break down for elliptic PDEs with discontinuous coefficients, namely, elliptic interface problems. For this class of problems, additional interface jump conditions are required to maintain the well-posedness of the governing equation. Consequently, in order to achieve high accuracy and high-order convergence, additional numerical algorithms are required to enforce the interface jump conditions in solving elliptic interface problems. The present work introduces an interface-technique-based, adaptively deformed mesh strategy for resolving elliptic interface problems. We take advantage of the high accuracy, flexibility and robustness of the matched interface and boundary (MIB) method to construct an adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients. The proposed method generates deformed meshes in the physical domain and solves the transformed governing equations in the computational domain, which maintains regular Cartesian meshes. The mesh deformation is realized by a mesh transformation PDE, which controls the mesh redistribution by a source term. The source term consists of a monitor function, which builds in mesh contraction rules. Both interface geometry based deformed meshes and solution gradient based deformed meshes are constructed to reduce the L∞ and L2 errors in solving elliptic interface problems. The proposed adaptively deformed mesh based interface method is extensively validated by many numerical experiments. Numerical results indicate that the adaptively deformed mesh based interface method outperforms the original MIB method for dealing with elliptic interface problems. PMID:22586356
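A toy 1D analogue may clarify how a monitor function drives mesh contraction. This is a standard equidistribution device, not the paper's MIB-based transformation PDE; the names are invented.

```python
import numpy as np

def equidistribute(x, w):
    """Redistribute 1D mesh nodes so each cell carries an equal share of
    the monitor function w (trapezoidal "mass" on the current mesh x).
    Nodes contract where w is large, mimicking how a monitor-based source
    term builds contraction rules into a mesh transformation."""
    cm = np.concatenate([[0.0],
                         np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, cm[-1], len(x))
    return np.interp(targets, cm, x)
```

A monitor built from interface geometry or from the solution gradient would concentrate nodes near the interface or near steep solution features, respectively.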
Numerical study of Taylor bubbles with adaptive unstructured meshes
NASA Astrophysics Data System (ADS)
Xie, Zhihua; Pavlidis, Dimitrios; Percival, James; Pain, Chris; Matar, Omar; Hasan, Abbas; Azzopardi, Barry
2014-11-01
The Taylor bubble is a single long bubble which nearly fills the entire cross section of a liquid-filled circular tube. This type of bubble flow regime often occurs in gas-liquid slug flows in many industrial applications, including oil-and-gas production, chemical and nuclear reactors, and heat exchangers. The objective of this study is to investigate the fluid dynamics of Taylor bubbles rising in a vertical pipe filled with oils of extremely high viscosity (mimicking the "heavy oils" found in the oil-and-gas industry). A modelling and simulation framework is presented here which can modify and adapt anisotropic unstructured meshes to better represent the underlying physics of bubble rise and reduce the computational effort without sacrificing accuracy. The numerical framework consists of a mixed control-volume and finite-element formulation, a "volume of fluid"-type method for the interface capturing based on a compressive control volume advection method, and a force-balanced algorithm for the surface tension implementation. Numerical examples of some benchmark tests and the dynamics of Taylor bubbles are presented to show the capability of this method. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.
Lee, W; Kim, T-S; Cho, M; Lee, S
2005-01-01
In studying bioelectromagnetic problems, the finite element method offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and the incorporation of material properties such as anisotropy. Mesh generation is the first requirement in finite element analysis, and there are many different approaches to it. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes, resulting in numerous elements in smaller-volume regions and thereby increasing the computational load. In this work, we present an improved content-adaptive mesh generation scheme that is efficient and fast, along with options to change the contents of meshes. For demonstration, mesh models of the head from a volume MRI are presented in 2-D and 3-D.
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
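The remapping problem described above (assign new partitions to processors so that redistribution cost is minimized) can be illustrated with a minimal greedy sketch. This is an assumption-laden toy, not the heuristic from the paper: it simply keeps each partition on the processor that already holds most of its data, taking the largest overlaps first.

```python
# Hypothetical greedy partition-to-processor remapper (illustrative only;
# not the paper's heuristic). similarity[p][q] = amount of data that new
# partition p already has resident on processor q. We maximize retained
# data, which is equivalent to minimizing redistribution cost.

def remap(similarity):
    """Return a one-to-one assignment {partition: processor}, chosen
    greedily by taking the largest overlap entries first."""
    entries = sorted(
        ((amt, p, q) for p, row in enumerate(similarity)
         for q, amt in enumerate(row)),
        reverse=True)
    assign, used = {}, set()
    for amt, p, q in entries:
        if p not in assign and q not in used:
            assign[p] = q
            used.add(q)
    return assign

# Example: 3 partitions, 3 processors.
sim = [[5, 1, 0],
       [0, 2, 4],
       [3, 0, 1]]
print(remap(sim))  # each partition keeps its largest feasible overlap
```

A greedy pass like this is not optimal in general (the optimal assignment is a maximum-weight bipartite matching), but it is cheap, which matches the abstract's observation that near-optimal remapping can cost a small fraction of the total time.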
The GeoClaw software for depth-averaged flows with adaptive refinement
Berger, M.J.; George, D.L.; LeVeque, R.J.; Mandli, K.T.
2011-01-01
Many geophysical flow or wave propagation problems can be modeled with two-dimensional depth-averaged equations, of which the shallow water equations are the simplest example. We describe the GeoClaw software that has been designed to solve problems of this nature, consisting of open source Fortran programs together with Python tools for the user interface and flow visualization. This software uses high-resolution shock-capturing finite volume methods on logically rectangular grids, including latitude-longitude grids on the sphere. Dry states are handled automatically to model inundation. The code incorporates adaptive mesh refinement to allow the efficient solution of large-scale geophysical problems. Examples are given illustrating its use for modeling tsunamis and dam-break flooding problems. Documentation and download information are available at www.clawpack.org/geoclaw. © 2011.
NASA Technical Reports Server (NTRS)
Barnard, Stephen T.; Simon, Horst; Lasinski, T. A. (Technical Monitor)
1994-01-01
The design of a parallel implementation of multilevel recursive spectral bisection is described. The goal is to implement a code that is fast enough to enable dynamic repartitioning of adaptive meshes.
Importance of dynamic mesh adaptivity for simulation of viscous fingering in porous media
NASA Astrophysics Data System (ADS)
Mostaghimi, P.; Jackson, M.; Pain, C.; Gorman, G.
2014-12-01
Viscous fingering is a major concern in many natural and engineered processes such as water flooding of heavy-oil reservoirs. Common reservoir simulators employ low-order finite volume/difference methods on structured grids to resolve this phenomenon. However, their approach suffers from a significant numerical dispersion error along the fingering patterns due to insufficient mesh resolution, and smears out some important features of the flow. We propose the use of an unstructured control volume finite element method for simulation of viscous fingering in porous media. Our approach is equipped with anisotropic mesh adaptivity, where the mesh resolution is optimized based on the evolving features of the flow. The adaptive algorithm uses a metric tensor field based on solution error estimates to locally control the size and shape of elements in the metric. We resolve the viscous fingering patterns accurately and reduce the numerical dispersion error significantly. The mesh optimization generates an unstructured coarse mesh in other regions of the computational domain, which significantly decreases the computational cost. The effect of grid resolution on the resolved fingers is thoroughly investigated. We analyze the computational cost of mesh adaptivity on unstructured meshes and compare it with that of common finite volume methods. The results of this study suggest that mesh adaptivity is an efficient and accurate approach for resolving complex behaviors and instabilities of flow in porous media such as viscous fingering.
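Metric-based anisotropic adaptivity of the kind described above is commonly driven by the Hessian of the solution: the target element size along each Hessian eigenvector scales like the inverse square root of the corresponding eigenvalue, clipped to user limits. The following is a minimal sketch of that generic construction (an assumed textbook form, not the authors' specific error estimator; `eps`, `h_min`, `h_max` are illustrative parameters).

```python
import numpy as np

# Illustrative sketch: build an anisotropic mesh metric tensor M from the
# Hessian H of a scalar solution field. Element size along each Hessian
# eigenvector is h_i ~ sqrt(eps/|lambda_i|), clipped to [h_min, h_max];
# the metric eigenvalues are then 1/h_i^2. (Generic form, not the paper's.)

def hessian_metric(H, eps=1e-2, h_min=1e-3, h_max=1.0):
    """Return the 2x2 metric tensor for one point given Hessian H."""
    evals, evecs = np.linalg.eigh(H)
    h = np.sqrt(eps / np.maximum(np.abs(evals), 1e-30))  # target spacings
    h = np.clip(h, h_min, h_max)
    lam = 1.0 / h**2                                     # metric eigenvalues
    return (evecs * lam) @ evecs.T

H = np.array([[400.0, 0.0], [0.0, 4.0]])  # steep in x, mild in y
M = hessian_metric(H)
# M demands much finer spacing in x than in y: anisotropic refinement
```

A mesh generator consuming this metric would then produce long thin elements aligned with the finger fronts, which is exactly what makes the adapted mesh cheaper than isotropic refinement.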
Lee, W H; Kim, T-S; Cho, M H; Ahn, Y B; Lee, S Y
2006-12-01
In studying bioelectromagnetic problems, finite element analysis (FEA) offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and the incorporation of material properties such as anisotropic conductivity. For FEA, mesh generation is the first critical requirement, and there exist many different approaches. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes (cMeshes), resulting in numerous nodes and elements in modelling the conducting domain, and thereby increasing computational load and demand. In this work, we present efficient content-adaptive mesh generation schemes for complex biological volumes of MR images. The presented methodology is fully automatic and generates FE meshes that are adaptive to the geometrical contents of MR images, allowing optimal representation of the conducting domain for FEA. We have also evaluated the effect of cMeshes on FEA in three dimensions by comparing the forward solutions from various cMesh head models to the solutions from the reference FE head model in which fine and equidistant FEs constitute the model. The results show that there is a significant gain in computation time with minor loss in numerical accuracy. We believe that cMeshes should be useful in the FEA of bioelectromagnetic problems.
NASA Astrophysics Data System (ADS)
Jia, Jinhong; Wang, Hong
2015-10-01
Numerical methods for fractional differential equations generate full stiffness matrices, which were traditionally solved via Gaussian-type direct solvers that require O(N^3) computational work and O(N^2) memory to store, where N is the number of spatial grid points in the discretization. We develop a preconditioned fast Krylov subspace iterative method for the efficient and faithful solution of finite volume schemes defined on a locally refined composite mesh for fractional differential equations to resolve boundary layers of the solutions. Numerical results are presented to show the utility of the method.
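The "fast" in fast Krylov methods for such problems typically comes from the fact that, on uniform grids, fractional-derivative stiffness matrices are Toeplitz, so a matrix-vector product costs O(N log N) via circulant embedding and the FFT instead of O(N^2). A self-contained sketch of that standard kernel (illustrative; the paper's composite-mesh scheme wraps this idea in a more elaborate structure):

```python
import numpy as np

# Standard fast kernel behind Toeplitz-based Krylov solvers: embed the
# N x N Toeplitz matrix in a 2N x 2N circulant and multiply via FFT,
# so Krylov iterations never need the dense matrix. (Illustrative only.)

def toeplitz_matvec(first_col, first_row, x):
    """Compute T @ x where T is Toeplitz with given first column/row."""
    n = len(x)
    # circulant first column: [col_0..col_{n-1}, 0, row_{n-1}..row_1]
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp))
    return y[:n].real

# Verify against a dense product on a small random example.
rng = np.random.default_rng(0)
n = 8
col = rng.random(n)
row = np.concatenate([[col[0]], rng.random(n - 1)])
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
              for i in range(n)])
x = rng.random(n)
y = toeplitz_matvec(col, row, x)
assert np.allclose(y, T @ x)
```

With this matvec plugged into CG or GMRES, each iteration is O(N log N); the preconditioner in the paper then keeps the iteration count small.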
Huang, W.; Zheng, Lingyun; Zhan, X.
2002-01-01
Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost, when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
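The principle underlying moving-mesh methods like the one above is equidistribution: relocate a fixed number of grid points so that each cell carries the same integral of a monitor function. A minimal 1-D sketch follows (it inverts the cumulative monitor integral directly; the paper instead evolves the mesh through a moving-mesh PDE, and the front-shaped monitor here is an illustrative assumption).

```python
import numpy as np

# 1-D equidistribution sketch: place n_pts mesh points so every cell has
# equal "mass" of the monitor function M(x). An arc-length-type monitor
# with a sharp feature at x = 0.5 concentrates points near the front.
# (Illustrative only; not the paper's moving-mesh PDE solver.)

def equidistribute(monitor, n_pts, n_fine=2000):
    xf = np.linspace(0.0, 1.0, n_fine)
    m = monitor(xf)
    # cumulative integral of the monitor (trapezoidal rule), normalized
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(xf))])
    cum /= cum[-1]
    targets = np.linspace(0.0, 1.0, n_pts)
    return np.interp(targets, cum, xf)   # invert the cumulative map

front = lambda x: np.sqrt(1.0 + 1e4 * np.exp(-((x - 0.5) / 0.02) ** 2))
mesh = equidistribute(front, 21)
# spacing near x = 0.5 is far smaller than near the domain ends
```

The monitor construction from interpolation error mentioned in the abstract replaces this hand-picked `front` function with an error estimate of the evolving solution.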
Finite Element approach for Density Functional Theory calculations on locally refined meshes
Fattebert, J; Hornung, R D; Wissink, A M
2007-02-23
We present a quadratic Finite Element approach to discretize the Kohn-Sham equations on structured non-uniform meshes. A multigrid FAC preconditioner is proposed to iteratively solve the equations by an accelerated steepest descent scheme. The method was implemented using SAMRAI, a parallel software infrastructure for general AMR applications. Examples of applications to small nanoclusters calculations are presented.
Finite Elements approach for Density Functional Theory calculations on locally refined meshes
Fattebert, J; Hornung, R D; Wissink, A M
2006-03-27
We present a quadratic Finite Elements approach to discretize the Kohn-Sham equations on structured non-uniform meshes. A multigrid FAC preconditioner is proposed to iteratively solve the equations by an accelerated steepest descent scheme. The method was implemented using SAMRAI, a parallel software infrastructure for general AMR applications. Examples of applications to small nanoclusters calculations are presented.
Computations of Aerodynamic Performance Databases Using Output-Based Refinement
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.
2009-01-01
Objectives: handle complex geometry problems; control discretization errors via solution-adaptive mesh refinement; focus on aerodynamic databases for parametric and optimization studies: 1. Accuracy: satisfy prescribed error bounds. 2. Robustness and speed: may require over 10^5 mesh generations. 3. Automation: avoid user supervision; obtain "expert meshes" independent of user skill; and run every case adaptively in production settings.
Design of computer-generated beam-shaping holograms by iterative finite-element mesh adaption.
Dresel, T; Beyerlein, M; Schwider, J
1996-12-10
Computer-generated phase-only holograms can be used for laser beam shaping, i.e., for focusing a given aperture with intensity and phase distributions into a pregiven intensity pattern in their focal planes. A numerical approach based on iterative finite-element mesh adaption permits the design of appropriate phase functions for the task of focusing into two-dimensional reconstruction patterns. Both the hologram aperture and the reconstruction pattern are covered by mesh mappings. An iterative procedure delivers meshes with intensities equally distributed over the constituting elements. This design algorithm adds new elementary focuser functions to what we call object-oriented hologram design. Some design examples are discussed.
Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process
NASA Astrophysics Data System (ADS)
Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh
2016-06-01
Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, it results in a geometrical distortion and chordal error. Parts manufactured with this file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in the file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique, and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. Only the wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
NASA Astrophysics Data System (ADS)
D'Amato, Anthony M.
Input reconstruction is the process of using the output of a system to estimate its input. In some cases, input reconstruction can be accomplished by determining the output of the inverse of a model of the system whose input is the output of the original system. Inversion, however, requires an exact and fully known analytical model, and is limited by instabilities arising from nonminimum-phase zeros. The main contribution of this work is a novel technique for input reconstruction that does not require model inversion. This technique is based on a retrospective cost, which requires a limited number of Markov parameters. Retrospective cost input reconstruction (RCIR) does not require knowledge of nonminimum-phase zero locations or an analytical model of the system. RCIR provides a technique that can be used for model refinement, state estimation, and adaptive control. In the model refinement application, data are used to refine or improve a model of a system. It is assumed that the difference between the model output and the data is due to an unmodeled subsystem whose interconnection with the modeled system is inaccessible, that is, the interconnection signals cannot be measured and thus standard system identification techniques cannot be used. Using input reconstruction, these inaccessible signals can be estimated, and the inaccessible subsystem can be fitted. We demonstrate input reconstruction in a model refinement framework by identifying unknown physics in a space weather model and by estimating an unknown film growth in a lithium ion battery. The same technique can be used to obtain estimates of states that cannot be directly measured. Adaptive control can be formulated as a model-refinement problem, where the unknown subsystem is the idealized controller that minimizes a measured performance variable. Minimal modeling input reconstruction for adaptive control is useful for applications where modeling information may be difficult to obtain. We demonstrate
Using Multi-threading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1998-01-01
In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system, which offers sufficient capabilities to tackle this problem. We implement the adaption phase of FE applications on triangular meshes and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.
Adaptive unstructured meshing for thermal stress analysis of built-up structures
NASA Technical Reports Server (NTRS)
Dechaumphai, Pramote
1992-01-01
An adaptive unstructured meshing technique for mechanical and thermal stress analysis of built-up structures has been developed. A triangular membrane finite element and a new plate bending element are evaluated on a panel with a circular cutout and a frame stiffened panel. The adaptive unstructured meshing technique, without a priori knowledge of the solution to the problem, generates clustered elements only where needed. An improved solution accuracy is obtained at a reduced problem size and analysis computational time as compared to the results produced by the standard finite element procedure.
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: A gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment and other accepted computational results for a series of low and moderate Reynolds number flows.
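The recursive-subdivision strategy in this abstract (one Cartesian root cell, refined wherever it intersects the body, stored in a tree) can be sketched in a few lines. The example below is a deliberately simplified toy: it uses a quadtree, a corner-in/corner-out test against a circular "body", and no cut-cell clipping, all of which are illustrative assumptions rather than the paper's implementation.

```python
# Toy quadtree refinement: start from one Cartesian cell covering the
# domain and recursively subdivide any cell the body boundary crosses
# (detected when its corners lie on both sides of a circle of radius 1
# centred at the origin). Real cut-cell codes also clip such cells.

class Cell:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.children = []

    def refine(self, crosses, max_depth):
        if max_depth > 0 and crosses(self):
            h = self.size / 2
            self.children = [Cell(self.x + i * h, self.y + j * h, h)
                             for i in (0, 1) for j in (0, 1)]
            for c in self.children:
                c.refine(crosses, max_depth - 1)

    def leaves(self):
        if not self.children:
            return [self]
        return [l for c in self.children for l in c.leaves()]

def crosses_circle(cell, r=1.0):
    """True if the cell's corners straddle the circle boundary."""
    corners = [(cell.x + i * cell.size, cell.y + j * cell.size)
               for i in (0, 1) for j in (0, 1)]
    inside = [x * x + y * y < r * r for x, y in corners]
    return any(inside) and not all(inside)

root = Cell(0.0, 0.0, 2.0)           # covers [0,2]^2; quarter arc crosses it
root.refine(crosses_circle, max_depth=5)
lv = root.leaves()
# fine leaves cluster along the arc; interior and far field stay coarse
```

The flat leaf list stands in for the binary-tree traversal the paper uses to obtain cell-to-cell connectivity; the essential point is that refinement tracks the geometry automatically.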
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1994-01-01
A Cartesian, cell-based approach for adaptively-refined solutions of the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, N-sided 'cut' cells are created using polygon-clipping algorithms. The grid is stored in a binary-tree structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive mesh refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded: a gradient-limited, linear reconstruction of the primitive variables is performed, providing input states to an approximate Riemann solver for computing the fluxes between neighboring cells. The more robust of a series of viscous flux functions is used to provide the viscous fluxes at the cell interfaces. Adaptively-refined solutions of the Navier-Stokes equations using the Cartesian, cell-based approach are obtained and compared to theory, experiment, and other accepted computational results for a series of low and moderate Reynolds number flows.
Towards a large-scale scalable adaptive heart model using shallow tree meshes
NASA Astrophysics Data System (ADS)
Krause, Dorian; Dickopf, Thomas; Potse, Mark; Krause, Rolf
2015-10-01
Electrophysiological heart models are sophisticated computational tools that place high demands on the computing hardware due to the high spatial resolution required to capture the steep depolarization front. To address this challenge, we present a novel adaptive scheme for resolving the depolarization front accurately using adaptivity in space. Our adaptive scheme is based on locally structured meshes. These tensor meshes in space are organized in a parallel forest of trees, which allows us to resolve complicated geometries and to realize high variations in the local mesh sizes with a minimal memory footprint in the adaptive scheme. We discuss both a non-conforming mortar element approximation and a conforming finite element space and present an efficient technique for the assembly of the respective stiffness matrices using matrix representations of the inclusion operators into the product space on the so-called shallow tree meshes. We analyzed the parallel performance and scalability for a two-dimensional ventricle slice as well as for a full large-scale heart model. Our results demonstrate that the method has good performance and high accuracy.
NASA Astrophysics Data System (ADS)
Burago, N. G.; Nikitin, I. S.; Yakushev, V. L.
2016-06-01
Techniques that improve the accuracy of numerical solutions and reduce their computational costs are discussed as applied to continuum mechanics problems with complex time-varying geometry. The approach combines shock-capturing computations with the following methods: (1) overlapping meshes for specifying complex geometry; (2) elastic arbitrarily moving adaptive meshes for minimizing the approximation errors near shock waves, boundary layers, contact discontinuities, and moving boundaries; (3) matrix-free implementation of efficient iterative and explicit-implicit finite element schemes; (4) balancing viscosity (version of the stabilized Petrov-Galerkin method); (5) exponential adjustment of physical viscosity coefficients; and (6) stepwise correction of solutions for providing their monotonicity and conservativeness.
NASA Astrophysics Data System (ADS)
Marty, Nicolas C. M.; Tournassat, Christophe; Burnol, André; Giffaut, Eric; Gaucher, Eric C.
2009-01-01
Large quantities of cements and concretes need to be incorporated in geological disposal facilities for long-lived radwaste. An alkaline plume diffusing from an aged concrete (pH ≈ 12.5) through argillite-type rocks has been modelled considering feedback of porosity value variations on transport properties using the reactive transport code TOUGHREACT. The mineralogical composition of the argillite is modified at the interface with the concrete. Diffusion of cementitious elements leads to rapid and strong porosity occlusion in the argillite. Numerical results show that both reaction rates and spatial refinement affect mineralogical transformation pathways. The variations in porosity and the extension of the zone affected by the alkaline perturbation are compared at different times. The major effects of mineral precipitation under kinetic constraints, rather than local equilibrium, are a delay in the porosity clogging and an increase in the extension of the alkaline perturbation in the clay formation. The same delay in porosity occlusion also appears for the coarsest spatial resolutions. A simulation as representative as possible of temporal and spatial scales of cementation processes must then be supported by more comparative data such as long term experimental investigations or natural analogues.
Adaptive hp-FEM with dynamical meshes for transient heat and moisture transfer problems
NASA Astrophysics Data System (ADS)
Solin, Pavel; Dubcova, Lenka; Kruis, Jaroslav
2010-04-01
We are concerned with the time-dependent multiphysics problem of heat and moisture transfer in the context of civil engineering applications. The problem is challenging due to its multiscale nature (temperature usually propagates orders of magnitude faster than moisture), different characters of the two fields (moisture exhibits boundary layers which are not present in the temperature field), extremely long integration times (30 years or more), and lack of viable error control mechanisms. In order to solve the problem efficiently, we employ a novel multimesh adaptive higher-order finite element method (hp-FEM) based on dynamical meshes and adaptive time step control. We investigate the possibility to approximate the temperature and humidity fields on individual dynamical meshes equipped with mutually independent adaptivity mechanisms. Numerical examples related to a realistic nuclear reactor vessel simulation are presented.
Kolobov, Vladimir; Arslanbekov, Robert; Frolova, Anna
2014-12-09
The paper describes an Adaptive Mesh in Phase Space (AMPS) technique for solving kinetic equations with deterministic mesh-based methods. The AMPS technique allows automatic generation of adaptive Cartesian mesh in both physical and velocity spaces using a Tree-of-Trees data structure. We illustrate advantages of AMPS for simulations of rarefied gas dynamics and electron kinetics in low-temperature plasmas. In particular, we consider formation of the velocity distribution functions in hypersonic flows, particle kinetics near oscillating boundaries, and electron kinetics in a radio-frequency sheath. AMPS provides substantial savings in computational cost and increases the efficiency of mesh-based kinetic solvers.
NASA Astrophysics Data System (ADS)
Kolobov, Vladimir; Arslanbekov, Robert; Frolova, Anna
2014-12-01
The paper describes an Adaptive Mesh in Phase Space (AMPS) technique for solving kinetic equations with deterministic mesh-based methods. The AMPS technique allows automatic generation of adaptive Cartesian mesh in both physical and velocity spaces using a Tree-of-Trees data structure. We illustrate advantages of AMPS for simulations of rarefied gas dynamics and electron kinetics in low-temperature plasmas. In particular, we consider formation of the velocity distribution functions in hypersonic flows, particle kinetics near oscillating boundaries, and electron kinetics in a radio-frequency sheath. AMPS provides substantial savings in computational cost and increases the efficiency of mesh-based kinetic solvers.
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong; Yang, Yuanfan
2012-02-01
A novel three-dimensional (3-D) photorealistic texturing process is presented that applies a view-planning and view-sequencing algorithm to the 3-D coarse model to determine a set of best viewing angles for capturing the individual real-world objects/building's images. The best sequence of views will generate sets of visible edges in each view to serve as a guide for camera field shots by either manual adjustment or equipment alignment. The best view tries to cover as many objects/building surfaces as possible in one shot. This will lead to a smaller total number of shots taken for a complete model reconstruction requiring texturing with photo-realistic effects. The direct linear transformation method (DLT) is used for reprojection of 3-D model vertices onto a two-dimensional (2-D) images plane for actual texture mapping. Given this method, the actual camera orientations do not have to be unique and can be set arbitrarily without heavy and expensive positioning equipment. We also present results of a study on the texture-mapping precision as a function of the level of visible mesh subdivision. In addition, the control points selection for the DLT method used for reprojection of 3-D model vertices onto 2-D textured images is also investigated for its effects on mapping precision. By using DLT and perspective projection theories on a coarse model feature points, this technique will allow accurate 3-D texture mapping of refined model meshes of real-world buildings. The novel integration flow of this research not only greatly reduces the human labor and intensive equipment requirements of traditional methods, but also generates a more appealing photo-realistic appearance of reconstructed models, which is useful in many multimedia applications. The roles of view planning (VP) are multifold. VP can (1) reduce the repetitive texture-mapping computation load, (2) can present a set of visible model wireframe edges that can serve as a guide for images with sharp edges and
Vay, J.-L.; Friedman, A.; Grote, D.P.
2002-09-15
The numerical simulation of the driving beams in a heavy ion fusion power plant is a challenging task, and, despite rapid progress in computer power, one must consider the use of the most advanced numerical techniques. One of the difficulties of these simulations resides in the disparity of scales in time and in space which must be resolved. When these disparities are in distinctive zones of the simulation region, a method which has proven to be effective in other areas (e.g. fluid dynamics simulations) is the Adaptive-Mesh-Refinement (AMR) technique. We follow in this article the progress accomplished in the last few months in the merging of the AMR technique with Particle-In-Cell (PIC) method. This includes a detailed modeling of the Lampel-Tiefenback solution for the one-dimensional diode using novel techniques to suppress undesirable numerical oscillations and an AMR patch to follow the head of the particle distribution. We also report new results concerning the modeling of ion sources using the axisymmetric WARPRZ-AMR prototype showing the utility of an AMR patch resolving the emitter vicinity and the beam edge.
NASA Astrophysics Data System (ADS)
Walko, R. L.; Medvigy, D.; Avissar, R.
2013-12-01
regular model grid or (2) estimate the essential elements of the convective response from lookup table entries that were previously generated for similar environments using method (1). Obviously, method (2) is extremely efficient while method (1) is computationally intensive, so the key is to construct clever algorithms that enable method (2) to be used as often as possible. The method is self-learning in that as a model simulation progresses, the lookup table can grow and the search algorithm for selecting the best table entries can adapt to the growing table. We demonstrate applications of this method on the variable-resolution hexagonal grid of the Ocean-Land-Atmosphere Model (OLAM) for both idealized and realistic environments.
TRIM: A finite-volume MHD algorithm for an unstructured adaptive mesh
Schnack, D.D.; Lottati, I.; Mikic, Z.
1995-07-01
The authors describe TRIM, an MHD code which uses finite volume discretization of the MHD equations on an unstructured adaptive grid of triangles in the poloidal plane. They apply it to problems related to modeling tokamak toroidal plasmas. The toroidal direction is treated by a pseudospectral method. Care was taken to center variables appropriately on the mesh and to construct a self-adjoint diffusion operator for cell-centered variables.
Maier, A.; Schmidt, W.; Iapichino, L.; Niemeyer, J. C.
2009-12-10
We present a numerical scheme for modeling unresolved turbulence in cosmological adaptive mesh refinement codes. As a first application, we study the evolution of turbulence in the intracluster medium (ICM) and in the core of a galaxy cluster. Simulations with and without subgrid scale (SGS) model are compared in detail. Since the flow in the ICM is subsonic, the global turbulent energy contribution at the unresolved length scales is smaller than 1% of the internal energy. We find that the production of turbulence is closely correlated with merger events occurring in the cluster environment, and its dissipation locally affects the cluster energy budget. Because of this additional source of dissipation, the core temperature is larger and the density is smaller in the presence of SGS turbulence than in the standard adiabatic run, resulting in a higher entropy core value.
Fluidity: a fully-unstructured adaptive mesh computational framework for geodynamics
NASA Astrophysics Data System (ADS)
Kramer, S. C.; Davies, D.; Wilson, C. R.
2010-12-01
Fluidity is a finite element, finite volume fluid dynamics model developed by the Applied Modelling and Computation Group at Imperial College London. Several features of the model make it attractive for use in geodynamics. A core finite element library enables the rapid implementation and investigation of new numerical schemes. For example, the function spaces used for each variable can be changed allowing properties of the discretisation, such as stability, conservation and balance, to be easily varied and investigated. Furthermore, unstructured, simplex meshes allow the underlying resolution to vary rapidly across the computational domain. Combined with dynamic mesh adaptivity, where the mesh is periodically optimised to the current conditions, this allows significant savings in computational cost over traditional chessboard-like structured mesh simulations [1]. In this study we extend Fluidity (using the Portable, Extensible Toolkit for Scientific Computation [PETSc, 2]) to Stokes flow problems relevant to geodynamics. However, due to the assumptions inherent in all models, it is necessary to properly verify and validate the code before applying it to any large-scale problems. In recent years this has been made easier by the publication of a series of ‘community benchmarks’ for geodynamic modelling. We discuss the use of several of these to help validate Fluidity [e.g. 3, 4]. The experimental results of Vatteville et al. [5] are then used to validate Fluidity against laboratory measurements. This test case is also used to highlight the computational advantages of using adaptive, unstructured meshes - significantly reducing the number of nodes and total CPU time required to match a fixed mesh simulation. References: 1. C. C. Pain et al. Comput. Meth. Appl. M, 190:3771-3796, 2001. doi:10.1016/S0045-7825(00)00294-2. 2. B. Satish et al. http://www.mcs.anl.gov/petsc/petsc-2/, 2001. 3. Blankenbach et al. Geophys. J. Int., 98:23-28, 1989. 4. Busse et al. Geophys
F-8C adaptive control law refinement and software development
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.
1981-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
FUN3D Grid Refinement and Adaptation Studies for the Ares Launch Vehicle
NASA Technical Reports Server (NTRS)
Bartels, Robert E.; Vasta, Veer; Carlson, Jan-Renee; Park, Mike; Mineck, Raymond E.
2010-01-01
This paper presents grid refinement and adaptation studies performed in conjunction with computational aeroelastic analyses of the Ares crew launch vehicle (CLV). The unstructured grids used in this analysis were created with GridTool and VGRID, while the adaptation was performed using the computational fluid dynamics (CFD) code FUN3D with a feature-based adaptation software tool. GridTool was developed by ViGYAN, Inc., while the other three software suites were developed by NASA Langley Research Center. The feature-based adaptation software used here operates by aligning control volumes with shock and Mach line structures and by refining/de-refining where necessary. It does not redistribute node points on the surface. This paper assesses the sensitivity of the complex flow field about a launch vehicle to grid refinement. It also assesses the potential of feature-based grid adaptation to improve the accuracy of CFD analysis for a complex launch vehicle configuration. The feature-based adaptation shows the potential to improve the resolution of shocks and shear layers. Further development of the capability to adapt the boundary layer and surface grids of a tetrahedral grid is required for significant improvements in modeling the flow field.
Refinement trajectory and determination of eigenstates by a wavelet based adaptive method
Pipek, Janos; Nagy, Szilvia
2006-11-07
The detail structure of the wave function is analyzed at various refinement levels using the methods of wavelet analysis. The eigenvalue problem of a model system is solved in granular Hilbert spaces, and the trajectory of the eigenstates is traced in terms of the resolution. An adaptive method is developed for identifying the fine structure localization regions, where further refinement of the wave function is necessary.
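The refinement-localization idea above can be illustrated with a single Haar analysis step: samples are split into coarse averages and detail coefficients, and positions where the detail magnitude exceeds a tolerance mark regions needing further refinement. This is a generic stdlib-only sketch, not the authors' scheme; the function names are illustrative.

```python
def haar_step(u):
    """One Haar wavelet analysis step: averages and details of sample pairs."""
    s = [(u[2*i] + u[2*i+1]) / 2 for i in range(len(u) // 2)]  # coarse averages
    d = [(u[2*i] - u[2*i+1]) / 2 for i in range(len(u) // 2)]  # detail coefficients
    return s, d

def refine_regions(u, tol):
    """Indices (at the coarse level) whose detail magnitude exceeds tol."""
    _, d = haar_step(u)
    return [i for i, di in enumerate(d) if abs(di) > tol]

u = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0]   # a sharp step at the midpoint
print(refine_regions(u, tol=0.1))               # only the pair straddling the step is flagged
```

In a real multiresolution solver this step would be applied recursively, refining the expansion only inside the flagged regions.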
Transient thermal-structural analysis using adaptive unstructured remeshing and mesh movement
NASA Technical Reports Server (NTRS)
Dechaumphai, Pramote; Morgan, Kenneth
1990-01-01
An adaptive unstructured remeshing technique is applied to transient thermal-structural analysis. The effectiveness of the technique, together with the finite element method and an error estimation technique, is evaluated by two applications which have exact solutions: (1) the steady-state thermal analysis of a plate subjected to a highly localized surface heating, and (2) the transient thermal-structural analysis of a simulated convectively cooled leading edge subjected to a translating heat source. These applications demonstrate that the remeshing technique significantly reduces the problem size as well as the analysis solution error as compared to the results produced using standard structured meshes.
NASA Astrophysics Data System (ADS)
Bajc, Iztok; Hecht, Frédéric; Žumer, Slobodan
2016-09-01
This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral meshes by a posteriori error estimates based on metrics derived from the Hessian of a solution. The study is made on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed, with the steepest descent method stepping in when needed. Aspects relating to the driving of mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each single mesh adaptation, and the precision of the correlated remeshing. Each factor is represented by a parameter, with its values possibly varying for every new mesh adaptation. We empirically show that the overall convergence time of the algorithm can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testing and debugging done within this work on simulations of systems of nematic colloids contributed substantially to upgrading the 3D meshing capabilities of an open-source finite element-oriented programming language, as well as an external 3D remeshing module.
NASA Astrophysics Data System (ADS)
Guo, Zhikui; Chen, Chao; Tao, Chunhui
2016-04-01
Since 2007, four China Dayang cruises (CDCs) have been carried out to investigate polymetallic sulfides on the Southwest Indian Ridge (SWIR), acquiring both gravity data and bathymetry data along the corresponding survey lines (Tao et al., 2014). Sandwell et al. (2014) published a new global marine gravity model including free-air gravity data and its first-order vertical gradient (Vzz). Gravity data and its gradient can be used to extract unknown density-structure information (e.g. crustal thickness) beneath the surface of the earth, but they contain the effect of all mass below the observation point. How to obtain the accurate gravity effect, and its gradient, of known density structures (e.g. terrain) is therefore a key issue. Using the bathymetry data or the ETOPO1 model (http://www.ngdc.noaa.gov/mgg/global/global.html) at full resolution to calculate the terrain effect would require too much computation time. We aim to develop an efficient method that takes less time but still yields the desired accuracy. In this study, a constant-density polyhedral model is used to calculate the gravity field and its vertical gradient, based on the work of Tsoulis (2012). According to the attenuation of the gravity field with distance and the variance of the bathymetry, we present adaptive mesh refinement and coarsening strategies to merge global topography data with multi-beam bathymetry data. The local coarsening, or mesh size, depends on a user-defined accuracy and on terrain variation (Davis et al., 2011). To depict the terrain better, triangular and rectangular surface elements are used in the fine and coarse mesh, respectively. This strategy can also be applied in spherical coordinates at regional and global scales. Finally, we applied this method to calculate the Bouguer gravity anomaly (BGA), the mantle Bouguer anomaly (MBA) and their vertical gradients on the SWIR, and compared the results with previous results in the literature. Both synthetic model
NASA Technical Reports Server (NTRS)
Fasanella, Edwin L.; Jackson, Karen E.; Lyle, Karen H.; Spellman, Regina L.
2006-01-01
A study was performed to examine the influence of varying mesh density on an LS-DYNA simulation of a rectangular-shaped foam projectile impacting the space shuttle leading edge Panel 6. The shuttle leading-edge panels are fabricated of reinforced carbon-carbon (RCC) material. During the study, nine cases were executed with all possible combinations of coarse, baseline, and fine meshes of the foam and panel. For each simulation, the same material properties and impact conditions were specified and only the mesh density was varied. In the baseline model, the shell elements representing the RCC panel are approximately 0.2-in. on edge, whereas the foam elements are about 0.5-in. on edge. The element nominal edge-length for the baseline panel was halved to create a fine panel (0.1-in. edge length) mesh and doubled to create a coarse panel (0.4-in. edge length) mesh. In addition, the element nominal edge-length of the baseline foam projectile was halved (0.25-in. edge length) to create a fine foam mesh and doubled (1.0-in. edge length) to create a coarse foam mesh. The initial impact velocity of the foam was 775 ft/s. The simulations were executed in LS-DYNA for 6 ms of simulation time. Contour plots of resultant panel displacement and effective stress in the foam were compared at four discrete time intervals. Also, time-history responses of internal and kinetic energy of the panel, kinetic and hourglass energy of the foam, and resultant contact force were plotted to determine the influence of mesh density.
Shou, Guofa; Xia, Ling; Jiang, Mingfeng; Wei, Qing; Liu, Feng; Crozier, Stuart
2009-05-01
The boundary element method (BEM) is a commonly used numerical approach to solve biomedical electromagnetic volume conductor models such as ECG and EEG problems, in which only the interfaces between various tissue regions need to be modeled. The quality of the boundary element discretization affects the accuracy of the numerical solution, and the construction of high-quality meshes is time-consuming and always problem-dependent. Adaptive BEM (aBEM) has been developed and validated as an effective method to tackle such problems in electromagnetic and mechanical fields, but has not been extensively investigated for the ECG problem. In this paper, h aBEM, which produces refined meshes through adaptive adjustment of the elements' connectivity, is investigated for the ECG forward problem. Two different refinement schemes, adding one new node (SH1) and adding three new nodes (SH3), are applied in the h aBEM calculation. To save computational time, h-hierarchical aBEM is also used, through the introduction of h-hierarchical shape functions for SH3. The algorithms were evaluated with a single-layer homogeneous sphere model with assumed dipole sources and a geometrically realistic heart-torso model. The simulations showed that h aBEM can produce better mesh results and is more accurate and effective than the traditional BEM for the ECG problem. With the same refinement scheme SH3, the h-hierarchical aBEM can reduce computational costs by about 9% compared to the standard h aBEM.
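The SH3 scheme adds three new nodes per refined element; a common geometric realization (assumed here, since the abstract gives no formulas) places them at the edge midpoints, splitting one triangle into four children:

```python
def refine_sh3(tri):
    """Split a triangle (three (x, y) vertices) into four children by
    inserting its three edge midpoints, as in a midpoint (SH3-style) split."""
    a, b, c = tri
    mid = lambda p, q: ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    # three corner children plus the central child formed by the midpoints
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

kids = refine_sh3(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))
# four children, each covering a quarter of the parent's area
```

Midpoint splitting has the useful property that child triangles inherit the parent's shape quality, so repeated refinement does not degrade the mesh.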
Computations of two- and three-dimensional flows using an adaptive mesh
NASA Astrophysics Data System (ADS)
Nakahashi, K.
1985-11-01
Two- and three-dimensional, steady and unsteady viscous flow fields are numerically simulated by solving the Navier-Stokes equations. A solution-adaptive-grid method is used to redistribute the grid points so as to improve the resolution of shock waves and shear layers without increasing the number of grid points. Flow fields considered include two-dimensional transonic flows about airfoils, two- and three-dimensional supersonic flow past an aerodynamic afterbody with a propulsive jet, supersonic flow over a blunt fin mounted on a wall, and supersonic flow over a bump. The computed results demonstrate a significant improvement in accuracy and quality of the solutions owing to the solution-adaptive mesh.
A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method
NASA Astrophysics Data System (ADS)
Bush, I. J.; Todorov, I. T.; Smith, W.
2006-09-01
The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowtz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe software adaptations undertaken to import this functionality and provide a review of its performance.
Improved Simulation of Electrodiffusion in the Node of Ranvier by Mesh Adaptation
Dione, Ibrahima; Briffard, Thomas; Doyon, Nicolas
2016-01-01
In neural structures with complex geometries, numerical resolution of the Poisson-Nernst-Planck (PNP) equations is necessary to accurately model electrodiffusion. This formalism allows one to describe ionic concentrations and the electric field (even away from the membrane) with arbitrary spatial and temporal resolution, which is impossible to achieve with models relying on cable theory. However, solving the PNP equations on complex geometries involves handling intricate numerical difficulties related to the spatial discretization, the temporal discretization or the resolution of the linearized systems, often requiring large computational resources, which has limited the use of this approach. In the present paper, we investigate the best ways to use the finite element method (FEM) to solve the PNP equations on domains with discontinuous properties (such as occur at the membrane-cytoplasm interface). 1) Using a simple 2D geometry to allow comparison with an analytical solution, we show that mesh adaptation is a very (if not the most) efficient way to obtain accurate solutions while limiting the computational effort. 2) We use mesh adaptation in a 3D model of a node of Ranvier to reveal details of the solution which are nearly impossible to resolve with other modelling techniques. For instance, we exhibit a nonlinear distribution of the electric potential within the membrane due to the nonuniform width of the myelin and investigate its impact on the spatial profile of the electric field in the Debye layer. PMID:27548674
Improved Simulation of Electrodiffusion in the Node of Ranvier by Mesh Adaptation.
Dione, Ibrahima; Deteix, Jean; Briffard, Thomas; Chamberland, Eric; Doyon, Nicolas
2016-01-01
In neural structures with complex geometries, numerical resolution of the Poisson-Nernst-Planck (PNP) equations is necessary to accurately model electrodiffusion. This formalism allows one to describe ionic concentrations and the electric field (even away from the membrane) with arbitrary spatial and temporal resolution, which is impossible to achieve with models relying on cable theory. However, solving the PNP equations on complex geometries involves handling intricate numerical difficulties related to the spatial discretization, the temporal discretization or the resolution of the linearized systems, often requiring large computational resources, which has limited the use of this approach. In the present paper, we investigate the best ways to use the finite element method (FEM) to solve the PNP equations on domains with discontinuous properties (such as occur at the membrane-cytoplasm interface). 1) Using a simple 2D geometry to allow comparison with an analytical solution, we show that mesh adaptation is a very (if not the most) efficient way to obtain accurate solutions while limiting the computational effort. 2) We use mesh adaptation in a 3D model of a node of Ranvier to reveal details of the solution which are nearly impossible to resolve with other modelling techniques. For instance, we exhibit a nonlinear distribution of the electric potential within the membrane due to the nonuniform width of the myelin and investigate its impact on the spatial profile of the electric field in the Debye layer.
Multi-dimensional upwind fluctuation splitting scheme with mesh adaption for hypersonic viscous flow
NASA Astrophysics Data System (ADS)
Wood, William Alfred, III
production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid-converged skin friction coefficients with only five points in the boundary layer for this case. A viscous Mach 17.6 (perfect gas) cylinder case demonstrates solution monotonicity and heat transfer capability with the fluctuation splitting scheme. While fluctuation splitting is recommended over DMFDSFV, the difference in performance between the schemes is not so great as to render DMFDSFV obsolete. The second half of the dissertation develops a local, compact, anisotropic unstructured mesh adaption scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. This alignment behavior stands in contrast to the curvature-clustering nature of the local, anisotropic unstructured adaption strategy based upon a posteriori error estimation that is used for comparison. The characteristic alignment is most pronounced for linear advection, with reduced improvement seen for the more complex nonlinear advection and advection-diffusion cases. The adaption strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization. The system test case for the adaption strategy is a sting-mounted capsule at Mach-10 wind tunnel conditions, considered in both two-dimensional and axisymmetric configurations. For this complex flowfield the adaption results are disappointing, since feature alignment does not emerge from the local operations. Aggressive adaption is shown to result in a loss of robustness for the solver, particularly in the bow shock/stagnation point interaction region. Reducing the adaption strength maintains solution robustness but fails to produce significant improvement in the surface heat transfer predictions.
NASA Astrophysics Data System (ADS)
Goffin, Mark A.; Baker, Christopher M. J.; Buchan, Andrew G.; Pain, Christopher C.; Eaton, Matthew D.; Smith, Paul N.
2013-06-01
This article presents goal-based anisotropic adaptive methods for the finite element method applied to the Boltzmann transport equation. The neutron multiplication factor, k, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for k with directional dependence. General error estimators are derived for any given functional of the flux and applied to k to acquire the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residual respectively. The Hessian is used as an approximation of the interpolation error in the solution which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit from representing the flux from each energy group on a specifically optimised mesh. The k goal-based adaptive method was applied to three examples which illustrate the superior accuracy in criticality problems that can be obtained.
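The indicator construction above can be caricatured in one dimension. The sketch below is not the paper's formulation and all names are illustrative: it approximates each solution's Hessian by second differences, weights the forward Hessian by the dual residual and the dual Hessian by the forward residual, and combines the two indicator fields into a single metric via a pointwise maximum.

```python
def second_diff(u, h):
    """Second-difference approximation of u'' on a uniform grid of spacing h."""
    return [0.0] + [(u[i-1] - 2*u[i] + u[i+1]) / h**2
                    for i in range(1, len(u) - 1)] + [0.0]

def goal_indicator(u_fwd, u_dual, r_fwd, r_dual, h):
    """Combine dual-weighted forward and forward-weighted dual indicators."""
    h_fwd, h_dual = second_diff(u_fwd, h), second_diff(u_dual, h)
    fwd_ind = [abs(rd) * abs(hf) for rd, hf in zip(r_dual, h_fwd)]
    dual_ind = [abs(rf) * abs(hd) for rf, hd in zip(r_fwd, h_dual)]
    return [max(a, b) for a, b in zip(fwd_ind, dual_ind)]  # combined metric
```

In the anisotropic setting of the paper the Hessian is a matrix per node and the combination yields a metric tensor field rather than a scalar, but the weighting structure is the same.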
An Efficient Means of Adaptive Refinement Within Systems of Overset Grids
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
1996-01-01
An efficient means of adaptive refinement within systems of overset grids is presented. Problem domains are segregated into near-body and off-body fields. Near-body fields are discretized via overlapping body-fitted grids that extend only a short distance from body surfaces. Off-body fields are discretized via systems of overlapping uniform Cartesian grids of varying levels of refinement. A novel off-body grid generation and management scheme provides the mechanism for carrying out adaptive refinement of off-body flow dynamics and solid body motion. The scheme allows for very efficient use of memory resources, and for flow solvers and domain connectivity routines that can exploit the structure inherent in uniform Cartesian grids.
Query-driven visualization of time-varying adaptive mesh refinement data.
Gosink, Luke J; Anderson, John C; Bethel, E Wes; Joy, Kenneth I
2008-01-01
The visualization and analysis of AMR-based simulations is integral to the process of obtaining new insight in scientific research. We present a new method for performing query-driven visualization and analysis on AMR data, with specific emphasis on time-varying AMR data. Our work introduces a new method that directly addresses the dynamic spatial and temporal properties of AMR grids that challenge many existing visualization techniques. Further, we present the first implementation of query-driven visualization on the GPU that uses a GPU-based indexing structure to both answer queries and efficiently utilize GPU memory. We apply our method to two different science domains to demonstrate its broad applicability.
Using adaptive-mesh refinement in SCFT simulations of surfactant adsorption
NASA Astrophysics Data System (ADS)
Sides, Scott; Kumar, Rajeev; Jamroz, Ben; Crockett, Robert; Pletzer, Alex
2013-03-01
Adsorption of surfactants at interfaces is relevant to many applications such as detergents, adhesives, emulsions and ferrofluids. Atomistic simulations of interface adsorption are challenging due to the difficulty of modeling the wide range of length scales in these problems: the thin interface region in equilibrium with a large bulk region that serves as a reservoir for the adsorbed species. Self-consistent field theory (SCFT) has been extremely useful for studying the morphologies of dense block copolymer melts. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. However, even SCFT methods can be difficult to apply to systems in which small spatial regions require finer resolution than the rest of the simulation grid (e.g. interface adsorption and confinement). We will present results on interface adsorption simulations using PolySwift++, an object-oriented polymer SCFT simulation code, aided by the Tech-X Chompst library, which enables block-structured AMR calculations with PETSc.
FLY-FLASH: A Software Interface for Adaptive Mesh Refinement - Treecode Simulations
NASA Astrophysics Data System (ADS)
Comparato, M.; Antonuccio, V.; Becciani, U.; Dubey, A.; Plewa, T.; Sheeler, D.
We present an interface that allows the execution of cosmological simulations by combining the capabilities of two different codes: FLY, a parallel treecode for N-body simulations, and FLASH, a code for numerical hydrodynamics. This is achieved without heavy modification of either code, by means of an interface that handles the communication between them. The underlying hypothesis is that the two codes are only loosely coupled, i.e. they interact only by exchanging the information needed to build the gravitational potential.
Requirements for mesh resolution in 3D computational hemodynamics.
Prakash, S; Ethier, C R
2001-04-01
Computational techniques are widely used for studying large artery hemodynamics. Current trends favor analyzing flow in more anatomically realistic arteries. A significant obstacle to such analyses is generation of computational meshes that accurately resolve both the complex geometry and the physiologically relevant flow features. Here we examine, for a single arterial geometry, how velocity and wall shear stress patterns depend on mesh characteristics. A well-validated Navier-Stokes solver was used to simulate flow in an anatomically realistic human right coronary artery (RCA) using unstructured high-order tetrahedral finite element meshes. Velocities, wall shear stresses (WSS), and wall shear stress gradients were computed on a conventional "high-resolution" mesh series (60,000 to 160,000 velocity nodes) generated with a commercial meshing package. Similar calculations were then performed in a series of meshes generated through an adaptive mesh refinement (AMR) methodology. Mesh-independent velocity fields were not very difficult to obtain for both the conventional and adaptive mesh series. However, wall shear stress fields, and, in particular, wall shear stress gradient fields, were much more difficult to accurately resolve. The conventional (nonadaptive) mesh series did not show a consistent trend towards mesh-independence of WSS results. For the adaptive series, it required approximately 190,000 velocity nodes to reach an r.m.s. error in normalized WSS of less than 10 percent. Achieving mesh-independence in computed WSS fields requires a surprisingly large number of nodes, and is best approached through a systematic solution-adaptive mesh refinement technique. Calculations of WSS, and particularly WSS gradients, show appreciable errors even on meshes that appear to produce mesh-independent velocity fields.
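The mesh-independence criterion quoted above (r.m.s. error in normalized WSS under 10 percent) can be written down directly. The sketch below is illustrative, assuming the coarse- and fine-mesh fields are sampled at common points (the interpolation between meshes that a real comparison needs is omitted):

```python
import math

def rms_normalized_error(coarse, fine):
    """R.m.s. of the difference between two sampled fields, normalized by
    the r.m.s. magnitude of the reference (fine-mesh) field."""
    num = sum((c - f) ** 2 for c, f in zip(coarse, fine))
    den = sum(f ** 2 for f in fine)
    return math.sqrt(num / den)

# hypothetical WSS samples on a coarse mesh vs. the finest-mesh reference
wss_coarse = [1.0, 2.1, 2.9, 4.2]
wss_fine = [1.0, 2.0, 3.0, 4.0]
print(rms_normalized_error(wss_coarse, wss_fine))  # well under the 10% target
```

Tracking this number across a mesh series (as in the adaptive sequence above) is what reveals whether the WSS field, not just the velocity field, has converged.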
A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.
Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J
2009-11-28
In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
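The recursive time-stepping procedure described above can be sketched as follows, with an illustrative level representation (each level tracked only by its current time) and a refinement ratio of 2. A real implementation would also average fine data onto the coarse grid and apply flux corrections at the synchronization point; here the synchronization merely checks that the levels have reached the same time.

```python
def advance_hierarchy(levels, lvl, dt, ratio=2):
    """Advance level `lvl` by dt, then recursively subcycle all finer levels."""
    levels[lvl]["time"] += dt                 # one step on this level
    if lvl + 1 < len(levels):
        for _ in range(ratio):                # `ratio` substeps on the finer level
            advance_hierarchy(levels, lvl + 1, dt / ratio, ratio)
        # synchronization point: fine and coarse levels now coincide in time
        assert levels[lvl]["time"] == levels[lvl + 1]["time"]

hierarchy = [{"time": 0.0}, {"time": 0.0}, {"time": 0.0}]  # 3-level hierarchy
advance_hierarchy(hierarchy, 0, dt=1.0)       # finest level takes 4 substeps of 0.25
```

The recursion makes the simultaneous space-time refinement explicit: each finer level takes proportionally smaller steps, so the CFL constraint is satisfied level by level.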
An adaptive grid refinement strategy for the simulation of negative streamers
Montijn, C. (E-mail: carolynne.montijn@cwi.nl); Hundsdorfer, W. (E-mail: willem.hundsdorfer@cwi.nl); Ebert, U. (E-mail: ute.ebert@cwi.nl)
2006-12-10
The evolution of negative streamers during electric breakdown of a non-attaching gas can be described by a two-fluid model for electrons and positive ions. It consists of continuity equations for the charged particles including drift, diffusion and reaction in the local electric field, coupled to the Poisson equation for the electric potential. The model generates field enhancement and steep propagating ionization fronts at the tip of growing ionized filaments. An adaptive grid refinement method for the simulation of these structures is presented. It uses finite volume spatial discretizations and explicit time stepping, which allows the decoupling of the grids for the continuity equations from those for the Poisson equation. Standard refinement methods in which the refinement criterion is based on local error monitors fail due to the pulled character of the streamer front that propagates into a linearly unstable state. We present a refinement method which deals with all these features. Tests on one-dimensional streamer fronts as well as on three-dimensional streamers with cylindrical symmetry (hence effectively 2D for numerical purposes) are carried out successfully. Results on fine grids are presented; they show that such an adaptive grid method is needed to capture the streamer characteristics well. This refinement strategy enables us to adequately compute negative streamers in pure gases in the parameter regime where a physical instability appears: branching streamers.
Efficient low-bit-rate adaptive mesh-based motion compensation technique
NASA Astrophysics Data System (ADS)
Mahmoud, Hanan A.; Bayoumi, Magdy A.
2001-08-01
This paper proposes a two-stage global motion estimation method using a novel quadtree block-based motion estimation technique and an active mesh model. In the first stage, motion parameters are estimated by fitting block-based motion vectors computed using a new efficient quadtree technique that divides a frame into equilateral triangle blocks using the quadtree structure. Arbitrary partition shapes are achieved by allowing 4-to-1, 3-to-1, and 2-to-1 merging of sibling blocks having the same motion vector. In the second stage, the mesh is constructed using an adaptive triangulation procedure that places more triangles over areas with high motion content; these areas are estimated during the first stage. Finally, motion compensation is achieved using a novel algorithm, carried out by both the encoder and the decoder, that determines the optimal triangulation of the resultant partitions, followed by affine mapping at the encoder. Computer simulation results show that the proposed method gives better performance than conventional ones in terms of peak signal-to-noise ratio (PSNR) and compression ratio (CR).
6th International Meshing Roundtable '97
White, D.
1997-09-01
The goal of the 6th International Meshing Roundtable is to bring together researchers and developers from industry, academia, and government labs in a stimulating, open environment for the exchange of technical information related to the meshing process. In the past, the Roundtable has enjoyed significant participation from each of these groups from a wide variety of countries. The Roundtable will consist of technical presentations from contributed papers and abstracts, two invited speakers, and two invited panels of experts discussing topics related to the development and use of automatic mesh generation tools. In addition, this year we will feature a "Bring Your Best Mesh" competition and poster session to encourage discussion and participation from a wide variety of mesh generation tool users. The schedule and evening social events are designed to provide numerous opportunities for informal dialog. A proceedings will be published by Sandia National Laboratories and distributed at the Roundtable. In addition, papers of exceptionally high quality will be submitted to a special issue of the International Journal of Computational Geometry and Applications. Papers and one-page abstracts were sought that present original results on the meshing process. Potential topics include but are not limited to: unstructured triangular and tetrahedral mesh generation; unstructured quadrilateral and hexahedral mesh generation; automated blocking and structured mesh generation; mixed element meshing; surface mesh generation; geometry decomposition and clean-up techniques; geometry modification techniques related to meshing; adaptive mesh refinement and mesh quality control; mesh visualization; special purpose meshing algorithms for particular applications; theoretical or novel ideas with practical potential; and technical presentations from industrial researchers.
Goal functional evaluations for phase-field fracture using PU-based DWR mesh adaptivity
NASA Astrophysics Data System (ADS)
Wick, Thomas
2016-06-01
In this study, a posteriori error estimation and goal-oriented mesh adaptivity are developed for phase-field fracture propagation. Goal functionals are computed with the dual-weighted residual (DWR) method, which is realized by a recently introduced novel localization technique based on a partition-of-unity (PU). This technique is straightforward to apply since the weak residual is used. The influence of neighboring cells is gathered by the PU. Consequently, neither strong residuals nor jumps over element edges are required. Therefore, this approach facilitates the application of the DWR method to coupled (nonlinear) multiphysics problems such as fracture propagation. These developments then allow for a systematic investigation of the discretization error for certain quantities of interest. Specifically, our focus on the relationship between the phase-field regularization and the spatial discretization parameter in terms of goal functional evaluations is novel.
Multiphase flow modelling of explosive volcanic eruptions using adaptive unstructured meshes
NASA Astrophysics Data System (ADS)
Jacobs, Christian T.; Collins, Gareth S.; Piggott, Matthew D.; Kramer, Stephan C.
2014-05-01
Explosive volcanic eruptions generate highly energetic plumes of hot gas and ash particles that produce diagnostic deposits and pose an extreme environmental hazard. The formation, dispersion and collapse of these volcanic plumes are complex multiscale processes that are extremely challenging to simulate numerically. Accurate description of particle and droplet aggregation, movement and settling requires a model capable of capturing the dynamics on a range of scales (from cm to km) and a model that can correctly describe the important multiphase interactions that take place. However, even the most advanced models of eruption dynamics to date are restricted by the fixed mesh-based approaches that they employ. The research presented herein describes the development of a compressible multiphase flow model within Fluidity, a combined finite element / control volume computational fluid dynamics (CFD) code, for the study of explosive volcanic eruptions. Fluidity adopts a state-of-the-art adaptive unstructured mesh-based approach to discretise the domain and focus numerical resolution only in areas important to the dynamics, while decreasing resolution where it is not needed as a simulation progresses. This allows the accurate but economical representation of the flow dynamics throughout time, and potentially allows large multi-scale problems to become tractable in complex 3D domains. The multiphase flow model is verified with the method of manufactured solutions, and validated by simulating published gas-solid shock tube experiments and comparing the numerical results against pressure gauge data. The application of the model considers an idealised 7 km by 7 km domain in which the violent eruption of hot gas and volcanic ash high into the atmosphere is simulated. Although the simulations do not correspond to a particular eruption case study, the key flow features observed in a typical explosive eruption event are successfully captured. These include a shock wave resulting
Gutowski, William J.; Prusa, Joseph M.; Smolarkiewicz, Piotr K.
2012-05-08
This project had goals of advancing the performance capabilities of the numerical general circulation model EULAG and using it to produce a fully operational atmospheric global climate model (AGCM) that can employ either static or dynamic grid stretching for targeted phenomena. The resulting AGCM combined EULAG's advanced dynamics core with the "physics" of the NCAR Community Atmospheric Model (CAM). Effort discussed below shows how we improved model performance and tested both EULAG and the coupled CAM-EULAG in several ways to demonstrate the grid stretching and ability to simulate very well a wide range of scales, that is, multi-scale capability. We leveraged our effort through interaction with an international EULAG community that has collectively developed new features and applications of EULAG, which we exploited for our own work summarized here. Overall, the work contributed to over 40 peer-reviewed publications and over 70 conference/workshop/seminar presentations, many of them invited. 3a. EULAG Advances EULAG is a non-hydrostatic, parallel computational model for all-scale geophysical flows. EULAG's name derives from its two computational options: EULerian (flux form) or semi-LAGrangian (advective form). The model combines nonoscillatory forward-in-time (NFT) numerical algorithms with a robust elliptic Krylov solver. A signature feature of EULAG is that it is formulated in generalized time-dependent curvilinear coordinates. In particular, this enables grid adaptivity. In total, these features give EULAG novel advantages over many existing dynamical cores. For EULAG itself, numerical advances included refining boundary conditions and filters for optimizing model performance in polar regions. We also added flexibility to the model's underlying formulation, allowing it to work with the pseudo-compressible equation set of Durran in addition to EULAG's standard anelastic formulation. Work in collaboration with others also extended the demonstrated range of
CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION
Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.
2011-06-01
We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
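The three-substep operator splitting outlined in this abstract can be illustrated with a toy problem. The two exponential updates below are stand-ins for the CRASH substeps (they are not the radiation-hydrodynamics operators); because these stand-ins commute, Lie splitting reproduces the exact solution:

```python
import math

# Two commuting linear operators stand in for the split physics substeps.
lam1, lam2 = -1.0, -0.5
u, dt, T = 1.0, 0.1, 1.0

for _ in range(int(T / dt)):
    u *= math.exp(lam1 * dt)  # substep 1 (e.g. explicit hydro step)
    u *= math.exp(lam2 * dt)  # substep 2 (e.g. implicit diffusion solve)

exact = math.exp((lam1 + lam2) * T)
print(abs(u - exact) < 1e-9)  # True: splitting is exact for commuting operators
```

For non-commuting operators (the realistic case), Lie splitting incurs an O(dt) splitting error per unit time, which is why the ordering of the substeps matters in practice.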
Development of Adaptive Model Refinement (AMoR) for Multiphysics and Multifidelity Problems
Turinsky, Paul
2015-02-09
This project investigated the development and utilization of Adaptive Model Refinement (AMoR) for nuclear systems simulation applications. AMoR refers to the utilization of several models of physical phenomena which differ in prediction fidelity. If the highest fidelity model is judged to always provide or exceed the desired fidelity, then, provided one can determine the difference in a Quantity of Interest (QoI) between the highest fidelity model and lower fidelity models, one can select the lowest fidelity model that still delivers the desired accuracy in the QoI. Assuming lower fidelity models require less computational resources, in this manner computational efficiency can be realized provided the QoI value can be accurately and efficiently evaluated. This work utilized Generalized Perturbation Theory (GPT) to evaluate the QoI, by convoluting the GPT solution with the residual of the highest fidelity model determined using the solution from lower fidelity models. Specifically, a reactor core neutronics problem and a thermal-hydraulics problem were studied to develop and utilize AMoR. The highest fidelity neutronics model was based upon the 3D space-time, two-group, nodal diffusion equations as solved in the NESTLE computer code. Added to the NESTLE code was the ability to determine the time-dependent GPT neutron flux. The lower fidelity neutronics model was based upon the point kinetics equations along with utilization of a prolongation operator to determine the 3D space-time, two-group flux. The highest fidelity thermal-hydraulics model was based upon the space-time equations governing fluid flow in a closed channel around a heat-generating fuel rod. The Homogeneous Equilibrium Mixture (HEM) model was used for the fluid and the Finite Difference Method was applied to both the coolant and fuel pin energy conservation equations. The lower fidelity thermal-hydraulic model was based upon the same equations as used for the highest fidelity model but now with coarse spatial
NASA Technical Reports Server (NTRS)
Steger, J. L.; Dougherty, F. C.; Benek, J. A.
1983-01-01
A mesh system composed of multiple overset body-conforming grids is described for adapting finite-difference procedures to complex aircraft configurations. In this so-called 'chimera mesh,' a major grid is generated about a main component of the configuration and overset minor grids are used to resolve all other features. Methods for connecting overset multiple grids and modifications of flow-simulation algorithms are discussed. Computational tests in two dimensions indicate that the use of multiple overset grids can simplify the task of grid generation without an adverse effect on flow-field algorithms and computer code complexity.
NASA Astrophysics Data System (ADS)
Lu, Yao; Ye, Hongwei; Xu, Yuesheng; Hu, Xiaofei; Vogelsang, Levon; Shen, Lixin; Feiglin, David; Lipson, Edward; Krol, Andrzej
2008-03-01
To improve the speed and quality of ordered-subsets expectation-maximization (OSEM) SPECT reconstruction, we have implemented a content-adaptive, singularity-based, mesh-domain, image model (CASMIM) with an accurate algorithm for estimation of the mesh-domain system matrix. A preliminary image, used to initialize CASMIM reconstruction, was obtained using pixel-domain OSEM. The mesh-domain representation of the image was produced by a 2D wavelet transform followed by Delaunay triangulation to obtain joint estimation of nodal locations and their activity values. A system matrix with attenuation compensation was investigated. Digital chest phantom SPECT was simulated and reconstructed. The quality of images reconstructed with OSEM-CASMIM is comparable to that from pixel-domain OSEM, but images are obtained five times faster by the CASMIM method.
Mesh Optimization for Monte Carlo-Based Optical Tomography
Edmans, Andrew; Intes, Xavier
2015-01-01
Mesh-based Monte Carlo techniques for optical imaging allow for accurate modeling of light propagation in complex biological tissues. Recently, they have been developed within an efficient computational framework to be used as a forward model in optical tomography. However, commonly employed adaptive mesh discretization techniques have not yet been implemented for Monte Carlo based tomography. Herein, we propose a methodology to optimize the mesh discretization and analytically rescale the associated Jacobian based on the characteristics of the forward model. We demonstrate that this method maintains the accuracy of the forward model even in the case of temporal data sets while allowing for significant coarsening or refinement of the mesh. PMID:26566523
Zhang, Yu; Prakash, Edmond C; Sung, Eric
2004-01-01
This paper presents a new physically-based 3D facial model based on anatomical knowledge which provides high fidelity for facial expression animation while optimizing the computation. Our facial model has a multilayer biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators, and an underlying skull structure. In contrast to existing mass-spring-damper (MSD) facial models, our dynamic skin model uses nonlinear springs to directly simulate the nonlinear visco-elastic behavior of soft tissue, and a new kind of edge repulsion spring is developed to prevent collapse of the skin model. Different types of muscle models have been developed to simulate the distribution of the muscle force applied on the skin due to muscle contraction. The presence of the skull advantageously constrains the skin movements, resulting in more accurate facial deformation, and also guides the interactive placement of facial muscles. The governing dynamics are computed using a local semi-implicit ODE solver. In the dynamic simulation, an adaptive refinement scheme automatically adjusts the local resolution wherever potential inaccuracies are detected, depending on the local deformation. The method, in effect, ensures the required speedup by concentrating computational time only where needed while ensuring realistic behavior within a predefined error threshold. This mechanism allows more pleasing animation results to be produced at a reduced computational cost.
NASA Astrophysics Data System (ADS)
Zhang, Shuang; Parashar, Manish; Zabusky, Norman
2001-11-01
We merge the PPM compressible algorithm (VH-1 (J. M. Blondin and J. Hawley, Virginia Hydrodynamics Code. http://wonka.physics.ncsu.edu/pub/VH-1/index.html)) with the new Grid Adaptive Computation Engine (GrACE (M. Parashar, Grid Adaptive Computational Engine. 2001. http://www.caip.rutgers.edu/ parashar/TASSL/Projects/GrACE/Gmain.html)). The latter environment uses the Berger-Oliger AMR algorithm and has many high-performance computation features such as data parallelism, data and computation locality, etc. We discuss the performance (scaling) resulting from examining the space of four parameters: top coarse level resolution; number of refinement levels; number of processors; duration of calculation. We validate the new code by applying it to the 2D shock-curtain interaction problem (N. J. Zabusky and S. Zhang. "Shock - planar curtain interactions in 2D: Emergence of vortex double layers, vortex projectiles and decaying stratified turbulence." Revised submitted Physics of Fluids, July, 2001.). We discuss the visualization and quantification of AMR data sets.
An adaptive computation mesh for the solution of singular perturbation problems
NASA Technical Reports Server (NTRS)
Brackbill, J. U.; Saltzman, J.
1980-01-01
In singular perturbation problems, control of zone size variation can affect the effort required to obtain accurate, numerical solutions of finite difference equations. The mesh is generated by the solution of potential equations. Numerical results for a singular perturbation problem in two dimensions are presented. The mesh was used in calculations of resistive magnetohydrodynamic flow in two dimensions.
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.; Nixon, David (Technical Monitor)
1998-01-01
The work presents a new on-the-fly domain decomposition technique for mapping grids and solution algorithms to parallel machines, applicable to both shared-memory and message-passing architectures. It will be demonstrated on the Cray T3E, HP Exemplar, and SGI Origin 2000. Computing time has been secured on all these platforms. The decomposition technique is an outgrowth of techniques used in computational physics for simulations of N-body problems and the event horizons of black holes, and has not been previously used by the CFD community. Since the technique offers on-the-fly partitioning, it offers a substantial increase in flexibility for computing in heterogeneous environments, where the number of available processors may not be known at the time of job submission. In addition, since it is dynamic, it permits the job to be repartitioned without global communication in cases where additional processors become available after the simulation has begun, or in cases where dynamic mesh adaptation changes the mesh size during the course of a simulation. The platform for this partitioning strategy is a completely new Cartesian Euler solver targeted at parallel machines, which may be used in conjunction with Ames' "Cart3D" arbitrary geometry simulation package.
NASA Technical Reports Server (NTRS)
Kleb, William L.; Batina, John T.; Williams, Marc H.
1990-01-01
A temporal adaptive algorithm for the time-integration of the two-dimensional Euler or Navier-Stokes equations is presented. The flow solver involves an upwind flux-split spatial discretization for the convective terms and central differencing for the shear-stress and heat flux terms on an unstructured mesh of triangles. The temporal adaptive algorithm is a time-accurate integration procedure which allows flows with high spatial and temporal gradients to be computed efficiently by advancing each grid cell near its maximum allowable time step. Results indicate that an appreciable computational savings can be achieved for both inviscid and viscous unsteady airfoil problems using unstructured meshes without degrading spatial or temporal accuracy.
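The savings from advancing each cell near its own maximum allowable step can be made concrete by counting cell-updates. The time-step limits below are hypothetical numbers chosen for illustration, not values from the paper:

```python
import math

def steps_needed(dt_limits, T, local=True):
    """Count cell-updates needed to reach time T.

    local=True:  each cell steps at its own stability limit.
    local=False: every cell uses the global minimum step.
    """
    if local:
        return sum(math.ceil(T / dt) for dt in dt_limits)
    return len(dt_limits) * math.ceil(T / min(dt_limits))

limits = [0.1, 0.4, 0.8, 0.8]  # hypothetical per-cell stable time steps
print(steps_needed(limits, 8.0))               # 120 updates with local stepping
print(steps_needed(limits, 8.0, local=False))  # 320 updates with the global step
```

A real temporal-adaptive scheme must also keep neighbouring cells time-consistent at their interfaces, which this simple count ignores.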
NASA Astrophysics Data System (ADS)
Adam, A.; Pavlidis, D.; Percival, J. R.; Salinas, P.; Xie, Z.; Fang, F.; Pain, C. C.; Muggeridge, A. H.; Jackson, M. D.
2016-09-01
A general, higher-order, conservative and bounded interpolation for the dynamic and adaptive meshing of control-volume fields dual to continuous and discontinuous finite element representations is presented. Existing techniques such as node-wise interpolation are not conservative and do not readily generalise to discontinuous fields, whilst conservative methods such as Grandy interpolation are often too diffusive. The new method uses control-volume Galerkin projection to interpolate between control-volume fields. Bounded solutions are ensured by using a post-interpolation diffusive correction. Example applications of the method to interface capturing during advection and also to the modelling of multiphase porous media flow are presented to demonstrate the generality and robustness of the approach.
NASA Astrophysics Data System (ADS)
Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.
2015-10-01
An integrated method of advanced anisotropic hr-adaptive mesh and discretization numerical techniques has been applied, for the first time, to the modelling of multiscale advection-diffusion problems; it is based on a discontinuous Galerkin/control volume discretization on unstructured meshes. Compared with existing air quality models, which are typically based on static structured grids using a local nesting technique, the anisotropic hr-adaptive model has the advantage that it can adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) advection phenomena. Comparisons have been made between the results obtained using uniform resolution meshes and anisotropic adaptive resolution meshes. Performance achieved in 3-D simulation of power plant plumes indicates that this new adaptive multiscale model has the potential to provide accurate air quality modelling solutions effectively.
NASA Astrophysics Data System (ADS)
Rosenberg, Duane; Fournier, Aimé; Fischer, Paul; Pouquet, Annick
2006-06-01
An object-oriented geophysical and astrophysical spectral-element adaptive refinement (GASpAR) code is introduced. Like most spectral-element codes, GASpAR combines finite-element efficiency with spectral-method accuracy. It is also designed to be flexible enough for a range of geophysics and astrophysics applications where turbulence or other complex multiscale problems arise. The formalism accommodates both conforming and non-conforming elements. Several aspects of this code derive from existing methods, but here are synthesized into a new formulation of dynamic adaptive refinement (DARe) of non-conforming h-type. As a demonstration of the code, several new 2D test cases are introduced that have time-dependent analytic solutions and exhibit localized flow features, including the 2D Burgers equation with straight, curved-radial and oblique-colliding fronts. These are proposed as standard test problems for comparable DARe codes. Quantitative errors are reported for 2D spatial and temporal convergence of DARe.
SU-D-207-04: GPU-Based 4D Cone-Beam CT Reconstruction Using Adaptive Meshing Method
Zhong, Z; Gu, X; Iyengar, P; Mao, W; Wang, J; Guo, X
2015-06-15
Purpose: Due to the limited number of projections at each phase, the image quality of a four-dimensional cone-beam CT (4D-CBCT) is often degraded, which decreases the accuracy of subsequent motion modeling. One of the promising methods is the simultaneous motion estimation and image reconstruction (SMEIR) approach. The objective of this work is to enhance the computational speed of the SMEIR algorithm using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the tetrahedral mesh based on the features of a reference phase 4D-CBCT, so that the deformation can be well captured and accurately diffused from the mesh vertices to voxels of the image volume. After the mesh generation, the updated motion model and other phases of 4D-CBCT can be obtained by matching the 4D-CBCT projection images at each phase with the corresponding forward projections of the deformed reference phase of 4D-CBCT. The entire process of this 4D-CBCT reconstruction method is implemented on GPU, significantly increasing the computational efficiency due to its tremendous parallel computing ability. Results: A 4D XCAT digital phantom was used to test the proposed mesh-based image reconstruction algorithm. The results show that both bone structures and the inside of the lung are well preserved and the tumor position is well captured. Compared to the previous voxel-based CPU implementation of SMEIR, the proposed method is about 157 times faster for reconstructing a 10-phase 4D-CBCT with dimensions 256×256×150. Conclusion: The GPU-based parallel 4D-CBCT reconstruction method uses the feature-based mesh for estimating the motion model and demonstrates image results equivalent to the previous voxel-based SMEIR approach, with significantly improved computational speed.
Numerical Modelling of Volcanic Ash Settling in Water Using Adaptive Unstructured Meshes
NASA Astrophysics Data System (ADS)
Jacobs, C. T.; Collins, G. S.; Piggott, M. D.; Kramer, S. C.; Wilson, C. R.
2011-12-01
At the bottom of the world's oceans lies layer after layer of ash deposited from past volcanic eruptions. Correct interpretation of these layers can provide important constraints on the duration and frequency of volcanism, but requires a full understanding of the complex multi-phase settling and deposition process. Analogue experiments of tephra settling through a tank of water demonstrate that small ash particles can either settle individually, or collectively as a gravitationally unstable ash-laden plume. These plumes are generated when the concentration of particles exceeds a certain threshold such that the density of the tephra-water mixture is sufficiently large relative to the underlying particle-free water for a gravitational Rayleigh-Taylor instability to develop. These ash-laden plumes are observed to descend as a vertical density current at a velocity much greater than that of single particles, which has important implications for the emplacement of tephra deposits on the seabed. To extend the results of laboratory experiments to large scales and explore the conditions under which vertical density currents may form and persist, we have developed a multi-phase extension to Fluidity, a combined finite element / control volume CFD code that uses adaptive unstructured meshes. As a model validation, we present two- and three-dimensional simulations of tephra plume formation in a water tank that replicate laboratory experiments (Carey, 1997, doi:10.1130/0091-7613(1997)025<0839:IOCSOT>2.3.CO;2). An inflow boundary condition at the top of the domain allows particles to flux in at a constant rate of 0.472 g m-2 s-1, forming a near-surface layer of tephra particles, which initially settle individually at the predicted Stokes velocity of 1.7 mm s-1. As more tephra enters the water and the particle concentration increases, the layer eventually becomes unstable and plumes begin to form, descending with velocities more than ten times greater than those of individual
Axisymmetric modeling of cometary mass loading on an adaptively refined grid: MHD results
NASA Technical Reports Server (NTRS)
Gombosi, Tamas I.; Powell, Kenneth G.; De Zeeuw, Darren L.
1994-01-01
The first results of an axisymmetric magnetohydrodynamic (MHD) model of the interaction of an expanding cometary atmosphere with the solar wind are presented. The model assumes that far upstream the plasma flow lines are parallel to the magnetic field vector. The effects of mass loading and ion-neutral friction are taken into account by the governing equations, which are solved on an adaptively refined unstructured grid using a Monotone Upstream Centered Schemes for Conservative Laws (MUSCL)-type numerical technique. The combination of the adaptive refinement with the MUSCL scheme allows the entire cometary atmosphere to be modeled, while still resolving both the shock and the near nucleus of the comet. The main findings are the following: (1) A shock is formed approximately 0.45 Mkm upstream of the comet (its location is controlled by the sonic and Alfvenic Mach numbers of the ambient solar wind flow and by the cometary mass addition rate). (2) A contact surface is formed approximately 5,600 km upstream of the nucleus separating an outward expanding cometary ionosphere from the nearly stagnating solar wind flow. The location of the contact surface is controlled by the upstream flow conditions, the mass loading rate and the ion-neutral drag. The contact surface is also the boundary of the diamagnetic cavity. (3) A closed inner shock terminates the supersonic expansion of the cometary ionosphere. This inner shock is closer to the nucleus on the dayside than on the nightside.
Numerical investigation of BB-AMR scheme using entropy production as refinement criterion
NASA Astrophysics Data System (ADS)
Altazin, Thomas; Ersoy, Mehmet; Golay, Frédéric; Sous, Damien; Yushchenko, Lyudmyla
2016-03-01
In this work, a parallel finite volume scheme on unstructured meshes is applied to fluid flow for multidimensional hyperbolic systems of conservation laws. It is based on a block-based adaptive mesh refinement strategy which allows quick meshing and easy parallelisation. As a continuation and extension of a previous work, the numerical density of entropy production is used as the mesh refinement criterion, combined with a local time-stepping method to preserve the computational time. We then numerically investigate its efficiency through several test cases, with comparison against exact solutions or experimental data.
Lipnikov, Konstantin; Agouzal, Abdellatif; Vassilevski, Yuri
2009-01-01
We present a new methodology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^(-1) and the gradient of error is proportional to N_h^(-1/2), which are the optimal asymptotics. The methodology is verified with numerical experiments.
Dynamic Load Balancing for Adaptive Unstructured Grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Dynamic mesh adaptation on unstructured grids is a powerful tool for computing unsteady three-dimensional problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture phenomena of interest, such procedures make standard computational methods more cost effective. Highly refined meshes are required to accurately capture shock waves, contact discontinuities, vortices, and shear layers in fluid flow problems. Adaptive meshes have also proved to be useful in several other areas of computational science and engineering like computer vision and graphics, semiconductor device modeling, and structural mechanics. Local mesh adaptation provides the opportunity to obtain solutions that are comparable to those obtained on globally-refined grids but at a much lower cost. Additional information is contained in the original extended abstract.
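The local refine/coarsen cycle described above can be illustrated with a minimal one-dimensional sketch (a generic illustration, not the code the abstract refers to): only cells whose error indicator exceeds a threshold are split, so sharp features are resolved without paying for a globally refined mesh.

```python
# Generic 1D illustration of local mesh adaptation: only cells whose error
# indicator exceeds a threshold are split, so sharp features are resolved
# without refining the whole mesh.

def adapt(cells, indicator, threshold):
    """cells: list of (x_left, x_right) intervals; indicator(cell) -> float."""
    new_cells = []
    for cell in cells:
        if indicator(cell) > threshold:
            a, b = cell
            mid = 0.5 * (a + b)
            new_cells += [(a, mid), (mid, b)]    # refine locally
        else:
            new_cells.append(cell)               # keep the coarse cell
    return new_cells

# A sharp feature at x = 0.5 refines only the two cells that contain it:
cells = [(i / 4.0, (i + 1) / 4.0) for i in range(4)]
sharp = lambda c: 1.0 if c[0] <= 0.5 <= c[1] else 0.0
print(len(adapt(cells, sharp, 0.5)))  # 6, versus 8 for global refinement
```

Repeating the pass drives resolution toward the feature, which is the cost argument the abstract makes: accuracy comparable to a globally refined grid at a fraction of the cell count.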
NASA Astrophysics Data System (ADS)
Weller, Hilary; Browne, Philip; Budd, Chris; Cullen, Mike
2016-03-01
An equation of Monge-Ampère type has, for the first time, been solved numerically on the surface of the sphere in order to generate optimally transported (OT) meshes, equidistributed with respect to a monitor function. Optimal transport generates meshes that keep the same connectivity as the original mesh, making them suitable for r-adaptive simulations, in which the equations of motion can be solved in a moving frame of reference in order to avoid mapping the solution between old and new meshes and to avoid load balancing problems on parallel computers. The semi-implicit solution of the Monge-Ampère type equation involves a new linearisation of the Hessian term, and exponential maps are used to map from old to new meshes on the sphere. The determinant of the Hessian is evaluated as the change in volume between old and new mesh cells, rather than using numerical approximations to the gradients. OT meshes are generated to compare with centroidal Voronoi tessellations on the sphere and are found to have advantages and disadvantages; OT equidistribution is more accurate, the number of iterations to convergence is independent of the mesh size, face skewness is reduced and the connectivity does not change. However, anisotropy is higher and the OT meshes are non-orthogonal. It is shown that optimal transport on the sphere leads to meshes that do not tangle. However, tangling can be introduced by numerical errors in calculating the gradient of the mesh potential. Methods for alleviating this problem are explored. Finally, OT meshes are generated using observed precipitation as a monitor function, in order to demonstrate the potential power of the technique.
Crane, N K; Parsons, I D; Hjelmstad, K D
2002-03-21
Adaptive mesh refinement selectively subdivides the elements of a coarse user-supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundred processors.
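The interplay the abstract relies on, fine-grid smoothing plus coarse-grid correction, is the classic two-grid cycle. A textbook sketch for the 1D Poisson problem (not the paper's parallel elasticity solver) looks like this:

```python
import math

# Textbook two-grid cycle for the 1D Poisson problem -u'' = f on [0,1] with
# zero boundary values: smooth on the fine grid, restrict the residual,
# approximately solve the coarse error equation, prolong the correction back.
# A generic sketch, not the paper's parallel elasticity solver.

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    for _ in range(sweeps):
        v = u[:]
        for i in range(1, len(u) - 1):
            v[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = v
    return u

def residual(u, f, h):
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def two_grid(u, f, h):
    u = jacobi(u, f, h, sweeps=3)                       # pre-smooth
    r = residual(u, f, h)
    rc = [r[2 * i] for i in range(len(u) // 2 + 1)]     # restrict (injection)
    ec = jacobi([0.0] * len(rc), rc, 2 * h, sweeps=50)  # coarse "solve"
    for i in range(len(rc)):                            # prolong: even nodes
        u[2 * i] += ec[i]
    for i in range(1, len(u) - 1, 2):                   # interpolate odd nodes
        u[i] += 0.5 * (ec[i // 2] + ec[i // 2 + 1])
    return jacobi(u, f, h, sweeps=3)                    # post-smooth

n = 16
h = 1.0 / n
f = [math.pi ** 2 * math.sin(math.pi * i * h) for i in range(n + 1)]
u = two_grid([0.0] * (n + 1), f, h)
r = residual(u, f, h)
print(max(abs(v) for v in r) < 0.5 * max(abs(v) for v in f))  # True
```

The coarse level removes the smooth error component that smoothing alone barely touches; recursing on the coarse solve turns this two-grid cycle into a full V-cycle, and AMR naturally supplies the nested mesh hierarchy.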
NASA Astrophysics Data System (ADS)
Foks, Nathan Leon
The interpretation of geophysical data plays an important role in the analysis of potential field data in resource exploration industries. Two categories of interpretation techniques are discussed in this thesis: boundary detection and geophysical inversion. Fault or boundary detection is a method to interpret the locations of subsurface boundaries from measured data, while inversion is a computationally intensive method that provides 3D information about subsurface structure. My research focuses on these two aspects of interpretation techniques. First, I develop a method to aid in the interpretation of faults and boundaries from magnetic data. These processes are traditionally carried out using raster grid and image processing techniques. Instead, I use unstructured meshes of triangular facets that can extract inferred boundaries using mesh edges. Next, to address the computational issues of geophysical inversion, I develop an approach to reduce the number of data in a data set. The approach selects the data points according to a user-specified proxy for their signal content. The approach is performed in the data domain and requires no modification to existing inversion codes. This technique adds to the existing suite of compressive inversion algorithms. Finally, I develop an algorithm to invert gravity data for an interfacing surface using an unstructured mesh of triangular facets. A pertinent property of unstructured meshes is their flexibility at representing oblique, or arbitrarily oriented structures. This flexibility makes unstructured meshes an ideal candidate for geometry based interface inversions. The approaches I have developed provide a suite of algorithms geared towards large-scale interpretation of potential field data, by using an unstructured representation of both the data and model parameters.
Parallel Adaptive Multi-Mechanics Simulations using Diablo
Parsons, D; Solberg, J
2004-12-03
Coupled multi-mechanics simulations (such as thermal-stress and fluid-structure interaction problems) are of substantial interest to engineering analysts. In addition, adaptive mesh refinement techniques present an attractive alternative to current mesh generation procedures and provide quantitative error bounds that can be used for model verification. This paper discusses spatially adaptive multi-mechanics implicit simulations using the Diablo computer code.
Divett, T; Vennell, R; Stevens, C
2013-02-28
At tidal energy sites, large arrays of hundreds of turbines will be required to generate economically significant amounts of energy. Owing to wake effects, the placement of turbines within the array will be vital to capturing the maximum energy from the resource. This study presents preliminary results using Gerris, an adaptive mesh flow solver, to investigate the flow through four different arrays of 15 turbines each. The goal is to optimize the position of turbines within an array in an idealized channel. The turbines are represented as areas of increased bottom friction in an adaptive mesh model so that the flow and power capture in tidally reversing flow through large arrays can be studied. The effect of oscillating tides is studied, with interesting dynamics generated as the tidal current reverses direction, forcing turbulent flow through the array. The energy removed from the flow by each of the four arrays is compared over a tidal cycle. A staggered array is found to extract 54 per cent more energy than a non-staggered array. Furthermore, an array positioned to one side of the channel is found to remove a similar amount of energy compared with an array in the centre of the channel. PMID:23319710
An adaptive-mesh finite-difference solution method for the Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Luchini, Paolo
1987-02-01
An adjustable variable-spacing grid is presented which permits the addition or deletion of single points during iterative solutions of the Navier-Stokes equations by finite difference methods. The grid is designed for application to two-dimensional steady-flow problems which can be described by partial differential equations whose second derivatives are constrained to the Laplacian operator. An explicit Navier-Stokes equations solution technique defined for use with the grid incorporates a hybrid form of the convective terms. Three methods are developed for automatic modifications of the mesh during calculations.
NASA Astrophysics Data System (ADS)
Kimura, Satoshi; Candy, Adam S.; Holland, Paul R.; Piggott, Matthew D.; Jenkins, Adrian
2013-07-01
Several different classes of ocean model are capable of representing floating glacial ice shelves. We describe the incorporation of ice shelves into Fluidity-ICOM, a nonhydrostatic finite-element ocean model with the capacity to utilize meshes that are unstructured and adaptive in three dimensions. This geometric flexibility offers several advantages over previous approaches. The model represents melting and freezing on all ice-shelf surfaces including vertical faces, treats the ice shelf topography as continuous rather than stepped, and does not require any smoothing of the ice topography or any of the additional parameterisations of the ocean mixed layer used in isopycnal or z-coordinate models. The model can also represent a water column that decreases to zero thickness at the 'grounding line', where the floating ice shelf is joined to its tributary ice streams. The model is applied to idealised ice-shelf geometries in order to demonstrate these capabilities. In these simple experiments, arbitrarily coarsening the mesh outside the ice-shelf cavity has little effect on the ice-shelf melt rate, while the mesh resolution within the cavity is found to be highly influential. Smoothing the vertical ice front results in faster flow along the smoothed ice front, allowing greater exchange with the ocean than in simulations with a realistic ice front. A vanishing water-column thickness at the grounding line has little effect in the simulations studied. We also investigate the response of ice shelf basal melting to variations in deep water temperature in the presence of salt stratification.
Multi-level adaptive particle mesh (MLAPM): a c code for cosmological simulations
NASA Astrophysics Data System (ADS)
Knebe, Alexander; Green, Andrew; Binney, James
2001-08-01
We present a computer code written in c that is designed to simulate structure formation from collisionless matter. The code is purely grid-based and uses a recursively refined Cartesian grid to solve Poisson's equation for the potential, rather than obtaining the potential from a Green's function. Refinements can have arbitrary shapes and in practice closely follow the complex morphology of the density field that evolves. The time-step shortens by a factor of 2 with each successive refinement. Competing approaches to N-body simulation are discussed from the point of view of the basic theory of N-body simulation. It is argued that an appropriate choice of softening length ɛ is of great importance and that ɛ should be at all points an appropriate multiple of the local interparticle separation. Unlike tree and P3M codes, multigrid codes automatically satisfy this requirement. We show that at early times and low densities in cosmological simulations, ɛ needs to be significantly smaller relative to the interparticle separation than in virialized regions. Tests of the ability of the code's Poisson solver to recover the gravitational fields of both virialized haloes and Zel'dovich waves are presented, as are tests of the code's ability to reproduce analytic solutions for plane-wave evolution. The times required to conduct a ΛCDM cosmological simulation for various configurations are compared with the times required to complete the same simulation with the ART, AP3M and GADGET codes. The power spectra, halo mass functions and halo-halo correlation functions of simulations conducted with different codes are compared. The code is available from http://www-thphys.physics.ox.ac.uk/users/MLAPM.
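The rule that the time-step shortens by a factor of 2 with each refinement level implies a recursive subcycling schedule: level l advances with dt0 / 2**l and takes two steps for every step of level l-1. The following is a generic sketch of that schedule (in Python, for illustration), not the MLAPM source, which is written in C.

```python
# Each refinement level halves the time-step, so advancing the coarsest grid
# once implies recursive subcycling of the finer levels: level l uses
# dt0 / 2**l and takes two steps per step of level l-1.
# Generic sketch of that schedule, not the MLAPM source code.

def advance_hierarchy(level, max_level, dt0, step_log):
    """Advance `level` by one of its own time-steps, recursing into finer levels."""
    dt = dt0 / 2 ** level
    step_log.append((level, dt))      # advance this level's grids by dt
    if level < max_level:
        advance_hierarchy(level + 1, max_level, dt0, step_log)  # first half
        advance_hierarchy(level + 1, max_level, dt0, step_log)  # second half
    return step_log

log = advance_hierarchy(0, 2, dt0=1.0, step_log=[])
# one coarse step triggers 2 steps on level 1 and 4 steps on level 2
print([sum(1 for l, _ in log if l == lev) for lev in range(3)])  # [1, 2, 4]
```

The cost of the fine levels therefore grows with their volume and depth, which is why confining refinements to the dense regions of the density field keeps the simulation affordable.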
A more efficient anisotropic mesh adaptation for the computation of Lagrangian coherent structures
NASA Astrophysics Data System (ADS)
Fortin, A.; Briffard, T.; Garon, A.
2015-03-01
The computation of Lagrangian coherent structures is increasingly used in fluid mechanics to identify subtle fluid flow structures. We present in this paper a new adaptive method for the efficient computation of the Finite Time Lyapunov Exponent (FTLE), from which Lagrangian coherent structures can be obtained. This new adaptive method considerably reduces the computational burden without any loss of accuracy in the FTLE field.
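The FTLE field the method adapts to has a standard definition that is independent of the paper's anisotropic mesh adaptation: given the flow-map gradient F over a horizon T, sigma = (1/|T|) ln sqrt(lambda_max(F^T F)). A minimal 2D sketch:

```python
import math

# Standard 2D Finite Time Lyapunov Exponent, independent of the paper's mesh
# adaptation: given the flow-map gradient F over a horizon T,
# sigma = (1/|T|) * ln sqrt(lambda_max(F^T F)).

def ftle_2d(F, T):
    # Cauchy-Green strain tensor C = F^T F (symmetric 2x2)
    a = F[0][0] * F[0][0] + F[1][0] * F[1][0]
    b = F[0][0] * F[0][1] + F[1][0] * F[1][1]
    d = F[0][1] * F[0][1] + F[1][1] * F[1][1]
    # largest eigenvalue of [[a, b], [b, d]]
    lam_max = 0.5 * (a + d + math.sqrt((a - d) ** 2 + 4 * b * b))
    return math.log(math.sqrt(lam_max)) / abs(T)

# Pure stretching by a factor of 2 over T = 1 gives sigma = ln 2:
print(round(ftle_2d([[2.0, 0.0], [0.0, 0.5]], 1.0), 3))  # 0.693
```

The expensive part in practice is obtaining F by advecting a grid of tracers through the velocity field, which is exactly where an adaptive mesh pays off: ridges of the FTLE field are thin, so most of the domain needs far fewer tracers.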
Adaptive techniques in electrical impedance tomography reconstruction.
Li, Taoran; Isaacson, David; Newell, Jonathan C; Saulnier, Gary J
2014-06-01
We present an adaptive algorithm for solving the inverse problem in electrical impedance tomography. To strike a balance between the accuracy of the reconstructed images and the computational efficiency of the forward and inverse solvers, we propose to combine an adaptive mesh refinement technique with the adaptive Kaczmarz method. The iterative algorithm adaptively generates the optimal current patterns and a locally-refined mesh given the conductivity estimate and solves for the unknown conductivity distribution with the block Kaczmarz update step. Simulation and experimental results with numerical analysis demonstrate the accuracy and the efficiency of the proposed algorithm.
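The block Kaczmarz update mentioned above builds on the classical row-projection iteration for Ax = b: the iterate is cyclically projected onto the hyperplane of each equation. Only the textbook scheme is sketched here, not the paper's adaptive variant for impedance tomography.

```python
# Classical Kaczmarz iteration for Ax = b: cyclically project the iterate onto
# the hyperplane of each equation. The paper's adaptive block variant builds
# on this row-action update; this is only the textbook scheme.

def kaczmarz(A, b, x, sweeps):
    for _ in range(sweeps):
        for i, row in enumerate(A):
            dot = sum(a_ij * x_j for a_ij, x_j in zip(row, x))
            step = (b[i] - dot) / sum(a_ij * a_ij for a_ij in row)
            x = [x_j + step * a_ij for x_j, a_ij in zip(x, row)]
    return x

# Consistent 2x2 system with exact solution (1, 1):
x = kaczmarz([[1.0, 0.0], [1.0, 1.0]], [1.0, 2.0], [0.0, 0.0], sweeps=50)
print([round(v, 6) for v in x])  # [1.0, 1.0]
```

Row-action methods like this are attractive for tomography because each row corresponds to a single measurement, so updates can be interleaved with data acquisition and with mesh refinement.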
An accuracy assessment of Cartesian-mesh approaches for the Euler equations
NASA Technical Reports Server (NTRS)
Coirier, William J.; Powell, Kenneth G.
1995-01-01
A critical assessment of the accuracy of Cartesian-mesh approaches for steady, transonic solutions of the Euler equations of gas dynamics is made. An exact solution of the Euler equations (Ringleb's flow) is used not only to infer the order of the truncation error of the Cartesian-mesh approaches, but also to compare the magnitude of the discrete error directly to that obtained with a structured mesh approach. Uniformly and adaptively refined solutions using a Cartesian-mesh approach are obtained and compared to each other and to uniformly refined structured mesh results. The effect of cell merging is investigated as well as the use of two different K-exact reconstruction procedures. The solution methodology of the schemes is explained and tabulated results are presented to compare the solution accuracies.
Feasibility of electrical impedance tomography in haemorrhagic stroke treatment using adaptive mesh
NASA Astrophysics Data System (ADS)
Nasehi Tehrani, J.; Anderson, C.; Jin, C.; van Schaik, A.; Holder, D.; McEwan, A.
2010-04-01
EIT has been proposed for acute stroke differentiation, specifically to determine the type of stroke, either ischaemia (clot) or haemorrhage (bleed), to allow the rapid use of clot-busting drugs in the former (Romsauerova et al 2006). This addresses an important medical need, although there is little treatment offered in the case of haemorrhage. The demands on EIT are also high, with usually no opportunity to take a 'before' measurement, ruling out time-difference imaging. Recently a new treatment option for haemorrhage has been proposed and is being studied in an international randomised controlled trial: the early reduction of elevated blood pressure to attenuate the haematoma. CT studies have shown this to reduce bleeds by up to 1 mL (Anderson et al 2008). The use of EIT as a continuous measure is desirable here to monitor the effect of blood pressure reduction. A 1 mL increase of a haemorrhagic lesion located near the scalp on the right side of the head caused a boundary voltage change of less than 0.05% at 50 kHz. This could be visually observed in a time-difference 3D reconstruction with no change in electrode positions, mesh, background conductivity or drift when baseline noise was less than 0.005%, but not when noise was increased to 0.01%. This useful result informs us that the EIT system must have noise of less than 0.005% at 50 kHz, including instrumentation, physiological and other biases.
Medical case-based retrieval: integrating query MeSH terms for query-adaptive multi-modal fusion
NASA Astrophysics Data System (ADS)
Seco de Herrera, Alba G.; Foncubierta-Rodríguez, Antonio; Müller, Henning
2015-03-01
Advances in medical knowledge give clinicians more objective information for a diagnosis. Therefore, there is an increasing need for bibliographic search engines that can provide services helping to facilitate faster information search. The ImageCLEFmed benchmark proposes a medical case-based retrieval task. This task aims at retrieving articles from the biomedical literature that are relevant for differential diagnosis of query cases including a textual description and several images. In the context of this campaign many approaches have been investigated showing that the fusion of visual and text information can improve the precision of the retrieval. However, fusion does not always lead to better results. In this paper, a new query-adaptive fusion criterion to decide when to use multi-modal (text and visual) or only text approaches is presented. The proposed method integrates text information contained in MeSH (Medical Subject Headings) terms extracted and visual features of the images to find synonym relations between them. Given a text query, the query-adaptive fusion criterion decides when it is suitable to also use visual information for the retrieval. Results show that this approach can decide whether a text or multi-modal approach should be used with 77.15% accuracy.
Robust, multidimensional mesh motion based on Monge-Kantorovich equidistribution
Delzanno, G L; Finn, J M
2009-01-01
Mesh-motion (r-refinement) grid adaptivity schemes are attractive due to their potential to minimize the numerical error for a prescribed number of degrees of freedom. However, a key roadblock to a widespread deployment of the technique has been the formulation of robust, reliable mesh motion governing principles, which (1) guarantee a solution in multiple dimensions (2D and 3D), (2) avoid grid tangling (or folding of the mesh, whereby edges of a grid cell cross somewhere in the domain), and (3) can be solved effectively and efficiently. In this study, we formulate such a mesh-motion governing principle, based on volume equidistribution via Monge-Kantorovich optimization (MK). In earlier publications [1, 2], the advantages of this approach in regards to these points have been demonstrated for the time-independent case. In this study, we demonstrate that Monge-Kantorovich equidistribution can in fact be used effectively in a time stepping context, and delivers an elegant solution to the otherwise pervasive problem of grid tangling in mesh motion approaches, without resorting to ad-hoc time-dependent terms (as in moving-mesh PDEs, or MMPDEs [3, 4]). We explore two distinct r-refinement implementations of MK: direct, where the current mesh relates to an initial, unchanging mesh, and sequential, where the current mesh is related to the previous one in time. We demonstrate that the direct approach is superior in regards to mesh distortion and robustness. The properties of the approach are illustrated with a paradigmatic hyperbolic PDE, the advection of a passive scalar. Imposed velocity flow fields of varying vorticity levels and flow shears are considered.
Solution adaptive grids applied to low Reynolds number flow
NASA Astrophysics Data System (ADS)
de With, G.; Holdø, A. E.; Huld, T. A.
2003-08-01
A numerical study has been undertaken to investigate the use of a solution adaptive grid for flow around a cylinder in the laminar flow regime. The main purpose of this work is twofold. The first aim is to investigate the suitability of a grid adaptation algorithm and the reduction in mesh size that can be obtained. Secondly, the uniform asymmetric flow structures are ideal to validate the mesh structures due to mesh refinement and consequently the selected refinement criteria. The refinement variable used in this work is a product of the rate of strain and the mesh cell size, and contains two variables Cm and Cstr which determine the order of each term. By altering the order of either one of these terms the refinement behaviour can be modified.
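One plausible reading of the refinement variable described above (the paper's exact form may differ) is a product of powers, phi = |S|**C_str * h**C_m, with |S| the local rate of strain, h the cell size, and the exponents C_str and C_m setting the order of each term:

```python
# Hedged reading of the refinement variable: phi = |S|**C_str * h**C_m, where
# |S| is the local rate of strain and h the cell size; the exponents set the
# weight of each term. Illustrative assumption, not the paper's exact formula.

def refinement_variable(strain_rate, cell_size, C_str=1.0, C_m=1.0):
    return abs(strain_rate) ** C_str * cell_size ** C_m

# Raising C_m penalizes coarse cells more strongly at the same strain rate:
print(refinement_variable(10.0, 0.5))           # 5.0
print(refinement_variable(10.0, 0.5, C_m=2.0))  # 2.5
```

Weighting by cell size prevents the criterion from endlessly refining already-fine cells in high-strain regions, which is the behaviour the authors tune by altering the order of either term.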
NASA Astrophysics Data System (ADS)
Masterlark, T.; Lu, Z.; Rykhus, R.
2003-12-01
We construct finite element models (FEMs) of a pyroclastic flow deposit (PFD) emplaced during the 1986 eruption of Augustine volcano, Alaska. Interferometric synthetic aperture radar (InSAR) imagery documents the consistent contraction of the PFD during 1992-2000. Three-dimensional problem domains of the FEMs include an elastic substrate overlain by a thermoelastic material representing the PFD. The geometry of the substrate is determined from a digital elevation model (DEM) and bathymetry data. The thickness of the PFD is initially determined from the difference between post- and pre-eruptive DEMs. Systematic prediction errors suggest the PFD thickness distribution, estimated from the DEM difference, is inaccurate. We combine InSAR images, FEMs, and an adaptive mesh algorithm to re-estimate the geometry of the PFD and optimize the thickness distribution for the PFD. Prediction errors from the FEM that includes an optimized PFD geometry are reduced by 20% with respect to those from an FEM that includes a PFD geometry derived from the DEM difference.
Adaptive node techniques for Maxwell's equations
Hewett, D W
2000-04-01
The computational mesh in numerical simulation provides a framework on which to monitor the spatial dependence of functions and their derivatives. The spatial mesh is therefore essential to the ability to integrate systems in time without loss of fidelity. Several philosophies have emerged to provide such fidelity (Eulerian, Lagrangian, Arbitrary Lagrangian Eulerian ALE, Adaptive Mesh Refinement AMR, and adaptive node generation/deletion). Regardless of the type of mesh, a major difficulty is in setting up the initial mesh. Clearly a high density of grid points is essential in regions of high geometric complexity and/or regions of intense, energetic activity. For some problems, mesh generation is such a crucial part of the problem that it can take as much computational effort as the run itself, and these tasks are now taking weeks of massively parallel CPU time. Mesh generation is no less crucial to electromagnetic calculations. In fact EM problem set up can be even more challenging without the clues given by fluid motion in hydrodynamic systems. When the mesh is advected with the fluid (Lagrangian), mesh points naturally congregate in regions of high activity. Similarly in AMR algorithms, strong gradients in the fluid flow are one of the triggers for mesh refinement. In the hyperbolic Maxwell's equations without advection, mesh point placement/motion is not so intuitive. In fixed geometry systems, it is at least feasible to finely mesh high leverage, geometrically challenged areas. For other systems, where the action takes place far from the boundaries and, likely, changes position in time, the options are limited to either using a high resolution (expensive) mesh in all regions that could require such resolution or adaptively generating nodes to resolve the physics as it evolves. The authors have developed a new type of adaptive node technique for Maxwell's equations to deal with this set of issues.
Jakeman, J.D. Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
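The "traditional hierarchical surplus based strategies" that the adjoint-driven refinement is compared against can be sketched in one dimension: a node's surplus is f at that node minus the interpolant built from coarser nodes, and a node with a large surplus gets two children. This is an illustrative sketch only (for functions vanishing at the boundary), not the paper's algorithm.

```python
# 1D adaptive hierarchical (surplus-based) sparse-grid interpolation sketch.
# A node's surplus is f(x) minus the interpolant of coarser nodes; nodes with
# large surplus are refined. Illustrative only, for f with f(0) = f(1) = 0.

def hat(x, c, h):
    return max(0.0, 1.0 - abs(x - c) / h)

def adaptive_surplus(f, tol, max_level=10):
    surpl = {}                                   # (level, center) -> surplus

    def interp(x):
        return sum(w * hat(x, c, 2.0 ** -l) for (l, c), w in surpl.items())

    active = [(1, 0.5)]
    while active:
        nxt = []
        for l, c in active:
            w = f(c) - interp(c)                 # hierarchical surplus
            surpl[(l, c)] = w
            if abs(w) > tol and l < max_level:   # refine only where needed
                h = 2.0 ** -(l + 1)
                nxt += [(l + 1, c - h), (l + 1, c + h)]
        active = nxt
    return surpl, interp

surpl, interp = adaptive_surplus(lambda x: x * (1.0 - x), tol=1e-3)
print(abs(interp(0.3) - 0.21) < 1e-3)  # True
```

The surplus is a purely local indicator; the paper's point is that replacing it with adjoint-based estimates of the error in a quantity of interest refines where the output is actually sensitive, not merely where the interpolant is rough.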
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
NASA Astrophysics Data System (ADS)
Skutnik, Steven E.; Davis, David R.
2016-05-01
The use of passive gamma and neutron signatures from fission indicators is a common means of estimating used fuel burnup, enrichment, and cooling time. However, while characteristic fission product signatures such as 134Cs, 137Cs, 154Eu, and others are generally reliable estimators for used fuel burnup within the context where the assembly initial enrichment and the discharge time are known, in the absence of initial enrichment and/or cooling time information (such as when applying NDA measurements in a safeguards/verification context), these fission product indicators no longer yield a unique solution for assembly enrichment, burnup, and cooling time after discharge. Through the use of a new Mesh-Adaptive Direct Search (MADS) algorithm, it is possible to directly probe the shape of this "degeneracy space" characteristic of individual nuclides (and combinations thereof), both as a function of constrained parameters (such as the assembly irradiation history) and unconstrained parameters (e.g., the cooling time before measurement and the measurement precision for particular indicator nuclides). In doing so, this affords the identification of potential means of narrowing the uncertainty space of potential assembly enrichment, burnup, and cooling time combinations, thereby bounding estimates of assembly plutonium content. In particular, combinations of gamma-emitting nuclides with distinct half-lives (e.g., 134Cs with 137Cs and 154Eu) in conjunction with gross neutron counting (via 244Cm) are able to reasonably constrain the degeneracy space of possible solutions to a space small enough to perform useful discrimination and verification of fuel assemblies based on their irradiation history.
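MADS itself alternates search and poll steps over a mesh whose size parameter is refined whenever no trial point improves the incumbent. The following is a much-simplified coordinate-poll cousin (not the MADS algorithm used in the paper, which employs richer direction sets), shown only to illustrate the mesh-refinement mechanism:

```python
# Simplified coordinate direct search: poll mesh points along each axis around
# the incumbent, and halve the mesh size when no trial point improves it.
# Illustrative cousin of MADS, not the paper's implementation.

def direct_search(f, x, mesh=1.0, tol=1e-3):
    while mesh > tol:
        improved = False
        for i in range(len(x)):
            for s in (+mesh, -mesh):             # poll along each axis
                y = list(x)
                y[i] += s
                if f(y) < f(x):
                    x, improved = y, True
        if not improved:
            mesh *= 0.5                          # refine the mesh and re-poll
    return x

# Minimize (x - 1)^2 + (y + 2)^2 starting from the origin:
sol = direct_search(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, [0.0, 0.0])
print([round(v, 3) for v in sol])  # [1.0, -2.0]
```

Because the method needs only objective values, not gradients, it suits the paper's setting, where the "objective" involves comparisons between measured nuclide signatures and depletion-code predictions.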
Visualization of AMR data with multi-level dual-mesh interpolation.
Moran, Patrick J; Ellsworth, David
2011-12-01
We present a new technique for providing interpolation within cell-centered Adaptive Mesh Refinement (AMR) data that achieves C0 continuity throughout the 3D domain. Our technique improves on earlier work in that it does not require that adjacent patches differ by at most one refinement level. Our approach takes the dual of each mesh patch and generates "stitching cells" on the fly to fill the gaps between dual meshes. We demonstrate applications of our technique with data from Enzo, an AMR cosmological structure formation simulation code. We show ray-cast visualizations that include contributions from particle data (dark matter and stars, also output by Enzo) and gridded hydrodynamic data. We also show results from isosurface studies, including surfaces in regions where adjacent patches differ by more than one refinement level.
Parallel Implementation of an Adaptive Scheme for 3D Unstructured Grids on the SP2
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Strawn, Roger C.
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10% of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.
Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2
NASA Technical Reports Server (NTRS)
Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.
NASA Technical Reports Server (NTRS)
Wood, William A., III
2002-01-01
A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. A Blasius flat plate viscous validation case reveals a more accurate upsilon-velocity profile for fluctuation splitting, and the reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. The second half of the report develops a local, compact, anisotropic unstructured mesh adaptation scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. The adaptation strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization.
Maltby, John; Day, Liz; Hall, Sophie
2015-01-01
The current paper presents a new measure of trait resilience derived from three common mechanisms identified in ecological theory: Engineering, Ecological and Adaptive (EEA) resilience. Exploratory and confirmatory factor analyses of five existing resilience scales suggest that the three trait resilience facets emerge, and can be reduced to a 12-item scale. The conceptualization and value of EEA resilience within the wider trait and well-being psychology is illustrated in terms of differing relationships with adaptive expressions of the traits of the five-factor personality model and the contribution to well-being after controlling for personality and coping, or over time. The current findings suggest that EEA resilience is a useful and parsimonious model and measure of trait resilience that can readily be placed within wider trait psychology and that is found to contribute to individual well-being. PMID:26132197
A mesh density study for application to large deformation rolling process evaluations
Martin, J.A.
1997-12-01
When addressing large deformation through an elastic-plastic analysis, the mesh density is paramount in determining the accuracy of the solution. However, given the nonlinear nature of the problem, a highly-refined mesh will generally require a prohibitive amount of computer resources. This paper addresses finite element mesh optimization studies considering accuracy of results and computer resource needs as applied to large deformation rolling processes. In particular, the simulation of the thread rolling manufacturing process is considered using the MARC software package and a Cray C90 supercomputer. The effects of both mesh density and adaptive meshing on final results are evaluated for both indentation of a rigid body to a specified depth and contact rolling along a predetermined length.
Determination of an Initial Mesh Density for Finite Element Computations via Data Mining
Kanapady, R; Bathina, S K; Tamma, K K; Kamath, C; Kumar, V
2001-07-23
Numerical analysis software packages which employ a coarse first mesh or an inadequate initial mesh need to undergo cumbersome and time-consuming mesh refinement studies to obtain solutions with acceptable accuracy. Hence, it is critical for numerical methods such as finite element analysis to be able to determine a good initial mesh density for the subsequent finite element computations or as an input to a subsequent adaptive mesh generator. This paper explores the use of data mining techniques for obtaining an initial approximate finite element density that avoids significant trial and error to start finite element computations. As an illustration of proof of concept, a square plate which is simply supported at its edges and is subjected to a concentrated load is employed for the test case. Although simplistic, the present study provides insight into addressing the above considerations.
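The abstract does not say which data mining technique is applied, so the following is only a hypothetical sketch of the underlying idea: look up an initial mesh density from features of previously solved problems. A 1-nearest-neighbour version in Python, with invented features (plate aspect ratio, load magnitude) and invented density values:

```python
import math

# Hypothetical training data: (aspect ratio, load magnitude) -> mesh density
# found adequate by a prior refinement study. All values are illustrative.
training = [
    ((1.0, 10.0), 16),
    ((1.0, 100.0), 32),
    ((2.0, 10.0), 24),
    ((2.0, 100.0), 48),
]

def predict_density(features, data=training):
    """1-nearest-neighbour lookup of an initial mesh density."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(data, key=lambda rec: dist(rec[0], features))[1]

print(predict_density((1.1, 90.0)))  # nearest record is (1.0, 100.0) -> 32
```

A real system would use a richer feature set and a trained model, but the interface, problem features in and starting density out, is the point.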
Neale, Richard B.
2015-12-01
In this project we analyze climate simulations using the Community Earth System Model (CESM) in order to determine the modeled response and sensitivity to horizontal resolution. Simple aqua-planet configurations were used to provide a clean comparison of the response to resolution in CESM. This enables us to easily examine all aspects of the model sensitivity to resolution including mean quantities, variability and physical parameterization tendencies: the chief reflection of resolution sensitivity. An extension to the global resolution sensitivity study is the examination of regional grid refinement where resolution changes are prescribed in a single global simulation. We examine the relevance of the global resolution sensitivity results as applied to these regional refinement simulations. In particular we examine how variations in the grid resolution, centered on different parts of the globe, lead to differences in the parameterized response and the potential to generate residual circulations as a result. Given the potential to generate this resolution sensitivity we examine simple modifications to the parameterized physics that are able to moderate any residual circulations. Finally, we transfer the framework to the standard AMIP configuration to examine the resolution sensitivity in the presence of compounding effects such as land-sea distributions, orography and seasonal variation.
NASA Astrophysics Data System (ADS)
Gill, Stuart P. D.; Knebe, Alexander; Gibson, Brad K.; Flynn, Chris; Ibata, Rodrigo A.; Lewis, Geraint F.
2003-04-01
An adaptive multigrid approach to simulating the formation of structure from collisionless dark matter is described. MLAPM (Multi-Level Adaptive Particle Mesh) is one of the most efficient serial codes available on the cosmological "market" today. As part of Swinburne University's role in the development of the Square Kilometer Array, we are implementing hydrodynamics, feedback, and radiative transfer within the MLAPM adaptive mesh, in order to simulate baryonic processes relevant to the interstellar and intergalactic media at high redshift. We will outline our progress to date in applying the existing MLAPM to a study of the decay of satellite galaxies within massive host potentials.
Algorithm refinement for fluctuating hydrodynamics
Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.
2007-07-03
This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.
McCorquodale, Peter; Ullrich, Paul; Johansen, Hans; Colella, Phillip
2015-09-04
We present a high-order finite-volume approach for solving the shallow-water equations on the sphere, using multiblock grids on the cubed-sphere. This approach combines a Runge-Kutta time discretization with a fourth-order accurate spatial discretization, and includes adaptive mesh refinement and refinement in time. Results of tests show fourth-order convergence for the shallow-water equations as well as for advection in a highly deformational flow. Hierarchical adaptive mesh refinement allows a solution error comparable to that obtained with uniform resolution at the most refined level of the hierarchy, but with many fewer operations.
Self-Avoiding Walks Over Adaptive Triangular Grids
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1999-01-01
Space-filling curves are a popular approach, based on a geometric embedding, for linearizing computational meshes. We present a new O(n log n) combinatorial algorithm for constructing a self-avoiding walk through a two-dimensional mesh containing n triangles. We show that for hierarchical adaptive meshes, the algorithm can be locally adapted and easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the runtime partitioning and load balancing of adaptive unstructured grids.
Method of modifying a volume mesh using sheet insertion
Borden, Michael J.; Shepherd, Jason F.
2006-08-29
A method and machine-readable medium provide a technique to modify a hexahedral finite element volume mesh using dual generation and sheet insertion. After generating a dual of a volume stack (mesh), a predetermined algorithm may be followed to modify (refine) the volume mesh of hexahedral elements. The predetermined algorithm may include the steps of locating a sheet of hexahedral mesh elements, determining a plurality of hexahedral elements within the sheet to refine, shrinking the plurality of elements, and inserting a new sheet of hexahedral elements adjacently to modify the volume mesh. Additionally, another predetermined algorithm using mesh cutting may be followed to modify a volume mesh.
The finite cell method for polygonal meshes: poly-FCM
NASA Astrophysics Data System (ADS)
Duczek, Sascha; Gabbert, Ulrich
2016-10-01
In the current article, we extend the two-dimensional version of the finite cell method (FCM), which has so far only been used for structured quadrilateral meshes, to unstructured polygonal discretizations. Therefore, the adaptive quadtree-based numerical integration technique is reformulated and the notion of generalized barycentric coordinates is introduced. We show that the resulting polygonal (poly-)FCM approach retains the optimal rates of convergence if and only if the geometry of the structure is adequately resolved. The main advantage of the proposed method is that it inherits the ability of polygonal finite elements for local mesh refinement and for the construction of transition elements (e.g. conforming quadtree meshes without hanging nodes). These properties along with the performance of the poly-FCM are illustrated by means of several benchmark problems for both static and dynamic cases.
NASA Technical Reports Server (NTRS)
Davis, M. W.
1984-01-01
A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that gives the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA) which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.
Self-Avoiding Walks over Adaptive Triangular Grids
NASA Technical Reports Server (NTRS)
Heber, Gerd; Biswas, Rupak; Gao, Guang R.; Saini, Subhash (Technical Monitor)
1998-01-01
In this paper, we present a new approach to constructing a "self-avoiding" walk through a triangular mesh. Unlike the popular approach of visiting mesh elements using space-filling curves which is based on a geometric embedding, our approach is combinatorial in the sense that it uses the mesh connectivity only. We present an algorithm for constructing a self-avoiding walk which can be applied to any unstructured triangular mesh. The complexity of the algorithm is O(n x log(n)), where n is the number of triangles in the mesh. We show that for hierarchical adaptive meshes, the algorithm can be easily parallelized by taking advantage of the regularity of the refinement rules. The proposed approach should be very useful in the run-time partitioning and load balancing of adaptive unstructured grids.
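For contrast with the combinatorial walk described above, the "popular approach" based on a geometric embedding can be sketched in a few lines: compute each triangle's centroid and sort by a space-filling-curve key. The sketch below uses Morton (Z-order) keys rather than any specific curve from the paper, and the three-triangle mesh is invented for illustration:

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of quantized x/y coordinates (Z-order / Morton key)."""
    xi, yi = int(x * (1 << bits)), int(y * (1 << bits))
    key = 0
    for b in range(bits):
        key |= ((xi >> b) & 1) << (2 * b + 1)  # x bits at odd positions
        key |= ((yi >> b) & 1) << (2 * b)      # y bits at even positions
    return key

# Triangles as vertex triples in the unit square (illustrative data):
tris = [((0.1, 0.1), (0.2, 0.1), (0.1, 0.2)),
        ((0.8, 0.8), (0.9, 0.8), (0.8, 0.9)),
        ((0.8, 0.1), (0.9, 0.1), (0.8, 0.2))]

def centroid(t):
    return (sum(p[0] for p in t) / 3.0, sum(p[1] for p in t) / 3.0)

order = sorted(range(len(tris)), key=lambda i: morton_key(*centroid(tris[i])))
print(order)  # triangles ordered along the Z-curve: [0, 2, 1]
```

The geometric ordering needs coordinates; the paper's contribution is doing the linearization from mesh connectivity alone.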
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems based on the linearized model resolution matrix was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit dependency on an initial model guess. Additionally, it is demonstrated
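The inverse-mesh refinement rule described above, refine where the spatial variation of the imaged parameter is largest, can be sketched in one dimension. This is not Grayver's algorithm, only a minimal illustration of the marking step; the conductivity profile and marking fraction are invented:

```python
import numpy as np

def mark_for_refinement(values, frac=0.3):
    """Mark cells adjacent to the largest inter-cell jumps of a parameter.

    values: per-cell imaged parameter (e.g. conductivity);
    frac: fraction of interfaces, by largest jump, whose cells get refined.
    """
    jumps = np.abs(np.diff(values))          # variation across each interface
    n_mark = max(1, int(frac * len(jumps)))
    worst = np.argsort(jumps)[-n_mark:]      # interfaces with the largest jumps
    marked = set()
    for i in worst:
        marked.update((int(i), int(i) + 1))  # refine both neighbouring cells
    return sorted(marked)

sigma = np.array([1.0, 1.0, 1.0, 100.0, 100.0, 1.0, 1.0, 1.0])
print(mark_for_refinement(sigma))  # [2, 3, 4, 5]: cells around both jumps
```

In 2-D or 3-D the same idea applies with a gradient norm per cell instead of interface jumps.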
NASA Astrophysics Data System (ADS)
Bogdanov, P. B.; Gorobets, A. V.; Sukov, S. A.
2013-08-01
The design of efficient algorithms for large-scale gas dynamics computations with hybrid (heterogeneous) computing systems whose high performance relies on massively parallel accelerators is addressed. A high-order accurate finite volume algorithm with polynomial reconstruction on unstructured hybrid meshes is used to compute compressible gas flows in domains of complex geometry. The basic operations of the algorithm are implemented in detail for massively parallel accelerators, including AMD and NVIDIA graphics processing units (GPUs). Major optimization approaches and a computation transfer technique are covered. The underlying programming tool is the Open Computing Language (OpenCL) standard, which performs on accelerators of various architectures, both existing and emerging.
An interoperable, data-structure-neutral component for mesh query and manipulation.
Ollivier-Gooch, C.; Diachin, L.; Shephard, M. S.; Tautges, T.; Kraftcheck, J.; Leung, V.; Luo, X.; Miller, M.
2010-01-01
Much of the effort required to create a new simulation code goes into developing infrastructure for mesh data manipulation, adaptive refinement, design optimization, and so forth. This infrastructure is an obvious target for code reuse, except that implementations of these functionalities are typically tied to specific data structures. In this article, we describe a software component---an abstract data model and programming interface---designed to provide low-level mesh query and manipulation support for meshing and solution algorithms. The component's data model provides a data abstraction, completely hiding all details of how mesh data is stored, while its interface defines how applications can interact with that data. Because the component has been carefully designed to be general purpose and efficient, it provides a practical platform for implementing high-level mesh operations independently of the underlying mesh data structures. After describing the data model and interface, we provide several usage examples, each of which has been used successfully with multiple implementations of the interface functionality. The overhead due to accessing mesh data through the interface rather than directly accessing the underlying mesh data is shown to be acceptably small.
An arbitrary boundary triangle mesh generation method for multi-modality imaging
NASA Astrophysics Data System (ADS)
Zhang, Xuanxuan; Deng, Yong; Gong, Hui; Meng, Yuanzheng; Yang, Xiaoquan; Luo, Qingming
2011-11-01
Low resolution and ill-posedness are the major challenges in diffuse optical tomography (DOT) and fluorescence molecular tomography (FMT). Recently, multi-modality imaging technology that combines micro-computed tomography (micro-CT) with DOT/FMT has been developed to improve resolution and mitigate ill-posedness. To take advantage of the fine a priori anatomical maps obtained from micro-CT, we present an arbitrary-boundary triangle mesh generation method for FMT/DOT/micro-CT multi-modality imaging. A planar straight line graph (PSLG) based on the micro-CT image is obtained by an adaptive boundary sampling algorithm. The subregions of the mesh are accurately matched with anatomical structures by a two-step solution: first, the triangles and nodes are labeled during mesh refinement, and then a revising algorithm is used to modify the meshes of each subregion. Triangle meshes based on a regular model and a micro-CT image are generated respectively. The results show that the subregions of the triangle meshes match anatomical structures accurately and that the triangle meshes have good quality. This provides an arbitrary-boundary triangle mesh generation method with the ability to incorporate fine a priori anatomical information into DOT/FMT reconstructions.
Multigrid techniques for unstructured meshes
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1995-01-01
An overview of current multigrid techniques for unstructured meshes is given. The basic principles of the multigrid approach are first outlined. Application of these principles to unstructured mesh problems is then described, illustrating various different approaches, and giving examples of practical applications. Advanced multigrid topics, such as the use of algebraic multigrid methods, and the combination of multigrid techniques with adaptive meshing strategies are dealt with in subsequent sections. These represent current areas of research, and the unresolved issues are discussed. The presentation is organized in an educational manner, for readers familiar with computational fluid dynamics, wishing to learn more about current unstructured mesh techniques.
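The basic two-grid principle behind the multigrid methods surveyed above (smooth on the fine grid, solve a coarse-grid correction, prolong it back) can be shown for the 1D Poisson problem -u'' = f. This is a generic textbook sketch, not code from the report:

```python
import numpy as np

def gauss_seidel(u, f, h, sweeps=3):
    """Gauss-Seidel relaxation for -u'' = f on a uniform grid, u = 0 at both ends."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def coarse_solve(rc, hc):
    """Direct solve of the coarse-grid Poisson correction (Dirichlet BCs)."""
    m = len(rc) - 2
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / hc**2
    e = np.zeros_like(rc)
    e[1:-1] = np.linalg.solve(A, rc[1:-1])
    return e

def two_grid_cycle(u, f, h):
    """Pre-smooth, restrict residual, coarse correction, prolong, post-smooth."""
    u = gauss_seidel(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2  # residual
    rc = np.zeros((len(r) + 1) // 2)                 # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    ec = coarse_solve(rc, 2.0 * h)
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolongation
    return gauss_seidel(u + e, f, h)

n = 65
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)   # manufactured RHS; exact solution sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = two_grid_cycle(u, f, h)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
print(err < 1e-2)  # True: converged to within discretization error
```

The coarse problem is solved directly here; recursing on it instead gives the full V-cycle.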
Estimator reduction and convergence of adaptive BEM.
Aurada, Markus; Ferraz-Leite, Samuel; Praetorius, Dirk
2012-06-01
A posteriori error estimation and related adaptive mesh-refining algorithms have proven to be powerful tools in today's scientific computing. Contrary to adaptive finite element methods, convergence of adaptive boundary element schemes is, however, widely open. We propose a relaxed notion of convergence of adaptive boundary element schemes. Instead of asking for convergence of the error to zero, we only aim to prove estimator convergence in the sense that the adaptive algorithm drives the underlying error estimator to zero. We observe that certain error estimators satisfy an estimator reduction property which is sufficient for estimator convergence. The elementary analysis is based only on Dörfler marking and inverse estimates, not on reliability and efficiency of the error estimator at hand. In particular, our approach gives a first mathematical justification for the proposed steering of anisotropic mesh-refinements, which is mandatory for optimal convergence behavior in 3D boundary element computations.
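Dörfler (bulk) marking, the only marking strategy the analysis above relies on, selects a minimal set of elements carrying at least a fixed fraction theta of the total squared estimator. A small self-contained sketch with invented indicator values:

```python
def doerfler_mark(eta, theta=0.5):
    """Dörfler marking: smallest set of elements whose squared error
    indicators sum to at least theta times the total."""
    total = sum(e * e for e in eta)
    order = sorted(range(len(eta)), key=lambda i: -eta[i])  # largest first
    marked, acc = [], 0.0
    for i in order:
        marked.append(i)
        acc += eta[i] ** 2
        if acc >= theta * total:
            break
    return sorted(marked)

# Invented per-element indicators; elements 1 and 3 dominate the error.
eta = [0.1, 0.9, 0.2, 0.8, 0.1]
print(doerfler_mark(eta, theta=0.6))  # [1, 3]
```

Greedy selection after sorting indicators in descending order yields the smallest such set.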
Generation of multi-million element meshes for solid model-based geometries: The Dicer algorithm
Melander, D.J.; Benzley, S.E.; Tautges, T.J.
1997-06-01
The Dicer algorithm generates a fine mesh by refining each element in a coarse all-hexahedral mesh generated by any existing all-hexahedral mesh generation algorithm. The fine mesh is geometry-conforming. Using existing all-hexahedral meshing algorithms to define the initial coarse mesh simplifies the overall meshing process and allows dicing to take advantage of improvements in other meshing algorithms immediately. The Dicer algorithm will be used to generate large meshes in support of the ASCI program. The authors also plan to use dicing as the basis for parallel mesh generation. Dicing strikes a careful balance between the interactive mesh generation and multi-million element mesh generation processes for complex 3D geometries, providing an efficient means for producing meshes of varying refinement once the coarse mesh is obtained.
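Dicing refines every coarse element uniformly into a block of children. A 2D analogue (a quad split into k x k geometry-conforming children via the bilinear map of its corners) conveys the idea; the hexahedral version adds a third parameter direction. The quad data is illustrative, not from the paper:

```python
def dice_quad(quad, k):
    """Split one quad (corner tuples, counter-clockwise) into k*k children
    by bilinear interpolation of its corners."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    def point(u, v):
        # bilinear map from the unit square onto the quad
        x = (1-u)*(1-v)*x0 + u*(1-v)*x1 + u*v*x2 + (1-u)*v*x3
        y = (1-u)*(1-v)*y0 + u*(1-v)*y1 + u*v*y2 + (1-u)*v*y3
        return (x, y)
    children = []
    for i in range(k):
        for j in range(k):
            u0, u1, v0, v1 = i/k, (i+1)/k, j/k, (j+1)/k
            children.append((point(u0, v0), point(u1, v0),
                             point(u1, v1), point(u0, v1)))
    return children

coarse = [((0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0))]
fine = [c for q in coarse for c in dice_quad(q, 4)]
print(len(fine))  # 16
```

A coarse mesh of m elements diced with k = 4 yields 16m quads (64m hexes in 3D), which is how multi-million element meshes fall out of modest coarse meshes.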
Layher, Georg; Schrodt, Fabian; Butz, Martin V.; Neumann, Heiko
2014-01-01
The categorization of real world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjunct sets of objects, neither semantically nor visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards build two separate mammalian categories, both of which are subcategories of the category Felidae. In the last decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in computational neuroscience. However, the question of what kind of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorial and subcategorial visual input representations. During learning, the connection strengths of bottom-up weights from input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Large enough differences trigger the recruitment of new representational resources and the establishment of additional (sub-) category representations. We demonstrate the temporal evolution of such learning and show how the proposed combination of an associative memory with a modulatory feedback integration successfully establishes category and subcategory representations
Lazarov, R; Pasciak, J; Jones, J
2002-02-01
Construction, analysis and numerical testing of efficient solution techniques for solving elliptic PDEs that allow for parallel implementation have been the focus of the research. A number of discretization and solution methods for solving second order elliptic problems that include mortar and penalty approximations and domain decomposition methods for finite elements and finite volumes have been investigated and analyzed. Techniques for parallel domain decomposition algorithms in the framework of PETSc and HYPRE have been studied and tested. Hierarchical parallel grid refinement and adaptive solution methods have been implemented and tested on various model problems. A parallel code implementing the mortar method with algebraically constructed multiplier spaces was developed.
Sierra Toolkit computational mesh conceptual model.
Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.
2010-03-01
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
Evaluation of Different Meshing Techniques for the Case of a Stented Artery.
Lotfi, Azadeh; Simmons, Anne; Barber, Tracie
2016-03-01
The formation and progression of in-stent restenosis (ISR) in bifurcated vessels may vary depending on the technique used for stenting. This study evaluates the effect of a variety of mesh styles on the accuracy and reliability of computational fluid dynamics (CFD) models in predicting these regions, using an idealized stented nonbifurcated model. The wall shear stress (WSS) and the near-stent recirculating vortices are used as determinants. The meshes comprise unstructured tetrahedral and polyhedral elements. The effects of local refinement, as well as higher-order elements such as prismatic inflation layers and an internal hexahedral core, have also been examined. The uncertainty associated with each mesh style was assessed through verification of calculations using the grid convergence index (GCI) method. The results obtained show that the only condition which allows a reliable comparison of uncertainty estimation between different meshing styles is that the monotonic convergence of grid solutions is in the asymptotic range. Comparisons show the superiority of a flow-adaptive polyhedral mesh over the commonly used adaptive and nonadaptive tetrahedral meshes in terms of resolving the near-stent flow features, GCI value, and prediction of WSS. More accurate estimation of hemodynamic factors was obtained using higher-order elements, such as hexahedral or prismatic grids. Incorporating these higher-order elements, however, was shown to introduce some degree of numerical diffusion at the transitional area between the two meshes, not necessarily translating into a high GCI value. Our data also confirmed the key role of local refinement in improving the performance and accuracy of nonadaptive meshes in predicting flow parameters in models of a stented artery. The results of this study can provide a guideline for modeling the biofluid domain of complex bifurcated arteries stented using various techniques.
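The GCI verification procedure used above can be summarized with the standard Richardson-extrapolation formulas: estimate the observed order p from three systematically refined grids, then report a safety-factored relative error. A generic sketch (the solution values are invented; Fs = 1.25 is the customary safety factor for three-grid studies):

```python
import math

def gci_fine(f1, f2, f3, r=2.0, fs=1.25):
    """Observed order and grid convergence index on the finest grid.

    f1, f2, f3: solutions on fine, medium, coarse grids;
    r: constant grid refinement ratio; fs: safety factor.
    """
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)  # observed order
    e21 = abs((f2 - f1) / f1)                               # relative change
    return p, fs * e21 / (r**p - 1.0)

# Invented drag-like quantity converging at second order:
p, gci = gci_fine(1.000, 1.010, 1.050)
print(round(p, 2), round(100 * gci, 3))  # order 2.0, GCI about 0.417%
```

Monotonic convergence in the asymptotic range, the condition stressed in the abstract, is exactly what makes the estimated p (and hence the GCI) meaningful.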
Evaluation of Different Meshing Techniques for the Case of a Stented Artery.
Lotfi, Azadeh; Simmons, Anne; Barber, Tracie
2016-03-01
The formation and progression of in-stent restenosis (ISR) in bifurcated vessels may vary depending on the technique used for stenting. This study evaluates the effect of a variety of mesh styles on the accuracy and reliability of computational fluid dynamics (CFD) models in predicting these regions, using an idealized stented nonbifurcated model. The wall shear stress (WSS) and the near-stent recirculating vortices are used as determinants. The meshes comprise unstructured tetrahedral and polyhedral elements. The effects of local refinement, as well as of higher-order elements such as prismatic inflation layers and an internal hexahedral core, are also examined. The uncertainty associated with each mesh style was assessed through verification of calculations using the grid convergence index (GCI) method. The results show that uncertainty estimates can be compared reliably across meshing styles only when the monotonic convergence of the grid solutions is in the asymptotic range. Comparisons show the superiority of a flow-adaptive polyhedral mesh over the commonly used adaptive and nonadaptive tetrahedral meshes in terms of resolving near-stent flow features, GCI value, and prediction of WSS. More accurate estimates of hemodynamic factors were obtained using higher-order elements, such as hexahedral or prismatic grids. Incorporating these higher-order elements, however, was shown to introduce some numerical diffusion at the transition between the two meshes, which does not necessarily translate into a high GCI value. Our data also confirm the key role of local refinement in improving the performance and accuracy of a nonadaptive mesh in predicting flow parameters in models of a stented artery. The results of this study can provide a guideline for modeling the biofluid domain of complex bifurcated arteries stented with various techniques. PMID:26784359
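The grid convergence index invoked above is a standard verification tool. A minimal sketch of Roache's three-grid GCI computation, assuming a constant refinement ratio and the customary safety factor of 1.25 (the numbers below are illustrative, not values from the study):

```python
import math

def grid_convergence_index(f1, f2, f3, r, Fs=1.25):
    """Roache's GCI from solutions on fine (f1), medium (f2), and coarse (f3)
    grids with constant refinement ratio r. Returns (p, gci_fine), where p is
    the observed order of convergence. Assumes monotonic convergence."""
    # Observed order of convergence from the three solutions.
    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)
    # Relative error between the two finest solutions.
    e21 = abs((f2 - f1) / f1)
    # GCI on the fine grid: safety factor times extrapolated relative error.
    gci_fine = Fs * e21 / (r**p - 1.0)
    return p, gci_fine

# Example: a sequence converging at second order (p = 2) with r = 2.
p, gci = grid_convergence_index(1.01, 1.04, 1.16, r=2.0)
```

Solutions are in the asymptotic range, as the abstract requires, when the observed order p is stable across grid triplets.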
Arbitrary-level hanging nodes for adaptive hp-FEM approximations in 3D
Pavel Kus; Pavel Solin; David Andrs
2014-11-01
In this paper we discuss constrained approximation with arbitrary-level hanging nodes in adaptive higher-order finite element methods (hp-FEM) for three-dimensional problems. This technique enables using highly irregular meshes, and it greatly simplifies the design of adaptive algorithms as it prevents refinements from propagating recursively through the finite element mesh. The technique makes it possible to design efficient adaptive algorithms for purely hexahedral meshes. We present a detailed mathematical description of the method and illustrate it with numerical examples.
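For linear shape functions, the constraint at an arbitrary-level hanging node reduces to a convex combination of the coarse edge's endpoint values. A minimal sketch (the function name and 1-D edge parameterization are illustrative assumptions, not the paper's implementation):

```python
def hanging_node_coefficients(level, index):
    """Constraint coefficients for a hanging node created by `level`
    successive bisections of a coarse edge; the node sits at parameter
    t = index / 2**level along the edge. For linear (P1) shape functions
    the hanging value is a convex combination of the two coarse endpoints:
        u_hanging = (1 - t) * u_left + t * u_right
    so the hanging degree of freedom is eliminated from the system."""
    t = index / 2**level
    return (1.0 - t, t)

# A level-1 hanging node (edge midpoint) averages the endpoints;
# a level-3 node at index 3 sits at t = 3/8 along the coarse edge.
mid = hanging_node_coefficients(1, 1)
deep = hanging_node_coefficients(3, 3)
```

Arbitrary-level constraints are what let a single coarse edge face many levels of refinement without forcing neighbor refinements to propagate.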
NASA Astrophysics Data System (ADS)
Masterlark, Timothy; Lu, Zhong; Rykhus, Russell
2006-02-01
Interferometric synthetic aperture radar (InSAR) imagery documents the consistent subsidence, during the interval 1992-1999, of a pyroclastic flow deposit (PFD) emplaced during the 1986 eruption of Augustine Volcano, Alaska. We construct finite element models (FEMs) that simulate thermoelastic contraction of the PFD to account for the observed subsidence. Three-dimensional problem domains of the FEMs include a thermoelastic PFD embedded in an elastic substrate. The thickness of the PFD is initially determined from the difference between post- and pre-eruption digital elevation models (DEMs). The initial excess temperature of the PFD at the time of deposition, 640 °C, is estimated from FEM predictions and an InSAR image via standard least-squares inverse methods. Although the FEM predicts the major features of the observed transient deformation, systematic prediction errors (RMSE = 2.2 cm) are most likely associated with errors in the a priori PFD thickness distribution estimated from the DEM differences. We combine an InSAR image, FEMs, and an adaptive mesh algorithm to iteratively optimize the geometry of the PFD with respect to a minimized misfit between the predicted thermoelastic deformation and observed deformation. Prediction errors from an FEM, which includes an optimized PFD geometry and the initial excess PFD temperature estimated from the least-squares analysis, are sub-millimeter (RMSE = 0.3 mm). The average thickness (9.3 m), maximum thickness (126 m), and volume (2.1 × 10⁷ m³) of the PFD, estimated using the adaptive mesh algorithm, are about twice as large as the respective estimations for the a priori PFD geometry. Sensitivity analyses suggest unrealistic PFD thickness distributions are required for initial excess PFD temperatures outside of the range 500-800 °C.
A Dynamically Adaptive Arbitrary Lagrangian-Eulerian Method for Hydrodynamics
Anderson, R W; Pember, R B; Elliott, N S
2004-01-28
A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. The novel components of the combined ALE-AMR method hinge upon the integration of traditional AMR techniques with both staggered grid Lagrangian operators as well as elliptic relaxation operators on moving, deforming mesh hierarchies. Numerical examples demonstrate the utility of the method in performing detailed three-dimensional shock-driven instability calculations.
Unstructured mesh algorithms for aerodynamic calculations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
The use of unstructured mesh techniques for solving complex aerodynamic flows is discussed. The principal advantages of unstructured mesh strategies, as they relate to complex geometries, adaptive meshing capabilities, and parallel processing, are emphasized. The various aspects required for the efficient and accurate solution of aerodynamic flows are addressed. These include mesh generation, mesh adaptivity, solution algorithms, convergence acceleration, and turbulence modeling. Computations of viscous turbulent two-dimensional flows and inviscid three-dimensional flows about complex configurations are demonstrated. Remaining obstacles and directions for future research are also outlined.
Automatic Mesh Coarsening for Discrete Ordinates Codes
Turner, Scott A.
1999-03-11
This paper describes the use of a "mesh potential" function for automatic coarsening of meshes in discrete ordinates neutral particle transport codes. For many transport calculations, a user may find it helpful to have the code determine a "good" neutronics mesh. The complexity of a problem involving millions of mesh cells, dozens of materials, and many energy groups makes it difficult to determine an adequate level of mesh refinement with a minimum number of cells. A method has been implemented in PARTISN (Parallel Time-dependent SN) to calculate a "mesh potential" in each original cell of a problem, and to use this information to determine the maximum coarseness allowed in the mesh while maintaining accuracy in the solution. Results are presented for a simple x-y-z fuel/control/reflector problem.
Adaptive numerical methods for partial differential equations
Colella, P.
1995-07-01
This review describes a structured approach to adaptivity. The adaptive mesh refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
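The flag-buffer-cluster cycle of Berger-style AMR can be sketched in one dimension. This is an illustrative toy, not Berger's clustering algorithm (which uses signatures and edge detection to generate efficient rectangular patches):

```python
import numpy as np

def flag_and_patch(u, tol, buffer_cells=1):
    """Berger-style refinement flagging on a 1-D grid: flag cells where an
    undivided-difference error indicator exceeds `tol`, add a safety buffer,
    and cluster contiguous flagged cells into patches to be overlaid with
    finer grids. Returns a list of (start, end) index ranges (inclusive)."""
    # Simple error criterion: magnitude of the second undivided difference.
    err = np.abs(u[2:] - 2.0 * u[1:-1] + u[:-2])
    flags = np.zeros(len(u), dtype=bool)
    flags[1:-1] = err > tol
    # Buffer so features cannot escape the fine patch between regrids.
    for i in np.flatnonzero(flags):
        lo, hi = max(0, i - buffer_cells), min(len(u) - 1, i + buffer_cells)
        flags[lo:hi + 1] = True
    # Cluster contiguous flagged cells into patches.
    patches, i = [], 0
    while i < len(u):
        if flags[i]:
            j = i
            while j + 1 < len(u) and flags[j + 1]:
                j += 1
            patches.append((i, j))
            i = j + 1
        else:
            i += 1
    return patches

# A step in u should produce a single patch around the jump.
u = np.where(np.arange(32) < 16, 0.0, 1.0)
patches = flag_and_patch(u, tol=0.5)
```

In a full AMR cycle these patches would be refined by some ratio, advanced with a smaller time step, and their solution averaged back onto the coarse grid.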
NASA Astrophysics Data System (ADS)
Biotteau, E.; Gravouil, A.; Lubrecht, A. A.; Combescure, A.
2012-01-01
In this paper, the refinement strategy based on the "Non-Linear Localized Full MultiGrid" solver originally published in Int. J. Numer. Meth. Engng 84(8):947-971 (2010) for 2-D structural problems is extended to 3-D simulations. In this context, some extra information concerning the refinement strategy and the behavior of the error indicators is given. The adaptive strategy is dedicated to the accurate modeling of elastoplastic materials with isotropic hardening in transient dynamics. A multigrid solver with local mesh refinement is used to reduce the amount of computational work needed to achieve an accurate calculation at each time step. The locally refined grids are automatically constructed, depending on the user-prescribed accuracy. The discretization error is estimated by a dedicated error indicator within the multigrid method. In contrast to other adaptive procedures, where grids are erased when new ones are generated, the previous solutions are used recursively to reduce the computing time on the new mesh. Moreover, the adaptive strategy needs no costly coarsening method as the mesh is reassessed at each time step. The multigrid strategy improves the convergence rate of the non-linear solver while ensuring the information transfer between the different meshes. It accounts for the influence of localized non-linearities on the whole structure. All the steps needed to achieve the adaptive strategy are automatically performed within the solver such that the calculation does not depend on user experience. This paper presents three-dimensional results using the adaptive multigrid strategy on elastoplastic structures in transient dynamics and in a linear geometrical framework. Isoparametric cubic elements with energy and plastic work error indicators are used during the calculation.
Milne, R.B.
1995-12-01
This thesis describes a new method for the numerical solution of partial differential equations of the parabolic type on an adaptively refined mesh in two or more spatial dimensions. The method is motivated and developed in the context of the level set formulation for the curvature dependent propagation of surfaces in three dimensions. In that setting, it realizes the multiple advantages of decreased computational effort, localized accuracy enhancement, and compatibility with problems containing a range of length scales.
Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0
NASA Technical Reports Server (NTRS)
Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine
2004-01-01
We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.
NASA Technical Reports Server (NTRS)
Coirier, William John
1994-01-01
A Cartesian, cell-based scheme for solving the Euler and Navier-Stokes equations in two dimensions is developed and tested. Grids about geometrically complicated bodies are generated automatically, by recursive subdivision of a single Cartesian cell encompassing the entire flow domain. Where the resulting cells intersect bodies, polygonal 'cut' cells are created. The geometry of the cut cells is computed using polygon-clipping algorithms. The grid is stored in a binary-tree data structure which provides a natural means of obtaining cell-to-cell connectivity and of carrying out solution-adaptive refinement. The Euler and Navier-Stokes equations are solved on the resulting grids using a finite-volume formulation. The convective terms are upwinded, with a limited linear reconstruction of the primitive variables used to provide input states to an approximate Riemann solver for computing the fluxes between neighboring cells. A multi-stage time-stepping scheme is used to reach a steady-state solution. Validation of the Euler solver with benchmark numerical and exact solutions is presented. An assessment of the accuracy of the approach is made by uniform and adaptive grid refinements for a steady, transonic, exact solution to the Euler equations. The error of the approach is directly compared to a structured solver formulation. A non smooth flow is also assessed for grid convergence, comparing uniform and adaptively refined results. Several formulations of the viscous terms are assessed analytically, both for accuracy and positivity. The two best formulations are used to compute adaptively refined solutions of the Navier-Stokes equations. These solutions are compared to each other, to experimental results and/or theory for a series of low and moderate Reynolds numbers flow fields. The most suitable viscous discretization is demonstrated for geometrically-complicated internal flows. For flows at high Reynolds numbers, both an altered grid-generation procedure and a
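The recursive Cartesian subdivision described above maps naturally onto a quadtree. A minimal 2-D sketch with a hypothetical geometry-based refinement criterion (cut-cell polygon clipping and the flow solver are omitted):

```python
class Cell:
    """Cartesian quadtree cell: recursively subdivided wherever a
    user-supplied refinement criterion demands resolution."""
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size   # lower-left corner and edge
        self.children = []

    def refine(self, needs_refinement, max_depth):
        if max_depth == 0 or not needs_refinement(self):
            return
        h = self.size / 2.0
        self.children = [Cell(self.x + dx, self.y + dy, h)
                         for dx in (0.0, h) for dy in (0.0, h)]
        for c in self.children:
            c.refine(needs_refinement, max_depth - 1)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Refine toward a circular "body" of radius 0.25 centred in the unit square.
def near_circle(cell):
    cx, cy = cell.x + cell.size / 2, cell.y + cell.size / 2
    d = ((cx - 0.5)**2 + (cy - 0.5)**2)**0.5
    return abs(d - 0.25) < cell.size   # cell straddles (or nearly) the circle

root = Cell(0.0, 0.0, 1.0)
root.refine(near_circle, max_depth=4)
leaves = root.leaves()
```

The tree structure directly yields cell-to-cell connectivity (parent/child/neighbor queries), which is why the paper stores the grid this way.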
NASA Astrophysics Data System (ADS)
Garel, F.; Davies, R.; Goes, S. D.; Davies, J.; Lithgow-Bertelloni, C. R.; Stixrude, L. P.
2012-12-01
Seismic observations show a wide range of slab morphologies within the mantle transition zone. This zone is likely to have been critical in Earth's thermal and chemical evolution, acting as a 'valve' that controls material transfer between the upper and lower mantle. However, the interaction between slabs and this complex region remains poorly understood. The complexity arises from non-linear and multi-scale interactions between several aspects of the mantle system, including mineral phase changes and material rheology. In this study, we will utilize new, multi-scale geodynamic models to determine what controls the seismically observed variability in slab behavior within the mantle transition zone and, hence, the down-going branch of the mantle 'valve'. Our models incorporate the newest mineral physics and theoretical constraints on density, phase proportions and rheology. In addition we exploit novel and unique adaptive grid methodologies to provide the resolution necessary to capture rapid changes in material properties in and around the transition zone. Our early results, which will be presented, illustrate the advantages of the new modelling technique for studying subduction including the effects of changes in material properties and mineral phases.
NASA Astrophysics Data System (ADS)
Blatov, I. A.; Dobrobog, N. V.; Kitaeva, E. V.
2016-07-01
The Galerkin finite element method is applied to nonself-adjoint singularly perturbed boundary value problems on Shishkin meshes. The Galerkin projection method is used to obtain conditionally ɛ-uniform a priori error estimates and to prove the convergence of a sequence of meshes in the case of an unknown boundary layer edge.
Adaptive isogeometric analysis based on a combined r-h strategy
NASA Astrophysics Data System (ADS)
Basappa, Umesh; Rajagopal, Amirtham; Reddy, J. N.
2016-03-01
In the present work, an r-h adaptive isogeometric analysis is proposed for plane elasticity problems. For performing the r-adaption, the control net is considered to be a network of springs, with the individual spring stiffness values being proportional to the error estimated at the control points. While preserving the boundary control points, relocation of only the interior control points is made by adopting a successive relaxation approach to achieve the equilibrium of the spring system. To suit the noninterpolatory nature of the isogeometric approximation, a new point-wise error estimate for the h-refinement is proposed. To evaluate the point-wise error, hierarchical B-spline functions in Sobolev spaces are considered. The proposed adaptive h-refinement strategy is based on using de Casteljau's algorithm for obtaining the new control points. The subsequent control meshes are thus obtained by using a recursive subdivision of the reference control mesh. Such a strategy ensures that the control points lie in the physical domain in subsequent refinements, thus making the physical mesh exactly interpolate the control mesh and thereby allowing the exact imposition of essential boundary conditions in classical isogeometric analysis (IGA). The combined r-h adaptive refinement strategy results in better convergence characteristics, with reduced errors, than r- or h-refinement alone. Several numerical examples are presented to illustrate the efficiency of the proposed approach.
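The spring-network r-adaption can be sketched as relaxation toward the stiffness-weighted average of neighboring control points. The data layout, neighbor lists, and stiffness values below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def relax_control_net(points, stiffness, neighbors, fixed, iters=200):
    """Successive (Jacobi-style) relaxation of a control net modeled as a
    spring system: each interior point moves to the stiffness-weighted
    average of its neighbors; boundary points (`fixed`) never move.
    `stiffness[i]` plays the role of an error-proportional spring constant,
    so high-error regions pull control points toward them."""
    pts = np.array(points, dtype=float)
    for _ in range(iters):
        new = pts.copy()
        for i, nbrs in neighbors.items():
            if i in fixed:
                continue
            k = np.array([stiffness[j] for j in nbrs])
            new[i] = (k[:, None] * pts[nbrs]).sum(axis=0) / k.sum()
        pts = new
    return pts

# 1-D net: 5 points on a line; a larger "error" (stiffness) at the right
# end pulls the interior points rightward.
points = [[0.0], [0.25], [0.5], [0.75], [1.0]]
neighbors = {1: [0, 2], 2: [1, 3], 3: [2, 4]}
stiffness = [1.0, 1.0, 1.0, 1.0, 4.0]
adapted = relax_control_net(points, stiffness, neighbors, fixed={0, 4})
```

At equilibrium the interior points cluster toward the high-stiffness (high-error) end, which is the r-adaption effect the paper exploits.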
NASA Astrophysics Data System (ADS)
Bennett, Beth Anne V.; Fielding, Joseph; Mauro, Richard J.; Long, Marshall B.; Smooke, Mitchell D.
1999-12-01
Axisymmetric laminar methane-air Bunsen flames are computed for two equivalence ratios: lean (Φ = 0.776), in which the traditional Bunsen cone forms above the burner; and rich (Φ = 1.243), in which the premixed Bunsen cone is accompanied by a diffusion flame halo located further downstream. Because the extremely large gradients at premixed flame fronts greatly exceed those in diffusion flames, their resolution requires a more sophisticated adaptive numerical method than those ordinarily applied to diffusion flames. The local rectangular refinement (LRR) solution-adaptive gridding method produces robust unstructured rectangular grids, utilizes multiple-scale finite-difference discretizations, and incorporates Newton's method to solve elliptic partial differential equation systems simultaneously. The LRR method is applied to the vorticity-velocity formulation of the fully elliptic governing equations, in conjunction with detailed chemistry, multicomponent transport and an optically-thin radiation model. The computed lean flame is lifted above the burner, and this liftoff is verified experimentally. For both lean and rich flames, grid spacing greatly influences the Bunsen cone's position, which only stabilizes with adequate refinement. In the rich configuration, the oxygen-free region above the Bunsen cone inhibits the complete decay of CH4, thus indirectly initiating the diffusion flame halo where CO oxidizes to CO2. In general, the results computed by the LRR method agree quite well with those obtained on equivalently refined conventional grids, yet the former require less than half the computational resources.
Numerical simulation of H2/air detonation using unstructured mesh
NASA Astrophysics Data System (ADS)
Togashi, Fumiya; Löhner, Rainald; Tsuboi, Nobuyuki
2009-06-01
To explore the capability of unstructured mesh to simulate detonation wave propagation phenomena, numerical simulation of H2/air detonation using unstructured mesh was conducted. The unstructured mesh has several advantages, such as easy mesh adaptation and flexibility for complicated configurations. To examine the resolution dependency of the unstructured mesh, several simulations varying the mesh size were conducted and compared with a computed result using a structured mesh. The results show that the unstructured mesh solution captures the detailed structure of the detonation wave as well as the structured mesh solution does. To capture the detailed detonation cell structure, the unstructured mesh simulations required at least twice, and ideally 5 times, the resolution of the structured mesh solution.
2010-10-05
MeshKit is an open-source library of mesh generation functionality. MeshKit has general mesh manipulation and generation functions such as Copy, Move, Rotate, and Extrude mesh. In addition, a new quad mesh algorithm and an embedded boundary Cartesian mesh algorithm (EB Mesh) are included. Interfaces to several public domain meshing algorithms (TetGen, Netgen, Triangle, Gmsh, CAMAL) are also offered. This library interacts with mesh data mostly through iMesh, including accessing the mesh in parallel. It can also interact with the iGeom interface to provide geometry functionality such as importing solid model based geometries. iGeom and iMesh are implemented in the CGM and MOAB packages, respectively. For functions not present in iMesh, such as tree construction and ray tracing, MeshKit also interacts with MOAB functions directly.
Quadrilateral/hexahedral finite element mesh coarsening
Staten, Matthew L; Dewey, Mark W; Scott, Michael A; Benzley, Steven E
2012-10-16
A technique for coarsening a finite element mesh ("FEM") is described. This technique includes identifying a coarsening region within the FEM to be coarsened. Perimeter chords running along perimeter boundaries of the coarsening region are identified. The perimeter chords are redirected to create an adaptive chord separating the coarsening region from a remainder of the FEM. The adaptive chord runs through mesh elements residing along the perimeter boundaries of the coarsening region. The adaptive chord is then extracted to coarsen the FEM.
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way, by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
Algorithm refinement for stochastic partial differential equations.
Alexander, F. J.; Garcia, Alejandro L.; Tartakovsky, D. M.
2001-01-01
A hybrid particle/continuum algorithm is formulated for Fickian diffusion in the fluctuating hydrodynamic limit. The particles are taken as independent random walkers; the fluctuating diffusion equation is solved by finite differences with deterministic and white-noise fluxes. At the interface between the particle and continuum computations the coupling is by flux matching, giving exact mass conservation. This methodology is an extension of Adaptive Mesh and Algorithm Refinement to stochastic partial differential equations. A variety of numerical experiments were performed for both steady and time-dependent scenarios. In all cases the mean and variance of density are captured correctly by the stochastic hybrid algorithm. For a non-stochastic version (i.e., using only deterministic continuum fluxes) the mean density is correct, but the variance is reduced except within the particle region, far from the interface. Extensions of the methodology to fluid mechanics applications are discussed.
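The continuum half of such a hybrid, a conservative finite-difference update with deterministic plus white-noise face fluxes, can be sketched in one dimension. The noise amplitude here is a placeholder, not the physical fluctuating-hydrodynamics coefficient, and the particle coupling by flux matching is omitted:

```python
import numpy as np

def fluctuating_diffusion_step(u, D, dx, dt, noise_amp, rng):
    """One explicit finite-difference step of a fluctuating diffusion
    equation: face fluxes carry a deterministic Fickian part plus an
    independent white-noise part, and cells are updated conservatively,
    so total mass is preserved to round-off. Zero-flux boundaries."""
    flux = np.zeros(len(u) + 1)
    # Deterministic Fickian flux on interior faces.
    flux[1:-1] = -D * (u[1:] - u[:-1]) / dx
    # White-noise flux, scaled for the space-time discretization.
    flux[1:-1] += noise_amp * rng.standard_normal(len(u) - 1) / np.sqrt(dt * dx)
    # Conservative update: cell change equals the flux difference.
    return u - dt / dx * (flux[1:] - flux[:-1])

rng = np.random.default_rng(0)
u = np.ones(64)
mass0 = u.sum()
for _ in range(100):
    u = fluctuating_diffusion_step(u, D=1.0, dx=1.0, dt=0.2,
                                   noise_amp=0.05, rng=rng)
```

Writing the noise into the flux rather than the cell update is what makes the coupling to a particle region mass-conserving, as the abstract emphasizes.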
Adaptive Multilinear Tensor Product Wavelets.
Weiss, Kenneth; Lindstrom, Peter
2016-01-01
Many foundational visualization techniques including isosurfacing, direct volume rendering and texture mapping rely on piecewise multilinear interpolation over the cells of a mesh. However, there has not been much focus within the visualization community on techniques that efficiently generate and encode globally continuous functions defined by the union of multilinear cells. Wavelets provide a rich context for analyzing and processing complicated datasets. In this paper, we exploit adaptive regular refinement as a means of representing and evaluating functions described by a subset of their nonzero wavelet coefficients. We analyze the dependencies involved in the wavelet transform and describe how to generate and represent the coarsest adaptive mesh with nodal function values such that the inverse wavelet transform is exactly reproduced via simple interpolation (subdivision) over the mesh elements. This allows for an adaptive, sparse representation of the function with on-demand evaluation at any point in the domain. We focus on the popular wavelets formed by tensor products of linear B-splines, resulting in an adaptive, nonconforming but crack-free quadtree (2D) or octree (3D) mesh that allows reproducing globally continuous functions via multilinear interpolation over its cells.
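A one-dimensional analogue of the linear B-spline wavelets shows why zero detail coefficients let an adaptive mesh reproduce the function by interpolation alone. This unlifted interpolating transform is a simplification of the tensor-product wavelets analyzed in the paper:

```python
import numpy as np

def forward_linear_wavelet(u):
    """One level of an interpolating linear-spline wavelet transform
    (unlifted): even samples become the coarse signal, and each odd sample
    stores only its deviation from the linear interpolant of its even
    neighbors. A zero detail means the fine value is reproduced exactly by
    linear interpolation over the coarse mesh, so that node can be dropped
    from an adaptive representation."""
    coarse = u[::2]
    detail = u[1::2] - 0.5 * (u[:-1:2] + u[2::2])
    return coarse, detail

def inverse_linear_wavelet(coarse, detail):
    """Exact inverse: interpolate the coarse signal and add back details."""
    u = np.empty(len(coarse) + len(detail))
    u[::2] = coarse
    u[1::2] = detail + 0.5 * (coarse[:-1] + coarse[1:])
    return u

# A piecewise-linear signal (kink only at an even node) has zero details:
# the adaptive mesh can drop those coefficients and still reconstruct it.
x = np.linspace(0.0, 1.0, 9)
u = np.abs(x - 0.5)
coarse, detail = forward_linear_wavelet(u)
recon = inverse_linear_wavelet(coarse, detail)
```

In 2D/3D the same idea, applied per tensor direction, yields the multilinear (bilinear/trilinear) evaluation over quadtree/octree cells described above.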
NASA Technical Reports Server (NTRS)
Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.
2005-01-01
This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data, which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process, where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution on the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge.
Niski, K; Purnomo, B; Cohen, J
2006-11-06
Previous algorithms for view-dependent level of detail provide local mesh refinements either at the finest granularity or at a fixed, coarse granularity. The former provides triangle-level adaptation, often at the expense of heavy CPU usage and low triangle rendering throughput; the latter improves CPU usage and rendering throughput by operating on groups of triangles. We present a new multiresolution hierarchy and associated algorithms that provide adaptive granularity. This multi-grained hierarchy allows independent control of the number of hierarchy nodes processed on the CPU and the number of triangles to be rendered on the GPU. We employ a seamless texture atlas style of geometry image as a GPU-friendly data organization, enabling efficient rendering and GPU-based stitching of patch borders. We demonstrate our approach on both large triangle meshes and terrains with up to billions of vertices.
ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve.
Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk
2014-01-01
In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments.
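The recovery idea behind ZZ-type estimators is easy to sketch for 1-D piecewise-linear finite elements (the paper's BEM setting, with weakly and hyper-singular integral operators, is substantially more technical):

```python
import numpy as np

def zz_error_indicators(x, u):
    """ZZ-type (gradient-recovery) error indicators for a 1-D piecewise-
    linear FE function with nodal values `u` on nodes `x`: recover a
    continuous nodal gradient by averaging adjacent element gradients,
    then measure, per element, the L2 distance between the recovered and
    the raw (piecewise-constant) gradient."""
    h = np.diff(x)
    grad = np.diff(u) / h                      # constant gradient per element
    # Recovered nodal gradient: length-weighted average of adjacent elements.
    g = np.empty(len(x))
    g[0], g[-1] = grad[0], grad[-1]
    g[1:-1] = (h[:-1] * grad[:-1] + h[1:] * grad[1:]) / (h[:-1] + h[1:])
    # Element indicator: ||G(u) - u'||_{L2(element)} with G linear on the
    # element; the quadratic integrand is integrated exactly.
    gl, gr = g[:-1] - grad, g[1:] - grad
    eta2 = h / 3.0 * (gl**2 + gl * gr + gr**2)
    return np.sqrt(eta2)

# Indicators concentrate where u kinks: refine the elements with largest eta.
x = np.linspace(0.0, 1.0, 9)
u = np.abs(x - 0.5)
eta = zz_error_indicators(x, u)
```

An adaptive loop then marks elements with the largest indicators (e.g. by a Dörfler criterion) and refines only those.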
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Spherical geodesic mesh generation
Fung, Jimmy; Kenamond, Mark Andrew; Burton, Donald E.; Shashkov, Mikhail Jurievich
2015-02-27
In ALE simulations with moving meshes, mesh topology has a direct influence on feature representation and code robustness. In three-dimensional simulations, modeling spherical volumes and features is particularly challenging for a hydrodynamics code. Calculations on traditional spherical meshes (such as spin meshes) often lead to errors and symmetry breaking. Although the underlying differencing scheme may be modified to rectify this, the differencing scheme may not be accessible. This work documents the use of spherical geodesic meshes to mitigate solution-mesh coupling. These meshes are generated notionally by connecting geodesic surface meshes to produce triangular-prismatic volume meshes. This mesh topology is fundamentally different from traditional mesh topologies and displays superior qualities such as topological symmetry. This work describes the geodesic mesh topology as well as motivating demonstrations with the FLAG hydrocode.
A moving mesh unstaggered constrained transport scheme for magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Mocz, Philip; Pakmor, Rüdiger; Springel, Volker; Vogelsberger, Mark; Marinacci, Federico; Hernquist, Lars
2016-11-01
We present a constrained transport (CT) algorithm for solving the 3D ideal magnetohydrodynamic (MHD) equations on a moving mesh, which maintains the divergence-free condition on the magnetic field to machine-precision. Our CT scheme uses an unstructured representation of the magnetic vector potential, making the numerical method simple and computationally efficient. The scheme is implemented in the moving mesh code AREPO. We demonstrate the performance of the approach with simulations of driven MHD turbulence, a magnetized disc galaxy, and a cosmological volume with primordial magnetic field. We compare the outcomes of these experiments to those obtained with a previously implemented Powell divergence-cleaning scheme. While CT and the Powell technique yield similar results in idealized test problems, some differences are seen in situations more representative of astrophysical flows. In the turbulence simulations, the Powell cleaning scheme artificially grows the mean magnetic field, while CT maintains this conserved quantity of ideal MHD. In the disc simulation, CT gives slower magnetic field growth rate and saturates to equipartition between the turbulent kinetic energy and magnetic energy, whereas Powell cleaning produces a dynamically dominant magnetic field. Such difference has been observed in adaptive-mesh refinement codes with CT and smoothed-particle hydrodynamics codes with divergence-cleaning. In the cosmological simulation, both approaches give similar magnetic amplification, but Powell exhibits more cell-level noise. CT methods in general are more accurate than divergence-cleaning techniques, and, when coupled to a moving mesh can exploit the advantages of automatic spatial/temporal adaptivity and reduced advection errors, allowing for improved astrophysical MHD simulations.
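The core CT property, that updating B by the discrete curl of an edge-centered EMF preserves the per-cell divergence to round-off, can be verified on a fixed 2D staggered Cartesian grid (a hedged sketch; AREPO's moving-mesh, vector-potential formulation is not reproduced):

```python
import numpy as np

# Constrained transport on a 2D staggered grid: Bx on x-faces, By on
# y-faces, the EMF Ez on cell corners.
nx, ny, dx, dy, dt = 8, 8, 0.1, 0.1, 0.01
rng = np.random.default_rng(0)
Bx = rng.standard_normal((nx + 1, ny))       # x-face field
By = rng.standard_normal((nx, ny + 1))       # y-face field
Ez = rng.standard_normal((nx + 1, ny + 1))   # corner EMF

def cell_div(Bx, By):
    return (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy

div_before = cell_div(Bx, By)
Bx -= dt * (Ez[:, 1:] - Ez[:, :-1]) / dy     # dBx/dt = -dEz/dy
By += dt * (Ez[1:, :] - Ez[:-1, :]) / dx     # dBy/dt = +dEz/dx
print(np.max(np.abs(cell_div(Bx, By) - div_before)))  # ~ round-off
```

The four corner EMF contributions to each cell cancel exactly, so the divergence change is at machine precision regardless of the EMF values.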
Adaptive finite element strategies for shell structures
NASA Technical Reports Server (NTRS)
Stanley, G.; Levit, I.; Stehlin, B.; Hurlbut, B.
1992-01-01
The present paper extends existing finite element adaptive refinement (AR) techniques to shell structures, which have heretofore been neglected in the AR literature. Specific challenges in applying AR to shell structures include: (1) physical discontinuities (e.g., stiffener intersections); (2) boundary layers; (3) sensitivity to geometric imperfections; (4) the sensitivity of most shell elements to mesh distortion, constraint definition and/or thinness; and (5) intrinsic geometric nonlinearity. All of these challenges but (5) are addressed here.
He, Xiaowei; Hou, Yanbin; Chen, Duofang; Jiang, Yuchuan; Shen, Man; Liu, Junting; Zhang, Qitan; Tian, Jie
2011-01-01
Bioluminescence tomography (BLT) is a promising tool for studying physiological and pathological processes at cellular and molecular levels. In most clinical or preclinical practices, fine discretization is needed for recovering sources with acceptable resolution when solving BLT with the finite element method (FEM). Nevertheless, uniformly fine meshes cause large datasets, and overfine meshes might aggravate the ill-posedness of BLT. Additionally, accurate quantitative information about density and power has not been obtained simultaneously so far. In this paper, we present a novel multilevel sparse reconstruction method based on an adaptive FEM framework. In this method, the permissible source region gradually shrinks with adaptive local mesh refinement. By using sparse reconstruction with l1 regularization on multilevel adaptive meshes, simultaneous recovery of density and power as well as accurate source location can be achieved. Experimental results for a heterogeneous phantom and a mouse atlas model demonstrate its effectiveness and potential in the application of quantitative BLT.
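The l1-regularized reconstruction step can be sketched with a minimal ISTA (iterative soft-thresholding) loop; the matrix, sizes, and regularization weight below are illustrative assumptions, not the paper's BLT forward operator.

```python
import numpy as np

# Minimal ISTA for min_x 0.5*||A x - b||^2 + mu*||x||_1, a stand-in for
# sparse source recovery on a single mesh level.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -0.7, 0.5]       # sparse ground-truth sources
b = A @ x_true
mu = 0.05 * np.max(np.abs(A.T @ b))          # heuristic regularization weight
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
x = np.zeros(60)
for _ in range(1000):
    z = x - A.T @ (A @ x - b) / L            # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - mu / L, 0.0)  # soft threshold
print(np.flatnonzero(np.abs(x) > 0.3))       # large entries mark the support
```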
Some aspects of adaptive grid technology related to boundary and interior layers
NASA Astrophysics Data System (ADS)
Carey, Graham F.; Anderson, M.; Carnes, B.; Kirk, B.
2004-04-01
We consider the use of adaptive mesh strategies for solution of problems exhibiting boundary and interior layer solutions. As the presence of these layer structures suggests, reliable and accurate solution of this class of problems using finite difference, finite volume or finite element schemes requires grading the mesh into the layers and due attention to the associated algorithms. When the nature and structure of the layer is known, mesh grading can be achieved during the grid generation by specifying an appropriate grading function. However, in many applications the location and nature of the layer behavior is not known in advance. Consequently, adaptive mesh techniques that employ feedback from intermediate grid solutions are an appealing approach. In this paper, we provide a brief overview of the main adaptive grid strategies in the context of problems with layers. Associated error indicators that guide the refinement feedback control/grid optimization process are also covered and there is a brief commentary on the supporting data structure requirements. Some current issues concerning the use of stabilization in conjunction with adaptive mesh refinement (AMR), the question of "pollution effects" in computation of local error indicators, the influence of nonlinearities and the design of meshes for targeted optimization of specific quantities are considered. The application of AMR for layer problems is illustrated by means of case studies from semiconductor device transport (drift diffusion), nonlinear reaction-diffusion, layers due to surface capillary effects, and shockwaves in compressible gas dynamics.
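The a priori grading mentioned above can be sketched in one dimension (illustrative Python, assumed grading function): for a layer at x = 0, mapping uniform reference points through a power-law grading function clusters nodes into the layer.

```python
import numpy as np

# A priori mesh grading for a boundary layer at x = 0 of width eps,
# e.g. for u(x) = 1 - exp(-x/eps).
eps, n, p = 0.01, 41, 3.0
xi = np.linspace(0.0, 1.0, n)        # uniform reference coordinates
x_graded = xi ** p                   # graded mesh, fine near x = 0
x_uniform = xi                       # ungraded mesh for comparison
in_layer = lambda x: int((x < eps).sum())
print(in_layer(x_graded), in_layer(x_uniform))   # prints 9 1
h_first = x_graded[1] - x_graded[0]
h_last = x_graded[-1] - x_graded[-2]
print(h_first, h_last)               # smallest cell far smaller than largest
```

The graded mesh places several nodes inside the layer where the uniform mesh places only the endpoint; when the layer location is unknown, the adaptive feedback strategies discussed in the paper take over.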
An adaptive learning approach for 3-D surface reconstruction from point clouds.
Junior, Agostinho de Medeiros Brito; Neto, Adrião Duarte Dória; de Melo, Jorge Dantas; Goncalves, Luiz Marcos Garcia
2008-06-01
In this paper, we propose a multiresolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3-D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map (SOM). Basically, a self-adaptive scheme is used for iteratively moving vertices of an initial simple mesh in the direction of the set of points, ideally the object boundary. Successive refinement and motion of vertices are applied, leading to a more detailed surface in a multiresolution, iterative scheme. Reconstruction was tested on several point sets of different shapes and sizes. Results show that the generated meshes closely approximate the final object shapes. We include measures of performance and discuss robustness.
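The SOM-style vertex update can be sketched in an assumed minimal form (the paper's mesh operators and refinement rules are not reproduced): for each sample point, the nearest mesh vertex, the "winner," takes a step toward it, pulling the mesh onto the point cloud.

```python
import numpy as np

# Toy SOM-style vertex motion toward a point cloud on the unit sphere.
rng = np.random.default_rng(3)
points = rng.standard_normal((500, 3))
points /= np.linalg.norm(points, axis=1, keepdims=True)   # cloud on unit sphere
verts = 0.1 * rng.standard_normal((20, 3))                # small initial mesh
lr = 0.2                                                  # learning rate
for p in points:
    w = np.argmin(np.linalg.norm(verts - p, axis=1))      # winning vertex
    verts[w] += lr * (p - verts[w])                       # move winner toward p
print(np.linalg.norm(verts, axis=1).mean())   # mean radius drifts toward 1
```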
MHD simulations on an unstructured mesh
Strauss, H.R.; Park, W.; Belova, E.; Fu, G.Y.; Longcope, D.W.; Sugiyama, L.E.
1998-12-31
Two reasons for using an unstructured computational mesh are adaptivity, and alignment with arbitrarily shaped boundaries. Two codes which use finite element discretization on an unstructured mesh are described. FEM3D solves 2D and 3D RMHD using an adaptive grid. MH3D++, which incorporates methods of FEM3D into the MH3D generalized MHD code, can be used with shaped boundaries, which might be 3D.
Adaptive kinetic-fluid solvers for heterogeneous computing architectures
NASA Astrophysics Data System (ADS)
Zabelok, Sergey; Arslanbekov, Robert; Kolobov, Vladimir
2015-12-01
We show feasibility and benefits of porting an adaptive multi-scale kinetic-fluid code to CPU-GPU systems. Challenges are due to the irregular data access for the adaptive Cartesian mesh, the vast difference in computational cost between kinetic and fluid cells, and the desire to evenly load all CPUs and GPUs during grid adaptation and algorithm refinement. Our Unified Flow Solver (UFS) combines Adaptive Mesh Refinement (AMR) with automatic cell-by-cell selection of kinetic or fluid solvers based on continuum breakdown criteria. Using GPUs enables hybrid simulations of mixed rarefied-continuum flows with a million Boltzmann cells, each having a 24 × 24 × 24 velocity mesh. We describe the implementation of CUDA kernels for three modules in UFS: the direct Boltzmann solver using the discrete velocity method (DVM), the Direct Simulation Monte Carlo (DSMC) solver, and a mesoscopic solver based on the Lattice Boltzmann Method (LBM), all using adaptive Cartesian mesh. Double-digit speedups on a single GPU and good scaling across multiple GPUs have been demonstrated.
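The cell-by-cell solver selection can be sketched with a common continuum-breakdown indicator, the gradient-length Knudsen number Kn_GLL = lambda·|grad Q|/Q for a flow quantity Q; the mean free path and switching threshold below are illustrative assumptions, not the actual UFS criteria.

```python
import numpy as np

# Per-cell kinetic/fluid selection from a gradient-length Knudsen number.
lam = 1e-2                                         # assumed local mean free path
x = np.linspace(0.0, 1.0, 401)
rho = 1.0 + np.exp(-((x - 0.5) / 0.02) ** 2)       # sharp density feature
kn_gll = lam * np.abs(np.gradient(rho, x)) / rho
solver = np.where(kn_gll > 0.05, "kinetic", "fluid")   # threshold is assumed
print((solver == "kinetic").sum(), "kinetic cells of", solver.size)
```

Cells in the smooth far field stay with the cheap fluid solver; only the band of steep gradients is promoted to the kinetic solver.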
An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations
NASA Astrophysics Data System (ADS)
Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.
2016-08-01
In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.
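The element-flagging step can be sketched in an assumed form (the paper uses the locally largest density-gradient value; the 10%/20% fractions here are illustrative): elements whose indicator falls in the top fraction are marked for refinement, those in the bottom fraction for coarsening.

```python
import numpy as np

# Flag candidate elements for refinement/coarsening from a scalar indicator.
rng = np.random.default_rng(1)
eta = rng.random(100)                        # per-element |grad rho| indicator
refine = eta >= np.quantile(eta, 0.90)       # largest 10% -> refine
coarsen = eta <= np.quantile(eta, 0.20)      # smallest 20% -> coarsen
print(refine.sum(), coarsen.sum())           # prints 10 20
```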
A node-centered local refinement algorithm for Poisson's equation in complex geometries
McCorquodale, Peter; Colella, Phillip; Grote, David P.; Vay, Jean-Luc
2004-05-04
This paper presents a method for solving Poisson's equation with Dirichlet boundary conditions on an irregular bounded three-dimensional region. The method uses a nodal-point discretization and adaptive mesh refinement (AMR) on Cartesian grids, and the AMR multigrid solver of Almgren. The discrete Laplacian operator at internal boundaries comes from either linear or quadratic (Shortley-Weller) extrapolation, and the two methods are compared. It is shown that either way, solution error is second order in the mesh spacing. Error in the gradient of the solution is first order with linear extrapolation, but second order with Shortley-Weller. Examples are given with comparison with the exact solution. The method is also applied to a heavy-ion fusion accelerator problem, showing the advantage of adaptivity.
Mesh Quality Improvement Toolkit
2002-11-15
MESQUITE is a linkable software library used by simulation and mesh generation tools to improve the quality of meshes. Mesh quality is improved by node movement and/or local topological modifications. Various aspects of mesh quality, such as smoothness, element shape, size, and orientation, are controlled by choosing an appropriate mesh quality metric, an objective function template, and a numerical optimization solver. MESQUITE uses the TSTT mesh interface specification to provide an interoperable toolkit that can be used by applications which adopt the standard. A flexible code design makes it easy for meshing researchers to add mesh quality metrics, templates, and solvers and to develop new quality improvement algorithms using the MESQUITE infrastructure.
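Quality improvement by node movement can be sketched with plain Laplacian smoothing, a far simpler stand-in for MESQUITE's metric/objective-function machinery: each free interior node moves to the centroid of its neighbors.

```python
import numpy as np

# Laplacian smoothing: a badly placed interior node of a unit square is
# relaxed to the centroid of its (fixed) neighbors.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.9, 0.9]])
neighbors = {4: [0, 1, 2, 3]}        # node 4 is interior and badly placed
for _ in range(10):                  # sweep over the free nodes
    for i, nbrs in neighbors.items():
        nodes[i] = nodes[nbrs].mean(axis=0)
print(nodes[4])                      # -> [0.5 0.5]
```

Real smoothers instead optimize a chosen quality metric over the patch, which avoids the element inversion that pure Laplacian smoothing can produce on concave patches.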
Knupp, Patrick
2000-12-13
We investigate a well-motivated mesh untangling objective function whose optimization automatically produces non-inverted elements when possible. Examples show the procedure is highly effective on simplicial meshes and on non-simplicial (e.g., hexahedral) meshes constructed via mapping or sweeping algorithms. The current whisker-weaving (WW) algorithm in CUBIT usually produces hexahedral meshes that are unsuitable for analyses due to inverted elements. The majority of these meshes cannot be untangled using the new objective function. The most likely source of the difficulty is poor mesh topology.
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.