Science.gov

Sample records for 3d adaptive mesh

  1. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the nonconforming interfaces between elements. A new technique is introduced to efficiently implement MEM on 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming large projection matrices for 3-D meshes and also simplifies the implementation. This new technique can be used for both h- and p-type adaptation. The method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction in the number of elements used and in CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel
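
    The two-step projection through an intermediate mortar can be pictured with a small linear-algebra sketch. The snippet below is an illustration only, not the authors' implementation: the matrices and face sizes are made-up stand-ins, and it merely shows that applying 2-D projection matrices direction by direction is equivalent to applying the large Kronecker-product operator one would otherwise have to assemble explicitly.

        # Sketch only: hypothetical 2-D projection matrices Px, Py and face data F.
        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 6, 5                       # nodes per direction on an element face (assumed)
        Px = rng.standard_normal((n, n))  # stand-ins for 2-D mortar projection matrices
        Py = rng.standard_normal((m, m))
        F = rng.standard_normal((n, m))   # data on a nonconforming element face

        # Step 1: project along one direction; step 2: project along the other.
        two_step = Px @ F @ Py.T

        # Reference: the monolithic (n*m) x (n*m) operator the two-step form avoids.
        monolithic = (np.kron(Px, Py) @ F.ravel()).reshape(n, m)
        assert np.allclose(two_step, monolithic)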

  2. Adaptive mesh refinement techniques for 3-D skin electrode modeling.

    PubMed

    Sawicki, Bartosz; Okoniewski, Michal

    2010-03-01

    In this paper, we develop a 3-D adaptive mesh refinement technique. The algorithm is constructed with an electric impedance tomography forward problem and the finite-element method in mind, but is applicable to a much wider class of problems. We use the method to evaluate the distribution of currents injected into a model of a human body through skin contact electrodes. We demonstrate that the technique leads to a significantly improved solution, particularly near the electrodes. We discuss error estimation, efficiency, and quality of the refinement algorithm and methods that allow for preserving mesh attributes in the refinement process.
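
    A generic illustration of this kind of error-indicator-driven refinement loop is sketched below in one dimension. It is not the authors' EIT/finite-element algorithm: the sample field, the slope-jump indicator and the marking threshold are all assumptions chosen only to show how elements are marked and split where the indicator is large.

        # 1-D stand-in: split intervals where a crude slope-jump indicator is large.
        import numpy as np

        f = lambda x: np.tanh(50.0 * (x - 0.3))       # field with a sharp feature (assumed)
        nodes = np.linspace(0.0, 1.0, 11)

        for _ in range(4):                            # a few refinement passes
            u = f(nodes)
            slope = np.diff(u) / np.diff(nodes)       # per-interval slopes
            jump = np.abs(np.diff(slope))             # slope jumps at interior nodes
            indicator = np.zeros(len(nodes) - 1)
            indicator[:-1] += jump                    # attribute each jump to its
            indicator[1:] += jump                     # two neighbouring intervals
            marked = indicator > 0.3 * indicator.max()
            midpoints = 0.5 * (nodes[:-1] + nodes[1:])[marked]
            nodes = np.sort(np.concatenate([nodes, midpoints]))

        print(len(nodes), "nodes; most new nodes cluster near x = 0.3")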

  3. 3D Finite Element Trajectory Code with Adaptive Meshing

    NASA Astrophysics Data System (ADS)

    Ives, Lawrence; Bui, Thuc; Vogler, William; Bauer, Andy; Shephard, Mark; Beal, Mark; Tran, Hien

    2004-11-01

    Beam Optics Analysis, a new 3D charged-particle program, is available and in use for the design of complex 3D electron guns and charged particle devices. The code reads files directly from most CAD and solid modeling programs, includes an intuitive Graphical User Interface (GUI), and provides a robust, fully automatic mesh generator. Complex problems can be set up and analysis initiated in minutes. The program includes a user-friendly post processor for displaying field and trajectory data using 3D plots and images. The electrostatic solver is based on the standard nodal finite element method. The magnetostatic field solver is based on the vector finite element method and is also called during the trajectory simulation process to solve for self magnetic fields. The user imports the geometry from essentially any commercial CAD program and uses the GUI to assign parameters (voltages, currents, dielectric constant) and designate emitters (including work function, emitter temperature, and number of trajectories). The mesh is then generated automatically and the analysis is performed, including mesh adaptation to improve accuracy and optimize computational resources. This presentation will provide information on the basic structure of the code, its operation, and its capabilities.

  4. 3D Compressible Melt Transport with Mesh Adaptivity

    NASA Astrophysics Data System (ADS)

    Dannberg, J.; Heister, T.

    2015-12-01

    Melt generation and migration have been the subject of numerous investigations. However, their typical time and length scales are vastly different from those of mantle convection, and the material properties are highly spatially variable, making the problem strongly non-linear. These challenges make it difficult to study these processes in a unified framework and in three dimensions. We present our extension of the mantle convection code ASPECT that allows for solving additional equations describing the behavior of melt percolating through and interacting with a viscously deforming host rock. One particular advantage is ASPECT's adaptive mesh refinement, as the resolution can be increased in areas where melt is present and viscosity gradients are steep, whereas a lower resolution is sufficient in regions without melt. Our approach includes both melt migration and melt generation, allowing for different melting parametrizations. In contrast to previous formulations, we consider the individual compressibilities of the solid and fluid phases in addition to compaction flow. This ensures self-consistency when linking melt generation to processes in the deeper mantle, where the compressibility of the solid phase becomes more important. We evaluate the functionality and potential of this method using a series of benchmarks and applications, including solitary waves, magmatic shear bands, and melt generation and transport in a rising mantle plume. We compare results of the compressible and incompressible formulations and find melt volume differences of up to 15%. Moreover, we demonstrate that adaptive mesh refinement has the potential to reduce the runtime of a computation by more than one order of magnitude. Our model of magma dynamics provides a framework for investigating links between the deep mantle and melt generation and migration. This approach could prove particularly useful when applied to modeling the generation of komatiites or other melts originating at greater depths.
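
    The kind of refinement criterion described above can be sketched conceptually. The function below is a stand-in, not the ASPECT refinement plugin: the field names, thresholds and box grid are assumptions, and it only shows cells being flagged where melt is present or the (log-)viscosity gradient is steep.

        # Conceptual sketch with assumed fields and thresholds (not ASPECT code).
        import numpy as np

        def refinement_flags(porosity, log_viscosity, dx,
                             melt_threshold=1e-3, grad_threshold=2.0):
            """Return +1 (refine), 0 (keep), -1 (coarsen) per cell of a box grid."""
            grad = np.gradient(log_viscosity, dx)
            grad_mag = np.sqrt(sum(g**2 for g in grad))
            refine = (porosity > melt_threshold) | (grad_mag > grad_threshold)
            coarsen = (porosity < 0.1 * melt_threshold) & (grad_mag < 0.5 * grad_threshold)
            return np.where(refine, 1, np.where(coarsen, -1, 0))

        # toy fields on a 16^3 grid: a melt blob and a viscosity jump
        x, y, z = np.meshgrid(*[np.linspace(0, 1, 16)] * 3, indexing="ij")
        porosity = 0.05 * np.exp(-((x - 0.5)**2 + (y - 0.5)**2 + (z - 0.5)**2) / 0.01)
        log_visc = 3.0 * np.tanh((z - 0.6) / 0.05)
        flags = refinement_flags(porosity, log_visc, 1.0 / 15)
        print((flags == 1).sum(), "cells flagged for refinement")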

  5. 3D Compressible Melt Transport with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Dannberg, Juliane; Heister, Timo

    2015-04-01

    Melt generation and migration have been the subject of numerous investigations, but their typical time and length scales are vastly different from those of mantle convection, which makes it difficult to study these processes in a unified framework. The equations that describe coupled Stokes-Darcy flow were derived long ago and have been successfully implemented and applied in numerical models (Keller et al., 2013). However, modelling magma dynamics poses the challenge of highly non-linear and spatially variable material properties, in particular the viscosity. Applying adaptive mesh refinement to this type of problem is particularly advantageous, as the resolution can be increased in mesh cells where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. In addition, previous models neglect the compressibility of both the solid and the fluid phase. However, experiments have shown that the melt density change from the depth of melt generation to the surface leads to a volume increase of up to 20%. Considering these volume changes in both phases also ensures self-consistency of models that strive to link melt generation to processes in the deeper mantle, where the compressibility of the solid phase becomes more important. We describe our extension of the finite-element mantle convection code ASPECT (Kronbichler et al., 2012) that allows for solving additional equations describing the behaviour of silicate melt percolating through and interacting with a viscously deforming host rock. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. This approach includes both melt migration and melt generation with the accompanying latent heat effects. We evaluate the functionality and potential of this method using a series of simple model setups and benchmarks, comparing results of the compressible and incompressible formulation and

  6. Content-Adaptive Finite Element Mesh Generation of 3-D Complex MR Volumes for Bioelectromagnetic Problems.

    PubMed

    Lee, W; Kim, T-S; Cho, M; Lee, S

    2005-01-01

    In studying bioelectromagnetic problems, the finite element method offers several advantages over other conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropy. Mesh generation is the first requirement in finite element analysis, and there are many different approaches to it. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes, resulting in numerous elements in the smaller volume regions, thereby increasing computational load and demand. In this work, we present an improved content-adaptive mesh generation scheme that is efficient and fast, along with options to change the contents of meshes. For demonstration, mesh models of the head from a volume MRI are presented in 2-D and 3-D.

  7. Shape-model-based adaptation of 3D deformable meshes for segmentation of medical images

    NASA Astrophysics Data System (ADS)

    Pekar, Vladimir; Kaus, Michael R.; Lorenz, Cristian; Lobregt, Steven; Truyen, Roel; Weese, Juergen

    2001-07-01

    Segmentation methods based on adaptation of deformable models have found numerous applications in medical image analysis. Many efforts have been made in recent years to improve their robustness and reliability. In particular, increasingly more methods use a priori information about the shape of the anatomical structure to be segmented. This reduces the risk of the model being attracted to false features in the image and, as a consequence, makes close initialization, which remains the principal limitation of elastically deformable models, less crucial for segmentation quality. In this paper, we present a novel segmentation approach which uses a 3D anatomical statistical shape model to initialize the adaptation process of a deformable model represented by a triangular mesh. As the first step, the anatomical shape model is parametrically fitted to the structure of interest in the image. The result of this global adaptation is used to initialize the local mesh refinement based on an energy minimization. We applied our approach to segment spine vertebrae in CT datasets. The segmentation quality was quantitatively assessed for 6 vertebrae, from 2 datasets, by computing the mean and maximum distance between the adapted mesh and a manually segmented reference shape. The results of the study show that the presented method is a promising approach for segmentation of complex anatomical structures in medical images.

  8. Dynamic Implicit 3D Adaptive Mesh Refinement for Non-Equilibrium Radiation Diffusion

    SciTech Connect

    Philip, Bobby; Wang, Zhen; Berrill, Mark A; Rodriguez Rodriguez, Manuel; Pernice, Michael

    2014-01-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multiphysics systems: implicit time integration for efficient long term time integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent linear solver convergence.
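
    The role of local error control in limiting the global number of time steps can be illustrated on the simplest stiff problem. The sketch below is a generic, assumed controller (backward Euler with step doubling on the linear test equation), not the paper's control-theory-based algorithm or its JFNK machinery.

        # Assumed generic controller: backward Euler + step doubling on y' = lam*y.
        import numpy as np

        lam, tol = -50.0, 1e-4
        t, y, h, t_end, steps = 0.0, 1.0, 0.5, 1.0, 0

        def be_step(y, h):                     # one backward-Euler step (closed form)
            return y / (1.0 - h * lam)

        while t < t_end - 1e-12:
            h = min(h, t_end - t)
            coarse = be_step(y, h)
            fine = be_step(be_step(y, 0.5 * h), 0.5 * h)
            err = abs(fine - coarse)           # leading-order local error estimate
            if err <= tol or h < 1e-12:
                t, y, steps = t + h, fine, steps + 1   # accept the step
            # first-order method: optimal step scales like (tol/err)^(1/2)
            h *= min(5.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))

        print(steps, "accepted steps, final error", abs(y - np.exp(lam * t_end)))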

  9. A mesh adaptivity scheme on the Landau-de Gennes functional minimization case in 3D, and its driving efficiency

    NASA Astrophysics Data System (ADS)

    Bajc, Iztok; Hecht, Frédéric; Žumer, Slobodan

    2016-09-01

    This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral meshes driven by a posteriori error estimates based on metrics derived from the Hessian of a solution. The study is made on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed, with the steepest descent method stepping in when necessary. Aspects relating to the driving of mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each single mesh adaptation, and the precision of the correlated remeshing. Each factor is represented by a parameter, with its values possibly varying for every new mesh adaptation. We empirically show that the time of the overall algorithm convergence can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testing and debugging done within this work on simulations of systems of nematic colloids contributed substantially to upgrading the 3D meshing capabilities of an open-source finite element-oriented programming language, as well as an external 3D remeshing module.
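
    A rough sketch of a Hessian-derived size metric of the sort that drives this kind of adaptivity is given below; the tolerance, the clipping bounds and the exact metric form are assumptions for illustration, not the paper's implementation.

        # Assumed illustrative form: target edge length ~ |lambda_i|^(-1/2).
        import numpy as np

        def size_metric(hessian, err_tol=1e-2, h_min=1e-3, h_max=1.0):
            """Metric tensor M from a symmetric 3x3 Hessian; an edge e is 'unit' if e^T M e = 1."""
            lam, vecs = np.linalg.eigh(hessian)
            lam = np.maximum(np.abs(lam), 1e-12)                # guard against zero curvature
            h = np.clip(np.sqrt(err_tol / lam), h_min, h_max)   # per-direction edge length
            return vecs @ np.diag(1.0 / h**2) @ vecs.T

        H = np.diag([200.0, 2.0, 0.02])                     # toy Hessian
        M = size_metric(H)
        print(np.sqrt(1.0 / np.linalg.eigvalsh(M)))         # requested edge lengths per direction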

  10. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    SciTech Connect

    B. Philip; Z. Wang; M.A. Berrill; M. Birke; M. Pernice

    2014-04-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton–Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.

  11. A 3-D adaptive mesh refinement algorithm for multimaterial gas dynamics

    SciTech Connect

    Puckett, E.G.; Saltzman, J.S.

    1991-08-12

    Adaptive Mesh Refinement (AMR) in conjunction with high order upwind finite difference methods has been used effectively on a variety of problems. In this paper we discuss an implementation of an AMR finite difference method that solves the equations of gas dynamics with two material species in three dimensions. An equation for the evolution of volume fractions augments the gas dynamics system. The material interface is preserved and tracked from the volume fractions using a piecewise linear reconstruction technique. 14 refs., 4 figs.
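
    One ingredient mentioned above, estimating the interface orientation in mixed cells from the volume fractions, can be sketched with a Youngs-type gradient estimate. The fields and tolerances below are assumptions, not the authors' reconstruction, and positioning the planar interface within each cell is omitted.

        # Conceptual sketch: interface normal proportional to -grad(volume fraction).
        import numpy as np

        def interface_normals(vof, dx):
            gx, gy, gz = np.gradient(vof, dx)
            n = np.stack([-gx, -gy, -gz], axis=-1)
            mag = np.linalg.norm(n, axis=-1, keepdims=True)
            return np.where(mag > 1e-12, n / np.maximum(mag, 1e-12), 0.0)

        # toy volume-fraction field: a planar interface tilted in x-z
        x, y, z = np.meshgrid(*[np.linspace(0, 1, 20)] * 3, indexing="ij")
        vof = np.clip((x + 0.5 * z - 0.6) * 10 + 0.5, 0.0, 1.0)
        n = interface_normals(vof, 1.0 / 19)
        mixed = (vof > 0.01) & (vof < 0.99)
        print(n[mixed][:3])   # roughly parallel to -(1, 0, 0.5), normalized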

  12. On solving the 3-D phase field equations by employing a parallel-adaptive mesh refinement (Para-AMR) algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Z.; Xiong, S. M.

    2015-05-01

    An algorithm comprising adaptive mesh refinement (AMR) and parallel (Para-) computing capabilities was developed to efficiently solve the coupled phase field equations in 3-D. The AMR was achieved based on a gradient criterion and the point clustering algorithm introduced by Berger (1991). To reduce the time for mesh generation, a dynamic regridding approach was developed based on the magnitude of the maximum phase advancing velocity. Local data at each computing process was then constructed and parallel computation was realized based on the hierarchical grid structure created during the AMR. Numerical tests and simulations of single and multi-dendrite growth were performed, and the results show that the proposed algorithm can shorten the computing time for 3-D phase field simulations by about two orders of magnitude and enables one to gain much more insight into the underlying physics of dendrite growth during solidification.
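
    The regridding trigger based on the maximum phase-advancing velocity can be sketched as follows. The buffer width, time step and velocity profile are made-up numbers, not values from the paper, and the regridding itself is left as a placeholder; the point is only that the hierarchy is rebuilt when the front could have crossed a set fraction of the refined buffer rather than at every step.

        # Assumed numbers throughout; regridding itself is a placeholder.
        dx = 0.01                      # finest cell size (assumed)
        buffer_width = 4 * dx          # refined band kept around the interface (assumed)
        safety = 0.5                   # regrid before the front crosses half the buffer
        dt = 1.0e-4
        travelled, regrids = 0.0, 0

        for step in range(2000):
            v_max = 1.0 + 0.5 * (step % 100) / 100.0   # stand-in for max interface speed
            travelled += v_max * dt                    # worst-case front displacement
            if travelled > safety * buffer_width:
                regrids += 1                           # placeholder for the regridding call
                travelled = 0.0

        print(regrids, "regrids over 2000 steps instead of one per step")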

  13. 3D Adaptive Mesh Refinement Simulations of Pellet Injection in Tokamaks

    SciTech Connect

    R. Samtaney; S.C. Jardin; P. Colella; D.F. Martin

    2003-10-20

    We present results of Adaptive Mesh Refinement (AMR) simulations of the pellet injection process, a proven method of refueling tokamaks. AMR is a computationally efficient way to provide the resolution required to simulate realistic pellet sizes relative to device dimensions. The mathematical model comprises single-fluid MHD equations with source terms in the continuity equation along with a pellet ablation rate model. The numerical method developed is an explicit unsplit upwinding treatment of the 8-wave formulation, coupled with a MAC projection method to enforce the solenoidal property of the magnetic field. The Chombo framework is used for AMR. The role of the E x B drift in mass redistribution during inside and outside pellet injections is emphasized.
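
    The idea behind the projection step that keeps the magnetic field solenoidal can be shown compactly on a periodic grid. The sketch below is not the MAC/Chombo implementation used in the work above; it uses spectral derivatives purely for brevity to show a Poisson solve for a scalar followed by subtraction of its gradient.

        # Periodic, spectral sketch of divergence cleaning (not the paper's scheme).
        import numpy as np

        n, L = 32, 2.0 * np.pi
        k = np.fft.fftfreq(n, d=L / n) * 2.0 * np.pi
        KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
        K2 = KX**2 + KY**2 + KZ**2
        K2[0, 0, 0] = 1.0                       # avoid division by zero for the mean mode

        x, y, z = np.meshgrid(*[np.linspace(0, L, n, endpoint=False)] * 3, indexing="ij")
        B = np.stack([np.sin(x) * np.cos(y), np.sin(y), np.cos(z) * np.sin(x)])  # not solenoidal

        def div(B):
            bx, by, bz = (np.fft.fftn(c) for c in B)
            return np.real(np.fft.ifftn(1j * (KX * bx + KY * by + KZ * bz)))

        d_hat = np.fft.fftn(div(B))
        phi_hat = -d_hat / K2                   # solve  laplacian(phi) = div(B)
        phi_hat[0, 0, 0] = 0.0
        grad_phi = [np.real(np.fft.ifftn(1j * kk * phi_hat)) for kk in (KX, KY, KZ)]
        B_clean = B - np.stack(grad_phi)

        print("max |div B| before:", np.abs(div(B)).max(), "after:", np.abs(div(B_clean)).max())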

  14. Vertical Scan (V-SCAN) for 3-D Grid Adaptive Mesh Refinement for an atmospheric Model Dynamical Core

    NASA Astrophysics Data System (ADS)

    Andronova, N. G.; Vandenberg, D.; Oehmke, R.; Stout, Q. F.; Penner, J. E.

    2009-12-01

    One of the major building blocks of a rigorous representation of cloud evolution in global atmospheric models is a parallel adaptive grid MPI-based communication library (an Adaptive Blocks for Locally Cartesian Topologies library -- ABLCarT), which manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits a block as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells - the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. A major performance concern with adaptive mesh refinement is how to represent calculations that need to be sequenced in a particular order in a direction, such as calculating integrals along a specific path (e.g. atmospheric pressure or geopotential in the vertical dimension). This concern is compounded if the blocks have varying levels of refinement, or are scattered across different processors, as can be the case in parallel computing. In this paper we describe an implementation in ABLCarT of a vertical scan operation, which allows computing along vertical paths in the correct order across blocks, transparent to their resolution and processor location. We test this functionality on a 2D and a 3D advection problem, which tests the performance of the model's dynamics (transport) and physics (sources and sinks) for different model resolutions needed for inclusion of cloud formation.
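
    The heart of the vertical scan, carrying a running partial sum across block boundaries in the correct bottom-up order regardless of how the blocks happen to be stored, can be shown with a toy serial example. The block sizes and layout below are assumed, and the parallel communication that ABLCarT actually handles is not represented.

        # Toy serial sketch: a blockwise scan matches a single-array cumulative sum.
        import numpy as np

        rng = np.random.default_rng(1)
        column = rng.random(24)                        # the "true" vertical column
        blocks = {2: (16, column[16:24]),              # block id -> (start index, data),
                  0: (0,  column[0:8]),                # deliberately out of order
                  1: (8,  column[8:16])}

        def vertical_scan(blocks):
            result, carry = {}, 0.0
            for bid, (start, data) in sorted(blocks.items(), key=lambda kv: kv[1][0]):
                result[bid] = carry + np.cumsum(data)  # local scan shifted by the carry
                carry = result[bid][-1]
            return result

        scanned = vertical_scan(blocks)
        assert np.allclose(np.concatenate([scanned[b] for b in (0, 1, 2)]), np.cumsum(column))
        print("blockwise vertical scan matches the single-array result")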

  15. Spatial watermarking of 3D triangle meshes

    NASA Astrophysics Data System (ADS)

    Cayre, Francois; Macq, Benoit M. M.

    2001-12-01

    Although it is obvious that watermarking has become of great interest for protecting audio, video, and still pictures, little work has been done on 3D meshes. We propose a new method for watermarking 3D triangle meshes. This method embeds the watermark as triangle deformations. The list of watermarked triangles is obtained in a way similar to that used in the TSPS (Triangle Strip Peeling Sequence) method. Unlike TSPS, our method is automatic and more secure. We also show that it is reversible.

  16. 3-D Mesh Generation Nonlinear Systems

    SciTech Connect

    Christon, M. A.; Dovey, D.; Stillman, D. W.; Hallquist, J. O.; Rainsberger, R. B

    1994-04-07

    INGRID is a general-purpose, three-dimensional mesh generator developed for use with finite element, nonlinear, structural dynamics codes. INGRID generates the large and complex input data files for DYNA3D, NIKE3D, FACET, and TOPAZ3D. One of the greatest advantages of INGRID is that virtually any shape can be described without resorting to wedge elements, tetrahedrons, triangular elements or highly distorted quadrilateral or hexahedral elements. Other capabilities available are in the areas of geometry and graphics. Exact surface equations and surface intersections considerably improve the ability to deal with accurate models, and a hidden line graphics algorithm is included which is efficient on the most complicated meshes. The primary new capability is associated with the boundary conditions, loads, and material properties required by nonlinear mechanics programs. Commands have been designed for each case to minimize user effort. This is particularly important since special processing is almost always required for each load or boundary condition.

  17. Invertible authentication for 3D meshes

    NASA Astrophysics Data System (ADS)

    Dittmann, Jana; Benedens, Oliver

    2003-06-01

    Digital watermarking has become an accepted technology for enabling multimedia protection schemes. Building on the media-independent protocol schemes for invertible data authentication introduced in references 2, 4 and 5, we discuss the design of a new 3D invertible labeling technique to ensure high data integrity. We combine digital signature schemes and digital watermarking to provide publicly verifiable integrity. Furthermore, the protocol steps from those papers, which ensure that the original data can only be reproduced with a secret key, are adopted for 3D meshes. The goal is to show how the existing protocol can be used for 3D meshes to provide solutions for authentication watermarking. In our design concept and evaluation we see that, due to the nature of 3D meshes, the invertible functions needed to guarantee reversibility of the original differ from the concepts used for images and audio. Therefore we introduce one concept for distortion-free invertibility and one for adjustable minimum-distortion invertibility.

  18. 3-D Mesh Generation Nonlinear Systems

    1994-04-07

    INGRID is a general-purpose, three-dimensional mesh generator developed for use with finite element, nonlinear, structural dynamics codes. INGRID generates the large and complex input data files for DYNA3D, NIKE3D, FACET, and TOPAZ3D. One of the greatest advantages of INGRID is that virtually any shape can be described without resorting to wedge elements, tetrahedrons, triangular elements or highly distorted quadrilateral or hexahedral elements. Other capabilities available are in the areas of geometry and graphics. Exact surface equations and surface intersections considerably improve the ability to deal with accurate models, and a hidden line graphics algorithm is included which is efficient on the most complicated meshes. The primary new capability is associated with the boundary conditions, loads, and material properties required by nonlinear mechanics programs. Commands have been designed for each case to minimize user effort. This is particularly important since special processing is almost always required for each load or boundary condition.

  19. Cubit Adaptive Meshing Algorithm Library

    2004-09-01

    CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL’s triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia’s patented paving algorithm and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.

  20. The 3-D unstructured mesh generation using local transformations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    1993-01-01

    The topics are presented in viewgraph form and include the following: 3D combinatorial edge swapping; 3D incremental triangulation via local transformations; a new approach to multigrid for unstructured meshes; surface mesh generation using local transforms; volume triangulations; viscous mesh generation; and future directions.

  1. An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform

    NASA Astrophysics Data System (ADS)

    Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra

    2011-06-01

    The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression are done independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. The compression method used is decomposed into two stages: geometric encoding and topological encoding. The proposed approach consists of inserting a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, the geometric encoding is done on these two components. The resulting coarse mesh is marked using a robust mesh watermarking scheme. Inserting into the coarse mesh yields high robustness to several attacks. Finally, the topological encoding is applied to the marked coarse mesh to obtain the compressed mesh. The combination of compression and watermarking makes it possible to detect the signature after compression of the marked mesh. In addition, it allows protected 3D meshes to be transferred at minimum size. The experiments and evaluations show that the proposed approach gives efficient results in terms of compression gain, invisibility, and robustness of the signature against many attacks.

  2. 3D Structured Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Banks, D. W.; Hafez, M. M.

    1996-01-01

    Grid adaptation for structured meshes is the art of using information from an existing, but poorly resolved, solution to automatically redistribute the grid points in such a way as to improve the resolution in regions of high error, and thus the quality of the solution. This involves: (1) generating a grid via some standard algorithm, (2) calculating a solution on this grid, (3) adapting the grid to this solution, (4) recalculating the solution on the adapted grid, and (5) repeating steps 3 and 4 to satisfaction. Steps 3 and 4 can be repeated until some 'optimal' grid is converged to, but typically this is not worth the effort and just two or three repeat calculations are necessary. They also may be repeated every 5-10 time steps for unsteady calculations.
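
    A minimal 1-D analogue of steps 3 and 4, redistributing points so that a gradient-based weight is equidistributed, is sketched below; the monitor function and the number of adaptation passes are assumptions for illustration, not a production adaptation scheme.

        # 1-D equidistribution sketch of the adapt/recalculate cycle.
        import numpy as np

        def solve(x):                     # stand-in for step 2: a solution with a sharp layer
            return np.tanh(40.0 * (x - 0.5))

        x = np.linspace(0.0, 1.0, 41)     # step 1: initial uniform grid
        for _ in range(3):                # steps 3-5: adapt, re-solve, repeat a few times
            u = solve(x)
            w = 1.0 + np.abs(np.gradient(u, x))                 # monitor/weight function
            s = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
            s /= s[-1]                                          # normalized weighted arc length
            x = np.interp(np.linspace(0.0, 1.0, x.size), s, x)  # equidistribute the weight

        print("smallest spacing:", np.diff(x).min(), "largest:", np.diff(x).max())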

  3. 3-D UNSTRUCTURED HEXAHEDRAL-MESH Sn TRANSPORT METHODS

    SciTech Connect

    J. MOREL; J. MCGHEE; ET AL

    2000-11-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). We have developed a method for solving the neutral-particle transport equation on 3-D unstructured hexahedral meshes using an S_n discretization in angle in conjunction with a discontinuous finite-element discretization in space and a multigroup discretization in energy. Previous methods for solving this equation in 3-D have been limited to rectangular meshes. The unstructured-mesh method that we have developed is far more efficient for solving problems with complex 3-D geometric features than rectangular-mesh methods. In spite of having to make several compromises in our spatial discretization technique and our iterative solution technique, our method has been found to be both accurate and efficient for a broad class of problems.

  4. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

    -based error estimates. We conclude that the quasi-optimal mesh must be quasi-uniform in this metric. All numerical experiments are based on the publicly available Ani3D package, the collection of advanced numerical instruments.

  5. Requirements for mesh resolution in 3D computational hemodynamics.

    PubMed

    Prakash, S; Ethier, C R

    2001-04-01

    Computational techniques are widely used for studying large artery hemodynamics. Current trends favor analyzing flow in more anatomically realistic arteries. A significant obstacle to such analyses is generation of computational meshes that accurately resolve both the complex geometry and the physiologically relevant flow features. Here we examine, for a single arterial geometry, how velocity and wall shear stress patterns depend on mesh characteristics. A well-validated Navier-Stokes solver was used to simulate flow in an anatomically realistic human right coronary artery (RCA) using unstructured high-order tetrahedral finite element meshes. Velocities, wall shear stresses (WSS), and wall shear stress gradients were computed on a conventional "high-resolution" mesh series (60,000 to 160,000 velocity nodes) generated with a commercial meshing package. Similar calculations were then performed in a series of meshes generated through an adaptive mesh refinement (AMR) methodology. Mesh-independent velocity fields were not very difficult to obtain for both the conventional and adaptive mesh series. However, wall shear stress fields, and, in particular, wall shear stress gradient fields, were much more difficult to accurately resolve. The conventional (nonadaptive) mesh series did not show a consistent trend towards mesh-independence of WSS results. For the adaptive series, it required approximately 190,000 velocity nodes to reach an r.m.s. error in normalized WSS of less than 10 percent. Achieving mesh-independence in computed WSS fields requires a surprisingly large number of nodes, and is best approached through a systematic solution-adaptive mesh refinement technique. Calculations of WSS, and particularly WSS gradients, show appreciable errors even on meshes that appear to produce mesh-independent velocity fields.

  6. Unstructured mesh generation and adaptivity

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1995-01-01

    An overview of current unstructured mesh generation and adaptivity techniques is given. Basic building blocks taken from the field of computational geometry are first described. Various practical mesh generation techniques based on these algorithms are then constructed and illustrated with examples. Issues of adaptive meshing and stretched mesh generation for anisotropic problems are treated in subsequent sections. The presentation is organized in an educational manner, for readers familiar with computational fluid dynamics who wish to learn more about current unstructured mesh techniques.

  7. User-driven 3D mesh region targeting

    NASA Astrophysics Data System (ADS)

    Karasev, Peter; Malcolm, James; Niethammer, Marc; Kikinis, Ron; Tannenbaum, Allen

    2010-02-01

    We present a method for the fast selection of a region on a 3D mesh using geometric information. This is done using a weighted arc length minimization with a conformal factor based on the mean curvature of the 3D surface. A careful analysis of the geometric estimation process enables our geometric curve shortening to use a reliable smooth estimate of curvature and its gradient. The result is a robust way for a user to easily interact with particular regions of a 3D mesh constructed from medical imaging. In this study, we focus on building a robust and semi-automatic method for extracting selected folds on the cortical surface, specifically for isolating gyri by drawing a curve along the surrounding sulci. It is desirable to make this process semi-automatic because manually drawing a curve through the complex 3D mesh is extremely tedious, while automatic methods cannot realistically be expected to select the exact closed contour a user desires for a given dataset. In the technique described here, a user places a handful of seed points surrounding the gyri of interest; an initial curve is made from these points which then evolves to capture the region. We refer to this user-driven procedure as targeting or selection interchangeably. To illustrate the applicability of these methods to other medical data, we also give an example of bone fracture CT surface parcellation.

  8. Computationally efficient solution to the Cahn-Hilliard equation: Adaptive implicit time schemes, mesh sensitivity analysis and the 3D isoperimetric problem

    NASA Astrophysics Data System (ADS)

    Wodo, Olga; Ganapathysubramanian, Baskar

    2011-07-01

    We present an efficient numerical framework for analyzing spinodal decomposition described by the Cahn-Hilliard equation. We focus on the analysis of various implicit time schemes for two and three dimensional problems. We demonstrate that significant computational gains can be obtained by applying embedded, higher order Runge-Kutta methods in a time adaptive setting. This allows accessing time-scales that vary by five orders of magnitude. In addition, we also formulate a set of test problems that isolate each of the sub-processes involved in spinodal decomposition: interface creation and bulky phase coarsening. We analyze the error fluctuations using these test problems on the split form of the Cahn-Hilliard equation solved using the finite element method with basis functions of different orders. Any scheme that ensures at least four elements per interface satisfactorily captures both sub-processes. Our findings show that linear basis functions have superior error-to-cost properties. This strategy - coupled with a domain decomposition based parallel implementation - lets us notably augment the efficiency of a numerical Cahn-Hilliard solver, and opens new avenues for its practical applications, especially when three dimensional problems are considered. We use this framework to address the isoperimetric problem of identifying local solutions in the periodic cube in three dimensions. The framework is able to generate all five hypothesized candidates for the local solution of the periodic isoperimetric problem in 3D - sphere, cylinder, lamella, doubly periodic surface with genus two (Lawson surface) and triply periodic minimal surface (P Schwarz surface).

  9. 3D ADAPTIVE MESH REFINEMENT SIMULATIONS OF THE GAS CLOUD G2 BORN WITHIN THE DISKS OF YOUNG STARS IN THE GALACTIC CENTER

    SciTech Connect

    Schartmann, M.; Ballone, A.; Burkert, A.; Gillessen, S.; Genzel, R.; Pfuhl, O.; Eisenhauer, F.; Plewa, P. M.; Ott, T.; George, E. M.; Habibi, M.

    2015-10-01

    The dusty, ionized gas cloud G2 is currently passing the massive black hole in the Galactic Center at a distance of roughly 2400 Schwarzschild radii. We explore the possibility of a starting point of the cloud within the disks of young stars. We make use of the large amount of new observations in order to put constraints on G2's origin. Interpreting the observations as a diffuse cloud of gas, we employ three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations with the PLUTO code and do a detailed comparison with observational data. The simulations presented in this work update our previously obtained results in multiple ways: (1) high resolution three-dimensional hydrodynamical AMR simulations are used, (2) the cloud follows the updated orbit based on the Brackett-γ data, (3) a detailed comparison to the observed high-quality position–velocity (PV) diagrams and the evolution of the total Brackett-γ luminosity is done. We concentrate on two unsolved problems of the diffuse cloud scenario: the unphysical formation epoch only shortly before the first detection and the too steep Brackett-γ light curve obtained in simulations, whereas the observations indicate a constant Brackett-γ luminosity between 2004 and 2013. For a given atmosphere and cloud mass, we find a consistent model that can explain both, the observed Brackett-γ light curve and the PV diagrams of all epochs. Assuming initial pressure equilibrium with the atmosphere, this can be reached for a starting date earlier than roughly 1900, which is close to apo-center and well within the disks of young stars.

  10. 3D unstructured mesh discontinuous finite element hydro

    SciTech Connect

    Prasad, M.K.; Kershaw, D.S.; Shaw, M.J.

    1995-07-01

    The authors present detailed features of the ICF3D hydrodynamics code used for inertial fusion simulations. This code is intended to be a state-of-the-art upgrade of the well-known fluid code, LASNEX. ICF3D employs discontinuous finite elements on a discrete unstructured mesh consisting of a variety of 3D polyhedra including tetrahedra, prisms, and hexahedra. The authors discuss details of how the Roe-averaged second-order convection is applied on the discrete elements, and how the C++ coding interface has helped to simplify implementing the many physics and numerics modules within the code package. The authors emphasize the virtues of object-oriented design in large scale projects such as ICF3D.

  11. Hough transform-based 3D mesh retrieval

    NASA Astrophysics Data System (ADS)

    Zaharia, Titus; Preteux, Francoise J.

    2001-11-01

    This paper addresses the issue of 3D mesh indexation by using shape descriptors (SDs) under constraints of geometric and topological invariance. A new shape descriptor, the Optimized 3D Hough Transform Descriptor (O3DHTD), is proposed here. Intrinsically topologically stable, the O3DHTD is not invariant to geometric transformations. Nevertheless, we show mathematically how the O3DHTD can be optimally associated (in terms of compactness of representation and computational complexity) with a spatial alignment procedure which leads to a geometrically invariant behavior. Experiments have been carried out on the MPEG-7 3D model database consisting of about 1300 meshes in VRML 2.0 format. Objective retrieval results, based upon the definition of a categorized ground truth subset, are reported in terms of Bull's Eye Percentage (BEP) score and compared to those obtained by applying the MPEG-7 3D SD. It is shown that the O3DHTD outperforms the MPEG-7 3D SD by up to 28%.

  12. A Mechanistic Study of Wetting Superhydrophobic Porous 3D Meshes.

    PubMed

    Yohe, Stefan T; Freedman, Jonathan D; Falde, Eric J; Colson, Yolonda L; Grinstaff, Mark W

    2013-08-01

    Superhydrophobic, porous, 3D materials composed of poly(ε-caprolactone) (PCL) and the hydrophobic polymer dopant poly(glycerol monostearate-co-ε-caprolactone) (PGC-C18) are fabricated using the electrospinning technique. These 3D materials are distinct from 2D superhydrophobic surfaces, with maintenance of air at the surface as well as within the bulk of the material. These superhydrophobic materials float in water, and when held underwater and pressed, an air bubble is released and will rise to the surface. By changing the PGC-C18 doping concentration in the meshes and/or the fiber size from the micro- to nanoscale, the long-term stability of the entrapped air layer is controlled. The rate of water infiltration into the meshes, and the resulting displacement of the entrapped air, is quantitatively measured using X-ray computed tomography. The properties of the meshes are further probed using surfactants and solvents of different surface tensions. Finally, the application of hydraulic pressure is used to quantify the breakthrough pressure to wet the meshes. The tools for fabrication and analysis of these superhydrophobic materials as well as the ability to control the robustness of the entrapped air layer are highly desirable for a number of existing and emerging applications. PMID:25309305

  13. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the

  14. Computational MHD on 3D Unstructured Lagrangian Meshes

    NASA Astrophysics Data System (ADS)

    Rousculp, C. L.; Barnes, D. C.

    1999-11-01

    Lagrangian computational meshes are typically employed to model multi-material problems because they do not require costly interface tracking methods. Our algorithms, for ideal and non-ideal 3D MHD, are designed for use on such meshes composed of polyhedral cells with an arbitrary number of faces. This allows for mesh refinement during a calculation to prevent the well known problem of mesh tangling. The action of the magnetic vector potential, A·δl, is centered on edges. For ideal and non-ideal flow, this maintains ∇·B = 0 to round-off error. Vertex forces are derived by the variation of magnetic energy with respect to vertex positions, F = -∂W_B/∂r. This assures symmetry as well as magnetic flux, momentum, and energy conservation. The method is local, so that parallelization by domain decomposition is natural for large meshes. The resistive diffusion part is calculated using the support operator method to obtain energy conservation and symmetry. Implicit time difference equations are solved by preconditioned conjugate gradient methods. Results of convergence tests are presented. Boundary conditions at plasma-vacuum interfaces have been incorporated. Initial results of an annular Z-pinch implosion problem are shown.

  15. Conservative Patch Algorithm and Mesh Sequencing for PAB3D

    NASA Technical Reports Server (NTRS)

    Pao, S. P.; Abdol-Hamid, K. S.

    2005-01-01

    A mesh-sequencing algorithm and a conservative patched-grid-interface algorithm (hereafter, the patch algorithm) have been incorporated into the PAB3D code, which is a computer program that solves the Navier-Stokes equations for the simulation of subsonic, transonic, or supersonic flows surrounding an aircraft or other complex aerodynamic shapes. These algorithms are efficient and flexible and have added tremendously to the capabilities of PAB3D. The mesh-sequencing algorithm makes it possible to perform preliminary computations using only a fraction of the grid cells (provided the original cell count is divisible by an integer) along any grid coordinate axis, independently of the other axes. The patch algorithm addresses another critical need in multi-block grid situations, where the cell faces of adjacent grid blocks may not coincide, leading to errors in calculating fluxes of conserved physical quantities across interfaces between the blocks. The patch algorithm, based on the Stokes integral formulation of the applicable conservation laws, effectively matches each of the interfacial cells on one side of the block interface to the corresponding fractional cell area pieces on the other side. This approach is comprehensive and unified such that all interface topology is automatically processed without user intervention. This algorithm is implemented in a preprocessing code that creates a cell-by-cell database that will maintain flux conservation at any level of full or reduced grid density as the user may choose by way of the mesh-sequencing algorithm. These two algorithms have enhanced the numerical accuracy of the code, reduced the time and effort for grid preprocessing, and provided users with the flexibility of performing computations at any desired full or reduced grid resolution to suit their specific computational requirements.
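
    The mesh-sequencing idea, running a block at reduced grid density along one axis when the cell count is divisible by an integer, can be sketched on a plain coordinate array (the array layout is assumed and is not PAB3D's data structure):

        # Keep every k-th node plane along an axis, after a divisibility check.
        import numpy as np

        def sequence_mesh(node_coords, k, axis=0):
            ncells = node_coords.shape[axis] - 1
            if ncells % k != 0:
                raise ValueError(f"{ncells} cells along axis {axis} not divisible by {k}")
            index = [slice(None)] * node_coords.ndim
            index[axis] = slice(None, None, k)
            return node_coords[tuple(index)]

        grid = np.random.rand(33, 17, 9, 3)            # 32 x 16 x 8 cells, xyz per node
        coarse = sequence_mesh(sequence_mesh(grid, 2, axis=0), 4, axis=1)
        print(grid.shape, "->", coarse.shape)          # (33, 17, 9, 3) -> (17, 5, 9, 3)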

  16. A methodology to mesh mesoscopic representative volume element of 3D interlock woven composites impregnated with resin

    NASA Astrophysics Data System (ADS)

    Ha, Manh Hung; Cauvin, Ludovic; Rassineux, Alain

    2016-04-01

    We present a new numerical methodology to build a Representative Volume Element (RVE) of a wide range of 3D woven composites in order to determine the mechanical behavior of the fabric unit cell by a mesoscopic approach based on a 3D finite element analysis. Emphasis is put on the numerous difficulties of creating a mesh of these highly complex weaves embedded in a resin. A conforming mesh at the numerous interfaces between yarns is created by a multi-quadtree adaptation technique, which makes it possible thereafter to build an unstructured 3D mesh of the resin with tetrahedral elements. The technique is not linked with any specific tool, but can be carried out with the use of any 2D and 3D robust mesh generators.

  17. 3D meshes of carbon nanotubes guide functional reconnection of segregated spinal explants.

    PubMed

    Usmani, Sadaf; Aurand, Emily Rose; Medelin, Manuela; Fabbro, Alessandra; Scaini, Denis; Laishram, Jummi; Rosselli, Federica B; Ansuini, Alessio; Zoccolan, Davide; Scarselli, Manuela; De Crescenzi, Maurizio; Bosi, Susanna; Prato, Maurizio; Ballerini, Laura

    2016-07-01

    In modern neuroscience, significant progress in developing structural scaffolds integrated with the brain is provided by the increasing use of nanomaterials. We show that a multiwalled carbon nanotube self-standing framework, consisting of a three-dimensional (3D) mesh of interconnected, conductive, pure carbon nanotubes, can guide the formation of neural webs in vitro where the spontaneous regrowth of neurite bundles is molded into a dense random net. This morphology of the fiber regrowth shaped by the 3D structure supports the successful reconnection of segregated spinal cord segments. We further observed in vivo the adaptability of these 3D devices in a healthy physiological environment. Our study shows that 3D artificial scaffolds may drive local rewiring in vitro and hold great potential for the development of future in vivo interfaces. PMID:27453939

  18. 3D meshes of carbon nanotubes guide functional reconnection of segregated spinal explants

    PubMed Central

    Usmani, Sadaf; Aurand, Emily Rose; Medelin, Manuela; Fabbro, Alessandra; Scaini, Denis; Laishram, Jummi; Rosselli, Federica B.; Ansuini, Alessio; Zoccolan, Davide; Scarselli, Manuela; De Crescenzi, Maurizio; Bosi, Susanna; Prato, Maurizio; Ballerini, Laura

    2016-01-01

    In modern neuroscience, significant progress in developing structural scaffolds integrated with the brain is provided by the increasing use of nanomaterials. We show that a multiwalled carbon nanotube self-standing framework, consisting of a three-dimensional (3D) mesh of interconnected, conductive, pure carbon nanotubes, can guide the formation of neural webs in vitro where the spontaneous regrowth of neurite bundles is molded into a dense random net. This morphology of the fiber regrowth shaped by the 3D structure supports the successful reconnection of segregated spinal cord segments. We further observed in vivo the adaptability of these 3D devices in a healthy physiological environment. Our study shows that 3D artificial scaffolds may drive local rewiring in vitro and hold great potential for the development of future in vivo interfaces. PMID:27453939

  19. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
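
    The block hierarchy described above can be pictured with a toy class; this is an illustration of the oct-tree-of-blocks idea only and does not reflect PARAMESH's Fortran 90 data structures or API.

        # Toy oct-tree of blocks, each carrying a fixed logical mesh (illustration only).
        from dataclasses import dataclass, field

        @dataclass
        class Block:
            lo: tuple                     # lower corner of the cubic box covered by the block
            size: float                   # edge length of the box
            nx: int = 8                   # logically Cartesian cells per block (fixed)
            children: list = field(default_factory=list)

            def refine(self):             # split into eight children at half the spacing
                h = self.size / 2.0
                self.children = [Block((self.lo[0] + i * h, self.lo[1] + j * h, self.lo[2] + k * h), h)
                                 for i in (0, 1) for j in (0, 1) for k in (0, 1)]

            def leaves(self):
                return [self] if not self.children else [l for c in self.children for l in c.leaves()]

        root = Block((0.0, 0.0, 0.0), 1.0)
        root.refine()
        root.children[0].refine()                  # refine one child again
        print(len(root.leaves()), "leaf blocks")   # 7 unrefined children + 8 grandchildren = 15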

  20. Adaptive and Unstructured Mesh Cleaving

    PubMed Central

    Bronson, Jonathan R.; Sastry, Shankar P.; Levine, Joshua A.; Whitaker, Ross T.

    2015-01-01

    We propose a new strategy for boundary conforming meshing that decouples the problem of building tetrahedra of proper size and shape from the problem of conforming to complex, non-manifold boundaries. This approach is motivated by the observation that while several methods exist for adaptive tetrahedral meshing, they typically have difficulty at geometric boundaries. The proposed strategy avoids this conflict by extracting the boundary conforming constraint into a secondary step. We first build a background mesh having a desired set of tetrahedral properties, and then use a generalized stenciling method to divide, or “cleave”, these elements to get a set of conforming tetrahedra, while limiting the impacts cleaving has on element quality. In developing this new framework, we make several technical contributions including a new method for building graded tetrahedral meshes as well as a generalization of the isosurface stuffing and lattice cleaving algorithms to unstructured background meshes. PMID:26137171

  1. Adaptive Mesh Refinement in CTH

    SciTech Connect

    Crawford, David

    1999-05-04

    This paper reports progress on implementing a new capability of adaptive mesh refinement into the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.

  2. LayTracks3D: A new approach for meshing general solids using medial axis transform

    SciTech Connect

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  3. Adaptive triangular mesh generation

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Eiseman, P. R.

    1984-01-01

    A general adaptive grid algorithm is developed on triangular grids. The adaptivity is provided by a combination of node addition, dynamic node connectivity and a simple node movement strategy. While the local restructuring process and the node addition mechanism take place in the physical plane, the nodes are displaced on a monitor surface, constructed from the salient features of the physical problem. An approximation to mean curvature detects changes in the direction of the monitor surface, and provides the pulling force on the nodes. Solutions to the axisymmetric Grad-Shafranov equation demonstrate the capturing, by triangles, of the plasma-vacuum interface in a free-boundary equilibrium configuration.

  4. Improving segmentation of 3D touching cell nuclei using flow tracking on surface meshes.

    PubMed

    Li, Gang; Guo, Lei

    2012-01-01

    Automatic segmentation of touching cell nuclei in 3D microscopy images is of great importance in bioimage informatics and computational biology. This paper presents a novel method for improving 3D touching cell nuclei segmentation. Given binary touching nuclei by the method in Li et al. (2007), our method herein consists of several steps: surface mesh reconstruction and curvature information estimation; direction field diffusion on surface meshes; flow tracking on surface meshes; and projection of surface mesh segmentation to volumetric images. The method is validated on both synthesised and real 3D touching cell nuclei images, demonstrating its validity and effectiveness.

  5. Adaptive interrogation for 3D-PIV

    NASA Astrophysics Data System (ADS)

    Novara, Matteo; Ianiro, Andrea; Scarano, Fulvio

    2013-02-01

    A method to adapt the shape and orientation of interrogation volumes for 3D-PIV motion analysis is introduced, aimed at increasing the local spatial resolution. The main application of this approach is the detailed analysis of complex 3D and vortex-dominated flows that exhibit high vorticity in confined regions like shear layers and vortex filaments. The adaptive criterion is based on the analysis of the components of the local velocity gradient tensor, which returns the level of anisotropy of velocity spatial fluctuations. The principle to increase the local spatial resolution is based on the deformation of spherical isotropic interrogation regions, obtained by means of Gaussian weighting, into ellipsoids, with free choice of the principal axes and their directions. The interrogation region is contracted in the direction of the maximum velocity variation and elongated in the minimum one in order to maintain a constant interrogation volume. The adaptivity technique for three-dimensional PIV data takes advantage of the 3D topology of the flow, allowing the spatial resolution to be increased not only in the case of shear layers, but also for vortex filaments, which is not possible for two-dimensional measurement in the plane normal to the vortex axis. The definition of the ellipsoidal interrogation region semi-axes is based on the singular values and singular directions of the local velocity gradient tensor as obtained by the singular value decomposition (SVD). The working principle is verified making use of numerical simulations of a shear layer and of a vortex filament. The application of the technique to data from a Tomo-PIV experiment conducted on a round jet shows that the resolution of the shear layer at the jet exit can be considerably improved, and an increase of about 25% in the vorticity peak is attained when the adaptive approach is applied. On the other hand, the peak vorticity description in the core of vortex rings is only slightly improved with
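
    The axis-selection step can be sketched as follows. The anisotropy limit and scaling are assumptions for illustration, not the authors' exact choices; the sketch only shows the SVD of the velocity-gradient tensor producing contraction along the direction of strongest velocity variation while keeping the interrogation volume constant.

        # Assumed scalings: semi-axes inversely proportional to (clipped) singular values.
        import numpy as np

        def ellipsoid_axes(grad_u, r0=1.0, max_ratio=10.0):
            """Return (semi-axis lengths, axis directions) for a base radius r0."""
            U, s, Vt = np.linalg.svd(grad_u)
            s = np.clip(s, s.max() / max_ratio, None)   # limit the aspect ratio
            a = r0 * np.prod(s) ** (1.0 / 3.0) / s      # shrink along strong gradients;
            return a, Vt                                # product of semi-axes stays r0^3

        G = np.array([[0.0, 5.0, 0.0],                  # toy shear layer: du/dy dominates
                      [0.0, 0.0, 0.0],
                      [0.0, 0.0, 0.1]])
        axes, dirs = ellipsoid_axes(G)
        print("semi-axes:", axes, " volume factor:", np.prod(axes))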

  6. Issues in adaptive mesh refinement

    SciTech Connect

    Dai, William Wenlong

    2009-01-01

    In this paper, we present an approach for a patch-based adaptive mesh refinement (AMR) for multi-physics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, and management of patches. Among the special features of this patch-based AMR are symmetry preserving, efficiency of refinement, special implementation of flux correction, and patch management in parallel computing environments. Here, higher efficiency of refinement means fewer unnecessarily refined cells for a given set of cells to be refined. To demonstrate the capability of the AMR framework, hydrodynamics simulations with many levels of refinement are shown in both two and three dimensions.
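
    The clustering step, which gathers flagged cells into rectangular patches, can be pictured with the much-simplified, hypothetical sketch below (connected components and their bounding boxes). It is a stand-in for illustration only, not the paper's clustering algorithm, whose goal is precisely to keep the number of unnecessarily refined cells low.

```python
# Much-simplified stand-in for AMR clustering: flagged cells are grouped into
# rectangular patches by taking the bounding box of each connected component.
# Production patch-based AMR splits boxes further to keep the fraction of
# unnecessarily refined cells low.
import numpy as np
from scipy import ndimage

def cluster_flags(flags):
    """flags: 2-D boolean array of cells tagged for refinement."""
    labeled, _ = ndimage.label(flags)
    boxes = ndimage.find_objects(labeled)
    # Refinement efficiency: flagged cells / total cells covered by patches.
    covered = sum((sl[0].stop - sl[0].start) * (sl[1].stop - sl[1].start)
                  for sl in boxes)
    return boxes, flags.sum() / covered

flags = np.zeros((16, 16), dtype=bool)
flags[2:5, 2:6] = True       # a small feature
flags[10:14, 9:10] = True    # an elongated feature elsewhere
patches, efficiency = cluster_flags(flags)
print(len(patches), round(float(efficiency), 2))
```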

  7. Shape design sensitivities using fully automatic 3-D mesh generation

    NASA Technical Reports Server (NTRS)

    Botkin, M. E.

    1990-01-01

    Previous work in three dimensional shape optimization involved specifying design variables by associating parameters directly with mesh points. More recent work has shown the use of fully-automatic mesh generation based upon a parameterized geometric representation. Design variables have been associated with a mathematical model of the part rather than the discretized representation. The mesh generation procedure uses a nonuniform grid intersection technique to place nodal points directly on the surface geometry. Although there exists an associativity between the mesh and the geometrical/topological entities, there is no mathematical functional relationship. This poses a problem during certain steps in the optimization process in which geometry modification is required. For the large geometrical changes which occur at the beginning of each optimization step, a completely new mesh is created. However, for gradient calculations many small changes must be made and it would be too costly to regenerate the mesh for each design variable perturbation. For that reason, a local remeshing procedure has been implemented which operates only on the specific edges and faces associated with the design variable being perturbed. Two realistic design problems are presented which show the efficiency of this process and test the accuracy of the gradient computations.

  8. Feature edge extraction from 3D triangular meshes using a thinning algorithm

    NASA Astrophysics Data System (ADS)

    Nomura, Masaru; Hamada, Nozomu

    2001-11-01

    Highly detailed geometric models, which are represented as dense triangular meshes, are becoming popular in computer graphics. Since such 3D meshes often carry a huge amount of information, efficient methods are required for 3D mesh processing tasks such as surface simplification, subdivision surfaces, curved surface approximation, and morphing. In these applications, features of 3D meshes, such as feature vertices and feature edges, are often extracted in a preprocessing step. An automatic extraction method for feature edges is treated in this study. To realize the feature edge extraction method, we first introduce a concavity and convexity evaluation value. The histogram of this evaluation value is then used to separate the feature edge region. We apply a thinning algorithm of the kind used in 2D binary image processing. It is shown that the proposed method can extract appropriate feature edges from 3D meshes.
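
    A concavity/convexity measure of the kind described above can be pictured as the dihedral angle between the two triangles sharing an edge; the histogram of this value would then be thresholded to isolate feature-edge candidates before thinning. The sketch below is a generic illustration with hypothetical names, not the evaluation value defined in the cited paper.

```python
# Hypothetical sketch of a per-edge sharpness measure: the angle between the
# normals of the two triangles sharing an edge. Feature edges could then be
# separated by thresholding the histogram of this value.
import numpy as np

def face_normal(verts, face):
    a, b, c = verts[face[0]], verts[face[1]], verts[face[2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def edge_dihedral(verts, face1, face2):
    """Angle (degrees) between the normals of two faces sharing an edge."""
    n1, n2 = face_normal(verts, face1), face_normal(verts, face2)
    cosang = np.clip(np.dot(n1, n2), -1.0, 1.0)
    return np.degrees(np.arccos(cosang))   # 0 = flat, large = sharp feature

# Toy example: two triangles folded along the edge (0, 1).
verts = np.array([[0, 0, 0], [1, 0, 0], [0.5, 1, 0], [0.5, -1, 0.8]], float)
print(round(edge_dihedral(verts, (0, 1, 2), (1, 0, 3)), 1))
```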

  9. An efficient topology adaptation system for parametric active contour segmentation of 3D images

    NASA Astrophysics Data System (ADS)

    Abhau, Jochen; Scherzer, Otmar

    2008-03-01

    Active contour models have already been used successfully for segmentation of organs from medical images in 3D. In implicit models, the contour is given as the isosurface of a scalar function, and therefore topology adaptations are handled naturally during a contour evolution. Nevertheless, explicit or parametric models are often preferred, since user interaction and special geometric constraints are usually easier to incorporate. Although many researchers have studied topology adaptation algorithms in explicit mesh evolutions, no stable algorithm is known for interactive applications. In this paper, we present a topology adaptation system that consists of two novel ingredients. First, a spatial hashing technique, whose expected running time is linear in the number of mesh vertices, is used to detect self-colliding triangles of the mesh. Second, for the topology change procedure, we have developed formulas based on homology theory; during a contour evolution, we only have to choose between a few possible mesh retriangulations by local triangle-triangle intersection tests. Our algorithm has several advantages compared to existing ones. Since it does not require any global mesh reparametrizations, it is very efficient. Since it requires neither constant sampling density of the mesh vertices nor especially smooth meshes, mesh evolution steps can be performed in a stable way with a rather coarse mesh. We apply our algorithm to 3D ultrasonic data, showing that accurate segmentation is obtained within a few seconds.
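
    The spatial-hashing ingredient can be illustrated with the generic sketch below: triangles are binned into uniform grid cells by their bounding boxes, and exact triangle-triangle intersection tests are restricted to triangles that share a cell. Names and parameters are illustrative assumptions, not the authors' implementation.

```python
# Generic sketch of spatial hashing for collision candidates: each triangle is
# inserted into the uniform-grid cells its bounding box overlaps, so exact
# self-intersection tests only run on triangles that share a cell.
from collections import defaultdict
from itertools import product
import numpy as np

def build_spatial_hash(vertices, triangles, cell_size):
    grid = defaultdict(list)
    for t_idx, tri in enumerate(triangles):
        pts = vertices[list(tri)]
        lo = np.floor(pts.min(axis=0) / cell_size).astype(int)
        hi = np.floor(pts.max(axis=0) / cell_size).astype(int)
        for key in product(*(range(l, h + 1) for l, h in zip(lo, hi))):
            grid[key].append(t_idx)
    return grid

def candidate_pairs(grid):
    pairs = set()
    for bucket in grid.values():
        for i, a in enumerate(bucket):
            for b in bucket[i + 1:]:
                pairs.add((min(a, b), max(a, b)))
    return pairs   # exact triangle-triangle tests would follow

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0.1, 0.1, 0.05]], float)
tris = [(0, 1, 2), (0, 1, 3)]
print(candidate_pairs(build_spatial_hash(verts, tris, cell_size=0.5)))
```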

  10. 3D unstructured-mesh radiation transport codes

    SciTech Connect

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard $S_n$ (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: $S_n$ (discrete-ordinates), $P_n$ (spherical harmonics), and $SP_n$ (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard $S_n$ discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.

  11. Isoparametric 3-D Finite Element Mesh Generation Using Interactive Computer Graphics

    NASA Technical Reports Server (NTRS)

    Kayrak, C.; Ozsoy, T.

    1985-01-01

    An isoparametric 3-D finite element mesh generator was developed with a direct interface to an interactive geometric modeler program called POLYGON. POLYGON defines the model geometry in terms of boundaries and mesh regions for the mesh generator. The mesh generator controls the mesh flow through the 2-dimensional spans of regions by using the topological data and defines the connectivity between regions. The program is menu driven; the user has control of element density and biasing through the spans and can also interactively apply boundary conditions and loads.

  12. Joint synchronization and high capacity data hiding for 3D meshes

    NASA Astrophysics Data System (ADS)

    Itier, Vincent; Puech, William; Gesquière, Gilles; Pedeboy, Jean-Pierre

    2015-03-01

    Three-dimensional (3-D) meshes are already used profusely in many domains. In this paper, we propose a new high-capacity data hiding scheme for vertex clouds. Our approach is based on very small displacements of vertices that produce very low distortion of the mesh. Moreover, this method can embed three bits per vertex while relying only on the geometry of the mesh. As an application, we show how we embed a large binary logo for copyright purposes.

  13. LayTracks3D: A new approach for meshing general solids using medial axis transform

    DOE PAGES

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  14. Adaptive mesh refinement in Titanium

    SciTech Connect

    Colella, Phillip; Wen, Tong

    2005-01-21

    In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study in which we implement a subset of Chombo's functionality in Titanium. Chombo is a software package applying the Adaptive Mesh Refinement methodology to numerical Partial Differential Equations at the production level. In Chombo, the library approach to parallel programming is used (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. Counts of lines of code from both sides are also provided.

  15. Adaptive Hybrid Mesh Refinement for Multiphysics Applications

    SciTech Connect

    Khamayseh, Ahmed K; de Almeida, Valmor F

    2007-01-01

    The accuracy and convergence of computational solutions of mesh-based methods is strongly dependent on the quality of the mesh used. We have developed methods for optimizing meshes that are comprised of elements of arbitrary polygonal and polyhedral type. We present in this research the development of r-h hybrid adaptive meshing technology tailored to application areas relevant to multi-physics modeling and simulation. Solution-based adaptation methods are used to reposition mesh nodes (r-adaptation) or to refine the mesh cells (h-adaptation) to minimize solution error. The numerical methods perform either the r-adaptive mesh optimization or the h-adaptive mesh refinement method on the initial isotropic or anisotropic meshes to maximize the equidistribution of a weighted geometric and/or solution function. We have successfully introduced r-h adaptivity to a least-squares method with spherical harmonics basis functions for the solution of the spherical shallow atmosphere model used in climate forecasting. In addition, application of this technology also covers a wide range of disciplines in computational sciences, most notably, time-dependent multi-physics, multi-scale modeling and simulation.
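
    The equidistribution idea behind the r-adaptive step can be illustrated in one dimension: nodes are repositioned so that each cell carries an equal share of a weight (monitor) function. The sketch below is a generic textbook-style illustration, not the r-h method of the cited work.

```python
# Minimal 1-D illustration of r-adaptation by equidistribution: nodes are
# repositioned so each cell carries an equal share of a weight (monitor)
# function, e.g. one based on solution gradients. Generic sketch only.
import numpy as np

def equidistribute(x_uniform, weight, n_nodes):
    w = weight(x_uniform)
    # Cumulative integral of the weight (trapezoid rule), inverted at
    # equal increments to place the new nodes.
    cumw = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1])
                                            * np.diff(x_uniform))))
    targets = np.linspace(0.0, cumw[-1], n_nodes)
    return np.interp(targets, cumw, x_uniform)

# A weight peaked near x = 0.5 pulls nodes toward that region.
x = np.linspace(0.0, 1.0, 400)
monitor = lambda s: 1.0 + 50.0 * np.exp(-200.0 * (s - 0.5) ** 2)
print(np.round(equidistribute(x, monitor, 11), 3))
```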

  16. Parallel tetrahedral mesh adaptation with dynamic load balancing

    SciTech Connect

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    2000-06-28

    The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D-TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D-TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  17. Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    1999-01-01

    The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  18. Auto-adaptive finite element meshes

    NASA Technical Reports Server (NTRS)

    Richter, Roland; Leyland, Penelope

    1995-01-01

    Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural optimization as well as geometrical optimization. The methods described are applied to a number of the ICASE test cases that are particularly interesting for unsteady flow simulations.

  19. Novel irregular mesh tagging algorithm for wound synthesis on a 3D face.

    PubMed

    Lee, Sangyong; Chin, Seongah

    2015-01-01

    Recently, advanced visualizing techniques in computer graphics have considerably enhanced the visual appearance of synthetic models. To realize enhanced visual graphics for synthetic medical effects, the first step, followed by rendering techniques, involves attaching albedo textures to the region where a certain graphic is to be rendered. For instance, in order to render wound textures efficiently, the first step is to recognize the area where the user wants to attach a wound. However, in general, face indices are not stored in sequential order, which makes sub-texturing difficult. In this paper, we present a novel mesh tagging algorithm that uses mesh traversal and level extension for general wound sub-texture mapping and selected-region deformation in a three-dimensional (3D) model. This method works automatically on both regular and irregular mesh surfaces. The approach consists of mesh selection (MS), mesh leveling (ML), and mesh tagging (MT). To validate our approach, we performed experiments synthesizing wounds on a 3D face model and on a simulated mesh. PMID:26405904
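
    The mesh leveling/tagging step can be pictured as a breadth-first traversal over face adjacency that assigns each face a ring level around a selected seed region. The sketch below is a hypothetical illustration of that idea, not the authors' MS/ML/MT implementation.

```python
# Hypothetical sketch of "level extension": starting from a selected seed
# face, a breadth-first traversal over face adjacency assigns each face a
# level (ring index), which could then be used to tag the sub-texture region.
from collections import deque, defaultdict

def face_adjacency(faces):
    edge_to_faces = defaultdict(list)
    for fi, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_faces[tuple(sorted(e))].append(fi)
    adj = defaultdict(set)
    for flist in edge_to_faces.values():
        for f in flist:
            adj[f].update(g for g in flist if g != f)
    return adj

def tag_levels(faces, seed_face, max_level):
    adj, levels = face_adjacency(faces), {seed_face: 0}
    queue = deque([seed_face])
    while queue:
        f = queue.popleft()
        if levels[f] == max_level:
            continue
        for g in adj[f]:
            if g not in levels:
                levels[g] = levels[f] + 1
                queue.append(g)
    return levels   # face index -> ring level around the seed

faces = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5)]
print(tag_levels(faces, seed_face=0, max_level=2))
```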

  20. Parallel Adaptive Computation of Blood Flow in a 3D ``Whole'' Body Model

    NASA Astrophysics Data System (ADS)

    Zhou, M.; Figueroa, C. A.; Taylor, C. A.; Sahni, O.; Jansen, K. E.

    2008-11-01

    Accurate numerical simulations of vascular trauma require the consideration of a larger portion of the vasculature than previously considered, due to the systemic nature of the human body's response. A patient-specific 3D model composed of 78 connected arterial branches extending from the neck to the lower legs is constructed to effectively represent the entire body. Recently developed outflow boundary conditions that appropriately represent the downstream vasculature bed which is not included in the 3D computational domain are applied at 78 outlets. In this work, the pulsatile blood flow simulations are started on a fairly uniform, unstructured mesh that is subsequently adapted using a solution-based approach to efficiently resolve the flow features. The adapted mesh contains non-uniform, anisotropic elements resulting in resolution that conforms with the physical length scales present in the problem. The effects of the mesh resolution on the flow field are studied, specifically on relevant quantities of pressure, velocity and wall shear stress.

  1. An Automatic 3D Mesh Generation Method for Domains with Multiple Materials.

    PubMed

    Zhang, Yongjie; Hughes, Thomas J R; Bajaj, Chandrajit L

    2010-01-01

    This paper describes an automatic and efficient approach to construct unstructured tetrahedral and hexahedral meshes for a composite domain made up of heterogeneous materials. The boundaries of these material regions form non-manifold surfaces. In earlier papers, we developed an octree-based isocontouring method to construct unstructured 3D meshes for a single-material (homogeneous) domain with manifold boundary. In this paper, we introduce the notion of a material change edge and use it to identify the interface between two or several different materials. A novel method to calculate the minimizer point for a cell shared by more than two materials is provided, which forms a non-manifold node on the boundary. We then mesh all the material regions simultaneously and automatically while conforming to their boundaries directly from volumetric data. Both material change edges and interior edges are analyzed to construct tetrahedral meshes, and interior grid points are analyzed for proper hexahedral mesh construction. Finally, edge-contraction and smoothing methods are used to improve the quality of tetrahedral meshes, and a combination of pillowing, geometric flow and optimization techniques is used for hexahedral mesh quality improvement. The shrink set of pillowing schemes is defined automatically as the boundary of each material region. Several application results of our multi-material mesh generation method are also provided. PMID:20161555
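
    The notion of a material change edge can be illustrated on a labeled grid: any lattice edge whose two endpoints carry different material IDs crosses a material interface. The sketch below is a simplified illustration with hypothetical names, not the octree-based implementation of the cited paper.

```python
# Hypothetical sketch of detecting "material change edges" on a labeled grid:
# an edge of the volumetric lattice whose two endpoints carry different
# material labels crosses a material interface.
import numpy as np

def material_change_edges(labels):
    """labels: 3-D integer array of per-grid-point material IDs."""
    edges = []
    for axis in range(3):
        a = labels
        b = np.roll(labels, -1, axis=axis)
        # Compare each point with its +1 neighbour along this axis
        # (drop the wrapped-around last slice).
        sl = [slice(None)] * 3
        sl[axis] = slice(0, -1)
        diff = (a != b)[tuple(sl)]
        for idx in zip(*np.nonzero(diff)):
            nbr = list(idx)
            nbr[axis] += 1
            edges.append((idx, tuple(nbr)))
    return edges

labels = np.zeros((3, 3, 3), dtype=int)
labels[1:, :, :] = 1          # two materials split along the first axis
labels[2, 2, 2] = 2           # a third material in one corner
print(len(material_change_edges(labels)))
```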

  2. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions from Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data, and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and on-orbit space shuttle attitude controller.
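
    The fuzzy c-means (FCM) update that relocates cluster centroids can be sketched as follows; the fuzzifier m, the data, and the initialization are illustrative assumptions, and the ART-like competitive/vigilance stage of AFLC is omitted.

```python
# Generic sketch of the fuzzy c-means (FCM) update equations that AFLC-style
# schemes use to relocate cluster centroids; m is the fuzzifier (> 1).
import numpy as np

def fcm_step(X, centers, m=2.0):
    # Membership of point i in cluster k: inverse-distance-ratio rule.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    U = 1.0 / ratio.sum(axis=2)
    # Centroids: membership-weighted means of the data.
    W = U ** m
    new_centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, new_centers

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(2, 0.2, (50, 2))])
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(20):
    U, centers = fcm_step(X, centers)
print(np.round(centers, 2))
```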

  3. The parallelepiped mesh and mesh change in the 3D TLM method

    NASA Astrophysics Data System (ADS)

    Saguet, P.

    1984-03-01

    The use of parallelepiped and variable-size meshes in the transmission-line-matrix (TLM) numerical-analysis technique for waveguide structures is explained, extending the 2D method of Saguet and Pic (1981) to three dimensions. The theory of the meshes, involving serial and parallel elementary nodes, is explored; the implementation is described; and the resonance frequencies of a rectangular cavity with fin-line structures (similar to that used by Hoefer and Ros, 1979) are computed. The results are presented in a table and compared to the theoretical values. The resonance frequency is obtained with an accuracy of 5.8 percent in 410 sec of CPU time, as compared with the 240 min needed by Hoefer and Ros to achieve 9.7-percent accuracy with the conventional TLM method.

  4. Hybrid Surface Mesh Adaptation for Climate Modeling

    SciTech Connect

    Ahmed Khamayseh; Valmor de Almeida; Glen Hansen

    2008-10-01

    Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called “mesh motion” (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.

  5. Hybrid Surface Mesh Adaptation for Climate Modeling

    SciTech Connect

    Khamayseh, Ahmed K; de Almeida, Valmor F; Hansen, Glen

    2008-01-01

    Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called "mesh motion" (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.

  6. A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment

    NASA Technical Reports Server (NTRS)

    Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott

    1995-01-01

    The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaption of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Imbedded grid singular points are permitted, but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of and alignment with the solution. A mesh quality assessment criterion is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criterion is used to evaluate the grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criterion shows promise as a means of evaluating resolution.

  7. Blind robust watermarking schemes for copyright protection of 3D mesh objects.

    PubMed

    Zafeiriou, Stefanos; Tefas, Anastasios; Pitas, Ioannis

    2005-01-01

    In this paper, two novel methods suitable for blind 3D mesh object watermarking applications are proposed. The first method is robust against 3D rotation, translation, and uniform scaling. The second one is robust against both geometric and mesh simplification attacks. A pseudorandom watermarking signal is cast in the 3D mesh object by deforming its vertices geometrically, without altering the vertex topology. Prior to watermark embedding and detection, the object is rotated and translated so that its center of mass and its principal component coincide with the origin and the z-axis of the Cartesian coordinate system. This geometrical transformation ensures watermark robustness to translation and rotation. Robustness to uniform scaling is achieved by restricting the vertex deformations to occur only along the r coordinate of the corresponding (r, theta, phi) spherical coordinate system. In the first method, a set of vertices that correspond to specific angles theta is used for watermark embedding. In the second method, the samples of the watermark sequence are embedded in a set of vertices that correspond to a range of angles in the theta domain in order to achieve robustness against mesh simplifications. Experimental results indicate the ability of the proposed method to deal with the aforementioned attacks.
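
    The geometric idea of embedding only along the r coordinate can be sketched as below: vertices are expressed in spherical coordinates about the centre of mass and nudged radially, which is how the scheme described above achieves robustness to uniform scaling. The bit-to-displacement mapping and parameters here are illustrative assumptions, not the cited scheme.

```python
# Hypothetical sketch: convert vertices to spherical coordinates about the
# centre of mass and perturb only the radial component r. The +/- radial
# nudge per bit is illustrative only.
import numpy as np

def to_spherical(v):
    r = np.linalg.norm(v, axis=1)
    theta = np.arccos(np.clip(v[:, 2] / np.maximum(r, 1e-12), -1, 1))
    phi = np.arctan2(v[:, 1], v[:, 0])
    return r, theta, phi

def embed_bits_in_r(vertices, bits, strength=1e-3):
    centre = vertices.mean(axis=0)
    r, theta, phi = to_spherical(vertices - centre)   # centre of mass at origin
    r = r * (1.0 + strength * (2 * np.asarray(bits) - 1))
    out = np.stack([r * np.sin(theta) * np.cos(phi),
                    r * np.sin(theta) * np.sin(phi),
                    r * np.cos(theta)], axis=1)
    return out + centre

verts = np.random.default_rng(1).normal(size=(8, 3))
bits = np.random.default_rng(2).integers(0, 2, size=8)
marked = embed_bits_in_r(verts, bits)
print(float(np.max(np.abs(marked - verts))))   # very small displacements
```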

  8. Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement

    SciTech Connect

    Koniges, A.; Eder, D.; Masters, N.; Fisher, A.; Anderson, R.; Gunney, B.; Wang, P.; Benson, D.; Dixit, P.

    2009-09-29

    This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing, developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).

  9. Polymer-Based Mesh as Supports for Multi-layered 3D Cell Culture and Assays

    PubMed Central

    Simon, Karen A.; Park, Kyeng Min; Mosadegh, Bobak; Subramaniam, Anand Bala; Mazzeo, Aaron; Ngo, Phil M.; Whitesides, George M.

    2013-01-01

    Three-dimensional (3D) culture systems can mimic certain aspects of the cellular microenvironment found in vivo, but generation, analysis and imaging of current model systems for 3D cellular constructs and tissues remain challenging. This work demonstrates a 3D culture system – Cells-in-Gels-in-Mesh (CiGiM) – that uses stacked sheets of polymer-based mesh to support cells embedded in gels to form tissue-like constructs; the stacked sheets can be disassembled by peeling the sheets apart to analyze cultured cells—layer-by-layer—within the construct. The mesh sheets leave openings large enough for light to pass through with minimal scattering, and thus allowing multiple options for analysis—(i) using straightforward analysis by optical light microscopy, (ii) by high-resolution analysis with fluorescence microscopy, or (iii) with a fluorescence gel scanner. The sheets can be patterned into separate zones with paraffin film-based decals, in order to conduct multiple experiments in parallel; the paraffin-based decal films also block lateral diffusion of oxygen effectively. CiGiM simplifies the generation and analysis of 3D culture without compromising throughput, and quality of the data collected: it is especially useful in experiments that require control of oxygen levels, and isolation of adjacent wells in a multi-zone format. PMID:24095253

  10. Translation, Enhancement, Filtering, and Visualization of Large 3D Triangle Mesh

    1997-04-21

    The runthru system consists of five programs: workcell filter, just do it, transl8g, decim8, and runthru. The workcell filter program is useful if the source of your 3D triangle mesh model is IGRIP. It will traverse a directory structure of Deneb IGRIP files and filter out any IGRIP part files that are not referenced by an accompanying IGRIP work cell file. The just do it program automates translating and/or filtering of large numbers of parts that are organized in hierarchical directory structures. The transl8g program facilitates the interchange, topology generation, error checking, and enhancement of large 3D triangle meshes. Such data is frequently used to represent conceptual designs, scientific visualization volume modeling, or discrete sample data. Interchange is provided between several popular commercial and de facto standard geometry formats. Error checking is included to identify duplicate and zero area triangles. Model enhancement features include common vertex joining, consistent triangle vertex ordering, vertex normal vector averaging, and triangle strip generation. Many of the traditional O(n^2) algorithms required to provide the above features have been recast and are O(n log n), which supports large mesh sizes. The decim8 program is based on a data filter algorithm that significantly reduces the number of triangles required to represent 3D models of geometry, scientific visualization results, and discretely sampled data. It eliminates local patches of triangles whose geometries are not appreciably different and replaces them with fewer, larger triangles. The algorithm has been used to reduce triangles in large conceptual design models to facilitate virtual walk throughs and to enable interactive viewing of large 3D iso-surface volume visualizations. The runthru program provides high performance interactive display and manipulation of 3D triangle mesh models.

  11. RUNTHRU6.0. Translation, Enhancement, Filtering, and Visualization of Large 3D Triangle Mesh

    SciTech Connect

    Janucik, F.X.; Ross, D.M.; Sischo, K.F.

    1997-01-01

    The runthru system consists of five programs: workcell filter, just do it, transl8g, decim8, and runthru. The workcell filter program is useful if the source of your 3D triangle mesh model is IGRIP. It will traverse a directory structure of Deneb IGRIP files and filter out any IGRIP part files that are not referenced by an accompanying IGRIP work cell file. The just do it program automates translating and/or filtering of large numbers of parts that are organized in hierarchical directory structures. The transl8g program facilitates the interchange, topology generation, error checking, and enhancement of large 3D triangle meshes. Such data is frequently used to represent conceptual designs, scientific visualization volume modeling, or discrete sample data. Interchange is provided between several popular commercial and de facto standard geometry formats. Error checking is included to identify duplicate and zero area triangles. Model enhancement features include common vertex joining, consistent triangle vertex ordering, vertex normal vector averaging, and triangle strip generation. Many of the traditional O(n^2) algorithms required to provide the above features have been recast and are O(n log n), which supports large mesh sizes. The decim8 program is based on a data filter algorithm that significantly reduces the number of triangles required to represent 3D models of geometry, scientific visualization results, and discretely sampled data. It eliminates local patches of triangles whose geometries are not appreciably different and replaces them with fewer, larger triangles. The algorithm has been used to reduce triangles in large conceptual design models to facilitate virtual walk throughs and to enable interactive viewing of large 3D iso-surface volume visualizations. The runthru program provides high performance interactive display and manipulation of 3D triangle mesh models.

  12. Translation, Enhancement, Filtering, and Visualization of Large 3D Triangle Mesh

    SciTech Connect

    1997-04-21

    The runthru system consists of five programs: workcell filter, just do it, transl8g, decim8, and runthru. The workcell filter program is useful if the source of your 3D triangle mesh model is IGRIP. It will traverse a directory structure of Deneb IGRIP files and filter out any IGRIP part files that are not referenced by an accompanying IGRIP work cell file. The just do it program automates translating and/or filtering of large numbers of parts that are organized in hierarchical directory structures. The transl8g program facilitates the interchange, topology generation, error checking, and enhancement of large 3D triangle meshes. Such data is frequently used to represent conceptual designs, scientific visualization volume modeling, or discrete sample data. Interchange is provided between several popular commercial and de facto standard geometry formats. Error checking is included to identify duplicate and zero area triangles. Model enhancement features include common vertex joining, consistent triangle vertex ordering, vertex normal vector averaging, and triangle strip generation. Many of the traditional O(n^2) algorithms required to provide the above features have been recast and are O(n log n), which supports large mesh sizes. The decim8 program is based on a data filter algorithm that significantly reduces the number of triangles required to represent 3D models of geometry, scientific visualization results, and discretely sampled data. It eliminates local patches of triangles whose geometries are not appreciably different and replaces them with fewer, larger triangles. The algorithm has been used to reduce triangles in large conceptual design models to facilitate virtual walk throughs and to enable interactive viewing of large 3D iso-surface volume visualizations. The runthru program provides high performance interactive display and manipulation of 3D triangle mesh models.
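
    The common vertex joining (vertex welding) feature mentioned in the records above can be pictured with the generic sketch below, which merges near-duplicate vertices by hashing coordinates quantised to a tolerance; it illustrates the technique in general, not the TRANSL8G implementation.

```python
# Generic sketch of "common vertex joining" (vertex welding): near-duplicate
# vertices are merged by hashing their coordinates quantised to a tolerance,
# which keeps the operation near-linear instead of an all-pairs comparison.
import numpy as np

def weld_vertices(vertices, triangles, tol=1e-6):
    key_to_index, remap, unique = {}, [], []
    for v in vertices:
        key = tuple(np.round(np.asarray(v) / tol).astype(np.int64))
        if key not in key_to_index:
            key_to_index[key] = len(unique)
            unique.append(v)
        remap.append(key_to_index[key])
    new_tris = [tuple(remap[i] for i in t) for t in triangles]
    return np.array(unique), new_tris

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1.0000000001, 0, 0)]
tris = [(0, 1, 2), (2, 3, 0)]
print(weld_vertices(verts, tris))
```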

  13. Fruit bruise detection based on 3D meshes and machine learning technologies

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Zhang, Ping

    2016-05-01

    This paper studies bruise detection in apples using 3-D imaging. Bruise detection based on 3-D imaging overcomes many limitations of bruise detection based on 2-D imaging, such as low accuracy and sensitivity to lighting conditions. In this paper, apple bruise detection is divided into two parts: feature extraction and classification. For feature extraction, we use a framework that can directly extract local binary patterns from mesh data. For classification, we study support vector machines. Bruise detection using 3-D imaging is compared with bruise detection using 2-D imaging. 10-fold cross validation is used to evaluate the performance of the two systems. Experimental results show that bruise detection using 3-D imaging can achieve better classification accuracy than bruise detection based on 2-D imaging.
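
    The classification stage (a support vector machine scored with 10-fold cross validation) can be sketched as below using scikit-learn; the feature matrix is a random placeholder standing in for precomputed LBP-style mesh descriptors, and all parameters are illustrative.

```python
# Hedged sketch of the classification stage only: features extracted from the
# mesh (assumed precomputed here) are fed to a support vector machine and
# scored with 10-fold cross validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder feature matrix: 200 apples x 59-bin LBP-like histograms.
X = np.vstack([rng.normal(0.0, 1.0, (100, 59)),
               rng.normal(0.5, 1.0, (100, 59))])
y = np.array([0] * 100 + [1] * 100)          # 0 = sound, 1 = bruised

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```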

  14. Optimal fully adaptive wormhole routing for meshes

    SciTech Connect

    Schwiebert, L.; Jayasimha, D.N.

    1993-12-31

    A deadlock-free fully adaptive routing algorithm for 2D meshes, which is optimal in the number of virtual channels required and in the number of restrictions placed on the use of these virtual channels, is presented. The routing algorithm imposes less than half as many routing restrictions as any previous fully adaptive routing algorithm. It is also proved that, ignoring symmetry, this routing algorithm is the only fully adaptive routing algorithm that achieves both of these goals. The implementation of the routing algorithm requires relatively simple router control logic. The new algorithm is extended, in a straightforward manner, to arbitrary-dimension meshes. It needs only 4n-2 virtual channels, the minimum number for an n-dimensional mesh. All previous algorithms require a number of virtual channels that grows exponentially with the dimension of the mesh.

  15. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    PubMed

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes. PMID:27019849

  16. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    PubMed Central

    Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes. PMID:27019849

  17. An adaptive learning approach for 3-D surface reconstruction from point clouds.

    PubMed

    Junior, Agostinho de Medeiros Brito; Neto, Adrião Duarte Dória; de Melo, Jorge Dantas; Goncalves, Luiz Marcos Garcia

    2008-06-01

    In this paper, we propose a multiresolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3-D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map (SOM). Basically, a self-adaptive scheme is used to iteratively move the vertices of an initial simple mesh in the direction of the set of points, ideally the object boundary. Successive refinement and motion of vertices are applied, leading to a more detailed surface in a multiresolution, iterative scheme. Reconstruction was tested with several point sets of different shapes and sizes. Results show that the generated meshes are very close to the final object shapes. We include performance measures and discuss the robustness of the method.
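
    The SOM-style vertex update at the core of such schemes can be pictured with the toy sketch below, in which the winning vertex and its neighbours are pulled toward each sample point; the refinement rules and the full 3-D setting are omitted, and all parameters are illustrative.

```python
# Generic sketch of the SOM-style update: for each sample point, the nearest
# mesh vertex (winner) and its neighbours are pulled toward the point, so the
# mesh gradually conforms to the cloud. Selective refinement is omitted.
import numpy as np

def som_fit(vertices, neighbours, points, epochs=20, lr=0.2, nbr_lr=0.05):
    v = vertices.copy()
    for _ in range(epochs):
        for p in points:
            winner = np.argmin(np.linalg.norm(v - p, axis=1))
            v[winner] += lr * (p - v[winner])
            for nb in neighbours[winner]:
                v[nb] += nbr_lr * (p - v[nb])
    return v

# Tiny 2-D example: a square mesh pulled toward points on a unit circle.
verts = np.array([[-2, -2], [2, -2], [2, 2], [-2, 2]], float)
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
angles = np.linspace(0, 2 * np.pi, 100, endpoint=False)
cloud = np.stack([np.cos(angles), np.sin(angles)], axis=1)
print(np.round(som_fit(verts, nbrs, cloud), 2))
```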

  18. Metal-mesh based transparent electrode on a 3-D curved surface by electrohydrodynamic jet printing

    NASA Astrophysics Data System (ADS)

    Seong, Baekhoon; Yoo, Hyunwoong; Dat Nguyen, Vu; Jang, Yonghee; Ryu, Changkook; Byun, Doyoung

    2014-09-01

    Invisible Ag mesh transparent electrodes (TEs), with a width of 7 μm, were prepared on a curved glass surface by electrohydrodynamic (EHD) jet printing. With a 100 μm pitch, the EHD jet printed the Ag mesh on the convex glass which had a sheet resistance of 1.49 Ω/□. The printing speed was 30 cm s-1 using Ag ink, which had a 10 000 cPs viscosity and a 70 wt% Ag nanoparticle concentration. We further showed the performance of a 3-D transparent heater using the Ag mesh transparent electrode. The EHD jet printed an invisible Ag grid transparent electrode with good electrical and optical properties with promising applications on printed optoelectronic devices.

  19. Unstructured 3D Delaunay mesh generation applied to planes, trains and automobiles

    NASA Technical Reports Server (NTRS)

    Blake, Kenneth R.; Spragle, Gregory S.

    1993-01-01

    Technical issues associated with domain-tessellation production, including initial boundary node triangulation and volume mesh refinement, are presented for the 'TGrid' 3D Delaunay unstructured grid generation program. The approach employed is noted to be capable of preserving predefined triangular surface facets in the final tessellation. The capabilities of the approach are demonstrated by generating grids about an entire fighter aircraft configuration, a train, and a wind tunnel model of an automobile.

  20. Adaptive Mesh Refinement for Microelectronic Device Design

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Lou, John; Norton, Charles

    1999-01-01

    Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of

  1. Grid adaptation using chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1994-01-01

    The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Application to the Euler equations for shock reflections and to shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.
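
    Trilinear interpolation across a donor cell, as used for the Chimera interface communication described above, can be written compactly; the sketch below is a generic implementation, with the local coordinates (u, v, w) assumed to be already computed from the donor-cell geometry.

```python
# Minimal sketch of the trilinear interpolation used to transfer data across
# overset (Chimera) grid interfaces: a fringe-point value is blended from the
# eight surrounding donor-cell nodes using its local coordinates (u, v, w).
import numpy as np

def trilinear(corner_values, u, v, w):
    """corner_values[i, j, k] with i, j, k in {0, 1}; u, v, w in [0, 1]."""
    c = np.asarray(corner_values, dtype=float)
    c00 = c[0, 0, 0] * (1 - u) + c[1, 0, 0] * u
    c01 = c[0, 0, 1] * (1 - u) + c[1, 0, 1] * u
    c10 = c[0, 1, 0] * (1 - u) + c[1, 1, 0] * u
    c11 = c[0, 1, 1] * (1 - u) + c[1, 1, 1] * u
    return (c00 * (1 - v) + c10 * v) * (1 - w) + (c01 * (1 - v) + c11 * v) * w

donors = np.arange(8).reshape(2, 2, 2)    # values at the 8 donor nodes
print(trilinear(donors, 0.25, 0.5, 0.75))
```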

  2. Grid adaptation using Chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.

  3. Grid adaption using Chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.

  4. Dubai 3D Textured Mesh Using High Quality Resolution Vertical/Oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Tayeb Madani, Adib; Ziad Ahmad, Abdullateef; Christoph, Lueken; Hammadi, Zamzam; Manal Abdullah Sabeal, Manal Abdullah x.

    2016-06-01

    Providing high-quality 3D data at reasonable cost has always been essential, as such data are the core and foundation for developing information-based decision-making tools for urban environments, capable of providing decision makers, stakeholders, professionals, and public users with 3D views and 3D analysis tools of spatial information that enable real-world views. These tools help improve users' orientation and increase their efficiency in performing tasks related to city planning, inspection, infrastructure, roads, and cadastre management. In this paper, the capability of multi-view Vexcel UltraCam Osprey camera images is examined to provide a 3D model of building façades using an efficient image-based modeling workflow adopted by commercial software. The main steps of this work are specification, point cloud generation, and 3D modeling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the images to generate a point cloud. A mesh model of the points is then calculated and refined to obtain an accurate model of the buildings. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. Based on visual assessment, the resulting model provides sufficient LoD2 detail of the buildings. The objective of this paper is neither to compare nor to promote a specific technique over another, and it is not meant to promote one sensor-based system over other systems or mechanisms presented in existing or previous papers; the idea is to share experience.

  5. Adaptive mesh refinement for storm surge

    NASA Astrophysics Data System (ADS)

    Mandli, Kyle T.; Dawson, Clint N.

    2014-03-01

    An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Currently numerical models exist that can resolve the details of coastal regions but are often too costly to be run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GEOCLAW framework and compared to ADCIRC for Hurricane Ike along with observed tide gauge data and the computational cost of each model run.

  6. Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement

    2009-09-29

    This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing, developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).

  7. TRANSL8GDECIM8. Data Translation and Filtering for Large 3D Triangle Mesh Models

    SciTech Connect

    Janucik, F.X.; Ross, D.M.

    1993-09-01

    The TRANSL8GDECIM8 system consists of two programs: TRANSL8G and DECIM8. The TRANSL8G program facilitates the interchange, topology generation, error checking, and enhancement of large 3D triangle meshes. Such data is frequently used to represent conceptual designs, scientific visualization volume modeling, or discrete sample data. Interchange is provided between several popular commercial and defacto standard geometry formats. Error checking is included to identify duplicate and zero area triangles. Model enhancement features include common vertex joining, consistent triangle vertex ordering, vertex normal vector averaging, and triangle strip generation. Many of the traditional O(n squared) algorithms required to provide the above features have been recast and are O(n) which support large mesh sizes. The DECIM8 program is based on a data filter algorithm that significantly reduces the number of triangles required to represent three dimensional (3D) models of geometry, scientific visualization results, and discretely sampled data. The algorithm uses a combined incremental and iterative strategy. It eliminates local patches of triangles whose geometries are not appreciably different and replaces them with fewer larger triangles. The algorithm has been used to reduce triangles in large conceptual design models to facilitate virtual walk throughs and to enable interactive viewing of large 3D iso-surface volume visualizations.

  8. Variational Mesh Adaptation: Isotropy and Equidistribution

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang

    2001-12-01

    We present a new approach for developing more robust and error-oriented mesh adaptation methods. Specifically, assuming that a regular (in cell shape) and uniform (in cell size) computational mesh is used (as is commonly done in computation), we develop a criterion for mesh adaptation based on an error function whose definition is motivated by the analysis of function variation and local error behavior for linear interpolation. The criterion is then decomposed into two aspects, the isotropy (or conformity) and uniformity (or equidistribution) requirements, each of which can be easier to deal with. The functionals that satisfy these conditions approximately are constructed using discrete and continuous inequalities. A new functional is finally formulated by combining the functionals corresponding to the isotropy and uniformity requirements. The features of the functional are analyzed and demonstrated by numerical results. In particular, unlike the existing mesh adaptation functionals, the new functional has clear geometric meanings of minimization. A mesh that has the desired properties of isotropy and equidistribution can be obtained by properly choosing the values of two parameters. The analysis presented in this article also provides a better understanding of the increasingly popular method of harmonic mapping in two dimensions.
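
    For reference, the one-dimensional analogue of the equidistribution requirement discussed above is commonly written as follows (a standard statement, not the specific functional derived in the cited article):

```latex
% Mesh points a = x_0 < x_1 < \dots < x_N = b equidistribute a monitor
% (error) function \rho(x) > 0 when every cell carries the same share of it:
\int_{x_{i-1}}^{x_i} \rho(x)\,dx \;=\; \frac{1}{N}\int_{a}^{b} \rho(x)\,dx,
\qquad i = 1,\dots,N.
```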

  9. An efficient 3D traveltime calculation using coarse-grid mesh for shallow-depth source

    NASA Astrophysics Data System (ADS)

    Son, Woohyun; Pyun, Sukjoon; Lee, Ho-Young; Koo, Nam-Hyung; Shin, Changsoo

    2016-10-01

    3D Kirchhoff pre-stack depth migration requires an efficient algorithm to compute first-arrival traveltimes. In this paper, we exploited a wave-equation-based traveltime calculation algorithm, which is called the suppressed wave equation estimation of traveltime (SWEET), and the equivalent source distribution (ESD) algorithm. The motivation of using the SWEET algorithm is to solve the Laplace-domain wave equation using coarse grid spacing to calculate first-arrival traveltimes. However, if a real source is located at shallow-depth close to free surface, we cannot accurately calculate the wavefield using coarse grid spacing. So, we need an additional algorithm to correctly simulate the shallow source even for the coarse grid mesh. The ESD algorithm is a method to define a set of distributed nodal sources that approximate a point source at the inter-nodal location in a velocity model with large grid spacing. Thanks to the ESD algorithm, we can efficiently calculate the first-arrival traveltimes of waves emitted from shallow source point even when we solve the Laplace-domain wave equation using a coarse-grid mesh. The proposed algorithm is applied to the SEG/EAGE 3D salt model. From the result, we note that the combination of SWEET and ESD algorithms can be successfully used for the traveltime calculation under the condition of a shallow-depth source. We also confirmed that our algorithm using coarse-grid mesh requires less computational time than the conventional SWEET algorithm using relatively fine-grid mesh.
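
    The principle of extracting first-arrival traveltimes from a Laplace-domain (damped) wavefield can be stated as follows; this is a standard asymptotic argument given for orientation, and the exact form used in the cited SWEET/ESD implementation may differ:

```latex
% If the Laplace-domain wavefield behaves asymptotically as
%   u(\mathbf{x}, s) \approx A(\mathbf{x})\, e^{-s\,\tau(\mathbf{x})}
% for a large damping constant s, with A independent of s, then the
% first-arrival traveltime follows from the logarithm of the field:
\tau(\mathbf{x}) \;\approx\; -\,\frac{\partial}{\partial s}\,\ln u(\mathbf{x}, s)
\;\approx\; -\,\frac{\ln u(\mathbf{x}, s) - \ln A(\mathbf{x})}{s}.
```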

  10. Adapting 3D Equilibrium Reconstruction to Reconstruct Weakly 3D H-mode Tokamaks

    NASA Astrophysics Data System (ADS)

    Cianciosa, M. R.; Hirshman, S. P.; Seal, S. K.; Unterberg, E. A.; Wilcox, R. S.; Wingen, A.; Hanson, J. D.

    2015-11-01

    The application of resonant magnetic perturbations for edge localized mode (ELM) mitigation breaks the toroidal symmetry of tokamaks. In these scenarios, the axisymmetric assumptions of the Grad-Shafranov equation no longer apply. By extension, equilibrium reconstruction tools built around these axisymmetric assumptions are insufficient to fully reconstruct a 3D perturbed equilibrium. 3D reconstruction tools typically work on systems where the 3D components are a significant fraction of the input signals. In nominally axisymmetric systems, applied field perturbations can be on the order of 1% of the main field or less. To reconstruct these equilibria, the 3D component of the signals must be isolated from the axisymmetric portion to provide the necessary information for reconstruction. This presentation will report on the adaptation of V3FIT for application to DIII-D H-mode discharges with applied resonant magnetic perturbations (RMPs). Newly implemented motional Stark effect signals and modeling of electric field effects will also be discussed. Work supported under U.S. DOE Cooperative Agreement DE-AC05-00OR22725.

  11. An overset mesh approach for 3D mixed element high-order discretizations

    NASA Astrophysics Data System (ADS)

    Brazell, Michael J.; Sitaraman, Jayanarayanan; Mavriplis, Dimitri J.

    2016-10-01

    A parallel high-order Discontinuous Galerkin (DG) method is used to solve the compressible Navier-Stokes equations in an overset mesh framework. The DG solver has many capabilities including: hp-adaption, curved cells, support for hybrid, mixed-element meshes, and moving meshes. Combining these capabilities with overset grids allows the DG solver to be used in problems with bodies in relative motion and in a near-body off-body solver strategy. The overset implementation is constructed to preserve the design accuracy of the baseline DG discretization. Multiple simulations are carried out to validate the accuracy and performance of the overset DG solver. These simulations demonstrate the capability of the high-order DG solver to handle complex geometry and large scale parallel simulations in an overset framework.

  12. Parallel object-oriented adaptive mesh refinement

    SciTech Connect

    Balsara, D.; Quinlan, D.J.

    1997-04-01

    In this paper we study adaptive mesh refinement (AMR) for elliptic and hyperbolic systems. We use the Asynchronous Fast Adaptive Composite Grid Method (AFACX), a parallel algorithm based upon the Fast Adaptive Composite Grid Method (FAC), as a test case of an adaptive elliptic solver. For our hyperbolic system example we use TVD and ENO schemes for solving the Euler and MHD equations. We use the structured grid load balancer MLB as a tool for obtaining a load-balanced distribution in a parallel environment. Parallel adaptive mesh refinement poses difficulties in expressing the basic single-grid solver, whether elliptic or hyperbolic, in a fashion that parallelizes seamlessly. It also requires that these basic solvers work together within the adaptive mesh refinement algorithm, which uses the single-grid solvers as one part of its adaptive solution process. We show that use of AMR++, an object-oriented library within the OVERTURE Framework, simplifies the development of AMR applications. Parallel support is provided and abstracted through the use of the P++ parallel array class.

  13. Curved Mesh Correction And Adaptation Tool to Improve COMPASS Electromagnetic Analyses

    SciTech Connect

    Luo, X.; Shephard, M.; Lee, L.Q.; Ng, C.; Ge, L.; /SLAC

    2011-11-14

    SLAC performs large-scale simulations for next-generation accelerator design using higher-order finite elements. This method requires valid curved meshes and adaptive mesh refinement in complex 3D curved domains to achieve its fast rate of convergence. ITAPS has developed a procedure to address these mesh requirements and enable petascale electromagnetic accelerator simulations by SLAC. The results demonstrate that valid curvilinear meshes not only make the simulation more reliable but also improve computational efficiency by up to 30%. This paper presents a procedure to track moving adaptive mesh refinement in curved domains. The procedure is capable of generating suitable curvilinear meshes to enable large-scale accelerator simulations, and it can generate valid curved meshes with substantially fewer elements to improve the computational efficiency and reliability of the COMPASS electromagnetic analyses. Future work will focus on the scalable parallelization of all steps for petascale simulations.

  14. Insertion of 3-D-primitives in mesh-based representations: towards compact models preserving the details.

    PubMed

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu

    2010-07-01

    We propose an original hybrid modeling process for urban scenes that represents 3-D models as a combination of mesh-based surfaces and geometric 3-D-primitives. Meshes describe details such as ornaments and statues, whereas 3-D-primitives code for regular shapes such as walls and columns. Starting from a 3-D surface obtained by multiview stereo techniques, these primitives are detected and then inserted into the surface. This strategy allows the introduction of semantic knowledge, the simplification of the modeling, and even the correction of errors generated by the acquisition process. We design a hierarchical approach exploring different scales of an observed scene. Each level consists first in segmenting the surface using a multilabel energy model optimized by α-expansion, and then in fitting 3-D-primitives such as planes, cylinders or tori to the obtained partition where relevant. Experiments on real meshes, depth maps and synthetic surfaces show good potential for the proposed approach.

  15. Multigrid solution strategies for adaptive meshing problems

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1995-01-01

    This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.

  16. Robust and Blind 3D Mesh Watermarking in Spatial Domain Based on Faces Categorization and Sorting

    NASA Astrophysics Data System (ADS)

    Molaei, Amir Masoud; Ebrahimnezhad, Hossein; Sedaaghi, Mohammad Hossein

    2016-06-01

    In this paper, a 3D watermarking algorithm in the spatial domain with blind detection is presented. The proposed method introduces only negligible visual distortion in the host model. Initially, a preprocessing step is applied to the 3D model to make it robust against geometric transformation attacks. Then, a number of triangle faces are selected as mark triangles using a novel systematic approach in which faces are categorized and sorted robustly. In order to enhance the ability to retrieve the embedded information after attacks, block watermarks are encoded using a Reed-Solomon block error-correcting code before being embedded into the mark triangles. Next, the encoded watermarks are embedded in spherical coordinates. The proposed method is robust against additive noise, mesh smoothing and quantization attacks, as well as against geometric transformation and vertex/face reordering attacks. Moreover, the algorithm is designed to be robust against cropping attacks. Simulation results confirm that the watermarked models exhibit very low distortion if the control parameters are selected properly. Comparison with other methods demonstrates that the proposed method performs well against mesh smoothing attacks.
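
    To make the embedding step concrete, the following schematic sketch shows only the spherical-coordinate idea, not the paper's face categorization, sorting or Reed-Solomon pipeline; the function name, the vertex selection and the strength parameter delta are hypothetical.

        # Schematic only: convert vertices to spherical coordinates about the centroid and
        # nudge the radial component to carry watermark bits.
        import numpy as np

        def embed_bits_radially(vertices, bits, delta=1e-4):
            center = vertices.mean(axis=0)
            v = vertices - center
            r = np.linalg.norm(v, axis=1)
            theta = np.arccos(np.clip(v[:, 2] / np.maximum(r, 1e-12), -1.0, 1.0))
            phi = np.arctan2(v[:, 1], v[:, 0])
            marked = np.arange(len(bits))                   # placeholder choice of mark vertices
            r[marked] *= 1.0 + delta * (2.0 * np.asarray(bits) - 1.0)  # bit 1 -> +delta, bit 0 -> -delta
            out = np.stack([r * np.sin(theta) * np.cos(phi),
                            r * np.sin(theta) * np.sin(phi),
                            r * np.cos(theta)], axis=1)
            return out + center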

  17. GRChombo: Numerical relativity with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran

    2015-12-01

    In this work, we introduce GRChombo: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block-structured Berger-Rigoutsos grid generation. The code supports non-trivial ‘many-boxes-in-many-boxes’ mesh hierarchies and massive parallelism through the Message Passing Interface. GRChombo evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that GRChombo can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.

  18. Anisotropic mesh adaptation on Lagrangian Coherent Structures

    NASA Astrophysics Data System (ADS)

    Miron, Philippe; Vétel, Jérôme; Garon, André; Delfour, Michel; Hassan, Mouhammad El

    2012-08-01

    The finite-time Lyapunov exponent (FTLE) is extensively used as a criterion to reveal fluid flow structures, including unsteady separation/attachment surfaces and vortices, in laminar and turbulent flows. However, for large and complex problems, flow structure identification demands computational methodologies that are more accurate and effective. With this objective in mind, we propose a new set of ordinary differential equations to compute the flow map, along with its first (gradient) and second order (Hessian) spatial derivatives. We show empirically that the gradient of the flow map computed in this way improves the pointwise accuracy of the FTLE field. Furthermore, the Hessian allows for simple interpolation error estimation of the flow map, and the construction of a continuous optimal and multiscale Lp metric. The Lagrangian particles, or nodes, are then iteratively adapted on the flow structures revealed by this metric. Typically, the L1 norm provides meshes best suited to capturing small scale structures, while the L∞ norm provides meshes optimized to capture large scale structures. This means that the mesh density near large scale structures will be greater with the L∞ norm than with the L1 norm for the same mesh complexity, which is why we chose this technique for this paper. We use it to optimize the mesh in the vicinity of LCS. It is found that Lagrangian Coherent Structures are best revealed with the minimum number of vertices with the L∞ metric.
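
    For reference, the standard definition underlying the abstract (with F the flow map over the interval T and C the right Cauchy-Green deformation tensor) is:

        \[
        C(\mathbf{x}_0) \;=\; \bigl(\nabla F_{t_0}^{\,t_0+T}(\mathbf{x}_0)\bigr)^{\mathsf{T}}
        \,\nabla F_{t_0}^{\,t_0+T}(\mathbf{x}_0),
        \qquad
        \mathrm{FTLE}(\mathbf{x}_0) \;=\; \frac{1}{|T|}\,
        \ln\!\sqrt{\lambda_{\max}\bigl(C(\mathbf{x}_0)\bigr)} .
        \]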

  19. Details of tetrahedral anisotropic mesh adaptation

    NASA Astrophysics Data System (ADS)

    Jensen, Kristian Ejlebjerg; Gorman, Gerard

    2016-04-01

    We have implemented tetrahedral anisotropic mesh adaptation using the local operations of coarsening, swapping, refinement and smoothing in MATLAB without any loops over the number of mesh entities, i.e. the script is fully vectorised. In the process of doing so, we have made three observations related to details of the implementation: 1. restricting refinement to a single edge split per element not only simplifies the code, it also improves mesh quality; 2. face-to-edge swapping is unnecessary; and 3. optimising for the Vassilevski functional tends to give a slightly higher value of the mean condition number functional than optimising for the condition number functional directly. These observations have been made for a uniform and a radial shock metric field, both starting from a structured mesh in a cube. Finally, we compare two coarsening techniques and demonstrate the importance of applying smoothing in the mesh adaptation loop. The results pertain to a unit cube geometry, but we also show the effect of corners and edges by applying the implementation in a spherical geometry.

  20. Quality assessment of adaptive 3D video streaming

    NASA Astrophysics Data System (ADS)

    Tavakoli, Samira; Gutiérrez, Jesús; García, Narciso

    2013-03-01

    The streaming of 3D video content is currently a reality that expands the user experience. However, because of the variable bandwidth of the networks used to deliver multimedia content, a smooth and high-quality playback experience cannot always be guaranteed. Using segments in multiple video qualities, HTTP adaptive streaming (HAS) of video content is a relevant advancement with respect to classic progressive download streaming. It largely resolves these issues by offering significant advantages in terms of both user-perceived Quality of Experience (QoE) and resource utilization for content and network service providers. In this paper we discuss the impact on the end-user of possible HAS client behaviors while adapting to the network capacity. This has been done through an experiment testing the end-user response to quality variation during the adaptation procedure. The evaluation has been carried out through a subjective test of the end-user response to various possible client behaviors for increasing, decreasing, and oscillating quality in 3D video. In addition, some of the typical HAS impairments during adaptation have been simulated and their effects on end-user perception assessed. The experimental conclusions give good insight into the user's response to different adaptation scenarios and to the visual impairments causing visual discomfort, and they can be used to develop adaptive streaming algorithms that improve the end-user experience.

  1. Electrostatic PIC with adaptive Cartesian mesh

    NASA Astrophysics Data System (ADS)

    Kolobov, Vladimir; Arslanbekov, Robert

    2016-05-01

    We describe an initial implementation of an electrostatic Particle-in-Cell (ES-PIC) module with adaptive Cartesian mesh in our Unified Flow Solver framework. Challenges of the PIC method with cell-based adaptive mesh refinement (AMR) are related to a decrease of the particle-per-cell number in the refined cells, with a corresponding increase of the numerical noise. The developed ES-PIC solver is validated for capacitively coupled plasma, and its AMR capabilities are demonstrated for simulations of streamer development during high-pressure gas breakdown. It is shown that cell-based AMR provides a convenient particle management algorithm for the exponential multiplication of electrons and ions in ionization events.

  2. Characterization of impact craters in 3D meshes using a feature lines approach

    NASA Astrophysics Data System (ADS)

    Jorda, L.; Mari, J.; Viseur, S.; Bouley, S.

    2013-12-01

    Impact craters are observed at the surface of most solar system bodies: terrestrial planets, satellites and asteroids. The measurement of their size-frequency distribution (SFD) is the only method available to estimate the age of the observed geological units, assuming a rate and velocity distribution of impactors and a crater scaling law. The age of the geological units is fundamental to establish a chronology of events explaining the global evolution of the surface. In addition, the detailed characterization of crater properties (depth-to-diameter ratio and radial profile) yields a better understanding of the geological processes which altered the observed surfaces. Crater detection is usually performed manually, directly from the acquired images. However, this method can become prohibitive when dealing with small craters extracted from very large data sets. A large number of solar system objects have been mapped at very high spatial resolution by space probes over the past few decades, emphasizing the need for new automatic methods of crater detection. Powerful computers are now available to produce and analyze huge 3D models of the surface in the form of 3D meshes containing tens to hundreds of billions of facets. This motivates the development of a new family of automatic crater detection algorithms (CDAs). The automatic CDAs developed so far were mainly based on morphological analyses and pattern recognition techniques on 2D images. In recent years, new CDAs based on 3D models have been developed. Our objective is to develop, and test against existing methods, an automatic CDA using a new approach based on the discrete differential properties of 3D meshes. The method produces the feature lines (the crest and ravine lines) lying on the surface. It is based on a two-step algorithm: first, the regions of interest are flagged according to curvature properties, and then an original skeletonization approach is applied to extract the feature lines.

  3. 3DSEM++: Adaptive and intelligent 3D SEM surface reconstruction.

    PubMed

    Tafti, Ahmad P; Holz, Jessica D; Baghaie, Ahmadreza; Owen, Heather A; He, Max M; Yu, Zeyun

    2016-08-01

    Structural analysis of microscopic objects is a longstanding topic in several scientific disciplines, such as the biological, mechanical, and materials sciences. The scanning electron microscope (SEM), a promising imaging instrument, has been used for decades to determine the surface properties (e.g., compositions or geometries) of specimens, achieving increased magnification and contrast and resolution finer than one nanometer. Whereas SEM micrographs remain two-dimensional (2D), many research and educational questions truly require knowledge of the three-dimensional (3D) structure of the specimen. 3D surface reconstruction from SEM images leads to remarkable understanding of microscopic surfaces, allowing informative and qualitative visualization of the samples being investigated. In this contribution, we integrate several computational technologies, including machine learning, the a contrario methodology, and epipolar geometry, to design and develop a novel and efficient method called 3DSEM++ for multi-view 3D SEM surface reconstruction in an adaptive and intelligent fashion. Experiments performed on real and synthetic data show that the approach achieves significant precision in both SEM extrinsic calibration and 3D surface modeling.

  5. Efficient 3D geometric and Zernike moments computation from unstructured surface meshes.

    PubMed

    Pozo, José María; Villa-Uriol, Maria-Cruz; Frangi, Alejandro F

    2011-03-01

    This paper introduces and evaluates a fast exact algorithm and a series of faster approximate algorithms for the computation of 3D geometric moments from an unstructured surface mesh of triangles. Being based on the object surface reduces the computational complexity of these algorithms with respect to volumetric grid-based algorithms. In contrast, it can only be applied to the computation of geometric moments of homogeneous objects. This advantage and restriction is shared with other proposed algorithms based on the object boundary. The proposed exact algorithm reduces the computational complexity for computing geometric moments up to order N, with respect to previously proposed exact algorithms, from N^9 to N^6. The approximate series algorithm appears as a power series in the ratio between triangle size and object size, which can be truncated at any desired degree; the higher the number and quality of the triangles, the better the approximation. This approximate algorithm reduces the computational complexity to N^3. In addition, the paper introduces a fast algorithm for the computation of 3D Zernike moments from the computed geometric moments, with computational complexity N^4, whereas the previously proposed algorithm is of order N^6. The error introduced by the proposed approximate algorithms is evaluated on different shapes, and the cost-benefit ratio in terms of error and computational time is analyzed for different moment orders.
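
    As a minimal illustration of the boundary-based idea (orders 0 and 1 only; the paper treats arbitrary order and Zernike moments), the volume and centroid of a closed, consistently oriented triangle mesh follow exactly from signed tetrahedra against the origin:

        # Divergence-theorem sketch: exact low-order geometric moments of a closed mesh.
        import numpy as np

        def volume_and_centroid(vertices, triangles):
            a = vertices[triangles[:, 0]]
            b = vertices[triangles[:, 1]]
            c = vertices[triangles[:, 2]]
            signed_vol = np.einsum('ij,ij->i', a, np.cross(b, c)) / 6.0   # per-face tetrahedron volume
            volume = signed_vol.sum()
            centroid = (signed_vol[:, None] * (a + b + c) / 4.0).sum(axis=0) / volume
            return volume, centroid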

  6. 3D Game Content Distributed Adaptation in Heterogeneous Environments

    NASA Astrophysics Data System (ADS)

    Morán, Francisco; Preda, Marius; Lafruit, Gauthier; Villegas, Paulo; Berretty, Robert-Paul

    2007-12-01

    Most current multiplayer 3D games can only be played on a single dedicated platform (a particular computer, console, or cell phone), requiring specifically designed content and communication over a predefined network. Below we show how, by using signal processing techniques such as multiresolution representation and scalable coding for all the components of a 3D graphics object (geometry, texture, and animation), we enable online dynamic content adaptation, and thus delivery of the same content over heterogeneous networks to terminals with very different profiles, and its rendering on them. We present quantitative results demonstrating how the best displayed quality versus computational complexity versus bandwidth tradeoffs have been achieved, given the distributed resources available over the end-to-end content delivery chain. Additionally, we use state-of-the-art, standardised content representation and compression formats (MPEG-4 AFX, JPEG 2000, XML), enabling deployment over existing infrastructure, while keeping hooks to well-established practices in the game industry.

  7. Efficiency considerations in triangular adaptive mesh refinement.

    PubMed

    Behrens, Jörn; Bader, Michael

    2009-11-28

    Locally or adaptively refined meshes have been successfully applied to simulation applications involving multi-scale phenomena in the geosciences. In particular, for situations with complex geometries or domain boundaries, meshes with triangular or tetrahedral cells demonstrate their superior ability to accurately represent relevant realistic features. On the other hand, these methods require more complex data structures and are therefore less easily implemented, maintained and optimized. Acceptance in the Earth-system modelling community is still low. One of the major drawbacks is posed by indirect addressing due to unstructured or dynamically changing data structures and correspondingly lower efficiency of the related computations. In this paper, we will derive several strategies to circumvent the mentioned efficiency constraint. In particular, we will apply recent computational sciences methods in combination with results of classical mathematics (space-filling curves) in order to linearize the complex data and access structure.
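
    The space-filling-curve idea can be illustrated with the simplest such ordering; the paper's triangular meshes call for Sierpinski-type curves, but a Morton (Z-order) key already shows how 2D cell indices are linearized so that spatial neighbours tend to be close in memory:

        # Illustration only (Morton/Z-order, not the Sierpinski curve suited to triangles).
        def morton_key(i: int, j: int, bits: int = 16) -> int:
            """Interleave the bits of cell indices (i, j) into a single Z-order key."""
            key = 0
            for b in range(bits):
                key |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
            return key

        # Storing cells sorted by their key improves locality for indirect addressing.
        cells = sorted(((i, j) for i in range(4) for j in range(4)),
                       key=lambda ij: morton_key(*ij))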

  8. Efficient triangular adaptive meshes for tsunami simulations

    NASA Astrophysics Data System (ADS)

    Behrens, J.

    2012-04-01

    With improving technology and increased sensor density for accurate determination of tsunamigenic earthquake source parameters and, consequently, uplift distribution, real-time simulation of even near-field tsunami hazard appears feasible in the near future. In order to support such efforts, a new generation of tsunami models is currently under development. These models employ adaptively refined meshes in order to save computational resources (in areas of low wave activity) while still representing the inherently multi-scale behavior of a tsunami approaching coastal waters. So far, these methods have been based on oct-tree quadrilateral refinement. The method introduced here is based on binary tree refinement of triangular grids. By utilizing the structure stemming from the refinement strategy, a very efficient method can be achieved with a triangular mesh able to accurately represent complex boundaries.

  9. Fully implicit adaptive mesh refinement MHD algorithm

    NASA Astrophysics Data System (ADS)

    Philip, Bobby

    2005-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and to employ AMR-aware multilevel techniques (such as fast adaptive composite grid, FAC, algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations will be presented on a variety of problems.

  10. Integration of Mesh Optimization with 3D All-Hex Mesh Generation, LDRD Subcase 3504340000, Final Report

    SciTech Connect

    KNUPP,PATRICK; MITCHELL,SCOTT A.

    1999-11-01

    In an attempt to automatically produce high-quality all-hex meshes, we investigated a mesh improvement strategy: given an initial poor-quality all-hex mesh, we iteratively changed the element connectivity, adding and deleting elements and nodes, and optimized the node positions. We found a set of hex reconnection primitives. We improved the optimization algorithms so they can untangle a negative-Jacobian mesh, even considering Jacobians on the boundary, and subsequently optimize the condition number of elements in an untangled mesh. However, even after applying both the primitives and optimization we were unable to produce high-quality meshes in certain regions. Our experiences suggest that many boundary configurations of quadrilaterals admit no hexahedral mesh with positive Jacobians, although we have no proof of this.

  11. Adaptive radial basis function mesh deformation using data reduction

    NASA Astrophysics Data System (ADS)

    Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.

    2016-09-01

    Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time, and 2) how to use/automate the explicit boundary correction while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method which ensures that the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting when needed. As opposed to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient for the cases considered. Secondly, the analysis of a single high-aspect-ratio cell is used to formulate an equation for the correction radius needed, depending on the characteristics of the correction function used, the maximum aspect ratio, the minimum first cell height and the boundary error. Based on this analysis, two new radial basis correction functions are derived and proposed. The proposed automated procedure is verified while varying the correction function, the Reynolds number (and thus first cell height and aspect ratio) and the boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement, for both the CPU and the memory formulation, with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and a deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBFs), but the parallel efficiency reduces due to the limited
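
    A minimal dense sketch of the core interpolation only (no greedy point reduction, no boundary correction, and a simple Gaussian basis rather than the correction functions derived in the paper); names and the shape parameter eps are assumptions:

        # Interpolate prescribed boundary displacements onto interior mesh nodes with RBFs.
        import numpy as np

        def rbf_deform(boundary_pts, boundary_disp, interior_pts, eps=1.0):
            phi = lambda r: np.exp(-(eps * r) ** 2)    # Gaussian RBF, one common choice
            d_bb = np.linalg.norm(boundary_pts[:, None, :] - boundary_pts[None, :, :], axis=-1)
            d_ib = np.linalg.norm(interior_pts[:, None, :] - boundary_pts[None, :, :], axis=-1)
            weights = np.linalg.solve(phi(d_bb), boundary_disp)   # one column per component
            return phi(d_ib) @ weights                            # interior node displacements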

  12. Visualizing 3D Turbulence On Temporally Adaptive Wavelet Collocation Grids

    NASA Astrophysics Data System (ADS)

    Goldstein, D. E.; Kadlec, B. J.; Yuen, D. A.; Erlebacher, G.

    2005-12-01

    Today there is an explosion of data from high-resolution computations of nonlinear phenomena in many fields, including the geo- and environmental sciences. The efficient storage and subsequent visualization of these large data sets is a trade-off between storage costs and data quality. New dynamically adaptive simulation methodologies promise significant computational cost savings and have the added benefit of producing results on adapted grids that significantly reduce storage and data manipulation costs. Yet with these adaptive simulation methodologies come new challenges in the visualization of temporally adaptive data sets. In this work, turbulence data sets from Stochastic Coherent Adaptive Large Eddy Simulations (SCALES) are visualized with the open source tool ParaView as a challenging case study. SCALES simulations use a temporally adaptive collocation grid defined by wavelet threshold filtering to resolve the most energetic coherent structures in a turbulence field; a subgrid-scale model is used to account for the effect of unresolved subgrid-scale modes. The results from the SCALES simulations are saved on a thresholded dyadic wavelet collocation grid, which by its nature does not include cell information. ParaView is an open source visualization package developed by Kitware that is based on the widely used VTK graphics toolkit. The efficient generation of cell information, required with current ParaView data formats, is explored using custom algorithms and VTK toolkit routines. Adaptive 3D visualizations using isosurfaces and volume visualizations are compared with non-adaptive visualizations. To explore the localized multiscale structures in the turbulent data sets, the wavelet coefficients are also visualized, allowing visualization of the energy contained in local physical regions as well as in local wave number space.

  13. Fluidity: A New Adaptive, Unstructured Mesh Geodynamics Model

    NASA Astrophysics Data System (ADS)

    Davies, D. R.; Wilson, C. R.; Kramer, S. C.; Piggott, M. D.; Le Voci, G.; Collins, G. S.

    2010-05-01

    heterogeneities in mantle convection models. Incorporation of a suite of geodynamic benchmarks into the automated test-bed. These recent advances, which all work in combination with the parallel mesh-optimization technology, enable Fluidity to simulate geodynamical flows accurately and efficiently. Initial results will be presented from: (i) a range of 2-D and 3-D thermal convection benchmarks; (ii) kinematic and dynamic subduction zone simulations; and (iii) comparisons between model predictions and laboratory experiments of plume dynamics. These results all clearly demonstrate the benefits of adaptive, unstructured meshes for geodynamical flows.

  14. Visualization of adaptive mesh refinement data

    NASA Astrophysics Data System (ADS)

    Weber, Gunther H.; Hagen, Hans; Hamann, Bernd; Joy, Kenneth I.; Ligocki, Terry J.; Ma, Kwan-Liu; Shalf, John M.

    2001-05-01

    The complexity of physical phenomena often varies substantially over space and time. There can be regions where a physical phenomenon/quantity varies very little over a large extent, while at the same time there can be small regions where the same quantity exhibits highly complex variations. Adaptive mesh refinement (AMR) is a technique used in computational fluid dynamics to simulate phenomena with drastically varying scales in the complexity of the simulated variables. Using multiple nested grids of different resolutions, AMR combines the topological simplicity of structured rectilinear grids, permitting efficient computation and storage, with the possibility of adapting grid resolution in regions of complex behavior. We present methods for direct volume rendering of AMR data. Our methods utilize AMR grids directly for efficiency of the visualization process. We apply a hardware-accelerated rendering method to AMR data supporting interactive manipulation of color-transfer functions and viewing parameters. We also present a cell-projection-based rendering technique for AMR data.

  15. M3D Simulations of Energetic Particle-driven MHD Mode with Unstructured Mesh

    NASA Astrophysics Data System (ADS)

    Fu, G. Y.; Park, W.; Strauss, H. R.

    2001-10-01

    Energetic particle-driven MHD modes are studied using the multi-level extended MHD code M3D (W. Park et al., Phys. Plasmas 6, 1796 (1999)). In the extended MHD model, the plasma is divided into a bulk part and an energetic particle component. The bulk plasma is treated as either a single fluid or two fluids. The energetic particles are described by gyrokinetic particles following the self-consistent electromagnetic field. The model is self-consistent, including nonlinear effects of hot particles on the MHD dynamics and the nonlinear MHD mode coupling. Previously we had shown results for the nonlinear saturation of TAE [G.Y. Fu and W. Park, Phys. Rev. Lett. 74, 1594 (1995)], energetic particle stabilization of an internal kink and excitation of the fishbone, and nonlinear saturation of the fishbone in circular tokamaks (G.Y. Fu et al., 2000 Sherwood Meeting, Paper 2C2). In this work, we extend the simulations to general geometry using an unstructured mesh (H.R. Strauss and W. Park, Phys. Plasmas 5, 2676 (1998)). We also use a gyrofluid model for the fishbone in order to study the role of MHD nonlinearity in saturation near marginal stability. Results of applications to tokamaks and spherical tokamaks will be presented.

  16. EM modelling of arbitrary shaped anisotropic dielectric objects using an efficient 3D leapfrog scheme on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Gansen, A.; Hachemi, M. El; Belouettar, S.; Hassan, O.; Morgan, K.

    2016-09-01

    The standard Yee algorithm is widely used in computational electromagnetics because of its simplicity and divergence free nature. A generalization of the classical Yee scheme to 3D unstructured meshes is adopted, based on the use of a Delaunay primal mesh and its high quality Voronoi dual. This allows the problem of accuracy losses, which are normally associated with the use of the standard Yee scheme and a staircased representation of curved material interfaces, to be circumvented. The 3D dual mesh leapfrog-scheme which is presented has the ability to model both electric and magnetic anisotropic lossy materials. This approach enables the modelling of problems, of current practical interest, involving structured composites and metamaterials.
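
    For readers unfamiliar with the scheme being generalized, a minimal structured 1D Yee/leapfrog sketch (vacuum, normalized units) illustrates the staggered E/H update that the paper carries over to Delaunay/Voronoi meshes; grid sizes and the source are arbitrary choices:

        # 1D FDTD leapfrog sketch: E on integer nodes, H on the staggered half grid.
        import numpy as np

        nz, nt = 200, 400
        dz = 1.0
        dt = 0.5 * dz            # Courant number 0.5 (c = 1), stable in 1D
        Ex = np.zeros(nz)
        Hy = np.zeros(nz - 1)

        for n in range(nt):
            Hy += dt / dz * (Ex[:-1] - Ex[1:])          # dH/dt = -dE/dz at half steps
            Ex[1:-1] += dt / dz * (Hy[:-1] - Hy[1:])    # dE/dt = -dH/dz half a step later
            Ex[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source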

  17. Adaptive Meshing Techniques for Viscous Flow Calculations on Mixed Element Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1997-01-01

    An adaptive refinement strategy based on hierarchical element subdivision is formulated and implemented for meshes containing arbitrary mixtures of tetrahedra, hexahedra, prisms and pyramids. Special attention is given to keeping memory overheads as low as possible. This procedure is coupled with an algebraic multigrid flow solver which operates on mixed-element meshes. Inviscid as well as viscous flows are computed on adaptively refined tetrahedral, hexahedral, and hybrid meshes. The efficiency of the method is demonstrated by generating an adapted hexahedral mesh containing 3 million vertices on a relatively inexpensive workstation.

  18. COSMOLOGICAL ADAPTIVE MESH REFINEMENT MAGNETOHYDRODYNAMICS WITH ENZO

    SciTech Connect

    Collins, David C.; Xu Hao; Norman, Michael L.; Li Hui; Li Shengtai

    2010-02-01

    In this work, we present EnzoMHD, the extension of the cosmological code Enzo to include the effects of magnetic fields through the ideal magnetohydrodynamics approximation. We use a higher order Godunov method for the computation of interface fluxes. We use two constrained transport methods to compute the electric field from those interface fluxes, which simultaneously advances the induction equation and maintains the divergence of the magnetic field. A second-order divergence-free reconstruction technique is used to interpolate the magnetic fields in the block-structured adaptive mesh refinement framework already extant in Enzo. This reconstruction also preserves the divergence of the magnetic field to machine precision. We use operator splitting to include gravity and cosmological expansion. We then present a series of cosmological and non-cosmological test problems to demonstrate the quality of solution resulting from this combination of solvers.

  19. Visualization of Scalar Adaptive Mesh Refinement Data

    SciTech Connect

    VACET; Weber, Gunther; Weber, Gunther H.; Beckner, Vince E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes

    2007-12-06

    Adaptive Mesh Refinement (AMR) is a highly effective computation method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations, which must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR grids as a first class data type and AMR code teams use custom built applications for AMR visualization. The Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, which is an open source visualization tool that accommodates AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR scalar data visualization research.

  20. Visualization Tools for Adaptive Mesh Refinement Data

    SciTech Connect

    Weber, Gunther H.; Beckner, Vincent E.; Childs, Hank; Ligocki,Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes

    2007-05-09

    Adaptive Mesh Refinement (AMR) is a highly effective method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first class data type and AMR code teams use custom built applications for AMR visualization. The Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, which is an open source visualization tool that accommodates AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR visualization research and tools and describe how VisIt currently handles AMR data.

  1. Adaptive Mesh Refinement Simulations of Relativistic Binaries

    NASA Astrophysics Data System (ADS)

    Motl, Patrick M.; Anderson, M.; Lehner, L.; Olabarrieta, I.; Tohline, J. E.; Liebling, S. L.; Rahman, T.; Hirschman, E.; Neilsen, D.

    2006-09-01

    We present recent results from our efforts to evolve relativistic binaries composed of compact objects. We simultaneously solve the general relativistic hydrodynamics equations to evolve the material components of the binary and Einstein's equations to evolve the space-time. These two codes are coupled through an adaptive mesh refinement driver (had). One of the ultimate goals of this project is to address the merger of a neutron star and black hole and assess the possible observational signature of such systems as gamma ray bursts. This work has been supported in part by NSF grants AST 04-07070 and PHY 03-26311 and in part through NASA's ATP program grant NAG5-13430. The computations were performed primarily at NCSA through grant MCA98N043 and at LSU's Center for Computation & Technology.

  2. Adaptive superposition of finite element meshes in linear and nonlinear dynamic analysis

    NASA Astrophysics Data System (ADS)

    Yue, Zhihua

    2005-11-01

    The numerical analysis of transient phenomena in solids, for instance, wave propagation and structural dynamics, is a very important and active area of study in engineering. Despite the current evolutionary state of modern computer hardware, practical analysis of large scale, nonlinear transient problems requires the use of adaptive methods where computational resources are locally allocated according to the interpolation requirements of the solution form. Adaptive analysis of transient problems involves obtaining solutions at many different time steps, each of which requires a sequence of adaptive meshes. Therefore, the execution speed of the adaptive algorithm is of paramount importance. In addition, transient problems require that the solution must be passed from one adaptive mesh to the next adaptive mesh with a bare minimum of solution-transfer error since this form of error compromises the initial conditions used for the next time step. A new adaptive finite element procedure (s-adaptive) is developed in this study for modeling transient phenomena in both linear elastic solids and nonlinear elastic solids caused by progressive damage. The adaptive procedure automatically updates the time step size and the spatial mesh discretization in transient analysis, achieving the accuracy and the efficiency requirements simultaneously. The novel feature of the s-adaptive procedure is the original use of finite element mesh superposition to produce spatial refinement in transient problems. The use of mesh superposition enables the s-adaptive procedure to completely avoid the need for cumbersome multipoint constraint algorithms and mesh generators, which makes the s-adaptive procedure extremely fast. Moreover, the use of mesh superposition enables the s-adaptive procedure to minimize the solution-transfer error. In a series of different solid mechanics problem types including 2-D and 3-D linear elastic quasi-static problems, 2-D material nonlinear quasi-static problems

  3. Elliptic Solvers for Adaptive Mesh Refinement Grids

    SciTech Connect

    Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.

    1999-06-03

    We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and other more AMR specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.

  4. Tetrahedral and Hexahedral Mesh Adaptation for CFD Problems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents two unstructured mesh adaptation schemes for problems in computational fluid dynamics. The procedures allow localized grid refinement and coarsening to efficiently capture aerodynamic flow features of interest. The first procedure is for purely tetrahedral grids; unfortunately, repeated anisotropic adaptation may significantly deteriorate the quality of the mesh. Hexahedral elements, on the other hand, can be subdivided anisotropically without mesh quality problems. Furthermore, hexahedral meshes yield more accurate solutions than their tetrahedral counterparts for the same number of edges. Both the tetrahedral and hexahedral mesh adaptation procedures use edge-based data structures that facilitate efficient subdivision by allowing individual edges to be marked for refinement or coarsening. However, for hexahedral adaptation, pyramids, prisms, and tetrahedra are used as buffer elements between refined and unrefined regions to eliminate hanging vertices. Computational results indicate that the hexahedral adaptation procedure is a viable alternative to adaptive tetrahedral schemes.
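
    A hedged sketch of the edge-based bookkeeping described above (tetrahedra only; hexahedra, prisms and pyramids would need an explicit per-element edge table); the function and variable names are illustrative:

        # Key each edge by its sorted vertex pair so a shared edge is marked exactly once.
        from collections import defaultdict
        from itertools import combinations

        def mark_edges(tets, cell_error, threshold):
            """tets: iterable of 4-vertex index tuples; cell_error: per-cell indicator."""
            marked = defaultdict(bool)
            for tet, err in zip(tets, cell_error):
                if err > threshold:
                    for v0, v1 in combinations(tet, 2):       # the 6 edges of a tetrahedron
                        marked[tuple(sorted((v0, v1)))] = True
            return marked   # subdivision templates are then chosen per cell from its marked edges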

  5. Extraction of "best fit circles" on 3D meshes based on discrete curvatures: application to impact craters detection

    NASA Astrophysics Data System (ADS)

    Beguet, Florian; Bali, Sarah; Christoff, Nicole; Jorda, Laurent; Viseur, Sophie; Bouley, Sylvain; Manolova, Agata; Mari, Jean-Luc

    2016-04-01

    Impact craters are a typical feature observed at the surface of most bodies in the solar system: terrestrial planets, their satellites, asteroids and even possibly cometary nuclei exhibit impact craters. Their spatial density yields an estimate of the age of the surface, a key parameter required for subsequent geological studies. With the development of interplanetary missions, a large number of solar system objects have been mapped at high spatial resolution, emphasizing the need for new automatic methods of crater detection and counting. In this work, we present such a method using a new approach based on the analysis of reconstructed 3D meshes instead of 2D images. The robust extraction of feature areas, such as circular shapes, on surface objects embedded in 3D is a challenging problem. Classical approaches generally rely on image processing and template matching on a 2D flat projection of the 3D object (for instance a high-resolution picture). In this paper, we propose a fully 3D method that mainly relies on curvature analysis. Mean and Gaussian curvatures are estimated on the surface. They are used to label vertices that belong to concave parts corresponding to specific pits on the surface. Centers are located in the targeted surface regions, corresponding to potential crater features. Then "best fit circles" are extracted, based on the rims of the circular shapes. They consist of closed lines exclusively composed of edges of the initial mesh. This approach has been applied to the detection of craters on the asteroid Vesta. Keywords: geometric modeling, 3D meshes, shape recognition, mesh processing, discrete curvatures, asteroids, crater detection, geology, geomorphology.
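
    As a hedged sketch of the first step only (the paper additionally uses mean curvature and a rim-fitting stage), discrete Gaussian curvature can be approximated per vertex by the angle deficit, and vertices with large deficits flagged as candidate pit regions:

        # Angle-deficit estimate of Gaussian curvature on a triangle mesh.
        import numpy as np

        def angle_deficit(vertices, triangles):
            deficit = np.full(len(vertices), 2.0 * np.pi)
            for tri in triangles:
                p = vertices[tri]
                for k in range(3):
                    u = p[(k + 1) % 3] - p[k]
                    v = p[(k + 2) % 3] - p[k]
                    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
                    deficit[tri[k]] -= np.arccos(np.clip(cos_a, -1.0, 1.0))
            return deficit   # > 0: cone-like (peaks/pits), < 0: saddle-like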

  6. Adaptive mesh refinement techniques for electrical impedance tomography.

    PubMed

    Molinari, M; Cox, S J; Blott, B H; Daniell, G J

    2001-02-01

    Adaptive mesh refinement techniques can be applied to increase the efficiency of electrical impedance tomography reconstruction algorithms by reducing computational and storage cost as well as providing problem-dependent solution structures. A self-adaptive refinement algorithm based on an a posteriori error estimate has been developed and its results are shown in comparison with uniform mesh refinement for a simple head model.

  7. Parallel, Gradient-Based Anisotropic Mesh Adaptation for Re-entry Vehicle Configurations

    NASA Technical Reports Server (NTRS)

    Bibb, Karen L.; Gnoffo, Peter A.; Park, Michael A.; Jones, William T.

    2006-01-01

    Two gradient-based adaptation methodologies have been implemented into the Fun3d refine GridEx infrastructure. A spring-analogy adaptation, which provides for nodal movement to cluster mesh nodes in the vicinity of strong shocks, has been extended for general use within Fun3d and is demonstrated for a 70-degree sphere-cone at Mach 2. A more general feature-based adaptation metric has been developed for use with the adaptation mechanics available in Fun3d, and is applicable to any unstructured, tetrahedral flow solver. The basic functionality of general adaptation is explored through a case of flow over the forebody of a 70-degree sphere-cone at Mach 6. A practical application of Mach 10 flow over an Apollo capsule, computed with the Felisa flow solver, is given to compare adaptive mesh refinement with uniform mesh refinement. The examples in the paper demonstrate that the gradient-based adaptation capability as implemented can improve solution quality.

  8. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  9. DISCO: A 3D Moving-mesh Magnetohydrodynamics Code Designed for the Study of Astrophysical Disks

    NASA Astrophysics Data System (ADS)

    Duffell, Paul C.

    2016-09-01

    This work presents the publicly available moving-mesh magnetohydrodynamics (MHD) code DISCO. DISCO is efficient and accurate at evolving orbital fluid motion in two and three dimensions, especially at high Mach numbers. DISCO employs a moving-mesh approach utilizing a dynamic cylindrical mesh that can shear azimuthally to follow the orbital motion of the gas. The moving mesh removes diffusive advection errors and allows for longer time-steps than a static grid. MHD is implemented in DISCO using an HLLD Riemann solver and a novel constrained transport (CT) scheme that is compatible with the mesh motion. DISCO is tested against a wide variety of problems, which are designed to test its stability, accuracy, and scalability. In addition, several MHD tests are performed which demonstrate the accuracy and stability of the new CT approach, including two tests of the magneto-rotational instability, one testing the linear growth rate and the other following the instability into the fully turbulent regime.

  10. Phase-Accuracy Comparisons and Improved Far-Field Estimates for 3-D Edge Elements on Tetrahedral Meshes

    NASA Astrophysics Data System (ADS)

    Monk, Peter; Parrott, Kevin

    2001-07-01

    Edge-element methods have proved very effective for 3-D electromagnetic computations and are widely used on unstructured meshes. However, the accuracy of standard edge elements can be criticised because of their low order. This paper analyses discrete dispersion relations together with numerical propagation accuracy to determine the effect of tetrahedral shape on the phase accuracy of standard 3-D edge-element approximations in comparison to other methods. Scattering computations for the sphere obtained with edge elements are compared with results obtained with vertex elements, and a new formulation of the far-field integral approximations for use with edge elements is shown to give improved cross sections over conventional formulations.

  11. Adaptive surface meshing and multiresolution terrain depiction for SVS

    NASA Astrophysics Data System (ADS)

    Wiesemann, Thorsten; Schiefele, Jens; Kubbat, Wolfgang

    2001-08-01

    Many of today's and tomorrow's aviation applications demand accurate and reliable digital terrain elevation databases. In particular, future Vertical Cut Displays or 3D Synthetic Vision Systems (SVS) require accurate, high-resolution data to offer a reliable terrain depiction. On the other hand, optimized or reduced terrain models are necessary to ensure real-time rendering and computing performance. In this paper, a new method for adaptive terrain meshing and depiction for SVS is presented. The initial data set is decomposed using a wavelet transform. By examining the wavelet coefficients, an adaptive surface approximation for various levels of detail is determined. Additionally, the dyadic scaling of the wavelet transform is used to build a hierarchical quad-tree representation of the terrain data. This representation enables fast interactive computations and real-time rendering. The proposed terrain representation is integrated into a standard navigation display. Due to the multi-resolution data organization, the terrain depiction (e.g., its resolution) adapts to the selected zoom level or flight phase. Moreover, the wavelet decomposition helps to define local regions of interest: the depicted terrain has finer resolution near the current airplane position and becomes coarser with increasing distance from the aircraft. In addition, flight-critical regions can be depicted at higher resolution.
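
    A minimal sketch of the wavelet-based level-of-detail idea, assuming the PyWavelets package (the real system keeps the coefficients in a hierarchical quad-tree rather than reconstructing a single grid); names and the keep fraction are illustrative:

        # Decompose a terrain height grid, keep only the largest detail coefficients,
        # and reconstruct a reduced level of detail.
        import numpy as np
        import pywt

        def terrain_lod(height, wavelet="haar", levels=4, keep_fraction=0.05):
            coeffs = pywt.wavedec2(height, wavelet, level=levels)
            approx, details = coeffs[0], coeffs[1:]
            mags = np.concatenate([np.abs(d).ravel() for lvl in details for d in lvl])
            thresh = np.quantile(mags, 1.0 - keep_fraction)
            details = [tuple(pywt.threshold(d, thresh, mode="hard") for d in lvl)
                       for lvl in details]
            return pywt.waverec2([approx] + details, wavelet)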

  12. Recent Enhancements To The FUN3D Flow Solver For Moving-Mesh Applications

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T,; Thomas, James L.

    2009-01-01

    An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids has been extended to handle general mesh movement involving rigid, deforming, and overset meshes. Mesh deformation is achieved through analogy to elastic media by solving the linear elasticity equations. A general method for specifying the motion of moving bodies within the mesh has been implemented that allows for inherited motion through parent-child relationships, enabling simulations involving multiple moving bodies. Several example calculations are shown to illustrate the range of potential applications. For problems in which an isolated body is rotating with a fixed rate, a noninertial reference-frame formulation is available. An example calculation for a tilt-wing rotor is used to demonstrate that the time-dependent moving grid and noninertial formulations produce the same results in the limit of zero time-step size.

  13. Anisotropic Mesh Adaptivity for Turbulent Flows with Boundary Layers

    NASA Astrophysics Data System (ADS)

    Chitale, Kedar C.

    Turbulent flows are found everywhere in nature and are studied, analyzed and simulated using various experimental and numerical tools. For computational analysis, a variety of turbulence models are available, and the accuracy of these models in capturing the phenomenon depends largely on the mesh spacings, especially near the walls, in the boundary layer region. Special semi-structured meshes called "boundary layer meshes" are widely used in the CFD community in simulations of turbulent flows because of their graded and orthogonal layered structure. They provide an efficient way to achieve very fine and highly anisotropic mesh spacings without introducing poorly shaped elements. Since the mesh spacings required to accurately resolve the flow are usually not known a priori, an adaptive approach based on a posteriori error indicators is used to arrive at an appropriate mesh. In this study, we apply adaptive meshing techniques to turbulent flows with a focus on boundary layers. We construct a framework to calculate the critical wall-normal mesh spacings inside the boundary layers based on the flow physics and knowledge of the turbulence model. This approach is combined with numerical error indicators to adapt the entire flow region. We illustrate the effectiveness of this hybrid approach by applying it to three aerodynamic flows and demonstrating its superior performance in capturing the flow structures in detail. We also demonstrate the capabilities of the current developments in parallel boundary layer mesh adaptation by applying them to two internal flow problems, and we study the application of adaptive boundary layer meshes to complex geometries such as multi-element wings, highlighting the advantage of such techniques for superior wake and tip region resolution. Finally, we outline future directions for making adaptive meshing techniques useful for large-scale flow computations.
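
    A common way to seed the kind of wall-normal spacing calculation described above is a y+ estimate from a flat-plate skin-friction correlation. The sketch below is a stand-in for the paper's model-aware framework; the correlation, fluid properties, and target y+ are illustrative assumptions.

```python
# Minimal sketch of estimating the wall-normal first-cell height for a
# boundary-layer mesh from a target y+, using a flat-plate skin-friction
# correlation (an assumption; the thesis derives spacings from the
# turbulence model itself).
import math

def first_cell_height(u_inf, length, nu, rho=1.225, y_plus_target=1.0):
    re_x = u_inf * length / nu                    # Reynolds number
    cf = 0.026 / re_x**(1.0 / 7.0)                # skin-friction estimate
    tau_w = 0.5 * cf * rho * u_inf**2             # wall shear stress
    u_tau = math.sqrt(tau_w / rho)                # friction velocity
    return y_plus_target * nu / u_tau             # y1 = y+ * nu / u_tau

print(f"first cell height: {first_cell_height(50.0, 1.0, 1.5e-5):.3e} m")
```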

  14. Procedure for Adapting Direct Simulation Monte Carlo Meshes

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.

    1992-01-01

    A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.
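
    One widely used rule of thumb in DSMC mesh adaptation is to tie the local cell size to a fraction of the mean free path. The sketch below illustrates that idea with the hard-sphere formula; it is not the G2 adaptation formula from the paper, and the molecular diameter and number density are illustrative.

```python
# Minimal sketch of a cell-size target for DSMC mesh adaptation: local cell
# size tied to a fraction of the hard-sphere mean free path (illustrative;
# the adaptation formulas in the paper are more elaborate).
import math

def mean_free_path(number_density, diameter=4.17e-10):
    """Hard-sphere mean free path, lambda = 1 / (sqrt(2) * pi * d^2 * n)."""
    return 1.0 / (math.sqrt(2.0) * math.pi * diameter**2 * number_density)

def target_cell_size(number_density, fraction=1.0 / 3.0):
    return fraction * mean_free_path(number_density)

n = 1.0e20                       # molecules per m^3 (rarefied flow)
print(f"lambda = {mean_free_path(n):.3e} m, "
      f"target cell = {target_cell_size(n):.3e} m")
```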

  15. Adaptive mesh refinement for stochastic reaction-diffusion processes

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2011-01-01

    We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.

  16. Numerical modeling of seismic waves using frequency-adaptive meshes

    NASA Astrophysics Data System (ADS)

    Hu, Jinyin; Jia, Xiaofeng

    2016-08-01

    An improved modeling algorithm using frequency-adaptive meshes is applied to meet the computational requirements of all seismic frequency components. It automatically adopts coarse meshes for low-frequency computations and fine meshes for high-frequency computations. The grid intervals are adaptively calculated based on a smooth, inversely proportional function of grid size with respect to frequency. In regular grid-based methods, a uniform or non-uniform mesh is used for the frequency-domain wave propagators and is fixed for all frequencies. A mesh that is too coarse results in inaccurate high-frequency wavefields and unacceptable numerical dispersion; on the other hand, an overly fine mesh may cause storage and computational overburdens as well as invalid propagation angles of low-frequency wavefields. Experiments on the Padé generalized screen propagator indicate that the adaptive mesh effectively overcomes these drawbacks of regular fixed-mesh methods, thus accurately computing the wavefield and its propagation angle in a wide frequency band. Several synthetic examples also demonstrate its feasibility for seismic modeling and migration.
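
    The grid-interval rule described above (spacing inversely proportional to frequency) can be sketched as follows; the points-per-wavelength value and the clamp at a maximum spacing are assumptions, not the paper's exact function.

```python
# Minimal sketch of a frequency-adaptive grid interval: spacing inversely
# proportional to frequency (points per wavelength), clamped to a maximum
# value for the lowest frequencies (the clamp is an assumption).
def grid_interval(freq_hz, v_min, points_per_wavelength=4.0, dx_max=50.0):
    dx = v_min / (freq_hz * points_per_wavelength)   # lambda_min / ppw
    return min(dx, dx_max)

v_min = 1500.0                                       # slowest velocity, m/s
for f in (2.0, 10.0, 40.0):                          # seismic frequencies, Hz
    print(f"f = {f:5.1f} Hz -> dx = {grid_interval(f, v_min):6.2f} m")
```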

  17. Adaptive-mesh algorithms for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Roe, Philip L.; Quirk, James

    1993-01-01

    The basic goal of adaptive-mesh algorithms is to distribute computational resources wisely by increasing the resolution of 'important' regions of the flow and decreasing the resolution of regions that are less important. While this goal is worthwhile, implementing schemes that have this degree of sophistication remains more of an art than a science. In this paper, the basic pieces of adaptive-mesh algorithms are described and some of the possible ways to implement them are discussed and compared. These basic pieces are the data structure to be used, the generation of an initial mesh, the criterion used to adapt the mesh to the solution, and the flow-solver algorithm on the resulting mesh. Each of these is discussed, with particular emphasis on methods suitable for the computation of compressible flows.
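
    Of the basic pieces listed above, the adaptation criterion is the easiest to sketch. The fragment below flags cells of a 1-D field for refinement or coarsening from the magnitude of the solution gradient; the thresholds and test field are illustrative.

```python
# Minimal sketch of a gradient-based adaptation criterion that flags cells
# for refinement or coarsening (thresholds and field are illustrative).
import numpy as np

def flag_cells(field, dx, refine_frac=0.8, coarsen_frac=0.1):
    grad = np.abs(np.gradient(field, dx))            # 1-D cell-centred field
    hi = refine_frac * grad.max()
    lo = coarsen_frac * grad.max()
    return grad > hi, grad < lo                       # (refine, coarsen) masks

x = np.linspace(0.0, 1.0, 200)
rho = np.tanh((x - 0.5) / 0.02)                       # shock-like profile
refine, coarsen = flag_cells(rho, x[1] - x[0])
print(f"refine {refine.sum()} cells, coarsen {coarsen.sum()} cells")
```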

  18. Large-scale Parallel Unstructured Mesh Computations for 3D High-lift Analysis

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Pirzadeh, S.

    1999-01-01

    A complete "geometry to drag-polar" analysis capability for the three-dimensional high-lift configurations is described. The approach is based on the use of unstructured meshes in order to enable rapid turnaround for complicated geometries that arise in high-lift configurations. Special attention is devoted to creating a capability for enabling analyses on highly resolved grids. Unstructured meshes of several million vertices are initially generated on a work-station, and subsequently refined on a supercomputer. The flow is solved on these refined meshes on large parallel computers using an unstructured agglomeration multigrid algorithm. Good prediction of lift and drag throughout the range of incidences is demonstrated on a transport take-off configuration using up to 24.7 million grid points. The feasibility of using this approach in a production environment on existing parallel machines is demonstrated, as well as the scalability of the solver on machines using up to 1450 processors.

  19. Serial and parallel dynamic adaptation of general hybrid meshes

    NASA Astrophysics Data System (ADS)

    Kavouklis, Christos

    The Navier-Stokes equations are a standard mathematical representation of viscous fluid flow. Their numerical solution in three dimensions remains a computationally intensive and challenging task, despite recent advances in computer speed and memory. A strategy to increase accuracy of Navier-Stokes simulations, while maintaining computing resources to a minimum, is local refinement of the associated computational mesh in regions of large solution gradients and coarsening in regions where the solution does not vary appreciably. In this work we consider adaptation of general hybrid meshes for Computational Fluid Dynamics (CFD) applications. Hybrid meshes are composed of four types of elements; hexahedra, prisms, pyramids and tetrahedra, and have been proven a promising technology in accurately resolving fluid flow for complex geometries. The first part of this dissertation is concerned with the design and implementation of a serial scheme for the adaptation of general three dimensional hybrid meshes. We have defined 29 refinement types, for all four kinds of elements. The core of the present adaptation scheme is an iterative algorithm that flags mesh edges for refinement, so that the adapted mesh is conformal. Of primary importance is considered the design of a suitable dynamic data structure that facilitates refinement and coarsening operations and furthermore minimizes memory requirements. A special dynamic list is defined for mesh elements, in contrast with the usual tree structures. It contains only elements of the current adaptation step and minimal information that is utilized to reconstruct parent elements when the mesh is coarsened. In the second part of this work, a new parallel dynamic mesh adaptation and load balancing algorithm for general hybrid meshes is presented. Partitioning of a hybrid mesh reduces to partitioning of the corresponding dual graph. Communication among processors is based on the faces of the interpartition boundary. The distributed
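
    The iterative edge-flagging loop described above can be sketched as a closure iteration: flags propagate between elements until the refinement pattern matches an admissible template, keeping the adapted mesh conformal. The rule below is a deliberately simplified stand-in for the 29 hybrid refinement types.

```python
# Minimal sketch of an edge-flag closure loop for conformal refinement:
# a simple "all-or-few-edges" rule stands in for the full set of hybrid
# refinement templates described in the dissertation.
def close_edge_flags(elements, flagged_edges, max_partial=1):
    """elements: list of edge-index tuples; flagged_edges: set of edge ids."""
    changed = True
    while changed:
        changed = False
        for elem in elements:
            n_flagged = sum(e in flagged_edges for e in elem)
            # If too many edges are flagged for a partial template,
            # promote the element to full refinement (flag every edge).
            if max_partial < n_flagged < len(elem):
                flagged_edges.update(elem)
                changed = True
    return flagged_edges

tets = [(0, 1, 2, 3, 4, 5), (3, 4, 5, 6, 7, 8)]       # edge ids per element
print(sorted(close_edge_flags(tets, {0, 1, 3})))
```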

  20. Brief communication: Impact of mesh resolution for MISMIP and MISMIP3d experiments using Elmer/Ice

    NASA Astrophysics Data System (ADS)

    Gagliardini, O.; Brondex, J.; Gillet-Chaulet, F.; Tavard, L.; Peyaud, V.; Durand, G.

    2016-02-01

    The dynamical contribution of marine ice sheets to sea level rise is largely controlled by grounding line (GL) dynamics. Two marine ice sheet model intercomparison exercises, namely MISMIP and MISMIP3d, have been proposed to the community to test and compare the ability of models to capture the GL dynamics. Both exercises are known to present a discontinuity of the friction at the GL, which is believed to increase the model sensitivity to mesh resolution. Here, using Elmer/Ice, the only Stokes model which completed both intercomparisons, the sensitivity to the mesh resolution is studied from an extended MISMIP experiment in which the friction continuously decreases over a transition distance and equals zero at the GL. Using this MISMIP-like setup, it is shown that the sensitivity to the mesh resolution is not improved for a vanishing friction at the GL. For the original MISMIP experiment, i.e. for a discontinuous friction at the GL, we further show that the results are moreover very sensitive to the way the friction is interpolated in the close vicinity of the GL. In the light of these new insights, and thanks to increased computing resources, new results for the MISMIP3d experiments obtained for higher resolutions than previously published are made available for future comparisons as the Supplement.

  1. Drag Prediction for the NASA CRM Wing-Body-Tail Using CFL3D and OVERFLOW on an Overset Mesh

    NASA Technical Reports Server (NTRS)

    Sclafani, Anthony J.; DeHaan, Mark A.; Vassberg, John C.; Rumsey, Christopher L.; Pulliam, Thomas H.

    2010-01-01

    In response to the fourth AIAA CFD Drag Prediction Workshop (DPW-IV), the NASA Common Research Model (CRM) wing-body and wing-body-tail configurations are analyzed using the Reynolds-averaged Navier-Stokes (RANS) flow solvers CFL3D and OVERFLOW. Two families of structured, overset grids are built for DPW-IV. Grid Family 1 (GF1) consists of a coarse (7.2 million), medium (16.9 million), fine (56.5 million), and extra-fine (189.4 million) mesh. Grid Family 2 (GF2) is an extension of the first and includes a superfine (714.2 million) and an ultra-fine (2.4 billion) mesh. The medium grid anchors both families with an established build process for accurate cruise drag prediction studies. This base mesh is coarsened and enhanced to form a set of parametrically equivalent grids that increase in size by a factor of roughly 3.4 from one level to the next denser level. Both CFL3D and OVERFLOW are run on GF1 using a consistent numerical approach. Additional OVERFLOW runs are made to study effects of differencing scheme and turbulence model on GF1 and to obtain results for GF2. All CFD results are post-processed using Richardson extrapolation, and approximate grid-converged values of drag are compared. The medium grid is also used to compute a trimmed drag polar for both codes.
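
    A minimal sketch of the Richardson extrapolation step mentioned above is given below; the drag values and refinement ratio are illustrative, not DPW-IV data.

```python
# Minimal sketch of Richardson extrapolation applied to drag values from
# three systematically refined grids (values and refinement ratio are
# illustrative, not DPW-IV results).
import math

def richardson(f_fine, f_med, f_coarse, r):
    """Observed order p and extrapolated value from three grid levels."""
    p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)
    return p, f_exact

# Drag counts on coarse/medium/fine grids, grid ratio ~ 3.4^(1/3) per level.
p, cd_inf = richardson(f_fine=270.1, f_med=271.8, f_coarse=275.0,
                       r=3.4 ** (1.0 / 3.0))
print(f"observed order = {p:.2f}, extrapolated drag = {cd_inf:.1f} counts")
```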

  2. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.

  3. 3D active shape models of human brain structures: application to patient-specific mesh generation

    NASA Astrophysics Data System (ADS)

    Ravikumar, Nishant; Castro-Mateos, Isaac; Pozo, Jose M.; Frangi, Alejandro F.; Taylor, Zeike A.

    2015-03-01

    The use of biomechanics-based numerical simulations has attracted growing interest in recent years for computer-aided diagnosis and treatment planning. With this in mind, a method for automatic mesh generation of brain structures of interest, using statistical models of shape (SSM) and appearance (SAM), for personalised computational modelling is presented. SSMs are constructed as point distribution models (PDMs) while SAMs are trained using intensity profiles sampled from a training set of T1-weighted magnetic resonance images. The brain structures of interest are the cortical surface (cerebrum, cerebellum & brainstem), the lateral ventricles and the falx cerebri membrane. Two methods for establishing correspondences across the training set of shapes are investigated and compared (based on SSM quality): the Coherent Point Drift (CPD) point-set registration method and a B-spline mesh-to-mesh registration method. The MNI-305 (Montreal Neurological Institute) average brain atlas is used to generate the template mesh, which is deformed and registered to each training case to establish correspondence over the training set of shapes. T1-weighted MR images of 18 healthy patients form the training set used to generate the SSM and SAM. Both model training and model fitting are performed over multiple brain structures simultaneously. Compactness and generalisation errors of the B-spline-SSM and CPD-SSM are evaluated with leave-one-out cross validation and used to quantitatively compare the SSMs. The mesh-based SSM is found to generalise better and to be more compact than the CPD-based SSM. The quality of the best-fit model instances from the trained SSMs on test cases is evaluated using the Hausdorff distance (HD) and mean absolute surface distance (MASD) metrics.
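
    The two surface-distance metrics used above, Hausdorff distance and mean absolute surface distance, can be sketched on point samples of two surfaces as follows (a simplification: proper mesh comparisons use point-to-triangle distances).

```python
# Minimal sketch of Hausdorff distance (HD) and mean absolute surface
# distance (MASD) evaluated on point samples of two meshes.
import numpy as np

def directed_distances(a, b):
    """For each point in a, distance to the nearest point in b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1)

def hausdorff(a, b):
    return max(directed_distances(a, b).max(), directed_distances(b, a).max())

def mean_abs_surface_distance(a, b):
    return 0.5 * (directed_distances(a, b).mean()
                  + directed_distances(b, a).mean())

rng = np.random.default_rng(0)
fit = rng.normal(size=(500, 3))                        # fitted surface sample
truth = fit + rng.normal(scale=0.05, size=fit.shape)   # slightly perturbed
print(f"HD = {hausdorff(fit, truth):.3f}, "
      f"MASD = {mean_abs_surface_distance(fit, truth):.3f}")
```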

  4. Meshing Preprocessor for the Mesoscopic 3D Finite Element Simulation of 2D and Interlock Fabric Deformation

    NASA Astrophysics Data System (ADS)

    Wendling, A.; Daniel, J. L.; Hivet, G.; Vidal-Sallé, E.; Boisse, P.

    2015-12-01

    Numerical simulation is a powerful tool to predict the mechanical behavior and the feasibility of composite parts. Among the available numerical approaches, as far as woven reinforced composites are concerned, 3D finite element simulation at the mesoscopic scale leads to a good compromise between realism and complexity. At this scale, the fibrous reinforcement is modeled by an interlacement of yarns assumed to be homogeneous, which have to be accurately represented. Among the numerous issues raised by these simulations, the first consists in providing a representative meshed geometrical model of the unit cell at the mesoscopic scale. The second consists in enabling fast data input in the finite element software (contact definitions, boundary conditions, element reorientation, etc.) so as to obtain results within reasonable time. Based on a previously developed parameterized 3D CAD modeling tool for unit cells of dry fabrics, this paper presents an efficient strategy that permits automated meshing of the models with 3D hexahedral elements and accelerates the simulation data input by several orders of magnitude. Finally, the overall modeling strategy is illustrated by examples of finite element simulation of the mechanical behavior of fabrics.

  5. An Efficient Dynamically Adaptive Mesh for Potentially Singular Solutions

    NASA Astrophysics Data System (ADS)

    Ceniceros, Hector D.; Hou, Thomas Y.

    2001-09-01

    We develop an efficient dynamically adaptive mesh generator for time-dependent problems in two or more dimensions. The mesh generator is motivated by the variational approach and is based on solving a new set of nonlinear elliptic PDEs for the mesh map. When coupled to a physical problem, the mesh map evolves with the underlying solution and maintains high adaptivity as the solution develops complicated structures and even singular behavior. The overall mesh strategy is simple to implement, avoids interpolation, and can be easily incorporated into a broad range of applications. The efficacy of the mesh is first demonstrated by two examples of blowing-up solutions to the 2-D semilinear heat equation. These examples show that the mesh can follow with high adaptivity a finite-time singularity process. The focus of applications presented here is however the baroclinic generation of vorticity in a strongly layered 2-D Boussinesq fluid, a challenging problem. The moving mesh follows effectively the flow resolving both its global features and the almost singular shear layers developed dynamically. The numerical results show the fast collapse to small scales and an exponential vorticity growth.
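
    In one dimension, the variational moving-mesh idea reduces to equidistributing a monitor function over the cells. The sketch below (an illustration, not the paper's 2-D elliptic mesh-map solver) clusters nodes around a sharp layer using an arc-length monitor.

```python
# Minimal sketch of 1-D equidistribution: place nodes so each cell carries
# an equal share of an arc-length-type monitor function.
import numpy as np

def equidistribute(x, u, n_nodes):
    monitor = np.sqrt(1.0 + np.gradient(u, x) ** 2)        # w = sqrt(1+u_x^2)
    cum = np.concatenate(([0.0],
                          np.cumsum(0.5 * (monitor[1:] + monitor[:-1])
                                    * np.diff(x))))
    targets = np.linspace(0.0, cum[-1], n_nodes)            # equal shares
    return np.interp(targets, cum, x)                       # new node locations

x = np.linspace(-1.0, 1.0, 400)
u = np.tanh(x / 0.01)                                       # near-singular layer
x_new = equidistribute(x, u, 41)
print("smallest cell:", np.diff(x_new).min(),
      "largest cell:", np.diff(x_new).max())
```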

  6. Parallel adaptive mesh refinement for electronic structure calculations

    SciTech Connect

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1996-12-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradients with multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.

  7. Turbulent flow calculations using unstructured and adaptive meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1990-01-01

    A method of efficiently computing turbulent compressible flow over complex two dimensional configurations is presented. The method makes use of fully unstructured meshes throughout the entire flow-field, thus enabling the treatment of arbitrarily complex geometries and the use of adaptive meshing techniques throughout both viscous and inviscid regions of flow-field. Mesh generation is based on a locally mapped Delaunay technique in order to generate unstructured meshes with highly-stretched elements in the viscous regions. The flow equations are discretized using a finite element Navier-Stokes solver, and rapid convergence to steady-state is achieved using an unstructured multigrid algorithm. Turbulence modeling is performed using an inexpensive algebraic model, implemented for use on unstructured and adaptive meshes. Compressible turbulent flow solutions about multiple-element airfoil geometries are computed and compared with experimental data.

  8. Arbitrary-level hanging nodes for adaptive hp-FEM approximations in 3D

    SciTech Connect

    Pavel Kus; Pavel Solin; David Andrs

    2014-11-01

    In this paper we discuss constrained approximation with arbitrary-level hanging nodes in adaptive higher-order finite element methods (hp-FEM) for three-dimensional problems. This technique enables using highly irregular meshes, and it greatly simplifies the design of adaptive algorithms as it prevents refinements from propagating recursively through the finite element mesh. The technique makes it possible to design efficient adaptive algorithms for purely hexahedral meshes. We present a detailed mathematical description of the method and illustrate it with numerical examples.

  9. Adaptive Mesh Enrichment for the Poisson-Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Dyshlovenko, Pavel

    2001-09-01

    An adaptive mesh enrichment procedure for a finite-element solution of the two-dimensional Poisson-Boltzmann equation is described. The mesh adaptation is performed by subdividing the cells using information obtained in the previous step of the solution and next rearranging the mesh to be a Delaunay triangulation. The procedure allows the gradual improvement of the quality of the solution and adjustment of the geometry of the problem. The performance of the proposed approach is illustrated by applying it to the problem of two identical colloidal particles in a symmetric electrolyte.

  10. Compressible Magma/Mantle Dynamics: 3d, Adaptive Simulations in ASPECT

    NASA Astrophysics Data System (ADS)

    Dannberg, Juliane; Heister, Timo

    2016-09-01

    Melt generation and migration are an important link between surface processes and the thermal and chemical evolution of the Earth's interior. However, their vastly different time scales make it difficult to study mantle convection and melt migration in a unified framework, especially for three-dimensional, global models. And although experiments suggest an increase in melt volume of up to 20% from the depth of melt generation to the surface, previous computations have neglected the individual compressibilities of the solid and the fluid phase. Here, we describe our extension of the finite element mantle convection code ASPECT that adds melt generation and migration. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. Applying adaptive mesh refinement to this type of problem is particularly advantageous, as the resolution can be increased in areas where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. Together with a high-performance, massively parallel implementation, this allows for high resolution, 3d, compressible, global mantle convection simulations coupled with melt migration. We evaluate the functionality and potential of this method using a series of benchmarks and model setups, compare results of the compressible and incompressible formulation, and show the effectiveness of adaptive mesh refinement when applied to melt migration. Our model of magma dynamics provides a framework for modelling processes on different scales and investigating links between processes occurring in the deep mantle and melt generation and migration. This approach could prove particularly useful when applied to modelling the generation of komatiites or other melts originating at greater depths. The implementation is available in the Open Source ASPECT repository.

  11. Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes

    SciTech Connect

    Parsons, I D; Solberg, J M

    2006-02-03

    This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.

  12. Parallel Implementation of an Adaptive Scheme for 3D Unstructured Grids on the SP2

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Strawn, Roger C.

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10% of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  13. Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2

    NASA Technical Reports Server (NTRS)

    Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  14. A Case Study of Communication Optimizations on 3D Mesh Interconnects

    NASA Astrophysics Data System (ADS)

    Bhatelé, Abhinav; Bohm, Eric; Kalé, Laxmikant V.

    Optimal network performance is critical to efficient parallel scaling for communication-bound applications on large machines. With wormhole routing, no-load latencies do not increase significantly with the number of hops traveled. Yet we and others have recently shown that, in the presence of contention, message latencies can grow substantially. Hence, task mapping strategies should take the topology of the machine into account on large machines. In this paper, we present topology-aware mapping as a technique to optimize communication on 3-dimensional mesh interconnects and hence improve performance.
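
    The hop-count metric at the heart of topology-aware mapping can be sketched as follows; the torus dimensions, task placement, and traffic volumes are illustrative.

```python
# Minimal sketch of the hop-count metric used in topology-aware mapping on
# a 3-D torus: heavily communicating task pairs should be placed on nodes
# with a small hop distance (machine dimensions are illustrative).
def torus_hops(a, b, dims):
    """Hop count between coordinates a and b on a wrap-around 3-D mesh."""
    return sum(min(abs(x - y), d - abs(x - y)) for x, y, d in zip(a, b, dims))

def weighted_hop_bytes(mapping, traffic, dims):
    """Sum over task pairs of (bytes exchanged) x (hops travelled)."""
    return sum(vol * torus_hops(mapping[i], mapping[j], dims)
               for (i, j), vol in traffic.items())

dims = (8, 8, 8)
mapping = {0: (0, 0, 0), 1: (0, 0, 1), 2: (7, 7, 7)}
traffic = {(0, 1): 10_000, (0, 2): 10_000}
print("hop-bytes:", weighted_hop_bytes(mapping, traffic, dims))
```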

  15. Parallel adaptive mesh refinement techniques for plasticity problems

    NASA Technical Reports Server (NTRS)

    Barry, W. J.; Jones, M. T.; Plassmann, P. E.

    1997-01-01

    The accurate modeling of the nonlinear properties of materials can be computationally expensive. Parallel computing offers an attractive way for solving such problems; however, the efficient use of these systems requires the vertical integration of a number of very different software components. In this work, we explore the solution of two- and three-dimensional, small-strain plasticity problems. We consider a finite-element formulation of the problem with adaptive refinement of an unstructured mesh to accurately model plastic transition zones. We present a framework for the parallel implementation of such complex algorithms. This framework, using libraries from the SUMAA3d project, allows a user to build a parallel finite-element application without writing any parallel code. To demonstrate the effectiveness of this approach on widely varying parallel architectures, we present experimental results from an IBM SP parallel computer and an ATM-connected network of Sun UltraSparc workstations. The results detail the parallel performance of the computational phases of the application as the material is incrementally loaded.

  16. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. Unfortunately, an efficient parallel implementation is difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. To address this problem, we have developed PLUM, an automatic portable framework for performing adaptive large-scale numerical computations in a message-passing environment. First, we present an efficient parallel implementation of a tetrahedral mesh adaption scheme. Extremely promising parallel performance is achieved for various refinement and coarsening strategies on a realistic-sized domain. Next we describe PLUM, a novel method for dynamically balancing the processor workloads in adaptive grid computations. This research includes interfacing the parallel mesh adaption procedure based on actual flow solutions to a data remapping module, and incorporating an efficient parallel mesh repartitioner. A significant runtime improvement is achieved by observing that data movement for a refinement step should be performed after the edge-marking phase but before the actual subdivision. We also present optimal and heuristic remapping cost metrics that can accurately predict the total overhead for data redistribution. Several experiments are performed to verify the effectiveness of PLUM on sequences of dynamically adapted unstructured grids. Portability is demonstrated by presenting results on the two vastly different architectures of the SP2 and the Origin2000. Additionally, we evaluate the performance of five state-of-the-art partitioning algorithms that can be used within PLUM. It is shown that for certain classes of unsteady adaption, globally repartitioning the computational mesh produces higher quality results than diffusive repartitioning schemes. We also demonstrate that a coarse starting mesh produces high quality load balancing, at

  17. Adaptive upscaling with the dual mesh method

    SciTech Connect

    Guerillot, D.; Verdiere, S.

    1997-08-01

    The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance the a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity to consider different average relative permeability values depending on the direction in space. Moreover, these values could be different for the same average saturation. This proves that an a priori upscaling cannot be the answer even in homogeneous cases because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.

  18. Structured Adaptive Mesh Refinement Application Infrastructure

    SciTech Connect

    2010-07-15

    SAMRAI is an object-oriented support library for structured adaptive mesh refinement (SAMR) simulation of computational science problems, modeled by systems of partial differential equations (PDEs). SAMRAI is developed and maintained in the Center for Applied Scientific Computing (CASC) under ASCI ITS and PSE support. SAMRAI is used in a variety of application research efforts at LLNL and in academia. These applications are developed in collaboration with SAMRAI development team members.

  19. Efficient simulation of three-dimensional anisotropic cardiac tissue using an adaptive mesh refinement method.

    PubMed

    Cherry, Elizabeth M; Greenside, Henry S; Henriquez, Craig S

    2003-09-01

    A recently developed space-time adaptive mesh refinement algorithm (AMRA) for simulating isotropic one- and two-dimensional excitable media is generalized to simulate three-dimensional anisotropic media. The accuracy and efficiency of the algorithm are investigated for anisotropic and inhomogeneous 2D and 3D domains using the Luo-Rudy 1 (LR1) and FitzHugh-Nagumo models. For a propagating wave in a 3D slab of tissue with LR1 membrane kinetics and rotational anisotropy comparable to that found in the human heart, factors of 50 and 30 are found, respectively, for the speedup and for the savings in memory compared to an algorithm using a uniform space-time mesh at the finest resolution of the AMRA method. For anisotropic 2D and 3D media, we find no reduction in accuracy compared to a uniform space-time mesh. These results suggest that the AMRA will be able to simulate the 3D electrical dynamics of canine ventricles quantitatively for 1 s using 32 1-GHz Alpha processors in approximately 9 h.

  20. Adaptive mesh generation for viscous flows using Delaunay triangulation

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1990-01-01

    A method for generating an unstructured triangular mesh in two dimensions, suitable for computing high Reynolds number flows over arbitrary configurations is presented. The method is based on a Delaunay triangulation, which is performed in a locally stretched space, in order to obtain very high aspect ratio triangles in the boundary layer and the wake regions. It is shown how the method can be coupled with an unstructured Navier-Stokes solver to produce a solution adaptive mesh generation procedure for viscous flows.

  1. Adaptive mesh generation for viscous flows using Delaunay triangulation

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1988-01-01

    A method for generating an unstructured triangular mesh in two dimensions, suitable for computing high Reynolds number flows over arbitrary configurations is presented. The method is based on a Delaunay triangulation, which is performed in a locally stretched space, in order to obtain very high aspect ratio triangles in the boundary layer and the wake regions. It is shown how the method can be coupled with an unstructured Navier-Stokes solver to produce a solution adaptive mesh generation procedure for viscous flows.

  2. Parallel adaptation of general three-dimensional hybrid meshes

    SciTech Connect

    Kavouklis, Christos Kallinderis, Yannis

    2010-05-01

    A new parallel dynamic mesh adaptation and load balancing algorithm for general hybrid grids has been developed. The meshes considered in this work are composed of four kinds of elements: tetrahedra, prisms, hexahedra and pyramids, which poses a challenge to parallel mesh adaptation. The additional complexity imposed by the presence of multiple element types especially affects data migration, updates of local data structures and interpartition data structures. Efficient partition of hybrid meshes has been accomplished by transforming them to suitable graphs and using serial graph partitioning algorithms. Communication among processors is based on the faces of the interpartition boundary and the termination detection algorithm of Dijkstra is employed to ensure proper flagging of edges for refinement. An inexpensive dynamic load balancing strategy is introduced to redistribute work load among processors after adaptation. In particular, only the initial coarse mesh, with proper weighting, is balanced, which yields savings in computation time and a relatively simple implementation of mesh quality preservation rules, while facilitating coarsening of refined elements. Special algorithms are employed for (i) data migration and dynamic updates of the local data structures, (ii) determination of the resulting interpartition boundary and (iii) identification of the communication pattern of processors. Several representative applications are included to evaluate the method.

  3. Object-oriented philosophy in designing adaptive finite-element package for 3D elliptic differential equations

    NASA Astrophysics Data System (ADS)

    Zhengyong, R.; Jingtian, T.; Changsheng, L.; Xiao, X.

    2007-12-01

    Although adaptive finite-element (AFE) analysis is receiving more and more attention in scientific and engineering fields, its efficient implementation remains a subject of discussion because of its relatively complex procedures. In this paper, we propose a clear C++ framework implementation to show the power of object-oriented philosophy (OOP) in designing such complex adaptive procedures. Using the modular facilities of an OOP language, the whole adaptive system is divided into several separate parts such as mesh generation or refinement, the a posteriori error estimator, the adaptive strategy and the final post-processing. After these separate modules are designed locally, a connected framework for the adaptive procedure is assembled. Based on the general elliptic differential equation, little additional effort is needed within the adaptive framework to carry out practical simulations. To show the favorable properties of the OOP adaptive design, two numerical examples are tested. The first is a 3D direct-current resistivity problem, in which the power of the framework is demonstrated since only small additions are required. In the second, an induced polarization (IP) exploration case, a new adaptive procedure is easily added, which adequately shows the strong extensibility and reusability of the OOP approach. Finally, we believe that, based on this modular adaptive framework implemented with OOP methodology, more advanced adaptive analysis systems will become available in the future.
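
    The modular decomposition described above can be sketched as follows (in Python rather than the paper's C++): small interfaces for the error estimator and adaptive strategy, so a new application only swaps in its own estimator. All class and method names are hypothetical.

```python
# Minimal sketch of a modular adaptive framework: refinement, a posteriori
# error estimation, and an adaptive strategy behind small interfaces.
from abc import ABC, abstractmethod

class ErrorEstimator(ABC):
    @abstractmethod
    def estimate(self, mesh, solution):
        """Return one error indicator per element."""

class GradientJumpEstimator(ErrorEstimator):
    def estimate(self, mesh, solution):
        # Crude 1-D indicator: jump of the solution across each element.
        return [abs(solution[e + 1] - solution[e]) for e in range(len(mesh) - 1)]

class AdaptiveSolver:
    def __init__(self, estimator: ErrorEstimator, tol: float = 0.1):
        self.estimator, self.tol = estimator, tol

    def adapt(self, mesh, solution):
        indicators = self.estimator.estimate(mesh, solution)
        # Refine every element whose indicator exceeds the tolerance.
        return [e for e, eta in enumerate(indicators) if eta > self.tol]

mesh = list(range(6))                        # stand-in 1-D element list
solution = [0.0, 0.1, 0.15, 0.9, 0.95, 1.0]  # jump between elements 2 and 3
print(AdaptiveSolver(GradientJumpEstimator()).adapt(mesh, solution))
```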

  4. MHD simulations on an unstructured mesh

    SciTech Connect

    Strauss, H.R.; Park, W.; Belova, E.; Fu, G.Y.; Longcope, D.W.; Sugiyama, L.E.

    1998-12-31

    Two reasons for using an unstructured computational mesh are adaptivity, and alignment with arbitrarily shaped boundaries. Two codes which use finite element discretization on an unstructured mesh are described. FEM3D solves 2D and 3D RMHD using an adaptive grid. MH3D++, which incorporates methods of FEM3D into the MH3D generalized MHD code, can be used with shaped boundaries, which might be 3D.

  5. Registration of 3D point clouds and meshes: a survey from rigid to nonrigid.

    PubMed

    Tam, Gary K L; Cheng, Zhi-Quan; Lai, Yu-Kun; Langbein, Frank C; Liu, Yonghuai; Marshall, David; Martin, Ralph R; Sun, Xian-Fang; Rosin, Paul L

    2013-07-01

    Three-dimensional surface registration transforms multiple three-dimensional data sets into the same coordinate system so as to align overlapping components of these sets. Recent surveys have covered different aspects of either rigid or nonrigid registration, but seldom discuss them as a whole. Our study serves two purposes: 1) To give a comprehensive survey of both types of registration, focusing on three-dimensional point clouds and meshes and 2) to provide a better understanding of registration from the perspective of data fitting. Registration is closely related to data fitting in which it comprises three core interwoven components: model selection, correspondences and constraints, and optimization. Study of these components 1) provides a basis for comparison of the novelties of different techniques, 2) reveals the similarity of rigid and nonrigid registration in terms of problem representations, and 3) shows how overfitting arises in nonrigid registration and the reasons for increasing interest in intrinsic techniques. We further summarize some practical issues of registration which include initializations and evaluations, and discuss some of our own observations, insights and foreseeable research trends.

  6. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    SciTech Connect

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-06-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic, transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion based preconditioner for scattering dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.

  7. Light sheet adaptive optics microscope for 3D live imaging

    NASA Astrophysics Data System (ADS)

    Bourgenot, C.; Taylor, J. M.; Saunter, C. D.; Girkin, J. M.; Love, G. D.

    2013-02-01

    We report on the incorporation of adaptive optics (AO) into the imaging arm of a selective plane illumination microscope (SPIM). SPIM has recently emerged as an important tool for life science research due to its ability to deliver high-speed, optically sectioned, time-lapse microscope images from deep within selected in vivo samples. SPIM provides a very interesting system for the incorporation of AO as the illumination and imaging paths are decoupled and AO may be useful in both paths. In this paper, we will report the use of AO applied to the imaging path of a SPIM, demonstrating significant improvement in image quality of a live GFP-labeled transgenic zebrafish embryo heart using a modal, wavefront sensorless approach and a heart synchronization method. These experimental results are linked to a computational model showing that significant aberrations are produced by the tube holding the sample in addition to the aberration from the biological sample itself.

  8. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  9. Robust Adaptive 3-D Segmentation of Vessel Laminae From Fluorescence Confocal Microscope Images and Parallel GPU Implementation

    PubMed Central

    Narayanaswamy, Arunachalam; Dwarakapuram, Saritha; Bjornsson, Christopher S.; Cutler, Barbara M.; Shain, William

    2010-01-01

    This paper presents robust 3-D algorithms to segment vasculature that is imaged by labeling laminae, rather than the lumenal volume. The signal is weak, sparse, noisy, nonuniform, low-contrast, and exhibits gaps and spectral artifacts, so adaptive thresholding and Hessian filtering based methods are not effective. The structure deviates from a tubular geometry, so tracing algorithms are not effective. We propose a four-step approach. The first step detects candidate voxels using a robust hypothesis test based on a model that assumes Poisson noise and locally planar geometry. The second step performs an adaptive region growth to extract weakly labeled and fine vessels while rejecting spectral artifacts. The third step constructs an accurate mesh representation using marching tetrahedra, volume-preserving smoothing, and adaptive decimation algorithms, enabling interactive visualization and estimation of features such as statistical confidence, local curvature, local thickness, and local normals. The final step estimates vessel centerlines using a ray casting and vote accumulation algorithm, enabling topological analysis and efficient validation. Our algorithm lends itself to parallel processing, and yielded an 8× speedup on a graphics processor (GPU). On synthetic data, our meshes had average error per face (EPF) values of 0.1–1.6 voxels per mesh face for peak signal-to-noise ratios of 110–28 dB; separately, when the mesh was decimated to less than 1% of its original size, the EPF was less than 1 voxel per face. When validated on real datasets, the average recall and precision values were found to be 94.66% and 94.84%, respectively. PMID:20199906

  10. Kinetic solvers with adaptive mesh in phase space.

    PubMed

    Arslanbekov, Robert R; Kolobov, Vladimir I; Frolova, Anna A

    2013-12-01

    An adaptive mesh in phase space (AMPS) methodology has been developed for solving multidimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a "tree of trees" (ToT) data structure. The r mesh is automatically generated around embedded boundaries, and is dynamically adapted to local solution properties. The v mesh is created on-the-fly in each r cell. Mappings between neighboring v-space trees are implemented for the advection operator in r space. We have developed algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations were used to calculate the discrete Boltzmann collision integral with a dynamically adaptive v mesh: the importance sampling, multipoint projection, and variance reduction methods. We have developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic collisions of hot light particles in a Lorentz gas. Our AMPS technique has been demonstrated for simulations of hypersonic rarefied gas flows, ion and electron kinetics in weakly ionized plasma, radiation and light-particle transport through thin films, and electron streaming in semiconductors. We have shown that AMPS allows minimizing the number of cells in phase space to reduce the computational cost and memory usage for solving challenging kinetic problems.

  11. Adaptive anisotropic meshing for steady convection-dominated problems

    SciTech Connect

    Nguyen, Hoa; Gunzburger, Max; Ju, Lili; Burkardt, John

    2009-01-01

    Obtaining accurate solutions for convection–diffusion equations is challenging due to the presence of layers when convection dominates the diffusion. To solve this problem, we design an adaptive meshing algorithm which optimizes the alignment of anisotropic meshes with the numerical solution. Three main ingredients are used. First, the streamline upwind Petrov–Galerkin method is used to produce a stabilized solution. Second, an adapted metric tensor is computed from the approximate solution. Third, optimized anisotropic meshes are generated from the computed metric tensor by an anisotropic centroidal Voronoi tessellation algorithm. Our algorithm is tested on a variety of two-dimensional examples and the results show that the algorithm is robust in detecting layers and efficient in avoiding non-physical oscillations in the numerical approximation.
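
    The second ingredient above, a metric tensor derived from the solution, is often built from the Hessian. The sketch below shows a common Hessian-based construction with clamped eigenvalues; it illustrates the general idea, not the paper's specific metric.

```python
# Minimal sketch of building an anisotropic metric tensor from the
# (approximate) Hessian of the solution, with eigenvalues clamped to bound
# the smallest and largest edge lengths (all parameters are illustrative).
import numpy as np

def metric_from_hessian(hessian, eps=1e-3, h_min=1e-3, h_max=1.0):
    """Return a symmetric positive-definite metric M = R diag(lam) R^T."""
    evals, evecs = np.linalg.eigh(hessian)
    lam = np.abs(evals) / eps                       # target: |u''| h^2 ~ eps
    lam = np.clip(lam, 1.0 / h_max**2, 1.0 / h_min**2)
    return evecs @ np.diag(lam) @ evecs.T

# Boundary-layer-like solution: strong curvature across y, mild along x.
H = np.array([[2.0, 0.0],
              [0.0, 400.0]])
M = metric_from_hessian(H)
h_dir = 1.0 / np.sqrt(np.linalg.eigvalsh(M))        # edge lengths per direction
print("target edge lengths:", h_dir)
```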

  12. Adaptive mesh refinement for shocks and material interfaces

    SciTech Connect

    Dai, William Wenlong

    2010-01-01

    There are three kinds of adaptive mesh refinement (AMR) in structured meshes. Block-based AMR sometimes over refines meshes. Cell-based AMR treats cells cell by cell and thus loses the advantage of the nature of structured meshes. Patch-based AMR is intended to combine advantages of block- and cell-based AMR, i.e., the nature of structured meshes and sharp regions of refinement. But, patch-based AMR has its own difficulties. For example, patch-based AMR typically cannot preserve symmetries of physics problems. In this paper, we will present an approach for a patch-based AMR for hydrodynamics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, management of patches, and load balance. The special features of this patch-based AMR include symmetry preserving, efficiency of refinement across shock fronts and material interfaces, special implementation of flux correction, and patch management in parallel computing environments. To demonstrate the capability of the AMR framework, we will show both two- and three-dimensional hydrodynamics simulations with many levels of refinement.

  13. An adaptive level set segmentation on a triangulated mesh.

    PubMed

    Xu, Meihe; Thompson, Paul M; Toga, Arthur W

    2004-02-01

    Level set methods offer highly robust and accurate methods for detecting interfaces of complex structures. Efficient techniques are required to transform an interface to a globally defined level set function. In this paper, a novel level set method based on an adaptive triangular mesh is proposed for segmentation of medical images. Special attention is paid to an adaptive mesh refinement and redistancing technique for level set propagation, in order to achieve higher resolution at the interface with minimum expense. First, a narrow band around the interface is built in an upwind fashion. An active square technique is used to determine the shortest distance correspondence (SDC) for each grid vertex. Simultaneously, we also give an efficient approach for signing the distance field. Then, an adaptive improvement algorithm is proposed, which essentially combines two basic techniques: a long-edge-based vertex insertion strategy, and a local improvement. These guarantee that the refined triangulation is related to features along the front and has elements with appropriate size and shape, which fit the front well. We propose a short-edge elimination scheme to coarsen the refined triangular mesh, in order to reduce the extra storage. Finally, we reformulate the general evolution equation by updating 1) the velocities and 2) the gradient of level sets on the triangulated mesh. We give an approach for tracing contours from the level set on the triangulated mesh. Given a two-dimensional image with N grids along a side, the proposed algorithms run in O(kN) time at each iteration. Quantitative analysis shows that our algorithm is of first order accuracy; and when the interface-fitted property is involved in the mesh refinement, both the convergence speed and numerical accuracy are greatly improved. We also analyze the effect of redistancing frequency upon convergence speed and accuracy. Numerical examples include the extraction of inner and outer surfaces of the cerebral cortex

  14. Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.

    2015-12-01

    Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure that physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods, which is computationally expensive. We demonstrate here the use of a novel control volume - finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field, which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.
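
    The conservative-transfer idea behind the Galerkin projection mentioned above can be sketched for piecewise-constant fields on non-matching 1-D meshes: each target cell takes overlap-weighted contributions, so the field's integral is preserved exactly. This is a simplified illustration, not the authors' CVFE implementation.

```python
# Minimal sketch of a conservative remap between non-matching 1-D meshes
# for piecewise-constant (CV-wise) fields.
import numpy as np

def conservative_remap(src_edges, src_vals, tgt_edges):
    tgt_vals = np.zeros(len(tgt_edges) - 1)
    for j in range(len(tgt_edges) - 1):
        lo, hi, acc = tgt_edges[j], tgt_edges[j + 1], 0.0
        for i in range(len(src_edges) - 1):
            overlap = min(hi, src_edges[i + 1]) - max(lo, src_edges[i])
            if overlap > 0.0:
                # accumulate the integral of the source field over this cell
                acc += overlap * src_vals[i]
        tgt_vals[j] = acc / (hi - lo)
    return tgt_vals

src_edges = np.linspace(0.0, 1.0, 11)                   # 10 uniform cells
src_vals = np.where(np.arange(10) < 5, 1.0, 0.0)        # saturation front
tgt_edges = np.array([0.0, 0.3, 0.45, 0.55, 0.7, 1.0])  # adapted mesh
tgt_vals = conservative_remap(src_edges, src_vals, tgt_edges)
print(tgt_vals, "mass:", np.sum(tgt_vals * np.diff(tgt_edges)))
```

    The final print confirms that the total "mass" of the field (0.5 in this example) is unchanged by the transfer.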

  15. Adaptive mesh refinement for 1-dimensional gas dynamics

    SciTech Connect

    Hedstrom, G.; Rodrigue, G.; Berger, M.; Oliger, J.

    1982-01-01

    We consider the solution of the one-dimensional equation of gas-dynamics. Accurate numerical solutions are difficult to obtain on a given spatial mesh because of the existence of physical regions where components of the exact solution are either discontinuous or have large gradient changes. Numerical methods treat these phenomena in a variety of ways. In this paper, the method of adaptive mesh refinement is used. A thorough description of this method for general hyperbolic systems is given elsewhere and only properties of the method pertinent to the system are elaborated.

  16. Advanced 3D mesh manipulation in stereolithographic files and post-print processing for the manufacturing of patient-specific vascular flow phantoms

    NASA Astrophysics Data System (ADS)

    O'Hara, Ryan P.; Chand, Arpita; Vidiyala, Sowmya; Arechavala, Stacie M.; Mitsouras, Dimitrios; Rudin, Stephen; Ionita, Ciprian N.

    2016-03-01

    Complex vascular anatomies can cause the failure of image-guided endovascular procedures. 3D printed patient-specific vascular phantoms provide clinicians and medical device companies the ability to preemptively plan surgical treatments, test the likelihood of device success, and determine potential operative setbacks. This research aims to present advanced mesh manipulation techniques for stereolithographic (STL) files segmented from medical imaging, together with post-print surface optimization to match physiological vascular flow resistance. For phantom design, we developed three mesh manipulation techniques. The first method allows outlet 3D mesh manipulations to merge superfluous vessels into a single junction, decreasing the number of flow outlets and making it feasible to include smaller vessels. Next, we introduced Boolean operations to eliminate the need to manually merge mesh layers and to eliminate the mesh self-intersection errors that previously occurred. Finally, we optimized support addition to preserve the patient's anatomical geometry. For post-print surface optimization, we investigated various solutions and methods to remove support material and smooth the inner vessel surface. Solutions of chloroform, alcohol and sodium hydroxide were used to process various phantoms, and hydraulic resistance was measured and compared with values reported in the literature. The new mesh manipulation methods decrease the phantom design time by 30 - 80% and allow for rapid development of accurate vascular models. We have created 3D printed vascular models with vessel diameters less than 0.5 mm. The methods presented in this work could lead to shorter design times for patient-specific phantoms and better physiological simulations.

  17. PLUM: Parallel Load Balancing for Adaptive Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Saini, Subhash (Technical Monitor)

    1998-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method called PLUM to dynamically balance the processor workloads with a global view. This paper presents the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. A data redistribution model is also presented that predicts the remapping cost on the SP2. This model is required to determine whether the gain from a balanced workload distribution offsets the cost of data movement. Results presented in this paper demonstrate that PLUM is an effective dynamic load balancing strategy which remains viable on a large number of processors.
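
    The role of the redistribution model can be sketched as a simple cost-benefit test: remap only when the predicted saving from a balanced workload exceeds the predicted cost of moving the data. The sketch below is illustrative and is not PLUM's actual interface; all names and the form of the cost terms are assumptions.

```python
def should_remap(current_loads, balanced_load, iter_time_per_unit,
                 iters_until_next_adaption, predicted_remap_cost):
    """Decide whether data redistribution pays off.

    Remap only if the predicted saving from a balanced workload over the
    remaining iterations exceeds the predicted cost of moving the data
    (all quantities expressed in the same time units).
    """
    # Time per iteration is set by the most heavily loaded processor.
    current_step = max(current_loads) * iter_time_per_unit
    balanced_step = balanced_load * iter_time_per_unit
    saving = (current_step - balanced_step) * iters_until_next_adaption
    return saving > predicted_remap_cost
```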

  18. Electro-bending characterization of adaptive 3D fiber reinforced plastics based on shape memory alloys

    NASA Astrophysics Data System (ADS)

    Ashir, Moniruddoza; Hahn, Lars; Kluge, Axel; Nocke, Andreas; Cherif, Chokri

    2016-03-01

    The industrial importance of fiber reinforced plastics (FRPs), which are mostly used in different niche products, has been growing steadily in recent years. The integration of sensors and actuators in FRP is potentially valuable for creating innovative applications, and therefore the market acceptance of adaptive FRP is increasing. In particular, in the field of highly stressed FRP, structurally integrated systems for continuous component monitoring play an important role. The presented work focuses on the electro-mechanical characterization of adaptive three-dimensional (3D) FRP with integrated textile-based actuators. Here, a friction-spun hybrid yarn, consisting of shape memory alloy (SMA) wire as its core, serves as the actuator. Because of the shape memory effect, the SMA hybrid yarn returns to its original shape upon heating, which also causes the deformation of the adaptive 3D FRP. In order to investigate the influences of structural parameters on the deformation behavior of the adaptive 3D FRP, the investigations in this research are varied according to parameters such as the radius of curvature of the adaptive 3D FRP, the fabric type and the number of fabric layers in the composite. Results show that reproducible deformations can be realized with adaptive 3D FRP and that the structural parameters have a significant impact on the deformation capability.

  19. Adaptive 3D single-block grids for the computation of viscous flows around wings

    SciTech Connect

    Hagmeijer, R.; Kok, J.C.

    1996-12-31

    A robust algorithm for the adaption of a 3D single-block structured grid suitable for the computation of viscous flows around a wing is presented and demonstrated by application to the ONERA M6 wing. The effects of grid adaption on the flow solution and accuracy improvements are analyzed. Reynolds number variations are studied.

  20. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
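
    One common way to build such an error estimator for spectral elements (a sketch in this spirit, not necessarily the exact estimator of the paper; the names are hypothetical) is to examine the decay of the local modal coefficients: slow decay or a large trailing coefficient signals under-resolution and triggers refinement.

```python
import numpy as np

def decay_rate(modal_coeffs, eps=1e-14):
    """Least-squares decay rate sigma, assuming |a_k| ~ C * exp(-sigma * k)."""
    a = np.abs(np.asarray(modal_coeffs)) + eps
    k = np.arange(len(a))
    slope, _ = np.polyfit(k, np.log(a), 1)
    return -slope

def needs_refinement(modal_coeffs, sigma_min=1.0, tail_tol=1e-6):
    """Refine if the expansion decays slowly or its last mode is still large."""
    return decay_rate(modal_coeffs) < sigma_min or abs(modal_coeffs[-1]) > tail_tol
```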

  1. GENSURF: A mesh generator for 3D finite element analysis of surface and corner cracks in finite thickness plates subjected to mode-1 loadings

    NASA Technical Reports Server (NTRS)

    Raju, I. S.

    1992-01-01

    A computer program that generates three-dimensional (3D) finite element models for cracked 3D solids was written. This computer program, gensurf, uses minimal input data to generate 3D finite element models for isotropic solids with elliptic or part-elliptic cracks. These models can be used with a 3D finite element program called surf3d. This report documents this mesh generator. In this manual the capabilities, limitations, and organization of gensurf are described. The procedures used to develop 3D finite element models and the input for and the output of gensurf are explained. Several examples are included to illustrate the use of this program. Several input data files are included with this manual so that the users can edit these files to conform to their crack configuration and use them with gensurf.

  2. Optimal imaging with adaptive mesh refinement in electrical impedance tomography.

    PubMed

    Molinari, Marc; Blott, Barry H; Cox, Simon J; Daniell, Geoffrey J

    2002-02-01

    In non-linear electrical impedance tomography the goodness of fit of the trial images is assessed by the well-established statistical chi2 criterion applied to the measured and predicted datasets. Further selection from the range of images that fit the data is effected by imposing an explicit constraint on the form of the image, such as the minimization of the image gradients. In particular, the logarithm of the image gradients is chosen so that conductive and resistive deviations are treated in the same way. In this paper we introduce the idea of adaptive mesh refinement to the 2D problem so that the local scale of the mesh is always matched to the scale of the image structures. This improves the reconstruction resolution so that the image constraint adopted dominates and is not perturbed by the mesh discretization. The avoidance of unnecessary mesh elements optimizes the speed of reconstruction without degrading the resulting images. Starting with a mesh scale length of the order of the electrode separation it is shown that, for data obtained at presently achievable signal-to-noise ratios of 60 to 80 dB, one or two refinement stages are sufficient to generate high quality images.

  3. Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units

    SciTech Connect

    Beckingsale, D. A.; Gaudin, W. P.; Hornung, R. D.; Gunney, B. T.; Gamblin, T.; Herdman, J. A.; Jarvis, S. A.

    2014-11-17

    Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.
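
    For orientation, the coarsen/refine data operators mentioned above can be sketched on the host with NumPy (an illustrative stand-in, not the GPU kernels of the paper): coarsening averages each 2 x 2 x 2 block of fine zones, and piecewise-constant refinement injects the parent value into its eight children.

```python
import numpy as np

def coarsen(fine):
    """Average 2x2x2 blocks of a cell-centred field (each axis length must be even)."""
    nx, ny, nz = (s // 2 for s in fine.shape)
    return fine.reshape(nx, 2, ny, 2, nz, 2).mean(axis=(1, 3, 5))

def refine(coarse):
    """Piecewise-constant prolongation: copy each parent value into its 8 children."""
    return np.repeat(np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1), 2, axis=2)

c = coarsen(np.random.rand(8, 8, 8))
assert refine(c).shape == (8, 8, 8)
```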

  4. Adaptation of Block-Structured Adaptive Mesh Refinement to Particle-In-Cell simulations

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Colella, Phillip; McCorquodale, Peter; Friedman, Alex; Grote, Dave

    2001-10-01

    Particle-In-Cell (PIC) methods which solve the Maxwell equations (or a simplification) on a regular Cartesian grid are routinely used to simulate plasma and particle beam systems. Several techniques have been developed to accommodate irregular boundaries and scale variations. We describe here an ongoing effort to adapt the block-structured Adaptive Mesh Refinement (AMR) algorithm (http://seesar.lbl.gov/AMR/) to the Particle-In-Cell method. The AMR technique connects grids having different resolutions, using interpolation. Special care has to be taken to avoid the introduction of spurious forces close to the boundary of the inner, high-resolution grid, or at least to reduce such forces to an acceptable level. The Berkeley AMR library CHOMBO has been modified and coupled to WARP3d [D.P. Grote et al., Fusion Engineering and Design 32-33 (1996) 193-200], a PIC code which is used for the development of high current accelerators for Heavy Ion Fusion. The methods and preliminary results will be presented.

  5. Development of three-dimensional hydrodynamical and MHD codes using Adaptive Mesh Refinement scheme with TVD

    NASA Astrophysics Data System (ADS)

    den, M.; Yamashita, K.; Ogawa, T.

    Three-dimensional (3D) hydrodynamic (HD) and magnetohydrodynamic (MHD) simulation codes using an adaptive mesh refinement (AMR) scheme are developed. This method places fine grids over areas of interest, such as shock waves, in order to obtain high resolution, and places uniform grids with lower resolution in other areas. The AMR scheme can thus provide a combination of high solution accuracy and computational robustness. We demonstrate numerical results for a simplified model of shock propagation, which strongly indicate that the AMR techniques have the ability to resolve disturbances in interplanetary space. We also present simulation results for the MHD code.

  6. Anisotropic norm-oriented mesh adaptation for a Poisson problem

    NASA Astrophysics Data System (ADS)

    Brèthes, Gautier; Dervieux, Alain

    2016-10-01

    We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem under the form of the optimization, in the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, the minimum of a norm of the approximation error. The norm is prescribed by the user and the method allows addressing the case of multi-objective adaptation such as, for example in aerodynamics, adapting the mesh for drag, lift and moment in one shot. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.
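
    In symbols, the distinction drawn above can be summarised as follows (a schematic restatement with our own notation, not the paper's: C(M) denotes the complexity of a metric M and u_M the discrete solution on a mesh generated from M):

```latex
\text{goal-oriented:}\quad \min_{\mathcal{M},\;\mathcal{C}(\mathcal{M})=N}\ \bigl|\,j(u)-j(u_{\mathcal{M}})\,\bigr|
\qquad\qquad
\text{norm-oriented:}\quad \min_{\mathcal{M},\;\mathcal{C}(\mathcal{M})=N}\ \bigl\|\,u-u_{\mathcal{M}}\,\bigr\|_{L^{2}(\Omega)}
```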

  7. A Diffusion Synthetic Acceleration Method for Block Adaptive Mesh Refinement.

    SciTech Connect

    Ward, R. C.; Baker, R. S.; Morel, J. E.

    2005-01-01

    A prototype two-dimensional Diffusion Synthetic Acceleration (DSA) method on a Block-based Adaptive Mesh Refinement (BAMR) transport mesh has been developed. The Block-Adaptive Mesh Refinement Diffusion Synthetic Acceleration (BAMR-DSA) method was tested in the PARallel TIme-Dependent SN (PARTISN) deterministic transport code. The BAMR-DSA equations are derived by differencing the DSA equation using a vertex-centered diffusion discretization that is diamond-like and may be characterized as 'partially' consistent. The derivation of a diffusion discretization that is fully consistent with diamond transport differencing on BAMR mesh does not appear to be possible. However, despite being partially consistent, the BAMR-DSA method is effective for many applications. The BAMR-DSA solver was implemented and tested in two dimensions for rectangular (XY) and cylindrical (RZ) geometries. Testing results confirm that a partially consistent BAMR-DSA method will introduce instabilities for extreme cases, e.g., scattering ratios approaching 1.0 with optically thick cells, but for most realistic problems the BAMR-DSA method provides effective acceleration. The initial use of a full matrix to store and LU-Decomposition to solve the BAMR-DSA equations has been extended to include Compressed Sparse Row (CSR) storage and a Conjugate Gradient (CG) solver. The CSR and CG methods provide significantly more efficient and faster storage and solution methods.
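
    The storage and solver upgrade mentioned at the end (dense LU replaced by Compressed Sparse Row storage with a Conjugate Gradient solve) has the following generic shape in SciPy; the matrix here is a standard 1-D Laplacian standing in for the BAMR-DSA diffusion operator, purely for illustration.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# A small symmetric positive-definite example assembled directly in
# Compressed Sparse Row (CSR) format: the tridiagonal 1-D Laplacian.
n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)          # info == 0 indicates convergence
assert info == 0
```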

  8. Configurational forces and variational mesh adaptation in solid dynamics

    NASA Astrophysics Data System (ADS)

    Zielonka, Matias G.

    This thesis is concerned with the exploration and development of a variational finite element mesh adaption framework for non-linear solid dynamics and its conceptual links with the theory of dynamic configurational forces. The distinctive attribute of this methodology is that the underlying variational principle of the problem under study is used to supply both the discretized fields and the mesh on which the discretization is supported. To this end a mixed-multifield version of Hamilton's principle of stationary action and the Lagrange-d'Alembert principle is proposed, a fresh perspective on the theory of dynamic configurational forces is presented, and a unifying variational formulation that generalizes the framework to systems with general dissipative behavior is developed. A mixed finite element formulation with independent spatial interpolations for deformations and velocities and a mixed variational integrator with independent time interpolations for the resulting nodal parameters is constructed. This discretization is supported on a continuously deforming mesh that is not prescribed at the outset but computed as part of the solution. The resulting space-time discretization satisfies exact discrete configurational force balance and exhibits excellent long term global energy stability behavior. The robustness of the mesh adaption framework is assessed and demonstrated with a set of examples and convergence tests.

  9. Projection of Discontinuous Galerkin Variable Distributions During Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Ballesteros, Carlos; Herrmann, Marcus

    2012-11-01

    Adaptive mesh refinement (AMR) methods decrease the computational expense of CFD simulations by increasing the density of solution cells only in areas of the computational domain that are of interest in that particular simulation. In particular, unstructured Cartesian AMR has several advantages over other AMR approaches, as it does not require the creation of numerous guard-cell blocks, neighboring cell lookups become straightforward, and the hexahedral nature of the mesh cells greatly simplifies the refinement and coarsening operations. The h-refinement from this AMR approach can be leveraged by making use of highly-accurate, but computationally costly methods, such as the Discontinuous Galerkin (DG) numerical method. DG methods are capable of high orders of accuracy while retaining stencil locality--a property critical to AMR using unstructured meshes. However, the use of DG methods with AMR requires the use of special flux and projection operators during refinement and coarsening operations in order to retain the high order of accuracy. The flux and projection operators needed for refinement and coarsening of unstructured Cartesian adaptive meshes using Legendre polynomial test functions will be discussed, and their performance will be shown using standard test cases.
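
    A 1-D sketch of the projection operator in question (illustrative, assuming a Legendre modal basis on reference cells; not the authors' multidimensional implementation, and the names are hypothetical): a parent element's expansion is L2-projected onto its two children using Gauss quadrature.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def project_to_children(parent_coeffs, nq=10):
    """L2-project a 1-D Legendre modal expansion on a parent cell onto its two children.

    parent_coeffs[k] multiplies P_k on the parent reference cell [-1, 1].
    Returns (left_coeffs, right_coeffs) on the children's own reference cells.
    """
    p = len(parent_coeffs) - 1
    xi, w = leggauss(nq)                       # Gauss points/weights on [-1, 1]
    basis = np.array([legval(xi, np.eye(p + 1)[m]) for m in range(p + 1)])  # P_m(xi)
    children = []
    for centre in (-0.5, 0.5):                 # child centres in parent coordinates
        x_parent = 0.5 * xi + centre           # child quadrature points mapped to parent
        u = legval(x_parent, parent_coeffs)    # parent solution at those points
        # b_m = (2m+1)/2 * integral of u(x(xi)) P_m(xi) over the child reference cell
        coeffs = (2 * np.arange(p + 1) + 1) / 2.0 * (basis * (w * u)).sum(axis=1)
        children.append(coeffs)
    return children
```

    Coarsening uses the adjoint operation: the parent coefficients are obtained as the L2 projection of the piecewise children solution onto the parent basis.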

  10. Electronic structures in coupled two quantum dots by 3D-mesh Hartree-Fock-Kohn-Sham calculation

    NASA Astrophysics Data System (ADS)

    Matsuse, T.; Hama, T.; Kaihatsu, H.; Toyoda, N.; Takizawa, T.

    To study the electronic structures of quantum dots in a self-interaction-free framework that includes three-dimensional effects, we adopt the theory of the nonlocal effective potential introduced by Kohn and Sham. To exploit the advantages of the real-space (3D) mesh method for solving the original nonlinear and nonlocal Hartree-Fock-Kohn-Sham (HFKS) equation, we introduce a linearization of the equation in local form by introducing local Coulomb potentials that depend explicitly on the two single-particle states. In practice, for solving the local-form HFKS equation, we use the Car-Parrinello-like relaxation method, and the Coulomb potentials are obtained by solving the Poisson equation under proper boundary conditions. First, the observed energy gap between the triplet and singlet states of N = 4 in DBS is discussed to reproduce the addition energies and chemical potentials depending on the magnetic field. Next, the coupling between two quantum dots in TBS is studied by adding a square barrier between the two dots. The spin degeneracy measured in the gate voltage as a function of magnetic field is well reproduced in the limit of small mismatch. Finally, the electronic states in the ring structure are calculated, and it is discussed how the ring size and magnetic field affect the structures.

  11. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  12. AN ADAPTIVE PARTICLE-MESH GRAVITY SOLVER FOR ENZO

    SciTech Connect

    Passy, Jean-Claude; Bryan, Greg L.

    2014-11-01

    We describe and implement an adaptive particle-mesh algorithm to solve the Poisson equation for grid-based hydrodynamics codes with nested grids. The algorithm is implemented and extensively tested within the astrophysical code Enzo against the multigrid solver available by default. We find that while both algorithms show similar accuracy for smooth mass distributions, the adaptive particle-mesh algorithm is more accurate for the case of point masses, and is generally less noisy. We also demonstrate that the two-body problem can be solved accurately in a configuration with nested grids. In addition, we discuss the effect of subcycling, and demonstrate that evolving all the levels with the same timestep yields even greater precision.
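
    For context, the particle-mesh deposition step that such a gravity solver builds on is typically cloud-in-cell (CIC) interpolation; the 1-D periodic sketch below is illustrative only and is not Enzo's implementation.

```python
import numpy as np

def cic_deposit(positions, masses, n_cells, dx):
    """Cloud-in-cell mass deposition onto a 1-D periodic mesh of n_cells cells."""
    rho = np.zeros(n_cells)
    for x, m in zip(positions, masses):
        s = x / dx - 0.5                 # position in cell units, cell-centred grid
        i = int(np.floor(s))
        frac = s - i                     # fraction of the "cloud" in the right-hand cell
        rho[i % n_cells]       += m * (1.0 - frac) / dx
        rho[(i + 1) % n_cells] += m * frac / dx
    return rho

# Total mass is conserved: sum(rho) * dx equals the sum of particle masses.
```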

  13. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often not able to run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a pace of 10 MVoxels/sec.

  14. Boltzmann Solver with Adaptive Mesh in Velocity Space

    SciTech Connect

    Kolobov, Vladimir I.; Arslanbekov, Robert R.; Frolova, Anna A.

    2011-05-20

    We describe the implementation of direct Boltzmann solver with Adaptive Mesh in Velocity Space (AMVS) using quad/octree data structure. The benefits of the AMVS technique are demonstrated for the charged particle transport in weakly ionized plasmas where the collision integral is linear. We also describe the implementation of AMVS for the nonlinear Boltzmann collision integral. Test computations demonstrate both advantages and deficiencies of the current method for calculations of narrow-kernel distributions.

  15. A fourth order accurate adaptive mesh refinement method for Poisson's equation

    SciTech Connect

    Barad, Michael; Colella, Phillip

    2004-08-20

    We present a block-structured adaptive mesh refinement (AMR) method for computing solutions to Poisson's equation in two and three dimensions. It is based on a conservative, finite-volume formulation of the classical Mehrstellen methods. This is combined with finite volume AMR discretizations to obtain a method that is fourth-order accurate in solution error, and with easily verifiable solvability conditions for Neumann and periodic boundary conditions.
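
    For reference, a commonly quoted form of the 2-D Mehrstellen (compact 9-point) discretization of the Poisson equation that such fourth-order schemes build on is shown below; this is the textbook operator, not necessarily the exact finite-volume variant derived in the paper.

```latex
\frac{1}{6h^{2}}\Bigl[\,4\,(u_{i+1,j}+u_{i-1,j}+u_{i,j+1}+u_{i,j-1})
  + u_{i+1,j+1}+u_{i+1,j-1}+u_{i-1,j+1}+u_{i-1,j-1} - 20\,u_{i,j}\Bigr]
  \;=\; f_{i,j} + \frac{h^{2}}{12}\,\Delta f_{i,j} + O(h^{4})
```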

  16. AMR++: Object-Oriented Parallel Adaptive Mesh Refinement

    SciTech Connect

    Quinlan, D.; Philip, B.

    2000-02-02

    Adaptive mesh refinement (AMR) computations are complicated by their dynamic nature. The development of solvers for realistic applications is complicated by both the complexity of the AMR and the geometry of realistic problem domains. The additional complexity of distributed memory parallelism within such AMR applications most commonly exceeds the level of complexity that can be reasonably maintained with traditional approaches toward software development. This paper will present the details of our object-oriented work on the simplification of the use of adaptive mesh refinement on applications with complex geometries for both serial and distributed memory parallel computation. We will present an independent set of object-oriented abstractions (C++ libraries) well suited to the development of such seemingly intractable scientific computations. As an example of the use of this object-oriented approach we will present recent results of an application modeling fluid flow in the eye. Within this example, the geometry is too complicated for a single curvilinear coordinate grid and so a set of overlapping curvilinear coordinate grids is used. Adaptive mesh refinement and the required grid generation work to support the refinement process are coupled together in the solution of essentially elliptic equations within this domain. This paper will focus on the management of complexity within the development of the AMR++ library which forms a part of the Overture object-oriented framework for the solution of partial differential equations within scientific computing.

  17. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    SciTech Connect

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress toward SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  18. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress toward SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  19. Block-structured adaptive mesh refinement - theory, implementation and application

    SciTech Connect

    Deiterding, Ralf

    2011-01-01

    Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.

  20. Mesh adaptation technique for Fourier-domain fluorescence lifetime imaging

    SciTech Connect

    Soloviev, Vadim Y.

    2006-11-15

    A novel adaptive mesh technique in the Fourier domain is introduced for problems in fluorescence lifetime imaging. A dynamical adaptation of the three-dimensional scheme based on the finite volume formulation reduces computational time and balances the ill-posed nature of the inverse problem. Light propagation in the medium is modeled by the telegraph equation, while the lifetime reconstruction algorithm is derived from the Fredholm integral equation of the first kind. Stability and computational efficiency of the method are demonstrated by image reconstruction of two spherical fluorescent objects embedded in a tissue phantom.

  1. N-Body Code with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Yahagi, Hideki; Yoshii, Yuzuru

    2001-09-01

    We have developed a simulation code with the techniques that enhance both spatial and time resolution of the particle-mesh (PM) method, for which the spatial resolution is restricted by the spacing of structured mesh. The adaptive-mesh refinement (AMR) technique subdivides the cells that satisfy the refinement criterion recursively. The hierarchical meshes are maintained by the special data structure and are modified in accordance with the change of particle distribution. In general, as the resolution of the simulation increases, its time step must be shortened and more computational time is required to complete the simulation. Since the AMR enhances the spatial resolution locally, we reduce the time step locally also, instead of shortening it globally. For this purpose, we used a technique of hierarchical time steps (HTS), which changes the time step, from particle to particle, depending on the size of the cell in which particles reside. Some test calculations show that our implementation of AMR and HTS is successful. We have performed cosmological simulation runs based on our code and found that many of halo objects have density profiles that are well fitted to the universal profile proposed in 1996 by Navarro, Frenk, & White over the entire range of their radius.
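
    The hierarchical-time-step idea can be stated very compactly: a particle advances with a step tied to the refinement level of the cell it occupies, halved per level so that substeps nest exactly inside the base step. The sketch below is illustrative and the names are hypothetical.

```python
def particle_timestep(base_dt, level):
    """Time step for a particle residing in a cell at the given refinement level.

    Level 0 is the base mesh; each finer level halves the step so that 2**level
    substeps on that level fit exactly inside one base-level step.
    """
    return base_dt / (2 ** level)

# Example: a particle in a twice-refined cell takes 4 substeps per base step.
assert particle_timestep(1.0, 2) == 0.25
```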

  2. Divergence-Free Adaptive Mesh Refinement for Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.

    2001-12-01

    Several physical systems, such as nonrelativistic and relativistic magnetohydrodynamics (MHD), radiation MHD, electromagnetics, and incompressible hydrodynamics, satisfy Stokes' law type equations for the divergence-free evolution of vector fields. In this paper we present a full-fledged scheme for the second-order accurate, divergence-free evolution of vector fields on an adaptive mesh refinement (AMR) hierarchy. We focus here on adaptive mesh MHD. However, the scheme has applicability to the other systems of equations mentioned above. The scheme is based on making a significant advance in the divergence-free reconstruction of vector fields. In that sense, it complements the earlier work of D. S. Balsara and D. S. Spicer (1999, J. Comput. Phys. 7, 270) where we discussed the divergence-free time-update of vector fields which satisfy Stokes' law type evolution equations. Our advance in divergence-free reconstruction of vector fields is such that it reduces to the total variation diminishing (TVD) property for one-dimensional evolution and yet goes beyond it in multiple dimensions. For that reason, it is extremely suitable for the construction of higher order Godunov schemes for MHD. Both the two-dimensional and three-dimensional reconstruction strategies are developed. A slight extension of the divergence-free reconstruction procedure yields a divergence-free prolongation strategy for prolonging magnetic fields on AMR hierarchies. Divergence-free restriction is also discussed. Because our work is based on an integral formulation, divergence-free restriction and prolongation can be carried out on AMR meshes with any integral refinement ratio, though we specialize the expressions for the most popular situation where the refinement ratio is two. Furthermore, we pay attention to the fact that in order to efficiently evolve the MHD equations on AMR hierarchies, the refined meshes must evolve in time with time steps that are a fraction of their parent mesh's time step.

  3. 3D-SoftChip: A Novel Architecture for Next-Generation Adaptive Computing Systems

    NASA Astrophysics Data System (ADS)

    Kim, Chul; Rassau, Alex; Lachowicz, Stefan; Lee, Mike Myung-Ok; Eshraghian, Kamran

    2006-12-01

    This paper introduces a novel architecture for next-generation adaptive computing systems, which we term 3D-SoftChip. The 3D-SoftChip is a 3-dimensional (3D) vertically integrated adaptive computing system combining state-of-the-art processing and 3D interconnection technology. It comprises the vertical integration of two chips (a configurable array processor and an intelligent configurable switch) through an indium bump interconnection array (IBIA). The configurable array processor (CAP) is an array of heterogeneous processing elements (PEs), while the intelligent configurable switch (ICS) comprises a switch block, 32-bit dedicated RISC processor for control, on-chip program/data memory, data frame buffer, along with a direct memory access (DMA) controller. This paper introduces the novel 3D-SoftChip architecture for real-time communication and multimedia signal processing as a next-generation computing system. The paper further describes the advanced HW/SW codesign and verification methodology, including high-level system modeling of the 3D-SoftChip using SystemC, being used to determine the optimum hardware specification in the early design stage.

  4. Elliptic Solvers with Adaptive Mesh Refinement on Complex Geometries

    SciTech Connect

    Phillip, B.

    2000-07-24

    Adaptive Mesh Refinement (AMR) is a numerical technique for locally tailoring the resolution of computational grids. Multilevel algorithms for solving elliptic problems on adaptive grids include the Fast Adaptive Composite grid method (FAC) and its parallel variants (AFAC and AFACx). Theory that confirms the independence of the convergence rates of FAC and AFAC on the number of refinement levels exists under certain ellipticity and approximation property conditions. Similar theory needs to be developed for AFACx. The effectiveness of multigrid-based elliptic solvers such as FAC, AFAC, and AFACx on adaptively refined overlapping grids is not clearly understood. Finally, a non-trivial eye model problem will be solved by combining the power of using overlapping grids for complex moving geometries, AMR, and multilevel elliptic solvers.

  5. Fully implicit adaptive mesh refinement algorithm for reduced MHD

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Pernice, Michael; Chacon, Luis

    2006-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques, such as fast adaptive composite grid (FAC) algorithms, for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. References: L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006).

  6. Adaptive Shape Functions and Internal Mesh Adaptation for Modelling Progressive Failure in Adhesively Bonded Joints

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott; Gries, Thomas; Waas, Anthony M.; Pineda, Evan J.

    2014-01-01

    Enhanced finite elements are elements with an embedded analytical solution that can capture detailed local fields, enabling more efficient, mesh independent finite element analysis. The shape functions are determined based on the analytical model rather than prescribed. This method was applied to adhesively bonded joints to model joint behavior with one element through the thickness. This study demonstrates two methods of maintaining the fidelity of such elements during adhesive non-linearity and cracking without increasing the mesh needed for an accurate solution. The first method uses adaptive shape functions, where the shape functions are recalculated at each load step based on the softening of the adhesive. The second method is internal mesh adaption, where cracking of the adhesive within an element is captured by further discretizing the element internally to represent the partially cracked geometry. By keeping mesh adaptations within an element, a finer mesh can be used during the analysis without affecting the global finite element model mesh. Examples are shown which highlight when each method is most effective in reducing the number of elements needed to capture adhesive nonlinearity and cracking. These methods are validated against analogous finite element models utilizing cohesive zone elements.

  7. Grid-Adapted FUN3D Computations for the Second High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Rumsey, C. L.; Park, M. A.

    2014-01-01

    Contributions of the unstructured Reynolds-averaged Navier-Stokes code FUN3D to the 2nd AIAA CFD High Lift Prediction Workshop are described, and detailed comparisons are made with experimental data. Using workshop-supplied grids, results for the clean wing configuration are compared with results from the structured code CFL3D. Using the same turbulence model, both codes compare reasonably well in terms of total forces and moments, and the maximum lift is similarly over-predicted for both codes compared to experiment. By including more representative geometry features such as slat and flap brackets and slat pressure tube bundles, FUN3D captures the general effects of the Reynolds number variation, but under-predicts maximum lift on workshop-supplied grids in comparison with the experimental data, due to excessive separation. However, when output-based, off-body grid adaptation in FUN3D is employed, results improve considerably. In particular, when the geometry includes both brackets and the pressure tube bundles, grid adaptation results in a more accurate prediction of lift near stall in comparison with the wind-tunnel data. Furthermore, a rotation-corrected turbulence model shows improved pressure predictions on the outboard span when using adapted grids.

  8. A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method

    NASA Astrophysics Data System (ADS)

    Bush, I. J.; Todorov, I. T.; Smith, W.

    2006-09-01

    The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowtz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe software adaptations undertaken to import this functionality and provide a review of its performance.

  9. AMRA: An Adaptive Mesh Refinement hydrodynamic code for astrophysics

    NASA Astrophysics Data System (ADS)

    Plewa, T.; Müller, E.

    2001-08-01

    Implementation details and test cases of a newly developed hydrodynamic code, amra, are presented. The numerical scheme exploits the adaptive mesh refinement technique coupled to modern high-resolution schemes which are suitable for relativistic and non-relativistic flows. Various physical processes are incorporated using the operator splitting approach, and include self-gravity, nuclear burning, physical viscosity, implicit and explicit schemes for conductive transport, simplified photoionization, and radiative losses from an optically thin plasma. Several aspects related to the accuracy and stability of the scheme are discussed in the context of hydrodynamic and astrophysical flows.

  10. Computational relativistic astrophysics with adaptive mesh refinement: Testbeds

    SciTech Connect

    Evans, Edwin; Iyer, Sai; Tao Jian; Wolfmeyer, Randy; Zhang Huimin; Schnetter, Erik; Suen, Wai-Mo

    2005-04-15

    We have carried out numerical simulations of strongly gravitating systems based on the Einstein equations coupled to the relativistic hydrodynamic equations using adaptive mesh refinement (AMR) techniques. We show AMR simulations of NS binary inspiral and coalescence carried out on a workstation having an accuracy equivalent to that of a 1025³ regular unigrid simulation, which is, to the best of our knowledge, larger than all previous simulations of similar NS systems on supercomputers. We believe the capability opens new possibilities in general relativistic simulations.

  11. FLY: a Tree Code for Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Antonuccio-Delogu, V.; Costa, A.; Ferro, D.

    FLY is a public domain parallel treecode, which makes heavy use of the one-sided communication paradigm to handle the management of the tree structure. It implements the equations for cosmological evolution and can be run for different cosmological models. This paper shows an example of the integration of a tree N-body code with an adaptive mesh, following the PARAMESH scheme. This new implementation will allow the FLY output, and more generally any binary output, to be used with any hydrodynamics code that adopts the PARAMESH data structure, to study compressible flow problems.

  12. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. A new technique, adaptive mesh refinement (AMR), allows local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement the AMR techniques into the 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements both for isoviscous cases (i.e. 7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (i.e. 14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with far fewer elements and tracers compared with uniform-mesh models (i.e. 7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can be easily refined to the scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace the evolution of the boundaries. Our AMR code is thus very well suited to studying thermal-chemical convection problems that need high resolution to resolve the evolution of chemical boundaries, such as the entrainment problems [Sleep, 1988].

  13. Modeling and simulating the adaptive electrical properties of stochastic polymeric 3D networks

    NASA Astrophysics Data System (ADS)

    Sigala, R.; Smerieri, A.; Schüz, A.; Camorani, P.; Erokhin, V.

    2013-10-01

    Memristors are passive two-terminal circuit elements that combine resistance and memory. Although in theory memristors are a very promising approach to fabricate hardware with adaptive properties, there are only very few implementations able to show their basic properties. We recently developed stochastic polymeric matrices with a functionality that evidences the formation of self-assembled three-dimensional (3D) networks of memristors. We demonstrated that those networks show the typical hysteretic behavior observed in the ‘one input-one output’ memristive configuration. Interestingly, using different protocols to electrically stimulate the networks, we also observed that their adaptive properties are similar to those present in the nervous system. Here, we model and simulate the electrical properties of these self-assembled polymeric networks of memristors, the topology of which is defined stochastically. First, we show that the model recreates the hysteretic behavior observed in the real experiments. Second, we demonstrate that the networks modeled indeed have a 3D instead of a planar functionality. Finally, we show that the adaptive properties of the networks depend on their connectivity pattern. Our model was able to replicate fundamental qualitative behavior of the real organic 3D memristor networks; yet, through the simulations, we also explored other interesting properties, such as the relation between connectivity patterns and adaptive properties. Our model and simulations represent an interesting tool to understand the very complex behavior of self-assembled memristor networks, which can finally help to predict and formulate hypotheses for future experiments.

  14. Adaptive image warping for hole prevention in 3D view synthesis.

    PubMed

    Plath, Nils; Knorr, Sebastian; Goldmann, Lutz; Sikora, Thomas

    2013-09-01

    The increasing popularity of 3D video calls for new methods to ease the conversion of existing monocular video to stereoscopic or multi-view video. A popular way to convert video is given by depth image-based rendering methods, in which a depth map that is associated with an image frame is used to generate a virtual view. Because of the lack of knowledge about the 3D structure of a scene and its corresponding texture, however, the conversion of 2D video inevitably leads to holes in the resulting 3D image as a result of newly exposed areas. The conversion process can be altered such that no holes become visible in the resulting 3D view by superimposing a regular grid over the depth map and deforming it. In this paper, an adaptive image warping approach is proposed as an improvement to the regular approach. The new algorithm exploits the smoothness of a typical depth map to reduce the complexity of the underlying optimization problem that is necessary to find the deformation, which is required to prevent holes. This is achieved by splitting a depth map into blocks of homogeneous depth using quadtrees and running the optimization on the resulting adaptive grid. The results show that this approach leads to a considerable reduction of the computational complexity while maintaining the visual quality of the synthesized views. PMID:23782807
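
    A minimal sketch of the quadtree step described above (illustrative only, not the authors' implementation; the names and the homogeneity test are assumptions): split a depth-map block recursively until the depth range within it falls below a tolerance, giving the adaptive grid on which the warp is optimized.

```python
import numpy as np

def quadtree_blocks(depth, x0, y0, w, h, tol, min_size=8):
    """Return (x, y, w, h) blocks covering depth[y0:y0+h, x0:x0+w],
    splitting any block whose depth range exceeds `tol`."""
    block = depth[y0:y0 + h, x0:x0 + w]
    if (block.max() - block.min() <= tol) or (w <= min_size and h <= min_size):
        return [(x0, y0, w, h)]
    blocks = []
    hw, hh = max(w // 2, 1), max(h // 2, 1)
    for dx, dy, bw, bh in [(0, 0, hw, hh), (hw, 0, w - hw, hh),
                           (0, hh, hw, h - hh), (hw, hh, w - hw, h - hh)]:
        if bw > 0 and bh > 0:
            blocks += quadtree_blocks(depth, x0 + dx, y0 + dy, bw, bh, tol, min_size)
    return blocks

depth = np.random.rand(256, 256)
cells = quadtree_blocks(depth, 0, 0, 256, 256, tol=0.5)
```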

  15. A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model

    SciTech Connect

    Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A

    2009-03-03

    Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR that relaxes the requirement that material boundaries must be along mesh boundaries.

  16. Parallel Processing of Adaptive Meshes with Load Balancing

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than that under PLUM by overlapping processing and data migration.

  17. Dynamic Load Balancing for Adaptive Meshes using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Saini, Subhash (Technical Monitor)

    1998-01-01

    Many scientific applications involve grids that lack a uniform underlying structure. These applications are often dynamic in the sense that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing inter-processor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view across processors. In this paper, we compare a novel load balancer that utilizes symmetric broadcast networks (SBN) to a successful global load balancing environment (PLUM) created to handle adaptive unstructured applications. Our experimental results on the IBM SP2 demonstrate that performance of the proposed SBN load balancer is comparable to results achieved under PLUM.

  18. Free Tools and Strategies for the Generation of 3D Finite Element Meshes: Modeling of the Cardiac Structures

    PubMed Central

    Pavarino, E.; Neves, L. A.; Machado, J. M.; de Godoy, M. F.; Shiyou, Y.; Momente, J. C.; Zafalon, G. F. D.; Pinto, A. R.; Valêncio, C. R.

    2013-01-01

    The Finite Element Method is a well-known technique, being extensively applied in different areas. Studies using the Finite Element Method (FEM) are targeted to improve cardiac ablation procedures. For such simulations, the finite element meshes should consider the size and histological features of the target structures. However, it is possible to verify that some methods or tools used to generate meshes of human body structures are still limited, due to non-detailed models, non-trivial preprocessing, or, mainly, limitations on their conditions of use. In this paper, alternatives are demonstrated for solid modeling and the automatic generation of highly refined tetrahedral meshes, with quality compatible with other studies focused on mesh generation. The innovations presented here are strategies to integrate Open Source Software (OSS). The chosen techniques and strategies are presented and discussed, considering cardiac structures as a first application context. PMID:23762031

  19. Free Tools and Strategies for the Generation of 3D Finite Element Meshes: Modeling of the Cardiac Structures.

    PubMed

    Pavarino, E; Neves, L A; Machado, J M; de Godoy, M F; Shiyou, Y; Momente, J C; Zafalon, G F D; Pinto, A R; Valêncio, C R

    2013-01-01

    The Finite Element Method is a well-known technique, being extensively applied in different areas. Studies using the Finite Element Method (FEM) are targeted to improve cardiac ablation procedures. For such simulations, the finite element meshes should consider the size and histological features of the target structures. However, it is possible to verify that some methods or tools used to generate meshes of human body structures are still limited, due to non-detailed models, non-trivial preprocessing, or, mainly, limitations on their conditions of use. In this paper, alternatives are demonstrated for solid modeling and the automatic generation of highly refined tetrahedral meshes, with quality compatible with other studies focused on mesh generation. The innovations presented here are strategies to integrate Open Source Software (OSS). The chosen techniques and strategies are presented and discussed, considering cardiac structures as a first application context. PMID:23762031

  20. A high order Discontinuous Galerkin - Fourier incompressible 3D Navier-Stokes solver with rotating sliding meshes

    NASA Astrophysics Data System (ADS)

    Ferrer, Esteban; Willden, Richard H. J.

    2012-08-01

    We present the development of a sliding mesh capability for an unsteady high order (order ⩾ 3) h/p Discontinuous Galerkin solver for the three-dimensional incompressible Navier-Stokes equations. A high order sliding mesh method is developed and implemented for flow simulation with relative rotational motion of an inner mesh with respect to an outer static mesh, through the use of curved boundary elements and mixed triangular-quadrilateral meshes. A second order stiffly stable method is used to discretise in time the Arbitrary Lagrangian-Eulerian form of the incompressible Navier-Stokes equations. Spatial discretisation is provided by the Symmetric Interior Penalty Galerkin formulation with modal basis functions in the x-y plane, allowing hanging nodes and sliding meshes without the requirement to use mortar type techniques. Spatial discretisation in the z-direction is provided by a purely spectral method that uses Fourier series and allows computation of spanwise periodic three-dimensional flows. The developed solver is shown to provide high order solutions, second order in time convergence rates and spectral convergence when solving the incompressible Navier-Stokes equations on meshes where fixed and rotating elements coexist. In addition, an exact implementation of the no-slip boundary condition is included for curved edges; circular arcs and NACA 4-digit airfoils, where analytic expressions for the geometry are used to compute the required metrics. The solver capabilities are tested for a number of two dimensional problems governed by the incompressible Navier-Stokes equations on static and rotating meshes: the Taylor vortex problem, a static and rotating symmetric NACA0015 airfoil and flows through three bladed cross-flow turbines. In addition, three dimensional flow solutions are demonstrated for a three bladed cross-flow turbine and a circular cylinder shadowed by a pitching NACA0012 airfoil.

  1. Towards a new multiscale air quality transport model using the fully unstructured anisotropic adaptive mesh technology of Fluidity (version 4.1.9)

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.

    2015-10-01

    An integrated method combining advanced anisotropic hr-adaptive mesh and discretization numerical techniques has been applied, for the first time, to the modelling of multiscale advection-diffusion problems, based on a discontinuous Galerkin/control volume discretization on unstructured meshes. Compared with existing air quality models, which are typically based on static structured grids with a local nesting technique, the anisotropic hr-adaptive model has the advantage of being able to adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) advection phenomena. Comparisons have been made between the results obtained using uniform resolution meshes and anisotropic adaptive resolution meshes. Performance achieved in 3-D simulation of power plant plumes indicates that this new adaptive multiscale model has the potential to provide accurate air quality modelling solutions effectively.

  2. An adaptive grid-based all hexahedral meshing algorithm based on 2-refinement.

    SciTech Connect

    Edgel, Jared; Benzley, Steven E.; Owen, Steven James

    2010-08-01

    Most adaptive mesh generation algorithms employ a 3-refinement method. This method, although easy to employ, provides a mesh that is often too coarse in some areas and over refined in other areas. Because this method generates 27 new hexes in place of a single hex, there is little control on mesh density. This paper presents an adaptive all-hexahedral grid-based meshing algorithm that employs a 2-refinement method. 2-refinement is based on dividing the hex to be refined into eight new hexes. This method allows a greater control on mesh density when compared to a 3-refinement procedure. This adaptive all-hexahedral meshing algorithm provides a mesh that is efficient for analysis by providing a high element density in specific locations and a reduced mesh density in other areas. In addition, this tool can be effectively used for inside-out hexahedral grid based schemes, using Cartesian structured grids for the base mesh, which have shown great promise in accommodating automatic all-hexahedral algorithms. This adaptive all-hexahedral grid-based meshing algorithm employs a 2-refinement insertion method. This allows greater control on mesh density when compared to 3-refinement methods. This algorithm uses a two layer transition zone to increase element quality and keeps transitions from lower to higher mesh densities smooth. Templates were introduced to allow both convex and concave refinement.
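
    To make the refinement factors above concrete, the following Python sketch (illustrative only, not the algorithm of the paper; the function name refine_hex and the axis-aligned min/max cell representation are assumptions) splits a hexahedral cell into equal children: a factor of 2 produces 8 new hexes, while a factor of 3 produces 27, which is why 2-refinement offers finer control over mesh density.

      import itertools
      import numpy as np

      def refine_hex(lo, hi, factor):
          """Split the axis-aligned hex [lo, hi] into factor**3 equal children."""
          lo, hi = np.asarray(lo, float), np.asarray(hi, float)
          edges = [np.linspace(lo[d], hi[d], factor + 1) for d in range(3)]
          children = []
          for i, j, k in itertools.product(range(factor), repeat=3):
              c_lo = (edges[0][i], edges[1][j], edges[2][k])
              c_hi = (edges[0][i + 1], edges[1][j + 1], edges[2][k + 1])
              children.append((c_lo, c_hi))
          return children

      print(len(refine_hex((0, 0, 0), (1, 1, 1), 2)))  # 8 children: 2-refinement
      print(len(refine_hex((0, 0, 0), (1, 1, 1), 3)))  # 27 children: 3-refinement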

  3. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to quantify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
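
    As a rough illustration of how a Richardson-type estimate can drive local refinement (a minimal sketch under the assumptions of a second-order scheme and piecewise-constant injection of the coarse solution; u_f and u_c merely stand in for numerical solutions on grids h and 2h, and this is not the code described in the abstract):

      import numpy as np

      def richardson_indicator(u_fine, u_coarse, p=2):
          """Estimate the fine-grid error from solutions on grids h and 2h."""
          u_coarse_on_fine = np.repeat(u_coarse, 2)[:len(u_fine)]  # crude injection
          return np.abs(u_fine - u_coarse_on_fine) / (2**p - 1)

      x_f = np.linspace(0.0, 1.0, 64, endpoint=False)
      x_c = np.linspace(0.0, 1.0, 32, endpoint=False)
      u_f, u_c = np.sin(2 * np.pi * x_f), np.sin(2 * np.pi * x_c)

      indicator = richardson_indicator(u_f, u_c)
      flagged = indicator > 1e-2          # cells whose estimated error exceeds the tolerance
      print(flagged.sum(), "of", flagged.size, "cells flagged for refinement")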

  4. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper regions can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what the most suitable criteria for adaptation are. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times

  5. Improved Simulation of Electrodiffusion in the Node of Ranvier by Mesh Adaptation

    PubMed Central

    Dione, Ibrahima; Briffard, Thomas; Doyon, Nicolas

    2016-01-01

    In neural structures with complex geometries, numerical resolution of the Poisson-Nernst-Planck (PNP) equations is necessary to accurately model electrodiffusion. This formalism allows one to describe ionic concentrations and the electric field (even away from the membrane) with arbitrary spatial and temporal resolution, which is impossible to achieve with models relying on cable theory. However, solving the PNP equations on complex geometries involves handling intricate numerical difficulties related to the spatial discretization, the temporal discretization, or the resolution of the linearized systems, often requiring large computational resources which have limited the use of this approach. In the present paper, we investigate the best ways to use the finite element method (FEM) to solve the PNP equations on domains with discontinuous properties (such as occur at the membrane-cytoplasm interface). 1) Using a simple 2D geometry to allow comparison with an analytical solution, we show that mesh adaptation is a very (if not the most) efficient way to obtain accurate solutions while limiting the computational effort; 2) we use mesh adaptation in a 3D model of a node of Ranvier to reveal details of the solution which are nearly impossible to resolve with other modelling techniques. For instance, we exhibit a nonlinear distribution of the electric potential within the membrane due to the non-uniform width of the myelin and investigate its impact on the spatial profile of the electric field in the Debye layer. PMID:27548674

  6. Improved Simulation of Electrodiffusion in the Node of Ranvier by Mesh Adaptation.

    PubMed

    Dione, Ibrahima; Deteix, Jean; Briffard, Thomas; Chamberland, Eric; Doyon, Nicolas

    2016-01-01

    In neural structures with complex geometries, numerical resolution of the Poisson-Nernst-Planck (PNP) equations is necessary to accurately model electrodiffusion. This formalism allows one to describe ionic concentrations and the electric field (even away from the membrane) with arbitrary spatial and temporal resolution, which is impossible to achieve with models relying on cable theory. However, solving the PNP equations on complex geometries involves handling intricate numerical difficulties related to the spatial discretization, the temporal discretization, or the resolution of the linearized systems, often requiring large computational resources which have limited the use of this approach. In the present paper, we investigate the best ways to use the finite element method (FEM) to solve the PNP equations on domains with discontinuous properties (such as occur at the membrane-cytoplasm interface). 1) Using a simple 2D geometry to allow comparison with an analytical solution, we show that mesh adaptation is a very (if not the most) efficient way to obtain accurate solutions while limiting the computational effort; 2) we use mesh adaptation in a 3D model of a node of Ranvier to reveal details of the solution which are nearly impossible to resolve with other modelling techniques. For instance, we exhibit a nonlinear distribution of the electric potential within the membrane due to the non-uniform width of the myelin and investigate its impact on the spatial profile of the electric field in the Debye layer.

  7. Improved Simulation of Electrodiffusion in the Node of Ranvier by Mesh Adaptation.

    PubMed

    Dione, Ibrahima; Deteix, Jean; Briffard, Thomas; Chamberland, Eric; Doyon, Nicolas

    2016-01-01

    In neural structures with complex geometries, numerical resolution of the Poisson-Nernst-Planck (PNP) equations is necessary to accurately model electrodiffusion. This formalism allows one to describe ionic concentrations and the electric field (even away from the membrane) with arbitrary spatial and temporal resolution, which is impossible to achieve with models relying on cable theory. However, solving the PNP equations on complex geometries involves handling intricate numerical difficulties related to the spatial discretization, the temporal discretization, or the resolution of the linearized systems, often requiring large computational resources which have limited the use of this approach. In the present paper, we investigate the best ways to use the finite element method (FEM) to solve the PNP equations on domains with discontinuous properties (such as occur at the membrane-cytoplasm interface). 1) Using a simple 2D geometry to allow comparison with an analytical solution, we show that mesh adaptation is a very (if not the most) efficient way to obtain accurate solutions while limiting the computational effort; 2) we use mesh adaptation in a 3D model of a node of Ranvier to reveal details of the solution which are nearly impossible to resolve with other modelling techniques. For instance, we exhibit a nonlinear distribution of the electric potential within the membrane due to the non-uniform width of the myelin and investigate its impact on the spatial profile of the electric field in the Debye layer. PMID:27548674

  8. A 3D agglomeration multigrid solver for the Reynolds-averaged Navier-Stokes equations on unstructured meshes

    NASA Technical Reports Server (NTRS)

    Marvriplis, D. J.; Venkatakrishnan, V.

    1995-01-01

    An agglomeration multigrid strategy is developed and implemented for the solution of three-dimensional steady viscous flows. The method enables convergence acceleration with minimal additional memory overheads, and is completely automated, in that it can deal with grids of arbitrary construction. The multigrid technique is validated by comparing the delivered convergence rates with those obtained by a previously developed overset-mesh multigrid approach, and by demonstrating grid independent convergence rates for aerodynamic problems on very large grids. Prospects for further increases in multigrid efficiency for high-Reynolds number viscous flows on highly stretched meshes are discussed.

  9. A Spectral Adaptive Mesh Refinement Method for the Burgers equation

    NASA Astrophysics Data System (ADS)

    Nasr Azadani, Leila; Staples, Anne

    2013-03-01

    Adaptive mesh refinement (AMR) is a powerful technique in computational fluid dynamics (CFD). Many CFD problems have a wide range of scales which vary with time and space. In order to resolve all the scales numerically, high grid resolutions are required. The smaller the scales the higher the resolutions should be. However, small scales are usually formed in a small portion of the domain or during a limited period of time. AMR is an efficient method to solve these types of problems, allowing high grid resolutions where and when they are needed and minimizing memory and CPU time. Here we formulate a spectral version of AMR in order to accelerate simulations of a 1D model for isotropic homogeneous turbulence, the Burgers equation, as a first test of this method. Using pseudo-spectral methods, we applied AMR in Fourier space. The spectral AMR (SAMR) method we present here is applied to the Burgers equation and the results are compared with the results obtained using standard solution methods performed using a fine mesh.

  10. Adaptive mesh generation for edge-element finite element method

    NASA Astrophysics Data System (ADS)

    Tsuboi, Hajime; Gyimothy, Szabolcs

    2001-06-01

    An adaptive mesh generation method for two- and three-dimensional finite element methods using edge elements is proposed. Since the tangential component continuity is preserved when using edge elements, the strategy of creating new nodes is based on evaluation of the normal component of the magnetic vector potential across element interfaces. The evaluation is performed at the midpoint of an edge of a triangular element for two-dimensional problems, or at the centroid of a triangular face of a tetrahedral element for three-dimensional problems. At the boundary of two elements, the error estimator is the ratio of the normal component discontinuity to the maximum value of the potential in the same material. One or more nodes are set at the midpoints of the edges according to the value of the estimator, and the elements where new nodes have been created are subdivided. A final mesh is obtained after several iterations. Some computation results of two- and three-dimensional problems using the proposed method are shown.
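
    A schematic Python sketch of the error estimator described above (the data layout, the sample face values and the helper name face_indicator are assumptions made for illustration): the jump of the normal component of the vector potential across an element interface is normalised by the maximum potential magnitude in the same material, and faces exceeding a threshold receive new mid-edge nodes.

      def face_indicator(a_n_left, a_n_right, a_max_in_material):
          """Ratio of the normal-component discontinuity to max |A| in the material."""
          return abs(a_n_left - a_n_right) / max(a_max_in_material, 1e-30)

      # (left value, right value) of the normal component on three shared faces,
      # all lying in a material whose maximum potential magnitude is 2.0
      faces = [(1.00, 0.98), (0.50, 0.10), (0.20, 0.19)]
      indicators = [face_indicator(a, b, 2.0) for a, b in faces]

      threshold = 0.05
      to_refine = [i for i, e in enumerate(indicators) if e > threshold]
      print("faces flagged for new mid-edge nodes:", to_refine)   # -> [1]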

  11. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSP) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a rate of 10 MVoxels/s with a 3D ultrasound probe. Relative performance and power are compared between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.

  12. FUN3D Grid Refinement and Adaptation Studies for the Ares Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Vasta, Veer; Carlson, Jan-Renee; Park, Mike; Mineck, Raymond E.

    2010-01-01

    This paper presents grid refinement and adaptation studies performed in conjunction with computational aeroelastic analyses of the Ares crew launch vehicle (CLV). The unstructured grids used in this analysis were created with GridTool and VGRID while the adaptation was performed using the Computational Fluid Dynamic (CFD) code FUN3D with a feature based adaptation software tool. GridTool was developed by ViGYAN, Inc. while the last three software suites were developed by NASA Langley Research Center. The feature based adaptation software used here operates by aligning control volumes with shock and Mach line structures and by refining/de-refining where necessary. It does not redistribute node points on the surface. This paper assesses the sensitivity of the complex flow field about a launch vehicle to grid refinement. It also assesses the potential of feature based grid adaptation to improve the accuracy of CFD analysis for a complex launch vehicle configuration. The feature based adaptation shows the potential to improve the resolution of shocks and shear layers. Further development of the capability to adapt the boundary layer and surface grids of a tetrahedral grid is required for significant improvements in modeling the flow field.

  13. Transient 3D heat flow analysis for integrated circuit devices using the transmission line matrix method on a quad tree mesh

    NASA Astrophysics Data System (ADS)

    Smy, T.; Walkey, D.; Dew, S. K.

    2001-07-01

    This paper presents a 3D transmission line matrix (TLM) implementation for the solution of transient heat flow in integrated semiconductor devices. The implementation uses a rectangular discontinuous mesh to allow for local mesh refinement. This approach is based on a quad-tree meshing technique which can represent complex geometries using blocks of varying sizes. Each such block can have a maximum of two adjacent blocks on any vertical side and a maximum of four blocks on the top or bottom. The TLM implementation is based on a physical extraction of a resistance and capacitance network and then the creation of the appropriate TLM matrix. The formulation allows for temperature-dependent material parameters and non-uniform time stepping. The simulator is first tested using a 2D example of a heat source in a rectangular region. Using this example the numerical error is determined and found to be less than 0.4%. Next, non-linearities are included, and a number of non-uniform time stepping algorithms are tested. Then, a 3D problem is also compared to an analytical solution and again the error is very small. Finally, an example of a full solution of heat flow in a realistic Si trench device is presented.
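
    The block-based quad-tree idea can be illustrated with a small Python sketch (a toy 2-D version; the class name Block and the refinement call are assumptions, and the simulator's adjacency constraints and TLM network extraction are not reproduced):

      class Block:
          """Axis-aligned square block of a quad-tree mesh."""
          def __init__(self, x, y, size, level=0):
              self.x, self.y, self.size, self.level = x, y, size, level
              self.children = []

          def refine(self):
              h = self.size / 2.0
              self.children = [Block(self.x + dx * h, self.y + dy * h, h, self.level + 1)
                               for dx in (0, 1) for dy in (0, 1)]

          def leaves(self):
              if not self.children:
                  return [self]
              return [leaf for child in self.children for leaf in child.leaves()]

      root = Block(0.0, 0.0, 1.0)
      root.refine()                 # split the domain into 4 blocks
      root.children[0].refine()     # refine locally, e.g. around a heat source
      print(len(root.leaves()), "leaf blocks of varying size")   # -> 7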

  14. ENZO: AN ADAPTIVE MESH REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Bryan, Greg L.; Turk, Matthew J.; Norman, Michael L.; Bordner, James; Xu, Hao; Kritsuk, Alexei G.; O'Shea, Brian W.; Smith, Britton; Abel, Tom; Wang, Peng; Skillman, Samuel W.; Wise, John H.; Reynolds, Daniel R.; Collins, David C.; Harkness, Robert P.; Kim, Ji-hoon; Kuhlen, Michael; Goldbaum, Nathan; Hummels, Cameron; Collaboration: Enzo Collaboration; and others

    2014-04-01

    This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in one, two, and three dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically thin radiative cooling of primordial and metal-enriched plasmas (as well as some optically-thick cooling models), radiation transport, cosmological expansion, and models for star formation and feedback in a cosmological context. In addition to explaining the algorithms implemented, we present solutions for a wide range of test problems, demonstrate the code's parallel performance, and discuss the Enzo collaboration's code development methodology.

  15. Adaptive Mesh Refinement in Computational Astrophysics -- Methods and Applications

    NASA Astrophysics Data System (ADS)

    Balsara, D.

    2001-12-01

    The advent of robust, reliable and accurate higher order Godunov schemes for many of the systems of equations of interest in computational astrophysics has made it important to understand how to solve them in multi-scale fashion. This is so because the physics associated with astrophysical phenomena evolves in multi-scale fashion and we wish to arrive at a multi-scale simulation capability to represent the physics. Because astrophysical systems have magnetic fields, multi-scale magnetohydrodynamics (MHD) is of special interest. In this paper we first discuss general issues in adaptive mesh refinement (AMR). We then focus on the important issues in carrying out divergence-free AMR-MHD and catalogue the progress we have made in that area. We show that AMR methods lend themselves to easy parallelization. We then discuss applications of the RIEMANN framework for AMR-MHD to problems in computational astrophysics.

  16. Structured adaptive mesh refinement on the connection machine

    SciTech Connect

    Berger, M.J. (Courant Inst. of Mathematical Sciences); Saltzman, J.S.

    1993-01-01

    Adaptive mesh refinement has proven itself to be a useful tool in a large collection of applications. By refining only a small portion of the computational domain, computational savings of up to a factor of 80 in 3-dimensional calculations have been obtained on serial machines. A natural question is, can this algorithm be used on massively parallel machines and still achieve the same efficiencies? We have designed a data layout scheme for mapping grid points to processors that preserves locality and minimizes global communication for the CM-200. The effect of the data layout scheme is that at the finest level nearby grid points from adjacent grids in physical space are in adjacent memory locations. Furthermore, coarse grid points are arranged in memory to be near their associated fine grid points. We show applications of the algorithm to inviscid compressible fluid flow in two space dimensions.

  17. Finite element adaptive mesh analysis using a cluster of workstations

    NASA Astrophysics Data System (ADS)

    Wang, K. P.; Bruch, J. C., Jr.

    1998-01-01

    Parallel computation on clusters of workstations is becoming one of the major trends in the study of parallel computations, because of their high computing speed, cost effectiveness and scalability. This paper presents studies of using a cluster of workstations for the finite element adaptive mesh analysis of a free surface seepage problem. A parallel algorithm proven to be simple to implement and efficient is used to perform the analysis. A network of workstations is used as the hardware of a parallel system. Two parallel software packages, P4 and PVM (parallel virtual machine), are used to handle communications among networked workstations. Computational issues to be discussed are domain decomposition, load balancing, and communication time.

  18. Production-quality Tools for Adaptive Mesh Refinement Visualization

    SciTech Connect

    Weber, Gunther H.; Childs, Hank; Bonnell, Kathleen; Meredith,Jeremy; Miller, Mark; Whitlock, Brad; Bethel, E. Wes

    2007-10-25

    Adaptive Mesh Refinement (AMR) is a highly effective simulation method for spanning a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first class data type and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is extending and deploying VisIt, an open source visualization tool that accommodates AMR as a first-class data type, for use as production-quality, parallel-capable AMR visual data analysis infrastructure. This effort will help science teams that use AMR-based simulations and who develop their own AMR visual data analysis software to realize cost and labor savings.

  19. Efficient Unstructured Cartesian/Immersed-Boundary Method with Local Mesh Refinement to Simulate Flows in Complex 3D Geometries

    NASA Astrophysics Data System (ADS)

    de Zelicourt, Diane; Ge, Liang; Sotiropoulos, Fotis; Yoganathan, Ajit

    2008-11-01

    Image-guided computational fluid dynamics has recently gained attention as a tool for predicting the outcome of different surgical scenarios. Cartesian Immersed-Boundary methods constitute an attractive option to tackle the complexity of real-life anatomies. However, when such methods are applied to the branching, multi-vessel configurations typically encountered in cardiovascular anatomies the majority of the grid nodes of the background Cartesian mesh end up lying outside the computational domain, increasing the memory and computational overhead without enhancing the numerical resolution in the region of interest. To remedy this situation, the method presented here superimposes local mesh refinement onto an unstructured Cartesian grid formulation. A baseline unstructured Cartesian mesh is generated by eliminating all nodes that reside in the exterior of the flow domain from the grid structure, and is locally refined in the vicinity of the immersed-boundary. The potential of the method is demonstrated by carrying out systematic mesh refinement studies for internal flow problems ranging in complexity from a 90 deg pipe bend to an actual, patient-specific anatomy reconstructed from magnetic resonance.

  20. Unstructured and adaptive mesh generation for high Reynolds number viscous flows

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1991-01-01

    A method for generating and adaptively refining a highly stretched unstructured mesh suitable for the computation of high-Reynolds-number viscous flows about arbitrary two-dimensional geometries was developed. The method is based on the Delaunay triangulation of a predetermined set of points and employs a local mapping in order to achieve the high stretching rates required in the boundary-layer and wake regions. The initial mesh-point distribution is determined in a geometry-adaptive manner which clusters points in regions of high curvature and sharp corners. Adaptive mesh refinement is achieved by adding new points in regions of large flow gradients, and locally retriangulating; thus, obviating the need for global mesh regeneration. Initial and adapted meshes about complex multi-element airfoil geometries are shown and compressible flow solutions are computed on these meshes.
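
    A small Python sketch of the gradient-driven point-insertion idea (using SciPy's Delaunay triangulation; for simplicity it re-triangulates globally rather than locally and omits the stretching/local-mapping step, so it is only a schematic of the adaptation loop, not the method of the abstract):

      import numpy as np
      from scipy.spatial import Delaunay

      def adapt_once(points, values, tol=0.5):
          """Insert centroids of triangles over which the solution varies strongly."""
          tri = Delaunay(points)
          new_pts = [points[s].mean(axis=0)
                     for s in tri.simplices
                     if values[s].max() - values[s].min() > tol]
          if new_pts:
              points = np.vstack([points, new_pts])
          return points, Delaunay(points)       # global retriangulation (sketch only)

      rng = np.random.default_rng(0)
      pts = rng.random((200, 2))
      vals = np.tanh(10.0 * (pts[:, 0] - 0.5))   # sharp feature near x = 0.5
      pts2, tri2 = adapt_once(pts, vals)
      print(len(pts), "->", len(pts2), "points after one adaptation pass")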

  1. Novel adaptation of the demodulation technology for gear damage detection to variable amplitudes of mesh harmonics

    NASA Astrophysics Data System (ADS)

    Combet, F.; Gelman, L.

    2011-04-01

    In this paper, a novel adaptive demodulation technique including a new diagnostic feature is proposed for gear diagnosis in conditions of variable amplitudes of the mesh harmonics. This vibration technique employs the time synchronous average (TSA) of vibration signals. The new adaptive diagnostic feature is defined as the ratio of the sum of the sideband components of the envelope spectrum of a mesh harmonic to the measured power of the mesh harmonic. The proposed adaptation of the technique is justified theoretically and experimentally by the high level of the positive covariance between amplitudes of the mesh harmonics and the sidebands in conditions of variable amplitudes of the mesh harmonics. It is shown that the adaptive demodulation technique preserves effectiveness of local fault detection of gears operating in conditions of variable mesh amplitudes.
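
    A rough Python sketch of the adaptive diagnostic feature defined above, computed on a synthetic TSA signal (the sampling rate, mesh and shaft frequencies, the modulation depth and the number of sideband orders are all assumed values; a real implementation would also band-pass each mesh harmonic before demodulation):

      import numpy as np
      from scipy.signal import hilbert

      fs, f_mesh, f_shaft = 10000.0, 1000.0, 25.0        # Hz (assumed values)
      t = np.arange(0.0, 1.0, 1.0 / fs)
      tsa = (1.0 + 0.05 * np.cos(2 * np.pi * f_shaft * t)) * np.cos(2 * np.pi * f_mesh * t)

      freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
      spec = np.abs(np.fft.rfft(tsa)) / len(t)                       # spectrum of the TSA
      env_spec = np.abs(np.fft.rfft(np.abs(hilbert(tsa)))) / len(t)  # envelope spectrum

      def band_power(s, f, bw=1.0):
          return float(np.sum(s[np.abs(freqs - f) < bw] ** 2))

      mesh_power = band_power(spec, f_mesh)                 # measured power of the mesh harmonic
      sideband_sum = sum(band_power(env_spec, k * f_shaft) for k in range(1, 4))
      feature = sideband_sum / mesh_power                   # adaptive sideband feature
      print(f"adaptive feature = {feature:.3e}")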

  2. Adaptive Multiresolution or Adaptive Mesh Refinement? A Case Study for 2D Euler Equations

    SciTech Connect

    Deiterding, Ralf; Domingues, Margarete O.; Gomes, Sonia M.; Roussel, Olivier; Schneider, Kai

    2009-01-01

    We present adaptive multiresolution (MR) computations of the two-dimensional compressible Euler equations for a classical Riemann problem. The results are then compared with respect to accuracy and computational efficiency, in terms of CPU time and memory requirements, with the corresponding finite volume scheme on a regular grid. For the same test-case, we also perform computations using adaptive mesh refinement (AMR) imposing similar accuracy requirements. The results thus obtained are compared in terms of computational overhead and compression of the computational grid, using in addition either local or global time stepping strategies. We preliminarily conclude that the multiresolution techniques yield improved memory compression and gain in CPU time with respect to the adaptive mesh refinement method.

  3. Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockhard, David P.

    2013-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine grid solutions and experimental data. In addition, the grid adaptation strategy discussed here simplifies the grid generation process, and results in improved computational efficiency of CFD simulations.

  4. THREE-DIMENSIONAL ADAPTIVE MESH REFINEMENT SIMULATIONS OF LONG-DURATION GAMMA-RAY BURST JETS INSIDE MASSIVE PROGENITOR STARS

    SciTech Connect

    Lopez-Camara, D.; Lazzati, Davide; Morsony, Brian J.; Begelman, Mitchell C.

    2013-04-10

    We present the results of special relativistic, adaptive mesh refinement, 3D simulations of gamma-ray burst jets expanding inside a realistic stellar progenitor. Our simulations confirm that relativistic jets can propagate and break out of the progenitor star while remaining relativistic. This result is independent of the resolution, even though the amount of turbulence and variability observed in the simulations is greater at higher resolutions. We find that the propagation of the jet head inside the progenitor star is slightly faster in 3D simulations compared to 2D ones at the same resolution. This behavior seems to be due to the fact that the jet head in 3D simulations can wobble around the jet axis, finding the spot of least resistance to proceed. Most of the average jet properties, such as density, pressure, and Lorentz factor, are only marginally affected by the dimensionality of the simulations and therefore results from 2D simulations can be considered reliable.

  5. Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Chacón, Luis; Pernice, Michael

    2008-10-01

    An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.

  6. New high quality adaptive mesh generator utilized in modelling plasma streamer propagation at atmospheric pressures

    NASA Astrophysics Data System (ADS)

    Papadakis, A. P.; Georghiou, G. E.; Metaxas, A. C.

    2008-12-01

    A new adaptive mesh generator has been developed and used in the analysis of high-pressure gas discharges, such as avalanches and streamers, reducing computational times and computer memory needs significantly. The new adaptive mesh generator uses normalized error indicators, varying from 0 to 1, to guarantee optimal mesh resolution for all carriers involved in the analysis. Furthermore, it uses h- and r-refinement techniques such as mesh jiggling, edge swapping and node addition/removal in an element quality improvement algorithm that significantly improves the mesh, together with a fast and accurate algorithm for interpolation between meshes. Finally, the mesh generator is applied in the characterization of the transition from a single electron to the avalanche and streamer discharges in high-voltage, high-pressure gas discharges for dc 1 mm gaps, RF 1 cm point-plane gaps and parallel-plate 40 MHz configurations, in ambient atmospheric air.

  7. Analysis of hypersonic aircraft inlets using flow adaptive mesh algorithms

    NASA Astrophysics Data System (ADS)

    Neaves, Michael Dean

    The numerical investigation into the dynamics of unsteady inlet flowfields is applied to a three-dimensional scramjet inlet-isolator-diffuser geometry designed for hypersonic type applications. The Reynolds-Averaged Navier-Stokes equations are integrated in time using a subiterating, time-accurate implicit algorithm. Inviscid fluxes are calculated using the Low Diffusion Flux Splitting Scheme of Edwards. A modified version of the dynamic solution-adaptive point movement algorithm of Benson and McRae is used in a coupled mode to dynamically resolve the features of the flow by enhancing the spatial accuracy of the simulations. The unsteady mesh terms are incorporated into the flow solver via the inviscid fluxes. The dynamic solution-adaptive grid algorithm of Benson and McRae is modified to improve orthogonality at the boundaries to ensure accurate application of boundary conditions and properly resolve turbulent boundary layers. Shock tube simulations are performed to ascertain the effectiveness of the algorithm for unsteady flow situations on fixed and moving grids. Unstarts due to a combustor and freestream angle of attack perturbations are simulated in a three-dimensional inlet-isolator-diffuser configuration.

  8. Adaptive Mesh Refinement in Reactive Transport Modeling of Subsurface Environments

    NASA Astrophysics Data System (ADS)

    Molins, S.; Day, M.; Trebotich, D.; Graves, D. T.

    2015-12-01

    Adaptive mesh refinement (AMR) is a numerical technique for locally adjusting the resolution of computational grids. AMR makes it possible to superimpose levels of finer grids on the global computational grid in an adaptive manner allowing for more accurate calculations locally. AMR codes rely on the fundamental concept that the solution can be computed in different regions of the domain with different spatial resolutions. AMR codes have been applied to a wide range of problems, including (but not limited to): fully compressible hydrodynamics, astrophysical flows, cosmological applications, combustion, blood flow, heat transfer in nuclear reactors, and land ice and atmospheric models for climate. In subsurface applications, in particular, reactive transport modeling, AMR may be particularly useful in accurately capturing concentration gradients (hence, reaction rates) that develop in localized areas of the simulation domain. Accurate evaluation of reaction rates is critical in many subsurface applications. In this contribution, we will discuss recent applications that bring to bear AMR capabilities on reactive transport problems from the pore scale to the flood plain scale.

  9. Adaptive noise suppression technique for dense 3D point cloud reconstructions from monocular vision

    NASA Astrophysics Data System (ADS)

    Diskin, Yakov; Asari, Vijayan K.

    2012-10-01

    Mobile vision-based autonomous vehicles use video frames from multiple angles to construct a 3D model of their environment. In this paper, we present a post-processing adaptive noise suppression technique to enhance the quality of the computed 3D model. Our near real-time reconstruction algorithm uses each pair of frames to compute the disparities of tracked feature points to translate the distance a feature has traveled within the frame in pixels into real world depth values. As a result, these tracked feature points are plotted to form a dense and colorful point cloud. Due to the inevitable small vibrations in the camera and the mismatches within the feature tracking algorithm, the point cloud model contains a significant amount of misplaced points appearing as noise. The proposed noise suppression technique utilizes the spatial information of each point to unify points of similar texture and color into objects while simultaneously removing noise points not associated with any nearby object. The noise filter combines all the points of similar depth into 2D layers throughout the point cloud model. By applying erosion and dilation techniques we are able to eliminate the unwanted floating points while retaining points of larger objects. To reverse the compression process, we transform the 2D layers back into the 3D model, allowing points to return to their original positions without the attached noise components. We evaluate the resulting noiseless point cloud by utilizing an unmanned ground vehicle to perform obstacle avoidance tasks. The contribution of the noise suppression technique is measured by evaluating the accuracy of the 3D reconstruction.
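
    A simplified Python sketch of the layer-based filtering idea (the grid sizes, the 3x3 structuring element and the function name are assumptions, and this is not the authors' implementation): points are binned by depth into 2D occupancy images, morphological opening (erosion followed by dilation) removes isolated points, and the survivors are mapped back to 3D.

      import numpy as np
      from scipy.ndimage import binary_opening

      def denoise_point_cloud(points, grid=128, depth_bins=32):
          """points: (N, 3) array of x, y, z; returns a boolean keep-mask."""
          mins, maxs = points.min(axis=0), points.max(axis=0)
          scale = (np.array([grid, grid, depth_bins]) - 1) / np.maximum(maxs - mins, 1e-9)
          idx = ((points - mins) * scale).astype(int)
          keep = np.zeros(len(points), dtype=bool)
          for z in range(depth_bins):                    # one 2D layer per depth bin
              sel = idx[:, 2] == z
              if not sel.any():
                  continue
              img = np.zeros((grid, grid), dtype=bool)
              img[idx[sel, 0], idx[sel, 1]] = True
              cleaned = binary_opening(img, structure=np.ones((3, 3), dtype=bool))
              keep[sel] = cleaned[idx[sel, 0], idx[sel, 1]]
          return keep

      pts = np.random.default_rng(1).random((5000, 3))
      print(int(denoise_point_cloud(pts).sum()), "of", len(pts), "points retained")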

  10. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer-aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows Canny's poorly detected edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.

  11. Adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients

    PubMed Central

    Xia, Kelin; Zhan, Meng; Wan, Decheng; Wei, Guo-Wei

    2011-01-01

    Mesh deformation methods are a versatile strategy for solving partial differential equations (PDEs) with a vast variety of practical applications. However, these methods break down for elliptic PDEs with discontinuous coefficients, namely, elliptic interface problems. For this class of problems, the additional interface jump conditions are required to maintain the well-posedness of the governing equation. Consequently, in order to achieve high accuracy and high order convergence, additional numerical algorithms are required to enforce the interface jump conditions in solving elliptic interface problems. The present work introduces an interface-technique-based, adaptively deformed mesh strategy for resolving elliptic interface problems. We take advantage of the high accuracy, flexibility and robustness of the matched interface and boundary (MIB) method to construct an adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients. The proposed method generates deformed meshes in the physical domain and solves the transformed governing equations in the computational domain, which maintains regular Cartesian meshes. The mesh deformation is realized by a mesh transformation PDE, which controls the mesh redistribution by a source term. The source term consists of a monitor function, which builds in mesh contraction rules. Both interface geometry based deformed meshes and solution gradient based deformed meshes are constructed to reduce the L∞ and L2 errors in solving elliptic interface problems. The proposed adaptively deformed mesh based interface method is extensively validated by many numerical experiments. Numerical results indicate that the adaptively deformed mesh based interface method outperforms the original MIB method for dealing with elliptic interface problems. PMID:22586356
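
    A 1D toy of the monitor-function idea (equidistribution of a gradient-based monitor; the parameter alpha and the function name are assumptions, and none of the MIB interface treatment is reproduced) shows how mesh nodes are drawn toward regions where the solution varies rapidly:

      import numpy as np

      def equidistribute(x, u, alpha=10.0):
          """Redistribute nodes so a gradient-based monitor is equidistributed."""
          monitor = np.sqrt(1.0 + alpha * np.gradient(u, x) ** 2)
          cdf = np.concatenate(([0.0],
                                np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x))))
          cdf /= cdf[-1]
          # invert the cumulative monitor: equal increments in cdf-space
          return np.interp(np.linspace(0.0, 1.0, len(x)), cdf, x)

      x = np.linspace(-1.0, 1.0, 41)
      u = np.tanh(20.0 * x)                  # sharp internal layer at x = 0
      x_adapted = equidistribute(x, u)
      print(np.round(x_adapted[18:23], 3))   # nodes cluster around the layer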

  12. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

    In recent years, large scale gravity data sets have been collected and employed to enhance the gravity problem-solving abilities of tectonic studies in China. Aiming at the large scale data and the requirement of rapid interpretation, previous authors have carried out a lot of work, including fast gradient module inversion and Euler deconvolution depth inversion, 3-D physical property inversion using stochastic subspaces and equivalent storage, and fast inversion using wavelet transforms and a logarithmic barrier method. So it can be said that 3-D gravity inversion has been greatly improved in the last decade. Many authors added many different kinds of a priori information and constraints to deal with nonuniqueness using models composed of a large number of contiguous cells of unknown property and obtained good results. However, due to long computation time, instability and other shortcomings, 3-D physical property inversion has not been widely applied to large-scale data yet. In order to achieve 3-D interpretation with high efficiency and precision for geological and ore bodies and obtain their subsurface distribution, there is an urgent need to find a fast and efficient inversion method for large scale gravity data. As an entirely new geophysical inversion method, 3D correlation imaging has developed rapidly thanks to the advantages of requiring no a priori information and demanding only a small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to transform the equivalent excess masses into real density contrasts, we adopt adaptive correlation imaging for gravity data. After each 3D correlation imaging, we convert the equivalent masses into density contrasts according to the linear relationship, and then carry out a forward gravity calculation for each rectangular cell. Next, we compare the forward gravity data with real data, and

  13. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  14. Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin

    2016-08-01

    This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is most noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance scales particularly well in a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our possibilities of simulation to sizes of L = 32, 64 for a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
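
    A schematic Python sketch of the adaptive temperature-set construction (the acceptance-rate values and the threshold are hypothetical, and the load balancing across GPUs is not shown): a mid-point temperature is inserted into every gap whose measured replica-exchange acceptance rate falls below a threshold.

      def insert_midpoints(temps, exchange_rates, threshold=0.2):
          """temps: sorted list; exchange_rates[i] is the rate between temps[i] and temps[i+1]."""
          new_temps = [temps[0]]
          for t_lo, t_hi, rate in zip(temps[:-1], temps[1:], exchange_rates):
              if rate < threshold:                  # bottleneck gap: bisect it
                  new_temps.append(0.5 * (t_lo + t_hi))
              new_temps.append(t_hi)
          return new_temps

      temps = [1.0, 1.5, 2.0, 2.5, 3.0]
      rates = [0.45, 0.12, 0.38, 0.05]              # measured acceptance rates (hypothetical)
      print(insert_midpoints(temps, rates))
      # -> [1.0, 1.5, 1.75, 2.0, 2.5, 2.75, 3.0]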

  15. Study of the counting efficiency of a WBC setup by using a computational 3D human body library in sitting position based on polygonal mesh surfaces.

    PubMed

    Fonseca, T C Ferreira; Bogaerts, R; Lebacq, A L; Mihailescu, C L; Vanhavere, F

    2014-04-01

    A realistic computational 3D human body library, called MaMP and FeMP (Male and Female Mesh Phantoms), based on polygonal mesh surface geometry, has been created to be used for numerical calibration of the whole body counter (WBC) system of the nuclear power plant (NPP) in Doel, Belgium. The main objective was to create flexible computational models varying in gender, body height, and mass for studying the morphology-induced variation of the detector counting efficiency (CE) and reducing the measurement uncertainties. First, the counting room and an HPGe detector were modeled using MCNPX (Monte Carlo radiation transport code). The validation of the model was carried out for different sample-detector geometries with point sources and a physical phantom. Second, CE values were calculated for a total of 36 different mesh phantoms in a seated position using the validated Monte Carlo model. This paper reports on the validation process of the in vivo whole body system and the CE calculated for different body heights and weights. The results reveal that the CE is strongly dependent on the individual body shape, size, and gender and may vary by a factor of 1.5 to 3 depending on the morphology aspects of the individual to be measured.

  16. Study of the counting efficiency of a WBC setup by using a computational 3D human body library in sitting position based on polygonal mesh surfaces.

    PubMed

    Fonseca, T C Ferreira; Bogaerts, R; Lebacq, A L; Mihailescu, C L; Vanhavere, F

    2014-04-01

    A realistic computational 3D human body library, called MaMP and FeMP (Male and Female Mesh Phantoms), based on polygonal mesh surface geometry, has been created to be used for numerical calibration of the whole body counter (WBC) system of the nuclear power plant (NPP) in Doel, Belgium. The main objective was to create flexible computational models varying in gender, body height, and mass for studying the morphology-induced variation of the detector counting efficiency (CE) and reducing the measurement uncertainties. First, the counting room and an HPGe detector were modeled using MCNPX (Monte Carlo radiation transport code). The validation of the model was carried out for different sample-detector geometries with point sources and a physical phantom. Second, CE values were calculated for a total of 36 different mesh phantoms in a seated position using the validated Monte Carlo model. This paper reports on the validation process of the in vivo whole body system and the CE calculated for different body heights and weights. The results reveal that the CE is strongly dependent on the individual body shape, size, and gender and may vary by a factor of 1.5 to 3 depending on the morphology aspects of the individual to be measured. PMID:24562069

  17. Single-pass GPU-raycasting for structured adaptive mesh refinement data

    NASA Astrophysics Data System (ADS)

    Kaehler, Ralf; Abel, Tom

    2013-01-01

    Structured Adaptive Mesh Refinement (SAMR) is a popular numerical technique to study processes with high spatial and temporal dynamic range. It reduces computational requirements by adapting the lattice on which the underlying differential equations are solved to most efficiently represent the solution. Particularly in astrophysics and cosmology such simulations now can capture spatial scales ten orders of magnitude apart and more. The irregular locations and extents of the refined regions in the SAMR scheme, and the fact that different resolution levels partially overlap, pose a challenge for GPU-based direct volume rendering methods. kD-trees have proven to be advantageous for subdividing the data domain into non-overlapping blocks of equally sized cells, optimal for the texture units of current graphics hardware, but previous GPU-supported raycasting approaches for SAMR data using this data structure required a separate rendering pass for each node, preventing the application of many advanced lighting schemes that require simultaneous access to more than one block of cells. In this paper we present the first single-pass GPU-raycasting algorithm for SAMR data that is based on a kD-tree. The tree is efficiently encoded by a set of 3D-textures, which allows complete rays to be sampled adaptively, entirely on the GPU without any CPU interaction. We discuss two different data storage strategies to access the grid data on the GPU and apply them to several datasets to prove the benefits of the proposed method.

  18. CONSTRAINED-TRANSPORT MAGNETOHYDRODYNAMICS WITH ADAPTIVE MESH REFINEMENT IN CHARM

    SciTech Connect

    Miniati, Francesco; Martin, Daniel F. E-mail: DFMartin@lbl.gov

    2011-07-01

    We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.

  19. An adaptive mesh refinement algorithm for the discrete ordinates method

    SciTech Connect

    Jessee, J.P.; Fiveland, W.A.; Howell, L.H.; Colella, P.; Pember, R.B.

    1996-03-01

    The discrete ordinates form of the radiative transport equation (RTE) is spatially discretized and solved using an adaptive mesh refinement (AMR) algorithm. This technique permits the local grid refinement to minimize spatial discretization error of the RTE. An error estimator is applied to define regions for local grid refinement; overlapping refined grids are recursively placed in these regions; and the RTE is then solved over the entire domain. The procedure continues until the spatial discretization error has been reduced to a sufficient level. The following aspects of the algorithm are discussed: error estimation, grid generation, communication between refined levels, and solution sequencing. This initial formulation employs the step scheme, and is valid for absorbing and isotropically scattering media in two-dimensional enclosures. The utility of the algorithm is tested by comparing the convergence characteristics and accuracy to those of the standard single-grid algorithm for several benchmark cases. The AMR algorithm provides a reduction in memory requirements and maintains the convergence characteristics of the standard single-grid algorithm; however, the cases illustrate that efficiency gains of the AMR algorithm will not be fully realized until three-dimensional geometries are considered.

  20. Numerical study of Taylor bubbles with adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Pavlidis, Dimitrios; Percival, James; Pain, Chris; Matar, Omar; Hasan, Abbas; Azzopardi, Barry

    2014-11-01

    The Taylor bubble is a single long bubble which nearly fills the entire cross section of a liquid-filled circular tube. This type of bubble flow regime often occurs in gas-liquid slug flows in many industrial applications, including oil-and-gas production, chemical and nuclear reactors, and heat exchangers. The objective of this study is to investigate the fluid dynamics of Taylor bubbles rising in a vertical pipe filled with oils of extremely high viscosity (mimicking the ``heavy oils'' found in the oil-and-gas industry). A modelling and simulation framework is presented here which can modify and adapt anisotropic unstructured meshes to better represent the underlying physics of bubble rise and reduce the computational effort without sacrificing accuracy. The numerical framework consists of a mixed control-volume and finite-element formulation, a ``volume of fluid''-type method for the interface capturing based on a compressive control volume advection method, and a force-balanced algorithm for the surface tension implementation. Numerical examples of some benchmark tests and the dynamics of Taylor bubbles are presented to show the capability of this method. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.

  1. Visualizing Geophysical Flow Problems with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Sevre, E. O.; Yuen, D. A.; George, D. L.; Lee, S.

    2011-12-01

    Adaptive Mesh Refinement (AMR) is a technique used in software to decompose a computational domain based on the level of refinement needed for spatial and temporal calculations. Compared with uniform grids, AMR runs can achieve large, in principle unbounded, savings in computational time. In this paper we look at techniques for visualizing tsunami simulations that were run with AMR using the GeoClaw [Berger2011-1, Berger2011-2] software. Because of the computational efficiency of AMR, we have investigated techniques for visualizing AMR data: with good visualization tools, geoscientists can spend more time interpreting results and analyzing data. Good visualization tools can be adapted easily to work with a variety of output formats, and the goal of this work is to provide a foundation for geoscientists to build on. In the past year GeoClaw has been used to model the 2011 Tohoku tsunami, which originated off the coast of Sendai, Japan, and delivered catastrophic damage to the Fukushima power plant. The aftermath of this single geologic event is still making headlines four months after the fact [Fackler2011]. GeoClaw uses the shallow water equations to model a variety of flows that range from tsunamis to floods, landslides, and debris flows [George2011]. With the advanced computations provided by AMR, it is important for researchers to visualize the results in ways that are meaningful to both scientists and the civilians affected by the potential outcomes of the computation. Specialized techniques can be used to visualize data generated with AMR; by incorporating these techniques into their software, geoscientists will be able to harness powerful computational tools, such as GeoClaw, while also maintaining an informative view of their data.

  2. 3D design and electric simulation of a silicon drift detector using a spiral biasing adapter

    NASA Astrophysics Data System (ADS)

    Li, Yu-yun; Xiong, Bo; Li, Zheng

    2016-09-01

    The detector system combining a spiral biasing adapter (SBA) with a silicon drift detector (SBA-SDD) differs substantially from the traditional silicon drift detector (SDD), including the spiral SDD. It has a spiral biasing adapter of the same design as a traditional spiral SDD and an SDD with concentric rings of the same radius. Compared with the traditional spiral SDD, the SBA-SDD separates the spiral's functions of biasing adapter and p-n junction definition. In this paper, the SBA-SDD is simulated using the Sentaurus TCAD tool, a full 3D device simulator. The simulated electric characteristics include the electric potential, electric field, electron concentration, and single event effect. Because of the special design of the SBA-SDD, the SBA can generate an optimum drift electric field in the SDD, comparable with that of the conventional spiral SDD, while the SDD can be designed with concentric rings to reduce surface area; in addition, the current and heat generated in the SBA are kept separate from the SDD. To study the single event response, we simulated the induced current caused by incident heavy ions (20 and 50 μm penetration length) with different linear energy transfer (LET). The SBA-SDD can be used just like a conventional SDD, for example as an X-ray detector for energy spectroscopy and imaging.

  3. 3D Continuum Radiative Transfer. An adaptive grid construction algorithm based on the Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Niccolini, G.; Alcolea, J.

    Solving the radiative transfer problem is common to many fields of astrophysics. With the increasing angular resolution of space- or ground-based telescopes (VLTI, HST), and with the instruments of the next decade (NGST, ALMA, ...), astrophysical objects reveal, and will continue to reveal, complex spatial structures. Consequently, it is necessary to develop numerical tools able to solve the radiative transfer equation in three dimensions in order to model and interpret these observations. I present a 3D radiative transfer program that uses a new method for the construction of an adaptive spatial grid, based on the Monte Carlo method. With the help of this tool, one can solve the continuum radiative transfer problem (e.g. for a dusty medium), compute the temperature structure of the considered medium, and obtain the flux of the object (SED and images).

  4. Model-based adaptive 3D sonar reconstruction in reverberating environments.

    PubMed

    Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le

    2015-10-01

    In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side scan sonar arrays in complex and highly reverberating environments such as shallow water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction-of-arrival trajectories of multiple echoes impinging on the array. Echo tracking is treated as a model-based processing stage, incorporating prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness-of-fit tests and demonstrates the importance of model-based processing for bathymetry reconstruction.

  5. Electrochemical incineration of indigo. A comparative study between 2D (plate) and 3D (mesh) BDD anodes fitted into a filter-press reactor.

    PubMed

    Nava, José L; Sirés, Ignasi; Brillas, Enric

    2014-01-01

    This paper compares the performance of 2D (plate) and 3D (mesh) boron-doped diamond (BDD) electrodes, fitted into a filter-press reactor, during the electrochemical incineration of indigo textile dye as a model organic compound in chloride medium. The electrolyses were carried out in the FM01-LC reactor at mean fluid velocities of 0.9 ≤ u ≤ 10.4 and 1.2 ≤ u ≤ 13.9 cm s^-1 for the 2D BDD and the 3D BDD electrodes, respectively, at current densities of 5.63 and 15 mA cm^-2. The oxidation of the organic matter was promoted, on the one hand, via the physisorbed hydroxyl radicals (BDD(·OH)) formed from water oxidation at the BDD surface and, on the other hand, via active chlorine formed from the oxidation of chloride ions on BDD. The performance of the 2D BDD and 3D BDD electrodes in terms of current efficiency, energy consumption, and charge passage during the treatments is discussed.

  7. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
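
    The remapping problem described above, assigning new partitions to processors so that as little data as possible has to move, can be illustrated with a simple greedy sketch: given the amount of each new partition already resident on each processor, the largest overlaps are honoured first. This is an illustration of the idea under the assumption of equal numbers of partitions and processors, not the paper's heuristic algorithm.

```python
# Greedy partition-to-processor remapping that keeps the largest overlaps in place.
def greedy_remap(overlap):
    """overlap[p][q]: cells of new partition q already resident on processor p (square matrix)."""
    n = len(overlap)
    pairs = sorted(((overlap[p][q], p, q) for p in range(n) for q in range(n)), reverse=True)
    assign, used_p, used_q = {}, set(), set()
    for _, p, q in pairs:
        if p not in used_p and q not in used_q:
            assign[q] = p               # partition q stays (mostly) on processor p
            used_p.add(p)
            used_q.add(q)
    return assign

# Example: three partitions, three processors.
overlap = [[90, 10, 0],
           [5, 80, 15],
           [0, 20, 70]]
print(greedy_remap(overlap))            # {0: 0, 1: 1, 2: 2}
```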

  8. A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.

    1999-01-01

    The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.

  9. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are among the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the rise in sea level during these storms. Unfortunately, these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or forecasting with ensembles of probable storms. One solution to the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR works by decomposing the computational domain into regions whose resolution may vary as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectations about the dynamics of the flow, as well as particular regions of interest such as harbors. Many different applications have only been made possible by AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues concerning refinement criteria, optimal resolution and refinement ratios, and inundation.

  10. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods, and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to solve these problems correctly. RAM has been tested in one, two, and three dimensions and in Cartesian, cylindrical, and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
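
    For readers unfamiliar with WENO, the sketch below shows a scalar fifth-order WENO (Jiang-Shu) reconstruction of the left-biased interface value from five cell averages, the kind of building block a characteristic-wise scheme applies after projecting onto characteristic variables. It is a textbook formula written here for illustration, not code taken from RAM.

```python
# Fifth-order WENO (Jiang-Shu) reconstruction of the left state at interface i+1/2.
def weno5_left(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    # third-order candidate reconstructions on the three substencils
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # smoothness indicators
    b0 = 13.0/12.0*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13.0/12.0*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13.0/12.0*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # nonlinear weights built from the ideal linear weights (0.1, 0.6, 0.3)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)
```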

  11. Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics

    SciTech Connect

    Lomov, I; Pember, R; Greenough, J; Liu, B

    2005-10-18

    We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors during the separate advances. The methodology presented here is based on a single-grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single-grid algorithm uses a second-order Godunov scheme with an approximate single-fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified by the fact that highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion is solely concerned with hydrodynamics modeling. Results from a number of simulations for flows with and without strength are presented.

  12. A Parallel Implementation of Multilevel Recursive Spectral Bisection for Application to Adaptive Unstructured Meshes. Chapter 1

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Simon, Horst; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    The design of a parallel implementation of multilevel recursive spectral bisection is described. The goal is to implement a code that is fast enough to enable dynamic repartitioning of adaptive meshes.
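
    A single level of recursive spectral bisection can be sketched in a few lines: the graph is split by the median of the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the graph Laplacian), and the two halves are bisected recursively. The dense eigensolver below is for clarity only; the multilevel, parallel implementation described above is far more elaborate.

```python
# Recursive spectral bisection via the Fiedler vector (dense, illustrative version).
import numpy as np

def spectral_bisect(adj):
    """adj: symmetric adjacency matrix of a subgraph. Returns a boolean half mask."""
    lap = np.diag(adj.sum(axis=1)) - adj          # graph Laplacian
    _, vecs = np.linalg.eigh(lap)
    fiedler = vecs[:, 1]                          # eigenvector of the 2nd smallest eigenvalue
    return fiedler >= np.median(fiedler)          # two (nearly) equal halves

def recursive_bisect(adj, nodes, levels):
    if levels == 0 or len(nodes) <= 1:
        return [nodes]
    mask = spectral_bisect(adj[np.ix_(nodes, nodes)])
    return (recursive_bisect(adj, nodes[mask], levels - 1) +
            recursive_bisect(adj, nodes[~mask], levels - 1))

# Usage: parts = recursive_bisect(adj, np.arange(adj.shape[0]), levels=3)  # 8 parts
```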

  13. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm, demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  14. 3D segmentation of masses in DCE-MRI images using FCM and adaptive MRF

    NASA Astrophysics Data System (ADS)

    Zhang, Chengjie; Li, Lihua

    2014-03-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a sensitive imaging modality for the detection of breast cancer. Automated segmentation of breast lesions in DCE-MRI images is challenging due to inherently limited signal-to-noise ratios and high inter-patient variability. A novel 3D segmentation method based on FCM and MRF is proposed in this study. In this method, an MRI image is first segmented by spatial FCM, and MRF segmentation is then conducted to refine the result. We incorporate the 3D information of the lesion into the MRF segmentation process by using the segmentation results of contiguous slices to constrain the segmentation of the current slice. At the same time, the membership matrix from the FCM segmentation is used for adaptive adjustment of the Markov parameters in the MRF segmentation process. The proposed method was applied to lesion segmentation on 145 breast DCE-MRI examinations (86 malignant and 59 benign cases). Segmentation was evaluated using the traditional overlap rate between the segmented region and a hand-drawn ground truth. The average overlap rates for benign and malignant lesions are 0.764 and 0.755, respectively. We then extracted five features based on the segmented region and used an artificial neural network (ANN) to classify between malignant and benign cases. The ANN had a classification performance, measured by the area under the ROC curve, of AUC = 0.73. The positive and negative predictive values were 0.86 and 0.58, respectively. The results demonstrate that the proposed method not only achieves better segmentation accuracy but also has a reasonable classification performance.
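
    The clustering step the method builds on is standard fuzzy c-means (FCM); a minimal NumPy sketch of the alternating membership/centroid update is given below. The paper's spatial FCM variant and the adaptive MRF coupling are not reproduced; the function name and defaults are assumptions.

```python
# Minimal fuzzy c-means iteration on feature vectors (e.g. voxel intensities).
import numpy as np

def fcm(x, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """x: (n_samples, n_features). Returns memberships u of shape (c, n) and centroids."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(x)))
    u /= u.sum(axis=0)                                  # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m
        centroids = um @ x / um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(x[None, :, :] - centroids[:, None, :], axis=2) + 1e-12
        u_new = d ** (-2.0 / (m - 1.0))
        u_new /= u_new.sum(axis=0)                      # standard FCM membership update
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return u, centroids
```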

  15. Importance of dynamic mesh adaptivity for simulation of viscous fingering in porous media

    NASA Astrophysics Data System (ADS)

    Mostaghimi, P.; Jackson, M.; Pain, C.; Gorman, G.

    2014-12-01

    Viscous fingering is a major concern in many natural and engineered processes, such as water flooding of heavy-oil reservoirs. Common reservoir simulators employ low-order finite volume/difference methods on structured grids to resolve this phenomenon. However, their approach suffers from significant numerical dispersion error along the fingering patterns due to insufficient mesh resolution, and smears out some important features of the flow. We propose the use of an unstructured control volume finite element method for simulation of viscous fingering in porous media. Our approach is equipped with anisotropic mesh adaptivity, where the mesh resolution is optimized based on the evolving features of the flow. The adaptive algorithm uses a metric tensor field based on solution error estimates to locally control the size and shape of elements in the metric. We resolve the viscous fingering patterns accurately and reduce the numerical dispersion error significantly. The mesh optimization generates an unstructured coarse mesh in other regions of the computational domain, which significantly decreases the computational cost. The effect of grid resolution on the resolved fingers is thoroughly investigated. We analyze the computational cost of mesh adaptivity on unstructured meshes and compare it with that of common finite volume methods. The results of this study suggest that mesh adaptivity is an efficient and accurate approach for resolving complex behaviors and instabilities of flow in porous media, such as viscous fingering.

  16. Methods and evaluations of MRI content-adaptive finite element mesh generation for bioelectromagnetic problems.

    PubMed

    Lee, W H; Kim, T-S; Cho, M H; Ahn, Y B; Lee, S Y

    2006-12-01

    In studying bioelectromagnetic problems, finite element analysis (FEA) offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropic conductivity. For FEA, mesh generation is the first critical requirement and there exist many different approaches. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes (cMeshes), resulting in numerous nodes and elements in modelling the conducting domain, and thereby increasing computational load and demand. In this work, we present efficient content-adaptive mesh generation schemes for complex biological volumes of MR images. The presented methodology is fully automatic and generates FE meshes that are adaptive to the geometrical contents of MR images, allowing optimal representation of conducting domain for FEA. We have also evaluated the effect of cMeshes on FEA in three dimensions by comparing the forward solutions from various cMesh head models to the solutions from the reference FE head model in which fine and equidistant FEs constitute the model. The results show that there is a significant gain in computation time with minor loss in numerical accuracy. We believe that cMeshes should be useful in the FEA of bioelectromagnetic problems.

  17. Software abstractions and computational issues in parallel structure adaptive mesh methods for electronic structure calculations

    SciTech Connect

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1997-05-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where they are most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradient with FAC multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.

  18. Adaptive meshing technique applied to an orthopaedic finite element contact problem.

    PubMed

    Roarty, Colleen M; Grosland, Nicole M

    2004-01-01

    Finite element methods have been applied extensively and with much success in the analysis of orthopaedic implants. Recently, a growing interest has developed in the orthopaedic biomechanics community in how numerical models can be constructed for the optimal solution of problems in contact mechanics. New developments in this area are of paramount importance in the design of improved implants for orthopaedic surgery. Finite element and other computational techniques are widely applied in the analysis and design of hip and knee implants, with additional joints (ankle, shoulder, wrist) attracting increased attention. The objective of this investigation was to develop a simplified adaptive meshing scheme to facilitate the finite element analysis of a dual-curvature total wrist implant. Using currently available software, the analyst has great flexibility in mesh generation, but must prescribe element sizes and refinement schemes throughout the domain of interest. Unfortunately, it is often difficult to predict in advance a mesh spacing that will give acceptable results. Adaptive finite-element mesh capabilities operate to continuously refine the mesh to improve accuracy where it is required, with minimal intervention by the analyst. Such mesh adaptation generally means that in certain areas of the analysis domain the size of the elements is decreased (or increased) and/or the order of the elements may be increased (or decreased). In concept, mesh adaptation is very appealing. Although there have been several previous applications of adaptive meshing for in-house FE codes, we have coupled an adaptive mesh formulation with the pre-existing commercial programs PATRAN (MacNeal-Schwendler Corp., USA) and ABAQUS (Hibbitt, Karlsson & Sorensen, Pawtucket, RI). In doing so, we have retained several attributes of the commercial software which are very attractive for orthopaedic implant applications.

  19. Finite-volume goal-oriented mesh adaptation for aerodynamics using functional derivative with respect to nodal coordinates

    NASA Astrophysics Data System (ADS)

    Todarello, Giovanni; Vonck, Floris; Bourasseau, Sébastien; Peter, Jacques; Désidéri, Jean-Antoine

    2016-05-01

    A new goal-oriented mesh adaptation method for finite volume/finite difference schemes is extended from the structured mesh framework to a more suitable setting for adaptation of unstructured meshes. The method is based on the total derivative of the goal with respect to volume mesh nodes, which is computable after the solution of the goal discrete adjoint equation. The asymptotic behaviour of this derivative is assessed on regularly refined unstructured meshes. A local refinement criterion is derived from the requirement of limiting the first-order change in the goal that an admissible node displacement may cause. Mesh adaptations are then carried out for classical test cases of 2D Euler flows. Efficiency and local density of the adapted meshes are presented. They are compared with those obtained with a more classical mesh adaptation method in the framework of finite volume/finite difference schemes [46]. Results are very close although the present method only makes use of the current grid.

  20. Capabilities of wind tunnels with two-adaptive walls to minimize boundary interference in 3-D model testing

    NASA Technical Reports Server (NTRS)

    Rebstock, Rainer; Lee, Edwin E., Jr.

    1989-01-01

    An initial wind tunnel test was made to validate a new wall adaptation method for 3-D models in test sections with two adaptive walls. The first part of the adaptation strategy is an on-line assessment of wall interference at the model position. The wall-induced blockage was very small at all test conditions. Lift interference occurred at higher angles of attack with the walls set aerodynamically straight. The adaptation of the top and bottom tunnel walls is aimed at achieving a correctable flow condition. The blockage was virtually zero throughout the wing planform after the wall adjustment. The lift curve measured with the walls adapted agreed very well with interference-free data for Mach 0.7, regardless of the vertical position of the wing in the test section. The 2-D wall adaptation can significantly improve the correctability of 3-D model data. Nevertheless, residual spanwise variations of wall interference are inevitable.

  1. Development of a scalable gas-dynamics solver with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Korkut, Burak

    There are various areas of computational physics in which Direct Simulation Monte Carlo (DSMC) and Particle-in-Cell (PIC) methods are employed. The accuracy of results from such simulations depends on the fidelity of the physical models being used. The computationally demanding nature of these problems makes them ideal candidates for modern supercomputers. The software developed to run such simulations also needs special attention so that maintainability and extensibility are preserved as recent numerical methods and programming paradigms are adopted. Suited to gas-dynamics problems, a software package called SUGAR (Scalable Unstructured Gas dynamics with Adaptive mesh Refinement) has recently been developed and written in C++ and MPI. Physical and numerical models were added to this framework to simulate ion thruster plumes. SUGAR is used to model the charge-exchange (CEX) reactions occurring between the neutral and ion species as well as the induced electric field effect due to ions. Multiple adaptive mesh refinement (AMR) meshes were used in order to capture the different physical length scales present in the flow. A multiple-thruster configuration was run to extend the studies to cases with no axial or radial symmetry, which could only be modeled with a three-dimensional simulation capability. The combined plume structure showed interactions between individual thrusters, which the AMR capability captured in an automated way. Back flow of ions was found to occur when CEX and momentum-exchange (MEX) collisions are present and was strongly enhanced when the induced electric field is considered. The ion energy distributions in the back flow region were obtained, and it was found that the inclusion of electric field modeling is the most important factor in determining their shape. The plume back flow structure was also examined for a triple-thruster, 3-D geometry case, and it was found that the ion velocity in the back flow region appears to be

  2. Drag Prediction for the DLR-F6 Wing/Body and DPW Wing using CFL3D and OVERFLOW Overset Mesh

    NASA Technical Reports Server (NTRS)

    Sclanfani, Anthony J.; Vassberg, John C.; Harrison, Neal A.; DeHaan, Mark A.; Rumsey, Christopher L.; Rivers, S. Melissa; Morrison, Joseph H.

    2007-01-01

    A series of overset grids was generated in response to the 3rd AIAA CFD Drag Prediction Workshop (DPW-III), which preceded the 25th Applied Aerodynamics Conference in June 2006. DPW-III focused on accurate drag prediction for wing/body and wing-alone configurations. The grid series built for each configuration consists of a coarse, medium, fine, and extra-fine mesh. The medium mesh is first constructed using the current state of best practices for overset grid generation. The medium mesh is then coarsened and enhanced by applying a factor of 1.5 to each (I,J,K) dimension. The resulting set of parametrically equivalent grids increases in size by a factor of roughly 3.5 from one level to the next denser level. CFD simulations were performed on the overset grids using two different RANS flow solvers: CFL3D and OVERFLOW. The results were post-processed using Richardson extrapolation to approximate grid-converged values of lift, drag, pitching moment, and angle of attack at the design condition. This technique appears to work well if the solution does not contain large regions of separated flow (similar to that seen in the DLR-F6 results) and appropriate grid densities are selected. The extra-fine grid data helped to establish asymptotic grid convergence for both the OVERFLOW FX2B wing/body results and the OVERFLOW DPW-W1/W2 wing-alone results. More CFL3D data are needed to establish grid convergence trends. The medium grid was utilized beyond the grid convergence study by running each configuration at several angles of attack so drag polars and lift/pitching-moment curves could be evaluated. The alpha sweep results are used to compare data across configurations as well as across flow solvers. With the exception of the wing/body drag polar, the two codes compare well qualitatively, showing consistent incremental trends and similar wing pressure comparisons.

  3. Drag Prediction for the DLR-F4 Wing/Body using OVERFLOW and CFL3D on an Overset Mesh

    NASA Technical Reports Server (NTRS)

    Vassberg, John C.; Buning, Pieter G.; Rumsey, Christopher L.

    2002-01-01

    This paper reviews the importance of numerical drag prediction in an aircraft design environment. A chronicle of collaborations between the authors and colleagues is discussed. This retrospective provides a road-map which illustrates some of the actions taken in the past seven years in pursuit of accurate drag prediction. The advances made possible through these collaborations have changed the manner in which business is conducted during the design of all-new aircraft. The subject of this study is the DLR-F4 wing/body transonic model. Specifically, the work conducted herein was in support of the 1st CFD Drag Prediction Workshop, which was held in conjunction with the 19th Applied Aerodynamics Conference in Anaheim, CA during June, 2001. Comprehensive sets of OVERFLOW simulations were independently performed by several users on a variety of computational platforms. CFL3D was used on a limited basis for additional comparison on the same overset mesh. Drag polars based on this database were constructed with a CFD-to-Test correction applied and compared with test data from three facilities. These comparisons show that the predicted drag polars fall inside the scatter band of the test data, at least for pre-buffet conditions. This places the corrected drag levels within 1% of the averaged experimental values. At the design point, the OVERFLOW and CFL3D drag predictions are within 1-2% of each other. In addition, drag-rise characteristics and a boundary of drag-divergence Mach number are presented.

  4. Star formation with adaptive mesh refinement and magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Collins, David C.

    2009-01-01

    In this thesis, we develop an adaptive mesh refinement (AMR) code including magnetic fields and use it to perform high-resolution simulations of magnetized molecular clouds. The purpose of these simulations is to study present-day star formation in the presence of turbulence and magnetic fields. We first present MHDEnzo, the extension of the cosmology and astrophysics code Enzo to include the effects of magnetic fields. We use a higher-order Godunov Riemann solver for the computation of interface fluxes; constrained transport to compute the electric field from those interface fluxes, which advances the induction equation in a divergence-free manner; a divergence-free reconstruction technique to interpolate the magnetic fields to fine grids; and operator splitting to include gravity and cosmological expansion. We present a series of test problems to demonstrate the quality of solution achieved. Additionally, we present several other solvers that were developed along the way. Finally, we present the results from several AMR simulations that study isothermal turbulence in the presence of magnetic fields and self-gravity. Ten simulations with initial Mach number 8.9 were studied, varying several parameters: the virial parameter α from 0.52 to 3.1; whether they were continuously stirred or allowed to decay; and the number of refinement levels (4 or 6). Measurements of the density probability density function (PDF) were made, showing both the expected log-normal distribution and an additional power law. Measurements of the line-of-sight magnetic field vs. column density were made, giving excellent agreement with recent observations. The line width vs. size relationship is measured and compared, with good agreement, to observations, reproducing both turbulent and collapse signatures. The core mass distribution is measured and agrees well with observations of Serpens and Perseus core samples, but the power-law distribution in Ophiuchus is not reproduced by our simulations. Finally we

  5. Novel multiresolution mammographic density segmentation using pseudo 3D features and adaptive cluster merging

    NASA Astrophysics Data System (ADS)

    He, Wenda; Juette, Arne; Denton, Erica R. E.; Zwiggelaar, Reyer

    2015-03-01

    Breast cancer is the most frequently diagnosed cancer in women. Early detection, precise identification of women at risk, and application of appropriate disease prevention measures are by far the most effective ways to overcome the disease. Successful mammographic density segmentation is a key aspect in deriving correct tissue composition and ensuring an accurate mammographic risk assessment. However, mammographic densities have not yet been fully incorporated into non-image-based risk prediction models (e.g. the Gail and the Tyrer-Cuzick models) because of unreliable segmentation consistency and accuracy. This paper presents a novel multiresolution mammographic density segmentation: a concept of stack representation is proposed, and 3D texture features are extracted by adapting techniques based on classic 2D first-order statistics. An unsupervised clustering technique is employed to achieve mammographic segmentation, in which two improvements are made: 1) consistent segmentation is obtained by incorporating an optimal centroid initialisation step, and 2) the number of missegmentations is significantly reduced by using an adaptive cluster merging technique. A set of full-field digital mammograms was used in the evaluation. Visual assessment indicated substantial improvement in segmented anatomical structures and tissue-specific areas, especially in low mammographic density categories. The developed method demonstrated an ability to improve the quality of mammographic segmentation via clustering, and the results indicated a 26% improvement in the number of segmented images of good quality when compared with the standard clustering approach. This in turn can be useful in early breast cancer detection, risk-stratified screening, and aiding radiologists in the process of decision making prior to surgery and/or treatment.

  6. Adaptive Kalman snake for semi-autonomous 3D vessel tracking.

    PubMed

    Lee, Sang-Hoon; Lee, Sanghoon

    2015-10-01

    In this paper, we propose a robust semi-autonomous algorithm for 3D vessel segmentation and tracking based on an active contour model and a Kalman filter. For each computed tomography angiography (CTA) slice, we use the active contour model to segment the vessel boundary and the Kalman filter to track position and shape variations of the vessel boundary between slices. For successful segmentation via the active contour, we select an adequate number of initial points from the contour of the first slice. The points are set manually by user input for the first slice. For the remaining slices, the initial contour position is estimated autonomously based on the segmentation results of the previous slice. To obtain refined segmentation results, an adaptive control spacing algorithm is introduced into the active contour model. Moreover, a block-search-based initial contour estimation procedure is proposed to ensure that the initial contour of each slice is near the vessel boundary. Experiments were performed on synthetic and real chest CTA images. Compared with the well-known Chan-Vese (CV) model, the proposed algorithm exhibited better performance in segmentation and tracking. In particular, receiver operating characteristic analysis on the synthetic and real CTA images demonstrated the time efficiency and tracking robustness of the proposed model. In terms of computational time, the proposed model reduced processing time by approximately 20%.
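
    The slice-to-slice tracking relies on the classical Kalman predict/update cycle; the toy sketch below tracks a scalar boundary coordinate with a constant-velocity model. The matrices, noise levels, and 1D state are illustrative assumptions, not the parameterization used in the paper.

```python
# Kalman predict/update cycle for tracking a boundary coordinate across slices.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    x_pred = F @ x                      # predict state to the next slice
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R            # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S) # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Constant-velocity model: state = [position, velocity], measure position only.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 1e-3 * np.eye(2), np.array([[0.25]])
x, P = np.zeros(2), np.eye(2)
for z in [0.9, 2.1, 2.9, 4.2]:          # noisy per-slice measurements
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```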

  7. Cell type-specific adaptation of cellular and nuclear volume in micro-engineered 3D environments.

    PubMed

    Greiner, Alexandra M; Klein, Franziska; Gudzenko, Tetyana; Richter, Benjamin; Striebel, Thomas; Wundari, Bayu G; Autenrieth, Tatjana J; Wegener, Martin; Franz, Clemens M; Bastmeyer, Martin

    2015-11-01

    Bio-functionalized three-dimensional (3D) structures fabricated by direct laser writing (DLW) are structurally and mechanically well-defined and ideal for systematically investigating the influence of three-dimensionality and substrate stiffness on cell behavior. Here, we show that different fibroblast-like and epithelial cell lines maintain normal proliferation rates and form functional cell-matrix contacts in DLW-fabricated 3D scaffolds of different mechanics and geometry. Furthermore, the molecular composition of cell-matrix contacts forming in these 3D micro-environments and under conventional 2D culture conditions is identical, based on the analysis of several marker proteins (paxillin, phospho-paxillin, phospho-focal adhesion kinase, vinculin, β1-integrin). However, fibroblast-like and epithelial cells differ markedly in the way they adapt their total cell and nuclear volumes in 3D environments. While fibroblast-like cell lines display significantly increased cell and nuclear volumes in 3D substrates compared to 2D substrates, epithelial cells retain similar cell and nuclear volumes in 2D and 3D environments. Despite differential cell volume regulation between fibroblasts and epithelial cells in 3D environments, the nucleus-to-cell (N/C) volume ratios remain constant for all cell types and culture conditions. Thus, changes in cell and nuclear volume during the transition from 2D to 3D environments are strongly cell type-dependent, but independent of scaffold stiffness, while cells maintain the N/C ratio regardless of culture conditions.

  8. Multiphase flow modelling of explosive volcanic eruptions using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Jacobs, Christian T.; Collins, Gareth S.; Piggott, Matthew D.; Kramer, Stephan C.

    2014-05-01

    Explosive volcanic eruptions generate highly energetic plumes of hot gas and ash particles that produce diagnostic deposits and pose an extreme environmental hazard. The formation, dispersion and collapse of these volcanic plumes are complex multiscale processes that are extremely challenging to simulate numerically. Accurate description of particle and droplet aggregation, movement and settling requires a model capable of capturing the dynamics on a range of scales (from cm to km) and a model that can correctly describe the important multiphase interactions that take place. However, even the most advanced models of eruption dynamics to date are restricted by the fixed mesh-based approaches that they employ. The research presented herein describes the development of a compressible multiphase flow model within Fluidity, a combined finite element / control volume computational fluid dynamics (CFD) code, for the study of explosive volcanic eruptions. Fluidity adopts a state-of-the-art adaptive unstructured mesh-based approach to discretise the domain and focus numerical resolution only in areas important to the dynamics, while decreasing resolution where it is not needed as a simulation progresses. This allows the accurate but economical representation of the flow dynamics throughout time, and potentially allows large multi-scale problems to become tractable in complex 3D domains. The multiphase flow model is verified with the method of manufactured solutions, and validated by simulating published gas-solid shock tube experiments and comparing the numerical results against pressure gauge data. The application of the model considers an idealised 7 km by 7 km domain in which the violent eruption of hot gas and volcanic ash high into the atmosphere is simulated. Although the simulations do not correspond to a particular eruption case study, the key flow features observed in a typical explosive eruption event are successfully captured. These include a shock wave resulting

  9. Standard and goal-oriented adaptive mesh refinement applied to radiation transport on 2D unstructured triangular meshes

    SciTech Connect

    Yaqi Wang; Jean C. Ragusa

    2011-02-01

    Standard and goal-oriented adaptive mesh refinement (AMR) techniques are presented for the linear Boltzmann transport equation. A posteriori error estimates are employed to drive the AMR process and are based on angular-moment information rather than on directional information, leading to direction-independent adapted meshes. An error estimate based on a two-mesh approach and a jump-based error indicator are compared for various test problems. In addition to the standard AMR approach, where the global error in the solution is diminished, a goal-oriented AMR procedure is devised and aims at reducing the error in user-specified quantities of interest. The quantities of interest are functionals of the solution and may include, for instance, point-wise flux values or average reaction rates in a subdomain. A high-order (up to order 4) Discontinuous Galerkin technique with standard upwinding is employed for the spatial discretization; the discrete ordinates method is used to treat the angular variable.
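
    A jump-based indicator of the kind compared above can be sketched very simply in 1D: the jump of a cell-wise angular moment across each interior face is attributed to the two adjacent cells, and a fixed fraction of the worst cells is flagged. This is a schematic illustration, not the estimator used in the paper.

```python
# 1D jump-based error indicator: flag the cells adjacent to the largest jumps.
import numpy as np

def jump_flags(scalar_flux, refine_fraction=0.2):
    jumps = np.abs(np.diff(scalar_flux))             # jump at each interior face
    indicator = np.zeros_like(scalar_flux)
    indicator[:-1] = np.maximum(indicator[:-1], jumps)
    indicator[1:] = np.maximum(indicator[1:], jumps)
    threshold = np.quantile(indicator, 1.0 - refine_fraction)
    return indicator >= threshold                    # True where refinement is requested
```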

  10. Automatic Mesh Adaptivity for Hybrid Monte Carlo/Deterministic Neutronics Modeling of Fusion Energy Systems

    SciTech Connect

    Ibrahim, Ahmad M; Wilson, P.; Sawan, M.; Mosher, Scott W; Peplow, Douglas E.; Grove, Robert E

    2013-01-01

    Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class super computer.

  11. Adaptive moving mesh methods for simulating one-dimensional groundwater problems with sharp moving fronts

    USGS Publications Warehouse

    Huang, W.; Zheng, Lingyun; Zhan, X.

    2002-01-01

    Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost, when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
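
    The core idea of relocating mesh points toward a sharp front can be illustrated by a single 1D equidistribution step: nodes are moved so that every cell carries the same integral of a monitor function that is large where the solution varies rapidly. This sketch illustrates the principle only and is not the moving mesh PDE formulation of the paper.

```python
# One equidistribution step for a 1D moving mesh driven by a monitor function.
import numpy as np

def equidistribute(x, monitor):
    """x: sorted node positions; monitor: positive monitor values at the nodes."""
    m_mid = 0.5 * (monitor[:-1] + monitor[1:])         # monitor on each cell
    cumulative = np.concatenate([[0.0], np.cumsum(m_mid * np.diff(x))])
    target = np.linspace(0.0, cumulative[-1], len(x))  # equal monitor "mass" per cell
    return np.interp(target, cumulative, x)            # invert the cumulative map

# Example: concentrate nodes near a steep front at x = 0.5.
x = np.linspace(0.0, 1.0, 41)
u = np.tanh(50.0 * (x - 0.5))
monitor = np.sqrt(1.0 + np.gradient(u, x) ** 2)        # arclength-type monitor
x_new = equidistribute(x, monitor)
```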

  12. An Arbitrary Lagrangian-Eulerian Method with Local Adaptive Mesh Refinement for Modeling Compressible Flow

    NASA Astrophysics Data System (ADS)

    Anderson, Robert; Pember, Richard; Elliott, Noah

    2001-11-01

    We present a method, ALE-AMR, for modeling unsteady compressible flow that combines a staggered grid arbitrary Lagrangian-Eulerian (ALE) scheme with structured local adaptive mesh refinement (AMR). The ALE method is a three step scheme on a staggered grid of quadrilateral cells: Lagrangian advance, mesh relaxation, and remap. The AMR scheme uses a mesh hierarchy that is dynamic in time and is composed of nested structured grids of varying resolution. The integration algorithm on the hierarchy is a recursive procedure in which the coarse grids are advanced a single time step, the fine grids are advanced to the same time, and the coarse and fine grid solutions are synchronized. The novel details of ALE-AMR are primarily motivated by the need to reconcile and extend AMR techniques typically employed for stationary rectangular meshes with cell-centered quantities to the moving quadrilateral meshes with staggered quantities used in the ALE scheme. Solutions of several test problems are discussed.

  13. Higher-order schemes with CIP method and adaptive Soroban grid towards mesh-free scheme

    NASA Astrophysics Data System (ADS)

    Yabe, Takashi; Mizoe, Hiroki; Takizawa, Kenji; Moriki, Hiroshi; Im, Hyo-Nam; Ogata, Youichi

    2004-02-01

    A new class of body-fitted grid system that can maintain third-order accuracy in time and space is proposed with the help of the CIP (constrained interpolation profile/cubic interpolated propagation) method. The grid system consists of straight lines and grid points moving along these lines like an abacus - Soroban in Japanese. The length of each line and the number of grid points on each line can be different. The CIP scheme is well suited to this mesh system, and calculations at large CFL numbers (>10) on locally refined meshes are easily performed. Mesh generation and the search for the upstream departure point are very simple, and almost mesh-free treatment is possible. Adaptive grid movement and local mesh refinement are demonstrated.
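
    The essence of the CIP update is that both the advected value and its spatial derivative are carried and advanced by evaluating a cubic Hermite profile on the upwind cell at the departure point. The 1D sketch below assumes constant velocity u > 0, periodic boundaries, and CFL ≤ 1; the large-CFL, Soroban-grid machinery described above is not reproduced.

```python
# 1D cubic CIP advection step: advance f and its derivative g along x with speed u > 0.
import numpy as np

def cip_advect(f, g, u, dt, dx):
    fup, gup = np.roll(f, 1), np.roll(g, 1)   # upwind neighbour values (periodic)
    D = -dx                                   # signed distance to the upwind node
    xi = -u * dt                              # departure-point offset from each node
    a = (g + gup) / D**2 + 2.0 * (f - fup) / D**3
    b = 3.0 * (fup - f) / D**2 - (2.0 * g + gup) / D
    f_new = ((a * xi + b) * xi + g) * xi + f  # cubic profile evaluated at xi
    g_new = (3.0 * a * xi + 2.0 * b) * xi + g # its derivative at xi
    return f_new, g_new
```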

  14. Locally adaptive 2D-3D registration using vascular structure model for liver catheterization.

    PubMed

    Kim, Jihye; Lee, Jeongjin; Chung, Jin Wook; Shin, Yeong-Gil

    2016-03-01

    Two-dimensional-three-dimensional (2D-3D) registration between intra-operative 2D digital subtraction angiography (DSA) and pre-operative 3D computed tomography angiography (CTA) can be used for roadmapping purposes. However, through the projection of 3D vessels, incorrect intersections and overlaps between vessels are produced because of the complex vascular structure, which makes it difficult to obtain the correct solution of 2D-3D registration. To overcome these problems, we propose a registration method that selects a suitable part of a 3D vascular structure for a given DSA image and finds the optimized solution to the partial 3D structure. The proposed algorithm can reduce the registration errors because it restricts the range of the 3D vascular structure for the registration by using only the relevant 3D vessels with the given DSA. To search for the appropriate 3D partial structure, we first construct a tree model of the 3D vascular structure and divide it into several subtrees in accordance with the connectivity. Then, the best matched subtree with the given DSA image is selected using the results from the coarse registration between each subtree and the vessels in the DSA image. Finally, a fine registration is conducted to minimize the difference between the selected subtree and the vessels of the DSA image. In experimental results obtained using 10 clinical datasets, the average distance errors in the case of the proposed method were 2.34±1.94mm. The proposed algorithm converges faster and produces more correct results than the conventional method in evaluations on patient datasets.

  16. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    NASA Astrophysics Data System (ADS)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

    To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted more attention from both the industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is more desirable because it balances labor cost against 3D effect. The location of the key-frames plays a role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection that keeps temporal continuity more reliable and reduces the depth propagation errors caused by occlusion. Potential key-frames are localized in terms of clustered color variation and motion intensity, and the key-frame interval is also taken into account to keep the accumulated propagation errors under control and guarantee minimal user interaction. Once the key-frame depth maps are obtained with user interaction, the depth maps of the non-key-frames are propagated automatically by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom, a bi-directional depth propagation scheme is adopted in which a non-key-frame is interpolated from its two adjacent key-frames. The experimental results show that the proposed scheme outperforms existing 2D-to-3D schemes with a fixed key-frame interval.
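
    The key-frame selection step lends itself to a compact illustration. The sketch below is an assumption-laden simplification, not the authors' algorithm: it combines per-frame color variation and motion intensity into a single change score and starts a new key-frame when the accumulated change exceeds a threshold, while keeping the key-frame interval within a minimum and maximum gap. The weights and thresholds are placeholders, and the depth propagation itself is not reproduced.

```python
import numpy as np

def select_key_frames(color_var, motion, w_color=0.5, w_motion=0.5,
                      change_thresh=1.0, min_gap=5, max_gap=30):
    """Greedy key-frame selection: start a new key-frame when the accumulated
    frame-to-frame change exceeds a threshold, while keeping the interval
    between key-frames within [min_gap, max_gap]. All weights/thresholds are
    illustrative placeholders."""
    change = w_color * np.asarray(color_var) + w_motion * np.asarray(motion)
    keys = [0]
    acc = 0.0
    for i in range(1, len(change)):
        acc += change[i]
        gap = i - keys[-1]
        if (acc >= change_thresh and gap >= min_gap) or gap >= max_gap:
            keys.append(i)
            acc = 0.0
    return keys
```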

  17. An Adaptive Mesh Refinement Strategy for Immersed Boundary/Interface Methods.

    PubMed

    Li, Zhilin; Song, Peng

    2012-01-01

    An adaptive mesh refinement strategy is proposed for the Immersed Boundary and Immersed Interface methods for two-dimensional elliptic interface problems involving singular sources. The interface is represented by the zero level set of a Lipschitz function φ(x,y). The adaptive mesh refinement is carried out within a small tube |φ(x,y)| ≤ δ that is covered with finer Cartesian meshes. The discrete linear system of equations is solved by a multigrid solver. By distributing the mesh more economically, the AMR method obtains solutions whose accuracy is comparable to that of a uniform fine grid while reducing the size of the linear system of equations. The numerical examples presented show the efficiency of the grid refinement strategy.
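
    The narrow-band refinement criterion is simple enough to state in a few lines. Below is a hedged NumPy sketch of flagging cells inside the tube |φ| ≤ δ around the interface; the level-set function, the grid, and δ are made-up example values, and the actual patch generation and multigrid solve are not shown.

```python
import numpy as np

def tube_refine_flags(phi, delta):
    """Flag cells for refinement inside the narrow band |phi| <= delta
    around the interface (the zero level set of phi)."""
    return np.abs(phi) <= delta

# Example: a circular interface phi = sqrt(x^2 + y^2) - 0.5 on a coarse grid.
x, y = np.meshgrid(np.linspace(-1, 1, 65), np.linspace(-1, 1, 65))
phi = np.hypot(x, y) - 0.5
flags = tube_refine_flags(phi, delta=0.1)   # cells to cover with finer Cartesian patches
```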

  18. An Adaptive Mesh Refinement Strategy for Immersed Boundary/Interface Methods

    PubMed Central

    Li, Zhilin; Song, Peng

    2012-01-01

    An adaptive mesh refinement strategy is proposed for the Immersed Boundary and Immersed Interface methods for two-dimensional elliptic interface problems involving singular sources. The interface is represented by the zero level set of a Lipschitz function φ(x,y). The adaptive mesh refinement is carried out within a small tube |φ(x,y)| ≤ δ that is covered with finer Cartesian meshes. The discrete linear system of equations is solved by a multigrid solver. By distributing the mesh more economically, the AMR method obtains solutions whose accuracy is comparable to that of a uniform fine grid while reducing the size of the linear system of equations. The numerical examples presented show the efficiency of the grid refinement strategy. PMID:22670155

  19. Design of computer-generated beam-shaping holograms by iterative finite-element mesh adaption.

    PubMed

    Dresel, T; Beyerlein, M; Schwider, J

    1996-12-10

    Computer-generated phase-only holograms can be used for laser beam shaping, i.e., for focusing a given aperture with intensity and phase distributions into a pregiven intensity pattern in their focal planes. A numerical approach based on iterative finite-element mesh adaption permits the design of appropriate phase functions for the task of focusing into two-dimensional reconstruction patterns. Both the hologram aperture and the reconstruction pattern are covered by mesh mappings. An iterative procedure delivers meshes with intensities equally distributed over the constituting elements. This design algorithm adds new elementary focuser functions to what we call object-oriented hologram design. Some design examples are discussed.
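
    The idea of a mesh whose elements each carry equal integrated intensity has a direct one-dimensional analogue: place the nodes at equally spaced quantiles of the cumulative intensity. The sketch below illustrates that inverse-CDF construction under simple assumptions; it is not the iterative 2D mesh adaption used for the holograms themselves.

```python
import numpy as np

def equal_intensity_mesh(x, intensity, n_elements):
    """Place mesh nodes so every element carries the same integrated intensity:
    nodes are the equally spaced quantiles of the cumulative intensity."""
    x = np.asarray(x, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    # Trapezoidal cumulative intensity, normalised to 1.
    cdf = np.concatenate(([0.0], np.cumsum(0.5 * (intensity[1:] + intensity[:-1]) * np.diff(x))))
    cdf /= cdf[-1]
    targets = np.linspace(0.0, 1.0, n_elements + 1)
    return np.interp(targets, cdf, x)   # invert the CDF by interpolation

# Example: a Gaussian intensity profile; elements cluster where the beam is bright.
x = np.linspace(-3, 3, 1001)
nodes = equal_intensity_mesh(x, np.exp(-x**2), n_elements=16)
```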

  20. ENZO+MORAY: radiation hydrodynamics adaptive mesh refinement simulations with adaptive ray tracing

    NASA Astrophysics Data System (ADS)

    Wise, John H.; Abel, Tom

    2011-07-01

    We describe a photon-conserving radiative transfer algorithm, using a spatially-adaptive ray-tracing scheme, and its parallel implementation into the adaptive mesh refinement cosmological hydrodynamics code ENZO. By coupling the solver with the energy equation and non-equilibrium chemistry network, our radiation hydrodynamics framework can be utilized to study a broad range of astrophysical problems, such as stellar and black hole feedback. Inaccuracies can arise from large time-steps and poor sampling; therefore, we devised an adaptive time-stepping scheme and a fast approximation of the optically-thin radiation field with multiple sources. We test the method with several radiative transfer and radiation hydrodynamics tests that are given in Iliev et al. We further test our method with more dynamical situations, for example, the propagation of an ionization front through a Rayleigh-Taylor instability, time-varying luminosities and collimated radiation. The test suite also includes an expanding H II region in a magnetized medium, utilizing the newly implemented magnetohydrodynamics module in ENZO. This method linearly scales with the number of point sources and number of grid cells. Our implementation is scalable to 512 processors on distributed memory machines and can include the radiation pressure and secondary ionizations from X-ray radiation. It is included in the newest public release of ENZO.

  1. High hardness B4C-(BxOy/BN) composites with 3D mesh-like fine grain-boundary structure by reactive spark plasma sintering.

    PubMed

    Vasylkiv, Oleg; Borodianska, Hanna; Badica, Petre; Grasso, Salvatore; Sakka, Yoshio; Tok, Alfred; Su, Liap Tat; Bosman, Michael; Ma, Jan

    2012-02-01

    Boron carbide (B4C) powders were subjected to reactive spark plasma sintering (also known as field-assisted sintering, pulsed current sintering, or plasma-assisted sintering) under a nitrogen atmosphere. For an optimum hexagonal BN (h-BN) content of approximately 0.4 wt%, estimated from X-ray diffraction measurements, the as-prepared B4C-(BxOy/BN) ceramic shows Berkovich and Vickers hardness values of 56.7 +/- 3.1 GPa and 39.3 +/- 7.6 GPa, respectively. These values are higher than for the pristine B4C sample processed by vacuum SPS and for the samples with mechanically added h-BN. XRD and electron microscopy data suggest that, in the samples produced by reactive SPS in an N2 atmosphere and containing an estimated 0.3-1.5% h-BN, the crystallite size of the boron carbide grains decreases with increasing amount of N2, while the crystallite size of the newly formed lamellar h-BN is almost constant (approximately 30-50 nm). The BN is located at the grain boundaries between the boron carbide grains and is wrapped and intercalated by a thin layer of boron oxide. The BxOy/BN forms a fine and continuous 3D mesh-like structure, which is a possible reason for the good mechanical properties.

  2. Automatic off-body overset adaptive Cartesian mesh method based on an octree approach

    SciTech Connect

    Peron, Stephanie; Benoit, Christophe

    2013-01-01

    This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, assuming that each octree leaf node defines a structured Cartesian block. This makes it possible to account for the large discrepancies in resolution between the different bodies involved in the simulation, with minimal memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first generates Adaptive Mesh Refinement (AMR) type grid systems, and the second generates abutting or minimally overlapping Cartesian grid sets. We also introduce an algorithm to control the number of points at each adaptation, which automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing-tip vortex computation assesses the capability of the method to capture the flow features accurately.
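
    A hedged sketch of the octree-leaves-as-blocks idea, reduced to a 2D quadtree for brevity: leaves are split recursively wherever a user-supplied predicate asks for refinement, and every remaining leaf would define one structured Cartesian block at its own resolution. The predicate and resolutions here are illustrative, not the paper's near-body/off-body criteria.

```python
from dataclasses import dataclass

@dataclass
class Node:
    x0: float
    y0: float
    size: float
    level: int
    children: list = None

def refine(node, needs_refinement, max_level):
    """Recursively split quadtree nodes; every remaining leaf would define one
    structured Cartesian block at that leaf's resolution."""
    if node.level < max_level and needs_refinement(node):
        h = node.size / 2
        node.children = [Node(node.x0 + i * h, node.y0 + j * h, h, node.level + 1)
                         for j in (0, 1) for i in (0, 1)]
        for c in node.children:
            refine(c, needs_refinement, max_level)

def leaves(node):
    if not node.children:
        yield node
    else:
        for c in node.children:
            yield from leaves(c)

# Example: refine near a "body" at the origin; each leaf becomes one Cartesian block.
near_body = lambda n: (n.x0 + n.size / 2) ** 2 + (n.y0 + n.size / 2) ** 2 < (2 * n.size) ** 2
root = Node(-1.0, -1.0, 2.0, 0)
refine(root, near_body, max_level=4)
blocks = [(leaf.x0, leaf.y0, leaf.size, leaf.level) for leaf in leaves(root)]
```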

  3. An adaptive mesh finite volume method for the Euler equations of gas dynamics

    NASA Astrophysics Data System (ADS)

    Mungkasi, Sudi

    2016-06-01

    The Euler equations have been used to model gas dynamics for decades. They consist of mathematical equations for the conservation of mass, momentum, and energy of the gas. For large times, the solution may contain discontinuities even when the initial condition is smooth. A standard finite volume method is not able to give accurate solutions to the Euler equations around discontinuities. Therefore we solve the Euler equations using an adaptive mesh finite volume method. In this paper, we present a new construction of the adaptive mesh finite volume method with an efficient computation of the refinement indicator. The adaptive method acts automatically around places where the solution is inaccurate: there the mesh is refined locally, up to a certain level, to reduce the error. On the other hand, if the solution is already accurate, the mesh is coarsened, down to another level, to minimize computational effort. We implement the numerical entropy production as the mesh refinement indicator. As a test problem, we take the Sod shock tube problem. Numerical results show that the adaptive method is more promising than the standard one in solving the Euler equations of gas dynamics.
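
    The refine/coarsen logic driven by a per-cell indicator can be sketched as below. Note that the placeholder indicator is a simple density jump that merely peaks at discontinuities; it is not the numerical entropy production used in the paper, and the finite volume update itself is omitted.

```python
import numpy as np

def adapt_levels(indicator, levels, refine_tol, coarsen_tol, max_level):
    """Raise the refinement level where the indicator is large, lower it where
    the indicator is small. In the paper's method the indicator would be the
    numerical entropy production; here it is supplied by the caller."""
    levels = levels.copy()
    high = indicator > refine_tol
    low = indicator < coarsen_tol
    levels[high] = np.minimum(levels[high] + 1, max_level)
    levels[low] = np.maximum(levels[low] - 1, 0)
    return levels

# Placeholder indicator: jump in density between neighbouring cells (just something
# that peaks at the shock in a Sod-like initial condition).
rho = np.where(np.linspace(0.0, 1.0, 200) < 0.5, 1.0, 0.125)
indicator = np.abs(np.diff(rho, append=rho[-1]))
levels = adapt_levels(indicator, np.zeros_like(rho, dtype=int),
                      refine_tol=0.1, coarsen_tol=0.01, max_level=3)
```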

  4. The direct simulation Monte Carlo method using unstructured adaptive mesh and its application

    NASA Astrophysics Data System (ADS)

    Wu, J.-S.; Tseng, K.-C.; Kuo, C.-H.

    2002-02-01

    The implementation of an adaptive mesh-embedding (h-refinement) scheme using unstructured grids in the two-dimensional direct simulation Monte Carlo (DSMC) method is reported. In this technique, local isotropic refinement is used to introduce new mesh where the local cell Knudsen number is less than some preset value. This simple scheme, however, has several severe consequences affecting the performance of the DSMC method. We therefore apply a technique that removes hanging nodes by introducing anisotropic refinement in the interfacial cells between refined and non-refined cells. This remedy adds only a negligible amount of work and removes all the difficulties present in the original scheme. We have tested the proposed scheme for argon gas in a high-speed driven cavity flow. The results show an improved flow resolution compared with that of the unadapted mesh. Finally, we have used a triangular adaptive mesh to compute a near-continuum gas flow, a hypersonic flow over a cylinder. The results show fairly good agreement with previous studies. In summary, the proposed simple mesh adaptation is very useful for computing rarefied gas flows that involve both complicated geometry and highly non-uniform density variations throughout the flow field.
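
    The refinement criterion itself is a one-liner: a cell is flagged when its local cell Knudsen number (mean free path divided by cell size) drops below a preset value. A hedged NumPy sketch with made-up example values follows; the hanging-node treatment and the DSMC machinery are not shown.

```python
import numpy as np

def flag_for_refinement(mean_free_path, cell_size, kn_min=1.0):
    """Flag cells whose local cell Knudsen number (mean free path / cell size)
    falls below a preset value, i.e. cells that are too coarse for DSMC."""
    kn_cell = np.asarray(mean_free_path) / np.asarray(cell_size)
    return kn_cell < kn_min

# Example: uniform cell size, mean free path varying across the domain
# (a short mean free path corresponds to dense gas, where refinement is needed).
cell_size = np.full(100, 0.01)
mean_free_path = np.linspace(0.002, 0.05, 100)
refine = flag_for_refinement(mean_free_path, cell_size, kn_min=1.0)
```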

  5. Block-structured adaptive meshes and reduced grids for atmospheric general circulation models.

    PubMed

    Jablonowski, Christiane; Oehmke, Robert C; Stout, Quentin F

    2009-11-28

    Adaptive mesh refinement techniques offer a flexible framework for future variable-resolution climate and weather models since they can focus their computational mesh on certain geographical areas or atmospheric events. Adaptive meshes can also be used to coarsen a latitude-longitude grid in polar regions. This allows for the so-called reduced grid setups. A spherical, block-structured adaptive grid technique is applied to the Lin-Rood finite-volume dynamical core for weather and climate research. This hydrostatic dynamics package is based on a conservative and monotonic finite-volume discretization in flux form with vertically floating Lagrangian layers. The adaptive dynamical core is built upon a flexible latitude-longitude computational grid and tested in two- and three-dimensional model configurations. The discussion is focused on static mesh adaptations and reduced grids. The two-dimensional shallow water setup serves as an ideal testbed and allows the use of shallow water test cases like the advection of a cosine bell, moving vortices, a steady-state flow, the Rossby-Haurwitz wave or cross-polar flows. It is shown that reduced grid configurations are viable candidates for pure advection applications but should be used moderately in nonlinear simulations. In addition, static grid adaptations can be successfully used to resolve three-dimensional baroclinic waves in the storm-track region.

  6. Three dimensional hydrodynamic calculations with adaptive mesh refinement of the evolution of Rayleigh Taylor and Richtmyer Meshkov instabilities in converging geometry: Multi-mode perturbations

    SciTech Connect

    Klein, R.I. |; Bell, J.; Pember, R.; Kelleher, T.

    1993-04-01

    The authors present results for high resolution hydrodynamic calculations of the growth and development of instabilities in shock-driven imploding spherical geometries in both 2D and 3D. They solve the Eulerian equations of hydrodynamics with a high-order Godunov approach using local adaptive mesh refinement to study the temporal and spatial development of the turbulent mixing layer resulting from both Richtmyer-Meshkov and Rayleigh-Taylor instabilities. The use of a high-resolution Eulerian discretization with adaptive mesh refinement permits them to study the detailed three-dimensional growth of multi-mode perturbations far into the non-linear regime for converging geometries. They discuss convergence properties of the simulations by calculating global properties of the flow. They discuss the time evolution of the turbulent mixing layer and compare its development to a simple theory for a turbulent mix model in spherical geometry based on Plesset's equation. Their 3D calculations show that the constant found in the planar incompressible experiments of Read and Youngs may not be universal for converging compressible flow. They show the 3D time trace of the transitional onset to a mixing state using the temporal evolution of volume-rendered imaging. Their preliminary results suggest that the turbulent mixing layer loses memory of its initial perturbations for classical Richtmyer-Meshkov and Rayleigh-Taylor instabilities in spherically imploding shells. They discuss the time evolution of the mixed volume fraction and the role of vorticity in converging 3D flows in enhancing the growth of a turbulent mixing layer.

  7. Using Multi-threading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Saini, Subhash (Technical Monitor)

    1998-01-01

    In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system, which offers sufficient capabilities to tackle this problem. We implement the adaption phase of FE applications on triangular meshes and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.

  8. Adaptive unstructured meshing for thermal stress analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Dechaumphai, Pramote

    1992-01-01

    An adaptive unstructured meshing technique for mechanical and thermal stress analysis of built-up structures has been developed. A triangular membrane finite element and a new plate bending element are evaluated on a panel with a circular cutout and a frame stiffened panel. The adaptive unstructured meshing technique, without a priori knowledge of the solution to the problem, generates clustered elements only where needed. An improved solution accuracy is obtained at a reduced problem size and analysis computational time as compared to the results produced by the standard finite element procedure.

  9. Parallelization of Unsteady Adaptive Mesh Refinement for Unstructured Navier-Stokes Solvers

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2014-01-01

    This paper explores the implementation of MPI parallelization in a Navier-Stokes solver using adaptive mesh refinement. Viscous and inviscid test problems are considered for the purpose of benchmarking, as are implicit and explicit time advancement methods. The main test problem for comparison includes effects from boundary layers and other viscous features and requires a large number of grid points for accurate computation. Experimental validation against double-cone experiments in hypersonic flow is shown. The adaptive mesh refinement shows promise for a staple test problem in the hypersonic community. Extension to more advanced techniques for more complicated flows is described.

  10. Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions

    SciTech Connect

    Chen, Xiaodong; Yang, Vigor

    2014-07-15

    In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales. A notable example is liquid-jet atomization and subsequent droplet dynamics. It is essential to characterize the detailed flow physics with variable length scales with high fidelity, in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed: a distance-oriented criterion for thin regions bounded by a confining wall or plane of symmetry, and a topology-oriented criterion for general situations. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares the ratio of an interfacial cell size to the distance between the mass center of the cell and a reference plane against a critical value. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The requirement for interfacial mesh refinement can be detected swiftly, without the need for thickness information, equation solving, variable averaging, or mesh repairing. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness. These include the dynamics of colliding droplets, droplet motions in a microchannel, and atomization of liquid impinging jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.
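
    The distance-oriented criterion can be written down directly: flag an interfacial cell when the ratio of its size to the distance from its mass center to the reference plane exceeds a critical value. The sketch below is an illustrative NumPy version with an assumed critical ratio and example geometry, not the authors' implementation.

```python
import numpy as np

def distance_oriented_flags(cell_size, cell_center, plane_point, plane_normal,
                            critical_ratio=0.5):
    """Flag interfacial cells where (cell size / distance from the cell's mass
    centre to a reference plane) exceeds a critical value, marking thin
    regions near a wall or a plane of symmetry for refinement."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    dist = np.abs((np.asarray(cell_center, dtype=float) - np.asarray(plane_point, dtype=float)) @ n)
    return np.asarray(cell_size) / np.maximum(dist, 1e-12) > critical_ratio

# Example: interfacial cells approaching a wall at z = 0 (last cells get flagged).
centers = np.column_stack([np.zeros(5), np.zeros(5), np.array([0.4, 0.2, 0.1, 0.05, 0.02])])
flags = distance_oriented_flags(np.full(5, 0.05), centers,
                                plane_point=[0.0, 0.0, 0.0], plane_normal=[0.0, 0.0, 1.0])
```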

  11. Applications of automatic mesh generation and adaptive methods in computational medicine

    SciTech Connect

    Schmidt, J.A.; Macleod, R.S.; Johnson, C.R.; Eason, J.C.

    1995-12-31

    Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state of the art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications we present a general purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic, two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.

  12. Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature of its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, so local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the restricted solution from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures that grid-converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module are demonstrated using both simple model problems and complex three-dimensional examples using meshes with 10^6 to 10^7 cells.
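
    The tau-extrapolation estimate can be illustrated on a toy problem: restrict the fine-grid solution to the coarse grid and evaluate the coarse-grid residual there, which vanishes only up to the local truncation error. The sketch below uses a 1D Poisson operator as a stand-in for the Cartesian flow solver; the restriction by injection and the model problem are assumptions, not the module's actual operators.

```python
import numpy as np

def residual(u, f, h):
    # Residual r = f - A u for the standard 3-point discretisation of -u'' = f
    # (interior points only; boundary entries stay zero).
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def tau_estimate(u_fine, f_fine, h):
    """Tau-extrapolation-style estimate of the local truncation error: restrict
    the fine-grid solution to the coarse grid (here by simple injection of
    every other point) and evaluate the coarse-grid residual there."""
    u_coarse = u_fine[::2]
    f_coarse = f_fine[::2]
    return residual(u_coarse, f_coarse, 2.0 * h)

# Example: -u'' = f with u = sin(pi x); the analytic u stands in for a converged
# fine-grid solution, so the coarse residual reflects the coarse truncation error.
n = 129
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
u = np.sin(np.pi * x)
f = (np.pi ** 2) * np.sin(np.pi * x)
tau = tau_estimate(u, f, h)
```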

  13. Automatic mesh adaptivity for CADIS and FW-CADIS neutronics modeling of difficult shielding problems

    SciTech Connect

    Ibrahim, A. M.; Peplow, D. E.; Mosher, S. W.; Wagner, J. C.; Evans, T. M.; Wilson, P. P.; Sawan, M. E.

    2013-07-01

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macro-material approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm de-couples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, obviating the need for a world-class super computer. (authors)

  14. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.

  15. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    SciTech Connect

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.

  16. A fast, robust, and simple implicit method for adaptive time-stepping on adaptive mesh-refinement grids

    NASA Astrophysics Data System (ADS)

    Commerçon, B.; Debout, V.; Teyssier, R.

    2014-03-01

    Context. Implicit solvers present strong limitations when used on supercomputing facilities and in particular for adaptive mesh-refinement codes. Aims: We present a new method for implicit adaptive time-stepping on adaptive mesh-refinement grids. We implement it in the radiation-hydrodynamics solver we designed for the RAMSES code for astrophysical purposes and, more particularly, for protostellar collapse. Methods: We briefly recall the radiation-hydrodynamics equations and the adaptive time-stepping methodology used for hydrodynamical solvers. We then introduce the different types of boundary conditions (Dirichlet, Neumann, and Robin) that are used at the interface between levels and present our implementation of the new method in the RAMSES code. The method is tested against classical diffusion and radiation-hydrodynamics tests, after which we present an application for protostellar collapse. Results: We show that using Dirichlet boundary conditions at level interfaces is a good compromise between robustness and accuracy and that it can be used in structure formation calculations. The gain in computational time over our former unique time step method ranges from factors of 5 to 50 depending on the level of adaptive time-stepping and on the problem. We successfully compare the old and new methods for protostellar collapse calculations that involve highly nonlinear physics. Conclusions: We have developed a simple but robust method for adaptive time-stepping of implicit schemes on adaptive mesh-refinement grids. It can be applied to a wide variety of physical problems that involve diffusion processes.

  17. Adaptive mesh refinement techniques for the immersed interface method applied to flow problems.

    PubMed

    Li, Zhilin; Song, Peng

    2013-06-01

    In this paper, we develop an adaptive mesh refinement strategy for the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in the paper [12] (CiCP, 12(2012), 515-527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band of |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by the surface tension, and classical bubble deformation problems. A new simple area preserving strategy is also proposed in this paper for the level set method.

  18. Adaptive mesh refinement techniques for the immersed interface method applied to flow problems

    PubMed Central

    Li, Zhilin; Song, Peng

    2013-01-01

    In this paper, we develop an adaptive mesh refinement strategy for the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in the paper [12] (CiCP, 12(2012), 515–527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band of |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by the surface tension, and classical bubble deformation problems. A new simple area preserving strategy is also proposed in this paper for the level set method. PMID:23794763

  19. Adaptive mesh refinement and multilevel iteration for multiphase, multicomponent flow in porous media

    SciTech Connect

    Hornung, R.D.

    1996-12-31

    An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.

  20. Failure of Anisotropic Unstructured Mesh Adaption Based on Multidimensional Residual Minimization

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Kleb, William L.

    2003-01-01

    An automated anisotropic unstructured mesh adaptation strategy is proposed, implemented, and assessed for the discretization of viscous flows. The adaption criterion is based upon the minimization of the residual fluctuations of a multidimensional upwind viscous flow solver. For scalar advection, this adaption strategy has been shown to use fewer grid points than gradient-based adaption, naturally aligning mesh edges with discontinuities and characteristic lines. The adaption utilizes a compact stencil and is local in scope, with four fundamental operations: point insertion, point deletion, edge swapping, and nodal displacement. Evaluation of the solution-adaptive strategy is performed for a two-dimensional blunt-body laminar wind tunnel case at Mach 10. The results demonstrate that the strategy suffers from a lack of robustness, particularly with regard to alignment of the bow shock in the vicinity of the stagnation streamline. In general, constraining the adaption to such a degree as to maintain robustness results in negligible improvement to the solution. Because the present method fails to consistently or significantly improve the flow solution, it is rejected in favor of simple uniform mesh refinement.

  1. Adaptive optofluidic lens(es) for switchable 2D and 3D imaging

    NASA Astrophysics Data System (ADS)

    Huang, Hanyang; Wei, Kang; Zhao, Yi

    2016-03-01

    Stereoscopic images are often captured using dual cameras arranged side-by-side and optical path switching systems such as two separate solid lenses or biprisms/mirrors. Miniaturizing the overall size of current stereoscopic devices down to several millimeters comes at a cost, since the limited light entry worsens the final image resolution and brightness. It is known that optofluidics offer good re-configurability for imaging systems. Leveraging this technique, we report a reconfigurable optofluidic system whose optical layout can be swapped between a singlet lens of 10 mm in diameter and a pair of binocular lenses, each 3 mm in diameter, for switchable two-dimensional (2D) and three-dimensional (3D) imaging. The singlet and the binoculars share the same optical path and the same imaging sensor. The singlet acquires a 2D image with better resolution and brightness, while the binoculars capture stereoscopic image pairs for 3D vision and depth perception. The focusing power tuning capability of the singlet and the binoculars enables image acquisition at varied object planes by adjusting the hydrostatic pressure across the lens membrane. The vari-focal singlet and binoculars thus work interchangeably and complementarily. The device is expected to have applications in robotic vision, stereoscopy, laparoendoscopy, and miniaturized zoom lens systems.

  2. Towards a large-scale scalable adaptive heart model using shallow tree meshes

    NASA Astrophysics Data System (ADS)

    Krause, Dorian; Dickopf, Thomas; Potse, Mark; Krause, Rolf

    2015-10-01

    Electrophysiological heart models are sophisticated computational tools that place high demands on the computing hardware due to the high spatial resolution required to capture the steep depolarization front. To address this challenge, we present a novel adaptive scheme for resolving the depolarization front accurately using adaptivity in space. Our adaptive scheme is based on locally structured meshes. These tensor meshes in space are organized in a parallel forest of trees, which allows us to resolve complicated geometries and to realize high variations in the local mesh sizes with a minimal memory footprint in the adaptive scheme. We discuss both a non-conforming mortar element approximation and a conforming finite element space and present an efficient technique for the assembly of the respective stiffness matrices using matrix representations of the inclusion operators into the product space on the so-called shallow tree meshes. We analyze the parallel performance and scalability for a two-dimensional ventricle slice as well as for a full large-scale heart model. Our results demonstrate that the method has good performance and high accuracy.

  3. Laser ray tracing in a parallel arbitrary Lagrangian-Eulerian adaptive mesh refinement hydrocode

    NASA Astrophysics Data System (ADS)

    Masters, N. D.; Kaiser, T. B.; Anderson, R. W.; Eder, D. C.; Fisher, A. C.; Koniges, A. E.

    2010-08-01

    ALE-AMR is a new hydrocode that we are developing as a predictive modeling tool for debris and shrapnel formation in high-energy laser experiments. In this paper we present our approach to implementing laser ray tracing in ALE-AMR. We present the basic concepts of laser ray tracing and our approach to efficiently traverse the adaptive mesh hierarchy.

  4. An Immersed Boundary - Adaptive Mesh Refinement solver (IB-AMR) for high fidelity fully resolved wind turbine simulations

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2015-11-01

    The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations of flow past wind farms located over complex terrain. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes, and the fractional step method has been employed. The overall performance and robustness of the second-order accurate, parallel, unstructured solver is evaluated by comparing the numerical simulations against conforming-grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field, including the geometry of the tower, the nacelle, and especially the rotor blades of a wind-tunnel-scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.

  5. Adaptive Mesh Refinement for High Accuracy Wall Loss Determination in Accelerating Cavity Design

    SciTech Connect

    Ge, L

    2004-06-14

    This paper presents the improvement in wall loss determination when adaptive mesh refinement (AMR) methods are used with the parallel finite element eigensolver Omega3P. We show that significant reduction in the number of degrees of freedom (DOFs) as well as a faster rate of convergence can be achieved as compared with results from uniform mesh refinement in determining cavity wall loss to a desired accuracy. Test cases for which measurements are available will be examined, and comparison with uniform refinement results will be discussed.

  6. Laser Ray Tracing in a Parallel Arbitrary Lagrangian-Eulerian Adaptive Mesh Refinement Hydrocode

    SciTech Connect

    Masters, N D; Kaiser, T B; Anderson, R W; Eder, D C; Fisher, A C; Koniges, A E

    2009-09-28

    ALE-AMR is a new hydrocode that we are developing as a predictive modeling tool for debris and shrapnel formation in high-energy laser experiments. In this paper we present our approach to implementing laser ray-tracing in ALE-AMR. We present the equations of laser ray tracing and our approach to efficient traversal of the adaptive mesh hierarchy, in which we propagate computational rays through a virtual composite mesh consisting of the finest-resolution representation of the modeled space, and we anticipate simulations that will be compared to experiments for code validation.

  7. Hybrid numerical method with adaptive overlapping meshes for solving nonstationary problems in continuum mechanics

    NASA Astrophysics Data System (ADS)

    Burago, N. G.; Nikitin, I. S.; Yakushev, V. L.

    2016-06-01

    Techniques that improve the accuracy of numerical solutions and reduce their computational costs are discussed as applied to continuum mechanics problems with complex time-varying geometry. The approach combines shock-capturing computations with the following methods: (1) overlapping meshes for specifying complex geometry; (2) elastic arbitrarily moving adaptive meshes for minimizing the approximation errors near shock waves, boundary layers, contact discontinuities, and moving boundaries; (3) matrix-free implementation of efficient iterative and explicit-implicit finite element schemes; (4) balancing viscosity (version of the stabilized Petrov-Galerkin method); (5) exponential adjustment of physical viscosity coefficients; and (6) stepwise correction of solutions for providing their monotonicity and conservativeness.

  8. Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David; Colella, Phillip

    1995-01-01

    To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.

  9. Adaptive mesh refinement in curvilinear body-fitted grid systems

    NASA Astrophysics Data System (ADS)

    Steinthorsson, Erlendur; Modiano, David; Colella, Phillip

    1995-10-01

    To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.

  10. Adaptive hp-FEM with dynamical meshes for transient heat and moisture transfer problems

    NASA Astrophysics Data System (ADS)

    Solin, Pavel; Dubcova, Lenka; Kruis, Jaroslav

    2010-04-01

    We are concerned with the time-dependent multiphysics problem of heat and moisture transfer in the context of civil engineering applications. The problem is challenging due to its multiscale nature (temperature usually propagates orders of magnitude faster than moisture), different characters of the two fields (moisture exhibits boundary layers which are not present in the temperature field), extremely long integration times (30 years or more), and lack of viable error control mechanisms. In order to solve the problem efficiently, we employ a novel multimesh adaptive higher-order finite element method (hp-FEM) based on dynamical meshes and adaptive time step control. We investigate the possibility to approximate the temperature and humidity fields on individual dynamical meshes equipped with mutually independent adaptivity mechanisms. Numerical examples related to a realistic nuclear reactor vessel simulation are presented.

  11. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
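
    The threshold-selection step described above can be sketched compactly: sweep candidate thresholds, record how many cells each one flags, and pick the threshold at the point of maximum curvature of that curve. The normalization and the synthetic refinement-parameter distribution below are assumptions for illustration only.

```python
import numpy as np

def choose_threshold(param, n_candidates=100):
    """Pick the refinement threshold at the point of maximum curvature of the
    curve 'number of cells flagged' versus 'threshold value'."""
    t = np.linspace(param.min(), param.max(), n_candidates)
    n_flagged = np.array([(param > ti).sum() for ti in t], dtype=float)
    # Normalise both axes so the curvature is not dominated by units.
    tt = (t - t[0]) / (t[-1] - t[0])
    nn = n_flagged / n_flagged.max()
    d1 = np.gradient(nn, tt)
    d2 = np.gradient(d1, tt)
    curvature = np.abs(d2) / (1.0 + d1 ** 2) ** 1.5
    return t[np.argmax(curvature)]

# Example: most cells carry a small refinement parameter, a few (near shocks or
# characteristic lines) a large one.
rng = np.random.default_rng(0)
param = np.concatenate([rng.exponential(0.01, 10000), rng.exponential(0.5, 200)])
threshold = choose_threshold(param)
flagged = param > threshold
```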

  12. Integration over two-dimensional Brillouin zones by adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Henk, J.

    2001-07-01

    Adaptive mesh-refinement (AMR) schemes for integration over two-dimensional Brillouin zones are presented and their properties are investigated in detail. A salient feature of these integration techniques is that the grid of sampling points is automatically adapted to the integrand in such a way that regions with high accuracy demand are sampled with high density, while the other regions are sampled with low density. This adaptation may save a sizable amount of computation time in comparison with those integration methods without mesh refinement. Several AMR schemes for one- and two-dimensional integration are introduced. As an application, the spin-dependent conductance of electronic tunneling through planar junctions is investigated and discussed with regard to Brillouin zone integration.
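
    The same adapt-where-needed principle can be illustrated with a tiny recursive 2D quadrature: a cell is subdivided only where a coarse and a refined estimate of its contribution disagree by more than a tolerance. This is a generic sketch of adaptive mesh-refined integration, not the paper's AMR schemes or the tunneling-conductance integrand.

```python
import numpy as np

def adaptive_integrate(f, x0, y0, dx, dy, tol, max_depth=12):
    """Adaptively integrate f over the rectangle [x0, x0+dx] x [y0, y0+dy]:
    compare a one-point (midpoint) estimate with the sum over four subcells
    and refine only where the two disagree by more than tol."""
    coarse = f(x0 + dx / 2, y0 + dy / 2) * dx * dy
    hx, hy = dx / 2, dy / 2
    fine = sum(f(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy) * hx * hy
               for i in (0, 1) for j in (0, 1))
    if max_depth == 0 or abs(fine - coarse) < tol:
        return fine
    return sum(adaptive_integrate(f, x0 + i * hx, y0 + j * hy, hx, hy, tol / 4, max_depth - 1)
               for i in (0, 1) for j in (0, 1))

# Example: a sharply peaked integrand; the sampling refines only around the peak.
peak = lambda x, y: 1.0 / (1.0 + 400.0 * ((x - 0.3) ** 2 + (y - 0.7) ** 2))
value = adaptive_integrate(peak, 0.0, 0.0, 1.0, 1.0, tol=1e-6)
```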

  13. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  14. Solving kinetic equations with adaptive mesh in phase space for rarefied gas dynamics and plasma physics (Invited)

    SciTech Connect

    Kolobov, Vladimir; Arslanbekov, Robert; Frolova, Anna

    2014-12-09

    The paper describes an Adaptive Mesh in Phase Space (AMPS) technique for solving kinetic equations with deterministic mesh-based methods. The AMPS technique allows automatic generation of adaptive Cartesian mesh in both physical and velocity spaces using a Tree-of-Trees data structure. We illustrate the advantages of AMPS for simulations of rarefied gas dynamics and electron kinetics in low-temperature plasmas. In particular, we consider the formation of velocity distribution functions in hypersonic flows, particle kinetics near oscillating boundaries, and electron kinetics in a radio-frequency sheath. AMPS provides substantial savings in computational cost and increases the efficiency of mesh-based kinetic solvers.

  15. Solving kinetic equations with adaptive mesh in phase space for rarefied gas dynamics and plasma physics (Invited)

    NASA Astrophysics Data System (ADS)

    Kolobov, Vladimir; Arslanbekov, Robert; Frolova, Anna

    2014-12-01

    The paper describes an Adaptive Mesh in Phase Space (AMPS) technique for solving kinetic equations with deterministic mesh-based methods. The AMPS technique allows automatic generation of adaptive Cartesian mesh in both physical and velocity spaces using a Tree-of-Trees data structure. We illustrate the advantages of AMPS for simulations of rarefied gas dynamics and electron kinetics in low-temperature plasmas. In particular, we consider the formation of velocity distribution functions in hypersonic flows, particle kinetics near oscillating boundaries, and electron kinetics in a radio-frequency sheath. AMPS provides substantial savings in computational cost and increases the efficiency of mesh-based kinetic solvers.

  16. Practical improvements of multi-grid iteration for adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Miyashita, Hisashi; Yamada, Yoshiyuki

    2005-03-01

    Adaptive mesh refinement (AMR) is a powerful tool for efficiently solving multi-scaled problems. However, the plain AMR method has a well-known critical limitation: it cannot be applied to non-local problems. Although multi-grid iteration (MGI) can be regarded as a good remedy for a non-local problem such as the Poisson equation, we observed fundamental difficulties in applying the MGI technique in AMR to realistic problems with complicated mesh layouts, because it does not converge or it requires too many iterations even if it does converge. To cope with this problem, when updating the next approximation in the MGI process, we calculate total corrections that are accurate relative to the current residual by introducing a new iteration for the total correction. This procedure greatly accelerates MGI convergence, especially under complicated mesh layouts.

  17. Implementation and application of adaptive mesh refinement for thermochemical mantle convection studies

    NASA Astrophysics Data System (ADS)

    Leng, Wei; Zhong, Shijie

    2011-04-01

    Numerical modeling of mantle convection is challenging. Owing to the multiscale nature of mantle dynamics, high resolution is often required in localized regions, with coarser resolution being sufficient elsewhere. When investigating thermochemical mantle convection, high resolution is required to resolve sharp and often discontinuous boundaries between distinct chemical components. In this paper, we present a 2-D finite element code with adaptive mesh refinement techniques for simulating compressible thermochemical mantle convection. By comparing model predictions with a range of analytical and previously published benchmark solutions, we demonstrate the accuracy of our code. By refining and coarsening the mesh according to certain criteria and dynamically adjusting the number of particles in each element, our code can simulate such problems efficiently, dramatically reducing the computational requirements (in terms of memory and CPU time) when compared to a fixed, uniform mesh simulation. The resolving capabilities of the technique are further highlighted by examining plume-induced entrainment in a thermochemical mantle convection simulation.

  18. Radiation dose reduction for coronary artery calcium scoring at 320-detector CT with adaptive iterative dose reduction 3D.

    PubMed

    Tatsugami, Fuminari; Higaki, Toru; Fukumoto, Wataru; Kaichi, Yoko; Fujioka, Chikako; Kiguchi, Masao; Yamamoto, Hideya; Kihara, Yasuki; Awai, Kazuo

    2015-06-01

    To assess the possibility of reducing the radiation dose for coronary artery calcium (CAC) scoring by using adaptive iterative dose reduction 3D (AIDR 3D) on a 320-detector CT scanner. Fifty-four patients underwent routine- and low-dose CT for CAC scoring. Low-dose CT was performed at one-third of the tube current used for routine-dose CT. Routine-dose CT was reconstructed with filtered back projection (FBP) and low-dose CT was reconstructed with AIDR 3D. We compared the calculated Agatston, volume, and mass scores of these images. The overall percentage differences in the Agatston, volume, and mass scores between the routine- and low-dose CT studies were 15.9, 11.6, and 12.6%, respectively. There were no significant differences between the routine- and low-dose CT studies irrespective of the scoring algorithm applied. The CAC measurements of both imaging modalities were highly correlated with respect to the Agatston (r = 0.996), volume (r = 0.996), and mass scores (r = 0.997; p < 0.001 for all); the Bland-Altman limits of agreement were -37.4 to 51.4, -31.2 to 36.4, and -30.3 to 40.9%, respectively, suggesting that AIDR 3D is a good alternative to FBP. The mean effective radiation dose for routine- and low-dose CT was 2.2 and 0.7 mSv, respectively. The use of AIDR 3D made it possible to reduce the radiation dose by 67% for CAC scoring without impairing the quantification of coronary calcification.

  19. Accessible bioprinting: adaptation of a low-cost 3D-printer for precise cell placement and stem cell differentiation.

    PubMed

    Reid, John A; Mollica, Peter A; Johnson, Garett D; Ogle, Roy C; Bruno, Robert D; Sachs, Patrick C

    2016-06-01

    The precision and repeatability offered by computer-aided design and computer-numerically controlled techniques in biofabrication processes are quickly becoming an industry standard. However, many hurdles still exist before these techniques can be used in research laboratories for cellular and molecular biology applications. Extrusion-based bioprinting systems have been characterized by high development costs, injector clogging, difficulty achieving small cell number deposits, decreased cell viability, and altered cell function post-printing. To circumvent the high-price barrier to entry of conventional bioprinters, we designed and 3D printed components for the adaptation of an inexpensive 'off-the-shelf' commercially available 3D printer. We also demonstrate via goal-based computer simulations that the needle geometries of conventional, commercially standardized 'luer-lock' syringe-needle systems cause many of the issues plaguing conventional bioprinters. To address these performance limitations we optimized flow within several microneedle geometries, which revealed that a short tapered injector design with minimal cylindrical needle length was ideal to minimize cell strain and accretion. We then experimentally quantified these geometries using pulled glass microcapillary pipettes and our modified, low-cost 3D printer. This system's performance validated our models, exhibiting reduced clogging, single-cell print resolution, and maintenance of cell viability without the use of a sacrificial vehicle. Using this system we show the successful printing of human induced pluripotent stem cells (hiPSCs) into Geltrex and note their retention of a pluripotent state 7 d post printing. We also show embryoid body differentiation of hiPSCs by injection into differentiation-conducive environments, wherein we observed continuous growth, the emergence of various evaginations, and post-printing gene expression indicative of the presence of all three germ layers. These data demonstrate an

  1. Radiation dose reduction for coronary artery calcium scoring at 320-detector CT with adaptive iterative dose reduction 3D.

    PubMed

    Tatsugami, Fuminari; Higaki, Toru; Fukumoto, Wataru; Kaichi, Yoko; Fujioka, Chikako; Kiguchi, Masao; Yamamoto, Hideya; Kihara, Yasuki; Awai, Kazuo

    2015-06-01

    To assess the possibility of reducing the radiation dose for coronary artery calcium (CAC) scoring by using adaptive iterative dose reduction 3D (AIDR 3D) on a 320-detector CT scanner. Fifty-four patients underwent routine- and low-dose CT for CAC scoring. Low-dose CT was performed at one-third of the tube current used for routine-dose CT. Routine-dose CT was reconstructed with filtered back projection (FBP) and low-dose CT was reconstructed with AIDR 3D. We compared the calculated Agatston-, volume-, and mass scores of these images. The overall percentage difference in the Agatston-, volume-, and mass scores between routine- and low-dose CT studies was 15.9, 11.6, and 12.6%, respectively. There were no significant differences between the routine- and low-dose CT studies irrespective of the scoring algorithm applied. The CAC measurements of both imaging modalities were highly correlated with respect to the Agatston- (r = 0.996), volume- (r = 0.996), and mass score (r = 0.997; p < 0.001 for all); the Bland-Altman limits of agreement were -37.4 to 51.4, -31.2 to 36.4, and -30.3 to 40.9%, respectively, suggesting that AIDR 3D is a good alternative to FBP. The mean effective radiation dose for routine- and low-dose CT was 2.2 and 0.7 mSv, respectively. The use of AIDR 3D made it possible to reduce the radiation dose by 67% for CAC scoring without impairing the quantification of coronary calcification. PMID:25754302
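
    The agreement and dose-reduction figures quoted above follow from standard formulas. The short sketch below is illustrative only, using made-up score pairs rather than the study's data, and shows how percentage Bland-Altman limits of agreement and the percentage dose reduction are typically computed.

```python
import numpy as np

def bland_altman_limits(routine, low_dose):
    """Percentage Bland-Altman limits of agreement between paired score sets."""
    routine, low_dose = np.asarray(routine, float), np.asarray(low_dose, float)
    mean_pair = (routine + low_dose) / 2.0
    pct_diff = 100.0 * (low_dose - routine) / mean_pair   # differences as % of pair mean
    bias = pct_diff.mean()
    spread = 1.96 * pct_diff.std(ddof=1)
    return bias - spread, bias + spread

# Hypothetical Agatston score pairs (not the study's data).
routine = [120.0, 310.0, 55.0, 980.0, 210.0]
low_dose = [118.0, 300.0, 60.0, 1010.0, 205.0]
print("limits of agreement (%):", bland_altman_limits(routine, low_dose))

# Effective dose change quoted in the abstract: 2.2 mSv -> 0.7 mSv.
print("dose reduction: %.0f%%" % (100.0 * (2.2 - 0.7) / 2.2))  # ~68%, reported as 67%
```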

  2. Amoeboid migration mode adaption in quasi-3D spatial density gradients of varying lattice geometry

    NASA Astrophysics Data System (ADS)

    Gorelashvili, Mari; Emmert, Martin; Hodeck, Kai F.; Heinrich, Doris

    2014-07-01

    Cell migration processes are controlled by sensitive interaction with external cues such as topographic structures of the cell’s environment. Here, we present systematically controlled assays to investigate the specific effects of spatial density and local geometry of topographic structure on amoeboid migration of Dictyostelium discoideum cells. This is realized by well-controlled fabrication of quasi-3D pillar fields exhibiting a systematic variation of inter-pillar distance and pillar lattice geometry. By time-resolved local mean-squared displacement analysis of amoeboid migration, we can extract motility parameters in order to elucidate the details of amoeboid migration mechanisms and consolidate them in a two-state contact-controlled motility model, distinguishing directed and random phases. Specifically, we find that directed pillar-to-pillar runs occur preferably in high pillar density regions, and cells in directed motion states sense pillars as attractive topographic stimuli. In contrast, cell motion in random probing states is inhibited by high pillar density, where pillars act as obstacles for cell motion. In a gradient of spatial density, these mechanisms lead to topographic guidance of cells, with a general trend towards a regime of inter-pillar spacing close to the cell diameter. In locally anisotropic pillar environments, cell migration is often found to be damped due to competing attraction by different pillars in close proximity and due to a lack of other potential stimuli in the vicinity of the cell. Further, we demonstrate topographic cell guidance reflecting the lattice geometry of the quasi-3D environment through distinct preferences in migration direction. Our findings make it possible to specifically control amoeboid cell migration by purely topographic effects and thus to induce active cell guidance. These tools hold prospects for medical applications such as improved wound treatment, or invasion assays for immune cells.
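
    The time-resolved local mean-squared displacement (MSD) analysis mentioned above can be sketched as follows. This is a minimal illustration of computing MSD over a sliding window from a 2D trajectory; it is not the authors' full two-state classification, and the window, lag, and synthetic trajectory are arbitrary choices.

```python
import numpy as np

def local_msd(track, window, lag):
    """MSD at a single lag, evaluated in sliding windows along an (N, 2) trajectory."""
    track = np.asarray(track, float)
    out = []
    for start in range(0, len(track) - window):
        seg = track[start:start + window]
        disp = seg[lag:] - seg[:-lag]                  # displacements at the chosen lag
        out.append(np.mean(np.sum(disp**2, axis=1)))   # mean squared displacement
    return np.array(out)

# Synthetic random-walk trajectory standing in for a tracked cell centroid.
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(size=(500, 2)), axis=0)
msd = local_msd(track, window=50, lag=5)
# In a two-state contact-controlled motility model, windows with MSD well above
# the random-walk baseline would be labelled "directed" phases, the rest "random
# probing" phases.
print(msd[:5])
```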

  3. Using high-order methods on adaptively refined block-structured meshes - discretizations, interpolations, and filters.

    SciTech Connect

    Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.

    2006-01-01

    Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters, and that this approach can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one- and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.
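
    The empirically observed order of convergence referred to above is usually estimated from error norms on successively refined meshes. The sketch below is a minimal illustration of that estimate, assuming errors measured against an exact solution and a refinement ratio of 2; the error values are hypothetical.

```python
import numpy as np

def observed_order(errors, refinement_ratio=2.0):
    """Estimate convergence order p from errors on successively refined meshes.

    With e_h ~ C h^p, two refinement levels give p = log(e_h / e_{h/r}) / log(r).
    """
    errors = np.asarray(errors, float)
    return np.log(errors[:-1] / errors[1:]) / np.log(refinement_ratio)

# Hypothetical error norms on four successive refinement levels of a SAMR hierarchy.
errors = [1.2e-2, 7.8e-4, 4.9e-5, 3.1e-6]
print(observed_order(errors))   # values near 4 would indicate fourth-order convergence
```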

  4. Numerical simulation of acoustic holography with propagator adaptation. Application to a 3D disc

    NASA Astrophysics Data System (ADS)

    Martin, Vincent; Le Bourdon, Thibault; Pasqual, Alexander Mattioli

    2011-08-01

    Acoustical holography can be used to identify the vibration velocity of an extended vibrating body. Such an inverse problem relies on the radiated acoustic pressure measured by a microphone array and on a priori knowledge of the way the body radiates sound. Any perturbation of the radiation model leads to a perturbation of the velocity identified by the inversion process. Thus, to obtain the source vibration velocity with good precision, it is useful also to identify an appropriate propagation model. Here, this identification, or adaptation, procedure rests on a geometrical interpretation of acoustic holography in the objective space (here the radiated pressure space equipped with the L2-norm) and on a genetic algorithm. The propagator adaptation adds information to the holographic process, so it is not a regularisation method: regularisation approximates the inverse of the model but does not affect the model itself, and it acts in the variable space, here the velocity space. It is shown that an adapted model significantly decreases the quantity of regularisation needed to obtain a good reconstructed velocity, and that model adaptation significantly improves the acoustical holography results. In the presence of perturbations of the radiated pressure, some indications are given on whether or not it is worthwhile to adapt the model, again thanks to the geometrical interpretation of holography in the objective space. As a numerical example, a disc whose vibration velocity on one of its sides is identified by acoustic holography is presented. On an industrial scale, this problem arises from the noise radiated by car wheels. The assessment of holographic results has not yet been rigorously performed in such situations due to the complexity of the wheel environment made up of the car body, road, and rolling conditions.
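
    A common way to make the interplay between model adaptation and regularisation concrete is Tikhonov-regularised inversion of the propagation model. The sketch below is a generic illustration with a hypothetical random propagator matrix G, not the paper's adapted radiation model; it only shows how the reconstructed velocity depends on both the model and the regularisation parameter.

```python
import numpy as np

def reconstruct_velocity(G, p_measured, lam):
    """Tikhonov-regularised least-squares solution v of G v ~ p_measured.

    G maps source velocities to microphone pressures; lam controls the amount
    of regularisation applied in the velocity (variable) space.
    """
    A = G.conj().T @ G + lam * np.eye(G.shape[1])
    return np.linalg.solve(A, G.conj().T @ p_measured)

# Toy example: a random matrix stands in for the radiation model.
rng = np.random.default_rng(1)
G_true = rng.normal(size=(64, 16))            # "exact" propagation model
v_true = rng.normal(size=16)
p = G_true @ v_true + 0.01 * rng.normal(size=64)

G_perturbed = G_true + 0.1 * rng.normal(size=G_true.shape)  # mismatched model
for G, label in [(G_true, "adapted model"), (G_perturbed, "perturbed model")]:
    v = reconstruct_velocity(G, p, lam=1e-2)
    print(label, "relative error:", np.linalg.norm(v - v_true) / np.linalg.norm(v_true))
```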

  5. Using adaptive sampling and triangular meshes for the processing and inversion of potential field data

    NASA Astrophysics Data System (ADS)

    Foks, Nathan Leon

    The interpretation of geophysical data plays an important role in the analysis of potential field data in resource exploration industries. Two categories of interpretation techniques are discussed in this thesis: boundary detection and geophysical inversion. Fault or boundary detection is a method to interpret the locations of subsurface boundaries from measured data, while inversion is a computationally intensive method that provides 3D information about subsurface structure. My research focuses on these two aspects of interpretation techniques. First, I develop a method to aid in the interpretation of faults and boundaries from magnetic data. These processes are traditionally carried out using raster grid and image processing techniques. Instead, I use unstructured meshes of triangular facets that can extract inferred boundaries using mesh edges. Next, to address the computational issues of geophysical inversion, I develop an approach to reduce the number of data in a data set. The approach selects data points according to a user-specified proxy for their signal content. The approach is performed in the data domain and requires no modification to existing inversion codes. This technique adds to the existing suite of compressive inversion algorithms. Finally, I develop an algorithm to invert gravity data for an interfacing surface using an unstructured mesh of triangular facets. A pertinent property of unstructured meshes is their flexibility in representing oblique, or arbitrarily oriented, structures. This flexibility makes unstructured meshes an ideal candidate for geometry-based interface inversions. The approaches I have developed provide a suite of algorithms geared towards large-scale interpretation of potential field data, using an unstructured representation of both the data and the model parameters.
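
    The data-reduction step described above, which selects data points according to a user-specified proxy for signal content, can be sketched in a few lines. In this minimal illustration the proxy is simply the gradient magnitude along a synthetic 1D gravity profile; the thesis's actual proxy and selection rule may differ.

```python
import numpy as np

def select_data(values, coords, keep_fraction=0.25):
    """Keep the fraction of data points with the largest signal-content proxy.

    Here the proxy is the magnitude of the numerical gradient along the profile;
    any user-specified proxy could be substituted.
    """
    proxy = np.abs(np.gradient(values, coords))
    cutoff = np.quantile(proxy, 1.0 - keep_fraction)
    mask = proxy >= cutoff
    return coords[mask], values[mask]

# Synthetic gravity profile: a smooth regional trend plus a compact anomaly.
x = np.linspace(0.0, 10.0, 400)
g = 0.05 * x + np.exp(-((x - 6.0) / 0.4) ** 2)
xs, gs = select_data(g, x, keep_fraction=0.25)
print(f"kept {len(xs)} of {len(x)} data points, clustered around the anomaly")
```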

  6. Simulating Multi-scale Fluid Flows Using Adaptive Mesh Refinement Methods

    NASA Astrophysics Data System (ADS)

    Rowe, Kristopher; Lamb, Kevin

    2015-11-01

    When modelling flows with disparate length scales one must use a computational mesh that is fine enough to capture the smallest phenomena of interest. Traditional computational fluid dynamics models apply a mesh of uniform resolution to the entire computational domain; however, if the smallest scales of interest are isolated, much of the computational resources used in these simulations will be wasted in regions where they are not needed. Adaptive mesh refinement methods seek to apply resolution only where it is needed. Beginning with a single coarse grid, a nested hierarchy of block-structured grids is built in regions of the fluid flow where more resolution is necessary. As the fluid flow varies in time this hierarchy of grids is dynamically rebuilt to follow the phenomena of interest. Through the modelling of the interaction of vortices with wall boundary layers, it will be demonstrated that adaptive mesh refinement methods produce results equivalent to those of traditional single-resolution codes while using fewer processors, less memory, and less wall-clock time. Additionally, it is possible to model such flows at higher Reynolds numbers than have been feasible previously. This work was supported by NSERC and SHARCNET.

  7. Adaptive laser beam forming for laser shock micro-forming for 3D MEMS devices fabrication

    NASA Astrophysics Data System (ADS)

    Zou, Ran; Wang, Shuliang; Wang, Mohan; Li, Shuo; Huang, Sheng; Lin, Yankun; Chen, Kevin P.

    2016-07-01

    Laser shock micro-forming is a non-thermal laser forming method that uses laser-induced shockwaves to modify surface properties and to adjust the shape and geometry of workpieces. In this paper, we present an adaptive optical technique to engineer the spatial profile of the laser beam to exert precise control over the laser shock forming process for free-standing MEMS structures. Using a spatial light modulator, on-target laser energy profiles are engineered to control shape, size, and deformation magnitude, which has led to significant improvement of the laser shock processing outcome at micrometer scales. The results presented in this paper show that adaptive-optics laser beam forming is an effective method to improve both the quality and throughput of the laser forming process at micrometer scales.

  8. Adaptive enhancement and visualization techniques for 3D THz images of breast cancer tumors

    NASA Astrophysics Data System (ADS)

    Wu, Yuhao; Bowman, Tyler; Gauch, John; El-Shenawee, Magda

    2016-03-01

    This paper evaluates image enhancement and visualization techniques for pulsed terahertz (THz) images of tissue samples. Specifically, our research objective is to effectively differentiate between heterogeneous regions of breast tissues that contain tumors diagnosed as triple negative infiltrating ductal carcinoma (IDC). Tissue slices and blocks of varying thicknesses were prepared and scanned using our lab's pulsed THz imaging system. One of the challenges we have encountered in visualizing the obtained images and differentiating between healthy and cancerous regions of the tissues is that most THz images have a low level of detail and a narrow contrast range, making it difficult to accurately identify and visualize the margins around the IDC. To overcome this problem, we have applied and evaluated a number of image processing techniques on the scanned 3D THz images. In particular, we employed various spatial filtering and intensity transformation techniques to emphasize the small details in the images and adjust the image contrast. For each of these methods, we investigated how varying filter sizes and parameters affect the amount of enhancement applied to the images. Our experimentation shows that several image processing techniques are effective in producing THz images of breast tissue samples that contain distinguishable details, making further segmentation of the different image regions promising.
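
    As an illustration of the kind of spatial filtering and intensity transformation evaluated above, the sketch below applies a median filter of varying size followed by a simple percentile-based contrast stretch to a synthetic 2D slice. This is a generic example; the paper's specific filters and parameter choices are not reproduced here.

```python
import numpy as np
from scipy.ndimage import median_filter

def enhance_slice(img, filter_size=3, low_pct=2.0, high_pct=98.0):
    """Denoise with a median filter, then stretch contrast between two percentiles."""
    smoothed = median_filter(img, size=filter_size)
    lo, hi = np.percentile(smoothed, [low_pct, high_pct])
    return np.clip((smoothed - lo) / (hi - lo + 1e-12), 0.0, 1.0)

# Synthetic low-contrast "THz slice": a weak blob on a noisy background.
rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:128, 0:128]
img = (0.02 * np.exp(-((xx - 64) ** 2 + (yy - 64) ** 2) / 300.0)
       + 0.005 * rng.normal(size=(128, 128)))

for size in (3, 5, 9):          # vary the filter size, as the study does with its parameters
    out = enhance_slice(img, filter_size=size)
    print(size, out.min(), out.max())
```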

  9. Directional adaptive deformable models for segmentation with application to 2D and 3D medical images

    NASA Astrophysics Data System (ADS)

    Rougon, Nicolas F.; Preteux, Francoise J.

    1993-09-01

    In this paper, we address the problem of adapting the functions controlling the material properties of 2D snakes, and show how introducing oriented smoothness constraints results in a novel class of active contour models for segmentation which extends standard isotropic inhomogeneous membrane/thin-plate stabilizers. These constraints, expressed as adaptive L2 matrix norms, are defined by two 2nd-order symmetric and positive definite tensors which are invariant with respect to rigid motions in the image plane. These tensors, equivalent to directional adaptive stretching and bending densities, are quadratic with respect to 1st- and 2nd-order derivatives of the image intensity, respectively. A representation theorem specifying their canonical form is established and a geometrical interpretation of their effects is developed. Within this framework, it is shown that, by achieving a directional control of regularization, such non-isotropic constraints consistently relate the differential properties (metric and curvature) of the deformable model with those of the underlying intensity surface, yielding a satisfying preservation of image contour characteristics.
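
    A generic form of the oriented smoothness constraints described above is an anisotropic generalisation of the usual membrane/thin-plate snake energy. The expression below is only a sketch of that idea in my own notation: the tensors A and B stand for the directional stretching and bending densities characterised qualitatively in the abstract, and the paper's canonical form may differ.

```latex
E(v) \;=\; \int_{0}^{1} \Big[\, v_s(s)^{\mathsf T}\, A\!\big(v(s)\big)\, v_s(s)
      \;+\; v_{ss}(s)^{\mathsf T}\, B\!\big(v(s)\big)\, v_{ss}(s) \,\Big]\, ds
      \;+\; \int_{0}^{1} P\!\big(v(s)\big)\, ds ,
```

    where v(s) is the contour, v_s and v_ss its first and second derivatives, A and B are symmetric positive definite tensors built from first- and second-order image derivatives respectively, and P is the image potential. Setting A = alpha I and B = beta I recovers the standard isotropic inhomogeneous snake.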

  10. 3D positional control of magnetic levitation system using adaptive control: improvement of positioning control in horizontal plane

    NASA Astrophysics Data System (ADS)

    Nishino, Toshimasa; Fujitani, Yasuhiro; Kato, Norihiko; Tsuda, Naoaki; Nomura, Yoshihiko; Matsui, Hirokazu

    2012-01-01

    The objective of this paper is to establish a technique that levitates and conveys a hand, a kind of micro-robot, by applying magnetic forces; the hand is assumed to have a function of holding and releasing objects. The equipment used in our experiments consists of four pole-pieces of electromagnets and is expected to work as a 4-DOF drive unit within a restricted range of 3D space: three DOF correspond to 3D positional control and the remaining DOF to rotational oscillation damping control. Using the same equipment, Khamesee et al. manipulated the voltages impressed on the four electromagnets with a PID controller, using the feedback signal of the hand's 3D position as the controlled variable. However, this system had some remaining problems: in the horizontal direction, when the hand was translated out of the restricted region, positional control performance degraded sharply. The authors propose a method that applies adaptive control to the horizontal directional control. It is expected that the technique presented in this paper contributes not only to improving the response characteristics but also to widening the applicable range of the horizontal directional control.

  11. Defect structure of a nematic liquid crystal around a spherical particle: adaptive mesh refinement approach.

    PubMed

    Fukuda, Jun-ichi; Yoneya, Makoto; Yokoyama, Hiroshi

    2002-04-01

    We investigate numerically the structure of topological defects close to a spherical particle immersed in a uniformly aligned nematic liquid crystal. To this end we have implemented an adaptive mesh refinement scheme in an axi-symmetric three-dimensional system, which makes it feasible to take into account properly the large length scale difference between the particle and the topological defects. The adaptive mesh refinement scheme proves to be quite efficient and useful in the investigation of not only the macroscopic properties such as the defect position but also the fine structure of defects. It can be shown that a hyperbolic hedgehog that accompanies a particle with strong homeotropic anchoring takes the structure of a ring.

  12. High-Performance Reactive Fluid Flow Simulations Using Adaptive Mesh Refinement on Thousands of Processors

    NASA Astrophysics Data System (ADS)

    Calder, A. C.; Curtis, B. C.; Dursi, L. J.; Fryxell, B.; Henry, G.; MacNeice, P.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F. X.; Tufo, H. M.; Truran, J. W.; Zingale, M.

    We present simulations and performance results of nuclear burning fronts in supernovae on the largest domain and at the finest spatial resolution studied to date. These simulations were performed on the Intel ASCI-Red machine at Sandia National Laboratories using FLASH, a code developed at the Center for Astrophysical Thermonuclear Flashes at the University of Chicago. FLASH is a modular, adaptive-mesh, parallel simulation code capable of handling compressible, reactive fluid flows in astrophysical environments. FLASH is written primarily in Fortran 90, uses the Message-Passing Interface library for inter-processor communication and portability, and employs the PARAMESH package to manage a block-structured adaptive mesh that places blocks only where the resolution is required and tracks rapidly changing flow features, such as detonation fronts, with ease. We describe the key algorithms and their implementation as well as the optimizations required to achieve sustained performance of 238 GFLOPS on 6420 processors of ASCI-Red in 64-bit arithmetic.

  13. Implementation of Implicit Adaptive Mesh Refinement in an Unstructured Finite-Volume Flow Solver

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2013-01-01

    This paper explores the implementation of adaptive mesh refinement in an unstructured, finite-volume solver. Unsteady and steady problems are considered. The effect on the recovery of high-order numerics is explored and the results are favorable. Important to this work is the ability to provide a path for efficient, implicit time advancement. A method using a simple refinement sensor based on undivided differences is discussed and applied to a practical problem: a shock-shock interaction on a hypersonic, inviscid double wedge. Cases are compared to uniform grids without the use of adapted meshes in order to assess error and computational expense. Discussion of difficulties, advances, and future work prepares this method for additional research. The potential of this method for more complicated flows is described.
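
    A refinement sensor based on undivided differences, of the kind used above, can be sketched in a few lines. This is a generic 1D illustration with an arbitrary threshold and normalisation, not the paper's exact sensor.

```python
import numpy as np

def refinement_flags(u, threshold=0.05):
    """Flag cells whose second undivided difference of the solution is large.

    The undivided difference |u[i-1] - 2 u[i] + u[i+1]| needs no mesh spacing,
    which makes it convenient on adapted, non-uniform grids.
    """
    u = np.asarray(u, float)
    sensor = np.zeros_like(u)
    sensor[1:-1] = np.abs(u[:-2] - 2.0 * u[1:-1] + u[2:])
    sensor /= (np.abs(u).max() + 1e-12)      # simple normalisation
    return sensor > threshold

# Solution with a sharp feature (e.g. a shock) around x = 0.5.
x = np.linspace(0.0, 1.0, 200)
u = np.tanh((x - 0.5) / 0.01)
flags = refinement_flags(u)
print("cells flagged for refinement:", np.count_nonzero(flags))
```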

  14. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered-grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. By focusing computational resources where they are required through dynamic adaption, this method facilitates the solution of problems currently at and beyond the boundary of what is soluble by traditional ALE methods. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered-grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  15. Unstructured adaptive mesh computations of rotorcraft high-speed impulsive noise

    NASA Technical Reports Server (NTRS)

    Strawn, Roger; Garceau, Michael; Biswas, Rupak

    1993-01-01

    A new method is developed for modeling helicopter high-speed impulsive (HSI) noise. The aerodynamics and acoustics near the rotor blade tip are computed by solving the Euler equations on an unstructured grid. A stationary Kirchhoff surface integral is then used to propagate these acoustic signals to the far field. The near-field Euler solver uses a solution-adaptive grid scheme to improve the resolution of the acoustic signal. Grid points are locally added and/or deleted from the mesh at each adaptive step. An important part of this procedure is the choice of an appropriate error indicator. The error indicator is computed from the flow field solution and determines the regions for mesh coarsening and refinement. Computed results for HSI noise compare favorably with experimental data for three different hovering rotor cases.

  16. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    NASA Astrophysics Data System (ADS)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in the neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates that parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver

  17. A low order flow/acoustics interaction method for the prediction of sound propagation using 3D adaptive hybrid grids

    SciTech Connect

    Kallinderis, Yannis; Vitsas, Panagiotis A.; Menounou, Penelope

    2012-07-15

    A low-order flow/acoustics interaction method for the prediction of sound propagation and diffraction in unsteady subsonic compressible flow using adaptive 3-D hybrid grids is investigated. The total field is decomposed into the flow field, described by the Euler equations, and the acoustic part, described by the Nonlinear Perturbation Equations. The method is shown to be capable of predicting monopole sound propagation, while employment of acoustics-guided adapted grid refinement improves the accuracy of capturing the acoustic field. Interaction of sound with solid boundaries is also examined in terms of reflection and diffraction. Sound propagation through an unsteady flow field is examined using static and dynamic flow/acoustics coupling, demonstrating the importance of the latter.

  18. TRIM: A finite-volume MHD algorithm for an unstructured adaptive mesh

    SciTech Connect

    Schnack, D.D.; Lottati, I.; Mikic, Z.

    1995-07-01

    The authors describe TRIM, an MHD code which uses finite-volume discretization of the MHD equations on an unstructured adaptive grid of triangles in the poloidal plane. They apply it to problems related to modeling tokamak toroidal plasmas. The toroidal direction is treated by a pseudospectral method. Care was taken to center variables appropriately on the mesh and to construct a self-adjoint diffusion operator for cell-centered variables.

  19. Adaptive mesh refinement for time-domain electromagnetics using vector finite elements: a feasibility study.

    SciTech Connect

    Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis

    2005-12-01

    This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.

  20. Quantitative Evaluation of Tissue Surface Adaption of CAD-Designed and 3D Printed Wax Pattern of Maxillary Complete Denture

    PubMed Central

    Chen, Hu; Wang, Han; Lv, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Objective. To quantitatively evaluate the tissue surface adaption of a maxillary complete denture wax pattern produced by CAD and 3DP. Methods. A standard edentulous maxilla plaster cast model was used, for which a wax pattern of a complete denture was designed using CAD software developed in our previous study and printed using a 3D wax printer, while another wax pattern was manufactured by the traditional manual method. The cast model and the two wax patterns were scanned with a 3D scanner as “DataModel,” “DataWaxRP,” and “DataWaxManual.” After setting each wax pattern on the plaster cast, the whole model was scanned for registration. After registration, the deviations of the tissue surface between “DataModel” and “DataWaxRP” and between “DataModel” and “DataWaxManual” were measured. The data were analyzed by a paired t-test. Results. For both wax patterns, produced by the CAD&RP method and the manual method, the scanned tissue surface and cast surface showed a good fit over the majority of the surface. No statistically significant (P > 0.05) difference was observed between the CAD&RP method and the manual method. Conclusions. A wax pattern of a maxillary complete denture produced by the CAD&3DP method is comparable with the traditional manual method in its adaption to the edentulous cast model. PMID:26583108
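
    The paired comparison described above can be illustrated with a short sketch. The per-specimen deviation values below are hypothetical, not the study's measurements; scipy's ttest_rel implements the paired t-test used for this kind of comparison.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical mean tissue-surface deviations (mm) for the same casts, measured
# against the 3D-printed (CAD&RP) and the manually fabricated wax patterns.
dev_printed = np.array([0.21, 0.18, 0.25, 0.19, 0.22, 0.20])
dev_manual  = np.array([0.20, 0.19, 0.24, 0.21, 0.23, 0.19])

t_stat, p_value = ttest_rel(dev_printed, dev_manual)
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.3f}")
# A p-value above 0.05 would be read, as in the abstract, as no significant
# difference between the two fabrication methods.
```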

  1. A parallel dynamic load balancing algorithm for 3-D adaptive unstructured grids

    NASA Technical Reports Server (NTRS)

    Vidwans, A.; Kallinderis, Y.; Venkatakrishnan, V.

    1993-01-01

    Adaptive local grid refinement and coarsening results in unequal distribution of workload among the processors of a parallel system. A novel method for balancing the load in cases of dynamically changing tetrahedral grids is developed. The approach employs local exchange of cells among processors in order to redistribute the load equally. An important part of the load balancing algorithm is the method employed by a processor to determine which cells within its subdomain are to be exchanged. Two such methods are presented and compared. The strategy for load balancing is based on the Divide-and-Conquer approach which leads to an efficient parallel algorithm. This method is implemented on a distributed-memory MIMD system.

  2. Using the Chombo Adaptive Mesh Refinement Model in Shallow Water Mode to Simulate Interactions of Tropical Cyclone-like Vortices

    NASA Astrophysics Data System (ADS)

    Ferguson, J. O.; Jablonowski, C.; Johansen, H.; McCorquodale, P.; Ullrich, P. A.

    2015-12-01

    Complex multi-scale atmospheric phenomena such as tropical cyclones challenge the coarse uniform grids of conventional climate models. Adaptive mesh refinement (AMR) techniques seek to mitigate these problems by providing sufficiently high-resolution grid patches only over features of interest while limiting the computational burden of requiring such resolutions globally. One such model is the non-hydrostatic, finite-volume Chombo-AMR general circulation model (GCM), which implements refinement in both space and time on a cubed-sphere grid. The 2D shallow-water equations exhibit many of the complexities of 3D GCM dynamical cores and serve as an effective testbed for the dynamical core and the refinement strategies of adaptive atmospheric models. We implement a shallow-water test case consisting of a pair of interacting tropical cyclone-like vortices. Small changes in the initial conditions can lead to a variety of interactions that develop fine-scale spiral band structures and large-scale wave trains. We investigate the accuracy and efficiency of AMR's ability to capture and effectively follow the evolution of the vortices in time. These simulations serve to test the effectiveness of refinement for both static and dynamic grid configurations as well as the sensitivity of the model results to the refinement criteria.

  3. Fluidity: a fully-unstructured adaptive mesh computational framework for geodynamics

    NASA Astrophysics Data System (ADS)

    Kramer, S. C.; Davies, D.; Wilson, C. R.

    2010-12-01

    Fluidity is a finite element, finite volume fluid dynamics model developed by the Applied Modelling and Computation Group at Imperial College London. Several features of the model make it attractive for use in geodynamics. A core finite element library enables the rapid implementation and investigation of new numerical schemes. For example, the function spaces used for each variable can be changed allowing properties of the discretisation, such as stability, conservation and balance, to be easily varied and investigated. Furthermore, unstructured, simplex meshes allow the underlying resolution to vary rapidly across the computational domain. Combined with dynamic mesh adaptivity, where the mesh is periodically optimised to the current conditions, this allows significant savings in computational cost over traditional chessboard-like structured mesh simulations [1]. In this study we extend Fluidity (using the Portable, Extensible Toolkit for Scientific Computation [PETSc, 2]) to Stokes flow problems relevant to geodynamics. However, due to the assumptions inherent in all models, it is necessary to properly verify and validate the code before applying it to any large-scale problems. In recent years this has been made easier by the publication of a series of ‘community benchmarks’ for geodynamic modelling. We discuss the use of several of these to help validate Fluidity [e.g. 3, 4]. The experimental results of Vatteville et al. [5] are then used to validate Fluidity against laboratory measurements. This test case is also used to highlight the computational advantages of using adaptive, unstructured meshes - significantly reducing the number of nodes and total CPU time required to match a fixed mesh simulation. References: 1. C. C. Pain et al. Comput. Meth. Appl. M, 190:3771-3796, 2001. doi:10.1016/S0045-7825(00)00294-2. 2. B. Satish et al. http://www.mcs.anl.gov/petsc/petsc-2/, 2001. 3. Blankenbach et al. Geophys. J. Int., 98:23-28, 1989. 4. Busse et al. Geophys

  4. Parametric Characterization of Porous 3D Bioscaffolds Fabricated by an Adaptive Foam Reticulation Technique

    NASA Astrophysics Data System (ADS)

    Winnett, James; Mallick, Kajal K.

    2014-04-01

    Commercially pure titanium (Ti) and its alloys, in particular, titanium-vanadium-aluminium (Ti-6Al-4V), have been used as biomaterials due to their mechanical similarities to bone, good biocompatibility, and inertness in vivo. The introduction of porosity to the scaffolds leads to optimized mechanical properties and enhanced biological activity. The adaptive foam reticulation (AFR) technique has been previously used to generate hydroxyapatite bioscaffolds with enhanced cell behavior due to the generation of macroporous structures with microporous struts that provided routes for cell infiltration as well as attachment sites. Sacrificial polyurethane templates of 45 ppi and 90 ppi were coated in biomaterial-based slurries containing either Ti or Ti-6Al-4V as the biomaterial and camphene as the porogen. The resultant macropore sizes of 100-550 μm corresponded well with the initial template pore sizes while camphene produced micropores of 1-10 μm, with the level of microporosity related to the amount of porogen inclusion.

  5. Data-adapted moving least squares method for 3-D image interpolation

    NASA Astrophysics Data System (ADS)

    Jang, Sumi; Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Lee, Rena; Yoon, Jungho

    2013-12-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that improve on the classical approach. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998): each slice is first assumed to be unknown and then interpolated by each method, and the performance of each interpolation method is assessed statistically. The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.
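
    A minimal 1D, linear-polynomial moving least squares interpolator is sketched below to make the idea concrete. The Gaussian weighting and bandwidth are generic choices of mine; the paper's data-adapted weighting of 3D gray-level volumes is more elaborate than this.

```python
import numpy as np

def mls_interpolate(x_eval, x_data, y_data, h=0.5, degree=1):
    """Moving least squares: at each evaluation point, fit a local weighted
    polynomial of the given degree and return its value there."""
    x_data = np.asarray(x_data, float)
    y_data = np.asarray(y_data, float)
    result = []
    for xe in np.atleast_1d(x_eval):
        w = np.exp(-((x_data - xe) / h) ** 2)              # Gaussian weights
        V = np.vander(x_data - xe, degree + 1, increasing=True)  # basis centred at xe
        A = V.T @ (w[:, None] * V)
        b = V.T @ (w * y_data)
        coeffs = np.linalg.solve(A, b)
        result.append(coeffs[0])                            # fitted value at x = xe
    return np.array(result)

x = np.linspace(0, 2 * np.pi, 20)
y = np.sin(x)
x_fine = np.linspace(0, 2 * np.pi, 100)
y_fine = mls_interpolate(x_fine, x, y, h=0.6)
print("max interpolation error:", np.abs(y_fine - np.sin(x_fine)).max())
```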

  6. NOTE: Adaptation of a 3D prostate cancer atlas for transrectal ultrasound guided target-specific biopsy

    NASA Astrophysics Data System (ADS)

    Narayanan, R.; Werahera, P. N.; Barqawi, A.; Crawford, E. D.; Shinohara, K.; Simoneau, A. R.; Suri, J. S.

    2008-10-01

    when TRUS guided biopsies are assisted by the 3D prostate cancer atlas compared to the current standard of care. The fast registration algorithm we have developed can easily be adapted for clinical applications for the improved diagnosis of prostate cancer.

  7. Investigation of Adaptive Responses in Bystander Cells in 3D Cultures Containing Tritium-Labeled and Unlabeled Normal Human Fibroblasts

    PubMed Central

    Pinto, Massimo; Azzam, Edouard I.; Howell, Roger W.

    2010-01-01

    The study of radiation-induced bystander effects in normal human cells maintained in three-dimensional (3D) architecture provides more in vivo-like conditions and is relevant to human risk assessment. Linear energy transfer, dose and dose rate have been considered as critical factors in propagating radiation-induced effects. This investigation uses an in vitro 3D tissue culture model in which normal AG1522 human fibroblasts are grown in a carbon scaffold to investigate induction of a G1 arrest in bystander cells that neighbor radiolabeled cells. Cell cultures were co-pulse-labeled with [3H]deoxycytidine (3HdC) to selectively irradiate a minor fraction of cells with 1–5 keV/μm β particles and bromodeoxyuridine (BrdU) to identify the radiolabeled cells using immunofluorescence. The induction of a G1 arrest was measured specifically in unlabeled cells (i.e. bystander cells) using a flow cytometry-based version of the cumulative labeling index assay. To investigate the relationship between bystander effects and adaptive responses, cells were challenged with an acute 4 Gy γ-radiation dose after they had been kept under the bystander conditions described above for several hours, and the regulation of the radiation-induced G1 arrest was measured selectively in bystander cells. When the average dose rate in 3HdC-labeled cells (<16% of population) was 0.04–0.37 Gy/h (average accumulated dose 0.14–10 Gy), no statistically significant stressful bystander effects or adaptive bystander effects were observed as measured by magnitude of the G1 arrest, micronucleus formation, or changes in mitochondrial membrane potential. Higher dose rates and/or higher LET may be required to observe stressful bystander effects in this experimental system, whereas lower dose rates and challenge doses may be required to detect adaptive bystander responses. PMID:20681788

  8. A region-appearance-based adaptive variational model for 3D liver segmentation

    SciTech Connect

    Peng, Jialin; Dong, Fangfang; Chen, Yunmei; Kong, Dexing

    2014-04-15

    Purpose: Liver segmentation from computed tomography images is a challenging task owing to pixel intensity overlapping, ambiguous edges, and complex backgrounds. The authors address this problem with a novel active surface scheme, which minimizes an energy functional combining both edge- and region-based information. Methods: In this semiautomatic method, the evolving surface is principally attracted to strong edges but is facilitated by the region-based information where edge information is missing. As avoiding oversegmentation is the primary challenge, the authors take into account multiple features and appearance context information. Discriminative cues, such as multilayer consecutiveness and local organ deformation are also implicitly incorporated. Case-specific intensity and appearance constraints are included to cope with the typically large appearance variations over multiple images. Spatially adaptive balancing weights are employed to handle the nonuniformity of image features. Results: Comparisons and validations on difficult cases showed that the authors’ model can effectively discriminate the liver from adhering background tissues. Boundaries weak in gradient or with no local evidence (e.g., small edge gaps or parts with similar intensity to the background) were delineated without additional user constraint. With an average surface distance of 0.9 mm and an average volume overlap of 93.9% on the MICCAI data set, the authors’ model outperformed most state-of-the-art methods. Validations on eight volumes with different initial conditions had segmentation score variances mostly less than unity. Conclusions: The proposed model can efficiently delineate ambiguous liver edges from complex tissue backgrounds with reproducibility. Quantitative validations and comparative results demonstrate the accuracy and efficacy of the model.

  9. Lithium ion intercalation of 3-D vertical hierarchical TiO2 nanotubes on a titanium mesh for efficient photoelectrochemical water splitting.

    PubMed

    Xin, Yanmei; Cheng, Yuxiao; Zhou, Yuyan; Li, Zhenzhen; Wu, Hongjun; Zhang, Zhonghai

    2016-03-25

    In this communication, we report for the first time the demonstration of a lithium ion intercalation strategy to significantly enhance the photoelectrochemical water splitting performance on 3-dimensional vertical hierarchical top-porous-bottom-tubular TiO2 nanotubes on a fabricable titanium mesh. PMID:26935068

  10. Patched based methods for adaptive mesh refinement solutions of partial differential equations

    SciTech Connect

    Saltzman, J.

    1997-09-02

    This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundary meshes, and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods, and elliptically constrained flow solution methods with diffusion.

  11. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness; their use in driving mesh refinement is demonstrated for two-dimensional plane stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

  12. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Astrophysics Data System (ADS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-11-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness; their use in driving mesh refinement is demonstrated for two-dimensional plane stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

  13. Parametric 3D Atmospheric Reconstruction in Highly Variable Terrain with Recycled Monte Carlo Paths and an Adapted Bayesian Inference Engine

    NASA Technical Reports Server (NTRS)

    Langmore, Ian; Davis, Anthony B.; Bal, Guillaume; Marzouk, Youssef M.

    2012-01-01

    We describe a method for accelerating a 3D Monte Carlo forward radiative transfer model to the point where it can be used in a new kind of Bayesian retrieval framework. The remote sensing challenge is to detect and quantify a chemical effluent of a known absorbing gas produced by an industrial facility in a deep valley. The available data is a single low-resolution noisy image of the scene in the near IR at an absorbing wavelength for the gas of interest. The detected sunlight has been multiply reflected by the variable terrain and/or scattered by an aerosol that is assumed partially known and partially unknown. We thus introduce a new class of remote sensing algorithms best described as "multi-pixel" techniques that necessarily call for a 3D radiative transfer model (but demonstrated here in 2D); they can be added to conventional ones that exploit typically multi- or hyper-spectral data, sometimes with multi-angle capability, with or without information about polarization. The novel Bayesian inference methodology uses adaptively, with efficiency in mind, the fact that a Monte Carlo forward model has a known and controllable uncertainty depending on the number of sun-to-detector paths used.

  14. Segmentation of heterogeneous or small FDG PET positive tissue based on a 3D-locally adaptive random walk algorithm.

    PubMed

    Onoma, D P; Ruan, S; Thureau, S; Nkhali, L; Modzelewski, R; Monnehan, G A; Vera, P; Gardin, I

    2014-12-01

    A segmentation algorithm based on the random walk (RW) method, called 3D-LARW, has been developed to delineate small tumors or tumors with a heterogeneous distribution of FDG on PET images. Based on the original RW algorithm [1], we propose an improved approach using new parameters that depend on the Euclidean distance between two adjacent voxels instead of a fixed value, and integrating probability densities of labels into the system of linear equations used in the RW. These improvements were evaluated and compared with the original RW method, thresholding with a fixed value (40% of the maximum in the lesion), an adaptive thresholding algorithm, and the FLAB method, on uniform spheres filled with FDG, on simulated heterogeneous spheres, and on clinical data (14 patients). On these three different data sets, 3D-LARW showed better segmentation results than the original RW algorithm and the three other methods. As expected, these improvements are more pronounced for the segmentation of small tumors or tumors with heterogeneous FDG uptake.
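
    The distance-adaptive weighting described above can be sketched generically. The snippet below builds random-walker edge weights that combine the classical intensity term with the Euclidean distance between neighbouring voxels; the parameter names and exact functional form are illustrative, not the authors' 3D-LARW definition.

```python
import numpy as np

def edge_weight(intensity_i, intensity_j, pos_i, pos_j, beta=50.0):
    """Random-walker edge weight between two neighbouring voxels.

    The classical weight exp(-beta * (g_i - g_j)^2) is modulated here by the
    Euclidean distance between the voxels, so diagonal neighbours contribute
    less than face neighbours on an anisotropic grid.
    """
    d = np.linalg.norm(np.asarray(pos_i, float) - np.asarray(pos_j, float))
    return np.exp(-beta * (intensity_i - intensity_j) ** 2) / d

# Face neighbour vs. diagonal neighbour with the same intensity contrast.
print(edge_weight(0.80, 0.75, (0, 0, 0), (1, 0, 0)))
print(edge_weight(0.80, 0.75, (0, 0, 0), (1, 1, 0)))
```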

  15. Adjoint-based error estimation and mesh adaptation for the correction procedure via reconstruction method

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Wang, Z. J.

    2015-08-01

    Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction (CPR) formulation to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Superconvergent functional and error estimates for the output with the CPR method are obtained. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.
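
    Adjoint-based output error estimates of the kind driving the adaptation above are commonly written as an adjoint-weighted residual. The expression below is a generic sketch in standard notation, not the paper's exact CPR formulation:

```latex
\delta J \;\equiv\; J(u_H) - J(u_h) \;\approx\; \psi_h^{\mathsf T}\, R_h\!\big(u_H^{h}\big),
```

    where J is the output functional, u_H the coarse-space solution prolonged to the fine space as u_H^h, R_h the fine-space residual, and psi_h the discrete adjoint solution; the magnitudes of the local (element-wise) contributions to this product are then used to flag elements for h-refinement.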

  16. A new adaptive mesh refinement data structure with an application to detonation

    NASA Astrophysics Data System (ADS)

    Ji, Hua; Lien, Fue-Sang; Yee, Eugene

    2010-11-01

    A new Cell-based Structured Adaptive Mesh Refinement (CSAMR) data structure is developed. In our CSAMR data structure, Cartesian-like indices are used to identify each cell. With these stored indices, the information on the parent, children and neighbors of a given cell can be accessed simply and efficiently. Owing to the usage of these indices, the computer memory required for storage of the proposed AMR data structure is only 5/8 of a word per cell, in contrast to the conventional oct-tree [P. MacNeice, K.M. Olson, C. Mobary, R. deFainchtein, C. Packer, PARAMESH: a parallel adaptive mesh refinement community toolkit, Comput. Phys. Commun. 330 (2000) 126] and the fully threaded tree (FTT) [A.M. Khokhlov, Fully threaded tree algorithms for adaptive mesh fluid dynamics simulations, J. Comput. Phys. 143 (1998) 519] data structures which require, respectively, 19 and 2 3/8 words per cell for storage of the connectivity information. Because the connectivity information (e.g., parent, children and neighbors) of a cell in our proposed AMR data structure can be accessed using only the cell indices, a tree structure which was required in previous approaches for the organization of the AMR data is no longer needed for this new data structure. Instead, a much simpler hash table structure is used to maintain the AMR data, with the entry keys in the hash table obtained directly from the explicitly stored cell indices. The proposed AMR data structure simplifies the implementation and parallelization of an AMR code. Two three-dimensional test cases are used to illustrate and evaluate the computational performance of the new CSAMR data structure.
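
    The index arithmetic that makes such a structure work can be illustrated with a short sketch. The snippet below is my own simplification, not the paper's exact layout: it shows one plausible way to recover the parent, children, and face neighbours of a cell from its level and integer indices, with a plain dictionary standing in for the hash table keyed by those indices.

```python
def parent(level, i, j, k):
    """Parent cell on the next coarser level (integer division by 2 per axis)."""
    return (level - 1, i // 2, j // 2, k // 2)

def children(level, i, j, k):
    """The eight children on the next finer level."""
    return [(level + 1, 2 * i + di, 2 * j + dj, 2 * k + dk)
            for di in (0, 1) for dj in (0, 1) for dk in (0, 1)]

def face_neighbors(level, i, j, k):
    """Same-level face neighbours, obtained purely from index arithmetic."""
    return [(level, i + di, j + dj, k + dk)
            for di, dj, dk in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1))]

# A dictionary keyed directly by the cell indices plays the role of the hash
# table that replaces the tree: look-ups need no stored pointers.
cells = {(2, 5, 3, 7): {"u": 1.0}}
for nb in face_neighbors(2, 5, 3, 7):
    if nb in cells:
        print("neighbour found:", nb)
print("parent:", parent(2, 5, 3, 7))
print("first child:", children(2, 5, 3, 7)[0])
```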

  17. Adaptive-mesh-based algorithm for fluorescence molecular tomography using an analytical solution.

    PubMed

    Wang, Daifa; Song, Xiaolei; Bai, Jing

    2007-07-23

    Fluorescence molecular tomography (FMT) has become an important method for in-vivo imaging of small animals. It has been widely used for tumor genesis, cancer detection, metastasis, drug discovery, and gene therapy. In this study, an algorithm for FMT is proposed to obtain accurate and fast reconstruction by combining an adaptive mesh refinement technique and an analytical solution of diffusion equation. Numerical studies have been performed on a parallel plate FMT system with matching fluid. The reconstructions obtained show that the algorithm is efficient in computation time, and they also maintain image quality.

  18. Transient thermal-structural analysis using adaptive unstructured remeshing and mesh movement

    NASA Technical Reports Server (NTRS)

    Dechaumphai, Pramote; Morgan, Kenneth

    1990-01-01

    An adaptive unstructured remeshing technique is applied to transient thermal-structural analysis. The effectiveness of the technique, together with the finite element method and an error estimation technique, is evaluated by two applications which have exact solutions: (1) the steady-state thermal analysis of a plate subjected to a highly localized surface heating, and (2) the transient thermal-structural analysis of a simulated convectively cooled leading edge subjected to a translating heat source. These applications demonstrate that the remeshing technique significantly reduces the problem size as well as the analysis solution error as compared to the results produced using standard structured meshes.

  19. Compact integration factor methods for complex domains and adaptive mesh refinement.

    PubMed

    Liu, Xinfeng; Nie, Qing

    2010-08-10

    The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates through examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has computational efficiency and stability properties similar to those of cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed.
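
    The integration factor idea can be sketched for a small 1D reaction-diffusion problem. The snippet below is illustrative only: it takes a first-order integration-factor step with an explicit matrix exponential and, for brevity, advances the reaction term explicitly, whereas IIF treats the reaction implicitly and the compact cIIF variant stores per-direction exponentials rather than one for the full operator.

```python
import numpy as np
from scipy.linalg import expm

# 1D reaction-diffusion u_t = D u_xx + u(1 - u) on a periodic grid.
n, D, dt, L = 64, 0.1, 0.1, 10.0
dx = L / n
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1))
lap[0, -1] = lap[-1, 0] = 1.0                    # periodic boundary
A = D * lap / dx**2
E = expm(A * dt)                                 # integration factor, computed once

u = np.exp(-((np.linspace(0, L, n, endpoint=False) - L / 2) ** 2))
for _ in range(100):
    # Integration-factor step: diffusion is handled exactly by the exponential;
    # the reaction is advanced explicitly here for brevity.
    u = E @ (u + dt * u * (1.0 - u))
print("solution bounds:", u.min(), u.max())
```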

  20. Compact integration factor methods for complex domains and adaptive mesh refinement

    PubMed Central

    Liu, Xinfeng; Nie, Qing

    2010-01-01

    The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates through examples of polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has computational efficiency and stability properties similar to those of cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed. PMID:20543883

  1. Cell-based Adaptive Mesh Refinement on the GPU with Applications to Exascale Supercomputing

    NASA Astrophysics Data System (ADS)

    Trujillo, Dennis; Robey, Robert; Davis, Neal; Nicholaeff, David

    2011-10-01

    We present an OpenCL implementation of a cell-based adaptive mesh refinement (AMR) scheme for the shallow water equations. The challenges associated with ensuring the locality of the algorithm architecture to fully exploit the massive number of parallel threads on the GPU are discussed. This includes a proof of concept that a cell-based AMR code can be effectively implemented, even on a small scale, in the memory and threading model provided by OpenCL. Additionally, the program requires dynamic memory in order to properly implement the mesh; as this is not supported in the OpenCL 1.1 standard, a combination of CPU memory management and GPU computation effectively implements a dynamic memory allocation scheme. Load balancing is achieved through a new stencil-based implementation of a space-filling curve, eliminating the need for a complete recalculation of the indexing on the mesh. A Cartesian grid hash table scheme that allows fast parallel neighbor accesses is also discussed. Finally, the relative speedup of the GPU-enabled AMR code is compared to the original serial version. We conclude that parallelization using the GPU provides significant speedup for typical numerical applications and is feasible for scientific applications in the next generation of supercomputing.
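
    For readers unfamiliar with space-filling-curve ordering, the sketch below computes Z-order (Morton) indices for a 2-D cell grid and splits the ordered cells into contiguous chunks; this is a generic illustration of SFC-based load balancing, not the paper's stencil-based incremental update or its hash-table neighbor scheme:

```python
# Hypothetical Z-order (Morton) ordering of 2-D cells, followed by a simple
# contiguous split of the ordered list among workers.
def interleave16(v):
    """Spread the low 16 bits of v into the even bit positions."""
    v &= 0xFFFF
    v = (v | (v << 8)) & 0x00FF00FF
    v = (v | (v << 4)) & 0x0F0F0F0F
    v = (v | (v << 2)) & 0x33333333
    v = (v | (v << 1)) & 0x55555555
    return v

def morton2d(i, j):
    """Interleave the bits of (i, j) to get the cell's position on the curve."""
    return interleave16(i) | (interleave16(j) << 1)

cells = [(i, j) for i in range(8) for j in range(8)]
cells.sort(key=lambda c: morton2d(*c))

n_workers = 4
chunk = len(cells) // n_workers
partitions = [cells[k * chunk:(k + 1) * chunk] for k in range(n_workers)]
print([p[0] for p in partitions], [p[-1] for p in partitions])
```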

  2. Transmission mode adaptive beamforming for planar phased arrays and its application to 3D ultrasonic transcranial imaging

    NASA Astrophysics Data System (ADS)

    Shapoori, Kiyanoosh; Sadler, Jeffrey; Wydra, Adrian; Malyarenko, Eugene; Sinclair, Anthony; Maev, Roman G.

    2013-03-01

    A new adaptive beamforming method for accurately focusing ultrasound behind highly scattering layers of human skull and its application to 3D transcranial imaging via small-aperture planar phased arrays are reported. Due to its undulating, inhomogeneous, porous, and highly attenuative structure, human skull bone severely distorts ultrasonic beams produced by conventional focusing methods in both imaging and therapeutic applications. Strong acoustical mismatch between the skull and brain tissues, in addition to the skull's undulating topology across the active area of a planar ultrasonic probe, could cause multiple reflections and unpredictable refraction during beamforming and imaging processes. Such effects could significantly deflect the probe's beam from the intended focal point. Presented here are the theoretical basis and simulation results of an adaptive beamforming method that compensates for the latter effects in transmission mode, accompanied by experimental verification. The probe is a custom-designed 2 MHz, 256-element matrix array with 0.45 mm element size and 0.1 mm kerf. Owing to its small footprint, it is possible to accurately measure the profile of the skull segment in contact with the probe and feed the results into our ray tracing program. The latter calculates the new time delay patterns adapted to the geometrical and acoustical properties of the skull phantom segment in contact with the probe. The time delay patterns correct for the refraction at the skull-brain boundary and bring the distorted beam back to its intended focus. The algorithms were implemented on the ultrasound open-platform ULA-OP (developed at the University of Florence).
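
    A much-simplified sketch of transmit-focusing delay computation for a planar array is shown below; it uses a straight-ray, two-layer (skull/brain) travel-time model with assumed element pitch, sound speeds, and skull thickness, whereas the paper's method traces refracted rays through the measured skull profile:

```python
# Hypothetical focusing-delay sketch for a planar 2-D array through a flat
# skull layer, ignoring refraction (straight rays only).
import numpy as np

pitch, n = 0.5e-3, 16                  # 16 x 16 sub-aperture, 0.5 mm pitch
c_skull, c_brain = 2800.0, 1540.0      # assumed sound speeds (m/s)
t_skull = 6e-3                         # assumed uniform skull thickness (m)
focus = np.array([0.0, 0.0, 40e-3])    # focal point 40 mm below the array

xs = (np.arange(n) - (n - 1) / 2) * pitch
ex, ey = np.meshgrid(xs, xs)           # element positions in the array plane

path = np.sqrt((ex - focus[0])**2 + (ey - focus[1])**2 + focus[2]**2)
# Fraction of each straight path that lies inside the skull slab.
frac = t_skull / focus[2]
tof = frac * path / c_skull + (1.0 - frac) * path / c_brain
delays = tof.max() - tof               # fire far elements first
print(delays.max() * 1e6, "us spread")
```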

  3. Development and demonstration of a novel computer planning solution for predefined correction of enophthalmos in anophthalmic patients using prebended 3D titanium-meshes--a technical note.

    PubMed

    Rana, Majeed; Essig, Harald; Rücker, Martin; Ruecker, Martin; Gellrich, Nils-Claudius

    2012-11-01

    Ablative surgery of the orbit is often associated with dramatic changes in facial geometry. Surgical intervention is often necessary to correct the functional and esthetic appearance in those patients who are anophthalmic, having an intact eyelid appearance and an orbital prosthesis. The outcome of the surgical correction depends on the shape of the orbital implants and their adequate placement. In the case of comparatively small rearrangements, the effect of implants on soft tissues can be estimated by surgeons on the basis of their experience. However, large deformities in complex cases (including large deformation of soft tissue or asymmetry) can hardly be predicted on the basis of simple empirical considerations. The purpose of the present technical note was to describe a new procedure of inverse design of customized orbital titanium meshes. To demonstrate this procedure, an anophthalmic patient with superior sulcus deformity and enophthalmos was enrolled. The volume and structure of the extraocular muscles, soft tissue, and bony structure of the orbital walls were examined using high-resolution multislice computed tomography. Next, a geometric model of the patient's anatomy was generated from the tomography data. Afterward, the orbital prosthesis was virtually relocated to a new position. Then, the desired correction of the particular soft tissue regions was performed using virtual sculpturing tools. Next, the deformation of the soft tissues and initial prosthesis boundaries were computed from the predefined displacements of the relocated tissue regions with the help of the Finite Element Method. The differential volume between the initial and designated position of the orbital prosthesis yielded the preferred implant shape required to effect the desired correction of soft tissue. During surgery, the preplanned position of the customized titanium meshes was guided using a navigation system. Although the inverse design of custom-tailored titanium meshes for

  4. Development and demonstration of a novel computer planning solution for predefined correction of enophthalmos in anophthalmic patients using prebended 3D titanium-meshes--a technical note.

    PubMed

    Rana, Majeed; Essig, Harald; Rücker, Martin; Ruecker, Martin; Gellrich, Nils-Claudius

    2012-11-01

    Ablative surgery of the orbit is often associated with dramatic changes in facial geometry. Surgical intervention is often necessary to correct the functional and esthetic appearance in those patients who are anophthalmic, having an intact eyelid appearance and an orbital prosthesis. The outcome of the surgical correction depends on the shape of the orbital implants and their adequate placement. In the case of comparatively small rearrangements, the effect of implants on soft tissues can be estimated by surgeons on the basis of their experience. However, large deformities in complex cases (including large deformation of soft tissue or asymmetry) can hardly be predicted on the basis of simple empirical considerations. The purpose of the present technical note was to describe a new procedure of inverse design of customized orbital titanium meshes. To demonstrate this procedure, an anophthalmic patient with superior sulcus deformity and enophthalmos was enrolled. The volume and structure of the extraocular muscles, soft tissue, and bony structure of the orbital walls were examined using high-resolution multislice computed tomography. Next, a geometric model of the patient's anatomy was generated from the tomography data. Afterward, the orbital prosthesis was virtually relocated to a new position. Then, the desired correction of the particular soft tissue regions was performed using virtual sculpturing tools. Next, the deformation of the soft tissues and initial prosthesis boundaries were computed from the predefined displacements of the relocated tissue regions with the help of the Finite Element Method. The differential volume between the initial and designated position of the orbital prosthesis yielded the preferred implant shape required to effect the desired correction of soft tissue. During surgery, the preplanned position of the customized titanium meshes was guided using a navigation system. Although the inverse design of custom-tailored titanium meshes for

  5. Polycaprolactone fiber meshes provide a 3D environment suitable for cultivation and differentiation of melanocytes from the outer root sheath of hair follicle.

    PubMed

    Savkovic, Vuk; Flämig, Franziska; Schneider, Marie; Sülflow, Katharina; Loth, Tina; Lohrenz, Andrea; Hacker, Michael Christian; Schulz-Siegmund, Michaela; Simon, Jan-Christoph

    2016-01-01

    Melanocytes differentiated from the stem cells of human hair follicle outer root sheath (ORS) have the potential for developing non-invasive treatments for skin disorders out of a minimal sample of hair root. With a robust procedure for melanocyte cultivation from the ORS of human hair follicle at hand, this study focused on the identification of a suitable biocompatible, biodegradable carrier as the next step toward their clinical implementation. Polycaprolactone (PCL) is a known biocompatible material used for a number of medical devices. In this study, we have populated electrospun PCL fiber meshes with normal human epidermal melanocytes (NHEM) as well as with hair-follicle-derived human melanocytes from the outer root sheath (HUMORS) and tested their functionality in vitro. PCL fiber meshes evidently provided a niche for melanocytes and supported their melanotic properties. The cells were tested for gene expression of PAX3, PMEL, TYR and MITF, as well as for proliferation, expression of melanocyte marker proteins tyrosinase and glycoprotein 100 (gp100), L-DOPA-tautomerase enzymatic activity and melanin content. Reduced mitochondrial activity and PAX-3 gene expression indicated that the three-dimensional PCL scaffold supported differentiation rather than proliferation of melanocytes. The monitored melanotic features of both the NHEM and HUMORS cultivated on PCL scaffolds significantly exceeded those of two-dimensional adherent cultures. PMID:26126647

  6. Polycaprolactone fiber meshes provide a 3D environment suitable for cultivation and differentiation of melanocytes from the outer root sheath of hair follicle.

    PubMed

    Savkovic, Vuk; Flämig, Franziska; Schneider, Marie; Sülflow, Katharina; Loth, Tina; Lohrenz, Andrea; Hacker, Michael Christian; Schulz-Siegmund, Michaela; Simon, Jan-Christoph

    2016-01-01

    Melanocytes differentiated from the stem cells of human hair follicle outer root sheath (ORS) have the potential for developing non-invasive treatments for skin disorders out of a minimal sample of hair root. With a robust procedure for melanocyte cultivation from the ORS of human hair follicle at hand, this study focused on the identification of a suitable biocompatible, biodegradable carrier as the next step toward their clinical implementation. Polycaprolactone (PCL) is a known biocompatible material used for a number of medical devices. In this study, we have populated electrospun PCL fiber meshes with normal human epidermal melanocytes (NHEM) as well as with hair-follicle-derived human melanocytes from the outer root sheath (HUMORS) and tested their functionality in vitro. PCL fiber meshes evidently provided a niche for melanocytes and supported their melanotic properties. The cells were tested for gene expression of PAX3, PMEL, TYR and MITF, as well as for proliferation, expression of melanocyte marker proteins tyrosinase and glycoprotein 100 (gp100), L-DOPA-tautomerase enzymatic activity and melanin content. Reduced mitochondrial activity and PAX-3 gene expression indicated that the three-dimensional PCL scaffold supported differentiation rather than proliferation of melanocytes. The monitored melanotic features of both the NHEM and HUMORS cultivated on PCL scaffolds significantly exceeded those of two-dimensional adherent cultures.

  7. Discontinuous finite element solution of the radiation diffusion equation on arbitrary polygonal meshes and locally adapted quadrilateral grids

    SciTech Connect

    Ragusa, Jean C.

    2015-01-01

    In this paper, we propose a piece-wise linear discontinuous (PWLD) finite element discretization of the diffusion equation for arbitrary polygonal meshes. It is based on the standard diffusion form and uses the symmetric interior penalty technique, which yields a symmetric positive definite linear system matrix. A preconditioned conjugate gradient algorithm is employed to solve the linear system. Piece-wise linear approximations also allow a straightforward implementation of local mesh adaptation, since unrefined cells can be interpreted as polygons with an increased number of vertices. Several test cases, taken from the literature on the discretization of the radiation diffusion equation, are presented: random, sinusoidal, Shestakov, and Z meshes are used. The last numerical example demonstrates the application of the PWLD discretization to adaptive mesh refinement.
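
    The solver stage mentioned above can be illustrated with a small Jacobi-preconditioned conjugate gradient routine; the 1-D Laplacian test matrix and the diagonal preconditioner below are stand-ins (assumptions) for the symmetric positive definite PWLD systems and the preconditioner actually used:

```python
# Minimal Jacobi-preconditioned conjugate gradient sketch for an SPD system.
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r                 # apply the diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD model problem
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))       # residual near round-off
```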

  8. Computations of two- and three-dimensional flows using an adaptive mesh

    NASA Astrophysics Data System (ADS)

    Nakahashi, K.

    1985-11-01

    Two- and three-dimensional, steady and unsteady viscous flow fields are numerically simulated by solving the Navier-Stokes equations. A solution-adaptive-grid method is used to redistribute the grid points so as to improve the resolution of shock waves and shear layers without increasing the number of grid points. Flow fields considered include two-dimensional transonic flows about airfoils, two- and three-dimensional supersonic flow past an aerodynamic afterbody with a propulsive jet, supersonic flow over a blunt fin mounted on a wall, and supersonic flow over a bump. The computed results demonstrate a significant improvement in accuracy and quality of the solutions owing to the solution-adaptive mesh.
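
    The grid-redistribution idea can be sketched in one dimension by equidistributing a gradient-based monitor function, so that points cluster at a steep shear-layer-like feature without increasing their number; the tanh profile and the monitor weight are illustrative assumptions, not the paper's solution-adaptive formulation:

```python
# Hypothetical 1-D grid redistribution by equidistribution of a monitor
# function w = 1 + alpha*|du/dx|.
import numpy as np

def redistribute(x, u, alpha=20.0):
    w = 1.0 + alpha * np.abs(np.gradient(u, x))       # monitor function
    s = np.concatenate(([0.0],
                        np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    s /= s[-1]                                        # normalized "mass"
    targets = np.linspace(0.0, 1.0, x.size)
    return np.interp(targets, s, x)                   # equidistributed nodes

x = np.linspace(-1.0, 1.0, 41)
u = np.tanh(20.0 * x)                                 # steep shear-layer proxy
x_new = redistribute(x, u)
print(np.diff(x_new).min(), np.diff(x_new).max())
```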

  9. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  10. Staggered grid lagrangian method with local structured adaptive mesh refinement for modeling shock hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliot, N S

    2000-09-26

    A new method for the solution of the unsteady Euler equations has been developed. The method combines staggered grid Lagrangian techniques with structured local adaptive mesh refinement (AMR). This method is a precursor to a more general adaptive arbitrary Lagrangian Eulerian (ALE-AMR) algorithm under development, which will facilitate the solution of problems currently at and beyond the limits of what traditional ALE methods can solve by focusing computational resources where they are required. Many of the core issues involved in the development of the ALE-AMR method hinge upon the integration of AMR with a Lagrange step, which is the focus of the work described here. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. These new algorithmic components are first developed in one dimension and are then generalized to two dimensions. Solutions of several model problems involving shock hydrodynamics are presented and discussed.

  11. Multi-dimensional upwind fluctuation splitting scheme with mesh adaption for hypersonic viscous flow

    NASA Astrophysics Data System (ADS)

    Wood, William Alfred, III

    production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid-converged skin friction coefficients with only five points in the boundary layer for this case. A viscous Mach 17.6 (perfect gas) cylinder case demonstrates solution monotonicity and heat transfer capability with the fluctuation splitting scheme. While fluctuation splitting is recommended over DMFDSFV, the difference in performance between the schemes is not so great as to render DMFDSFV obsolete. The second half of the dissertation develops a local, compact, anisotropic unstructured mesh adaption scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. This alignment behavior stands in contrast to the curvature clustering nature of the local, anisotropic unstructured adaption strategy based upon a posteriori error estimation that is used for comparison. The characteristic alignment is most pronounced for linear advection, with reduced improvement seen for the more complex non-linear advection and advection-diffusion cases. The adaption strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization. The system test case for the adaption strategy is a sting-mounted capsule at Mach-10 wind tunnel conditions, considered in both two-dimensional and axisymmetric configurations. For this complex flowfield the adaption results are disappointing since feature alignment does not emerge from the local operations. Aggressive adaption is shown to result in a loss of robustness for the solver, particularly in the bow shock/stagnation point interaction region. Reducing the adaption strength maintains solution robustness but fails to produce significant improvement in the surface heat transfer predictions.

  12. Minimising the error in eigenvalue calculations involving the Boltzmann transport equation using goal-based adaptivity on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Goffin, Mark A.; Baker, Christopher M. J.; Buchan, Andrew G.; Pain, Christopher C.; Eaton, Matthew D.; Smith, Paul N.

    2013-06-01

    This article presents goal-based anisotropic adaptive methods for the finite element method applied to the Boltzmann transport equation. The neutron multiplication factor, k, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for k with directional dependence. General error estimators are derived for any given functional of the flux and applied to k to acquire the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residual respectively. The Hessian is used as an approximation of the interpolation error in the solution which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit from representing the flux from each energy group on a specifically optimised mesh. The k goal-based adaptive method was applied to three examples which illustrate the superior accuracy in criticality problems that can be obtained.
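
    One ingredient of the procedure, the use of a recovered Hessian as a directional interpolation-error surrogate, is easy to sketch; the example below recovers the Hessian of an analytic boundary-layer-like field by finite differences and makes it positive definite through absolute eigenvalues, while the weighting by the forward and dual residuals described in the abstract is omitted:

```python
# Hypothetical Hessian-recovery sketch: the absolute Hessian of a sampled
# field acts as a directional (anisotropic) interpolation-error measure.
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.tanh(20.0 * (Y - 0.5))              # boundary-layer-like test field

dx = x[1] - x[0]
ux, uy = np.gradient(u, dx, dx)            # first derivatives
uxx, uxy = np.gradient(ux, dx, dx)         # second derivatives
_, uyy = np.gradient(uy, dx, dx)

def abs_hessian(h11, h12, h22):
    """Symmetrize the 2x2 Hessian by taking absolute eigenvalues."""
    H = np.array([[h11, h12], [h12, h22]])
    lam, V = np.linalg.eigh(H)
    return V @ np.diag(np.abs(lam)) @ V.T

# Near the layer the metric is strongly anisotropic: the large eigenvalue
# demands fine spacing across the layer, the small one allows coarse
# spacing along it.
M = abs_hessian(uxx[50, 55], uxy[50, 55], uyy[50, 55])
print(np.linalg.eigvalsh(M))
```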

  13. Minimising the error in eigenvalue calculations involving the Boltzmann transport equation using goal-based adaptivity on unstructured meshes

    SciTech Connect

    Goffin, Mark A.; Baker, Christopher M.J.; Buchan, Andrew G.; Pain, Christopher C.; Eaton, Matthew D.; Smith, Paul N.

    2013-06-01

    This article presents goal-based anisotropic adaptive methods for the finite element method applied to the Boltzmann transport equation. The neutron multiplication factor, k_eff, is used as the goal of the adaptive procedure. The anisotropic adaptive algorithm requires error measures for k_eff with directional dependence. General error estimators are derived for any given functional of the flux and applied to k_eff to acquire the driving force for the adaptive procedure. The error estimators require the solution of an appropriately formed dual equation. Forward and dual error indicators are calculated by weighting the Hessian of each solution with the dual and forward residual respectively. The Hessian is used as an approximation of the interpolation error in the solution which gives rise to the directional dependence. The two indicators are combined to form a single error metric that is used to adapt the finite element mesh. The residual is approximated using a novel technique arising from the sub-grid scale finite element discretisation. Two adaptive routes are demonstrated: (i) a single mesh is used to solve all energy groups, and (ii) a different mesh is used to solve each energy group. The second method aims to capture the benefit from representing the flux from each energy group on a specifically optimised mesh. The k_eff goal-based adaptive method was applied to three examples which illustrate the superior accuracy in criticality problems that can be obtained.

  14. Image Quality and Radiation Dose of CT Coronary Angiography with Automatic Tube Current Modulation and Strong Adaptive Iterative Dose Reduction Three-Dimensional (AIDR3D)

    PubMed Central

    Shen, Hesong; Dai, Guochao; Luo, Mingyue; Duan, Chaijie; Cai, Wenli; Liang, Dan; Wang, Xinhua; Zhu, Dongyun; Li, Wenru; Qiu, Jianping

    2015-01-01

    Purpose To investigate image quality and radiation dose of CT coronary angiography (CTCA) scanned using automatic tube current modulation (ATCM) and reconstructed by strong adaptive iterative dose reduction three-dimensional (AIDR3D). Methods Eighty-four consecutive CTCA patients were enrolled in the study. All patients were scanned using ATCM and reconstructed with strong AIDR3D, standard AIDR3D and filtered back-projection (FBP), respectively. Two radiologists who were blinded to the patients' clinical data and reconstruction methods evaluated image quality. Quantitative image quality evaluation included image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). To evaluate image quality qualitatively, the coronary arteries were classified into 15 segments based on the modified guidelines of the American Heart Association. Qualitative image quality was evaluated using a 4-point scale. Radiation dose was calculated based on the dose-length product. Results Compared with standard AIDR3D, strong AIDR3D had lower image noise and higher SNR and CNR, and their differences were all statistically significant (P<0.05); compared with FBP, strong AIDR3D decreased image noise by 46.1%, increased SNR by 84.7%, and improved CNR by 82.2%, and their differences were all statistically significant (P<0.05 or 0.001). Segments with diagnostic image quality for strong AIDR3D were 336 (100.0%), 486 (96.4%), and 394 (93.8%) in the proximal, middle, and distal parts, respectively; whereas those for standard AIDR3D were 332 (98.8%), 472 (93.7%), and 378 (90.0%), respectively; those for FBP were 217 (64.6%), 173 (34.3%), and 114 (27.1%), respectively; total segments with diagnostic image quality for strong AIDR3D (1216, 96.5%) were higher than those for standard AIDR3D (1182, 93.8%) and FBP (504, 40.0%); the differences between strong AIDR3D and standard AIDR3D, and between strong AIDR3D and FBP, were all statistically significant (P<0.05 or 0.001). The mean effective radiation dose was (2.55±1.21) mSv. Conclusion

  15. 3D imaging of cone photoreceptors over extended time periods using optical coherence tomography with adaptive optics

    NASA Astrophysics Data System (ADS)

    Kocaoglu, Omer P.; Lee, Sangyeol; Jonnal, Ravi S.; Wang, Qiang; Herde, Ashley E.; Besecker, Jason; Gao, Weihua; Miller, Donald T.

    2011-03-01

    Optical coherence tomography with adaptive optics (AO-OCT) is a highly sensitive, noninvasive method for 3D imaging of the microscopic retina. The purpose of this study is to advance AO-OCT technology by enabling repeated imaging of cone photoreceptors over extended periods of time (days). This sort of longitudinal imaging permits monitoring of 3D cone dynamics in both normal and diseased eyes, in particular the physiological processes of disc renewal and phagocytosis, which are disrupted by retinal diseases such as age-related macular degeneration and retinitis pigmentosa. For this study, the existing AO-OCT system at Indiana underwent several major hardware and software improvements to optimize system performance for 4D cone imaging. First, ultrahigh speed imaging was realized using a Basler Sprint camera. Second, a light source with adjustable spectrum was realized by integration of an Integral laser (Femto Lasers, λc = 800 nm, Δλ = 160 nm) and spectral filters in the source arm. For cone imaging, we used a bandpass filter with λc = 809 nm and Δλ = 81 nm (2.6 μm nominal axial resolution in tissue, and 167 kHz A-line rate using 1,408 px), which reduced the impact of eye motion compared to previous AO-OCT implementations. Third, eye motion artifacts were further reduced by custom ImageJ plugins that registered (axially and laterally) the volume videos. In two subjects, cone photoreceptors were imaged and tracked over a ten-day period and their reflectance and outer segment (OS) lengths measured. High-speed imaging and image registration/dewarping were found to reduce eye motion to a fraction of a cone width (1 μm root mean square). The pattern of reflections in the cones was found to change dramatically and occurred on a spatial scale well below the resolution of clinical instruments. Normalized reflectance of the connecting cilia (CC) and OS posterior tip (PT) of an exemplary cone was 54 ± 4, 47 ± 4, 48 ± 6, 50 ± 5, 56 ± 1% and 46 ± 4, 53 ± 4, 52 ± 6, 50 ± 5, 44

  16. Adaptive Mesh Refinement and Adaptive Time Integration for Electrical Wave Propagation on the Purkinje System.

    PubMed

    Ying, Wenjun; Henriquez, Craig S

    2015-01-01

    An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local change of membrane properties. Numerical examples demonstrating the efficiency and accuracy of the adaptive algorithm are presented.
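
    A stripped-down sketch of the splitting described above (an implicit diffusion substep followed by an implicit, node-local reaction substep) is given below for a single 1-D cable; the cubic reaction term, Neumann boundaries, and Newton iteration count are illustrative assumptions rather than the Purkinje membrane model:

```python
# Hypothetical operator-splitting step: backward-Euler diffusion solve,
# then a per-node implicit (Newton) solve of the nonlinear reaction.
import numpy as np

N, L, D, dt, a = 200, 1.0, 1e-3, 0.05, 0.1
dx = L / N
x = (np.arange(N) + 0.5) * dx
lam = D * dt / dx**2

# Backward-Euler diffusion matrix with homogeneous Neumann ends.
A = np.eye(N) * (1.0 + 2.0 * lam) - np.eye(N, k=1) * lam - np.eye(N, k=-1) * lam
A[0, 0] -= lam
A[-1, -1] -= lam

f  = lambda u: u * (1.0 - u) * (u - a)            # excitable-media reaction
df = lambda u: (1.0 - 2.0 * u) * (u - a) + u * (1.0 - u)

def step(u):
    v = np.linalg.solve(A, u)                     # implicit diffusion substep
    w = v.copy()                                  # solve w - dt f(w) = v
    for _ in range(5):                            # a few Newton iterations
        w -= (w - dt * f(w) - v) / (1.0 - dt * df(w))
    return w

u = np.where(x < 0.1, 1.0, 0.0)                   # stimulate the left end
for _ in range(200):
    u = step(u)
print(float(u.max()), float(u[N // 2]))
```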

  17. Adaptive Mesh Refinement and Adaptive Time Integration for Electrical Wave Propagation on the Purkinje System

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2015-01-01

    An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, by an operator splitting technique, the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations are integrated separately with implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local change of membrane properties. Numerical examples demonstrating the efficiency and accuracy of the adaptive algorithm are presented. PMID:26581455

  18. Relativistic magnetohydrodynamics in dynamical spacetimes: A new adaptive mesh refinement implementation

    SciTech Connect

    Etienne, Zachariah B.; Liu, Yuk Tung; Shapiro, Stuart L.

    2010-10-15

    We have written and tested a new general relativistic magnetohydrodynamics code, capable of evolving magnetohydrodynamics (MHD) fluids in dynamical spacetimes with adaptive-mesh refinement (AMR). Our code solves the Einstein-Maxwell-MHD system of coupled equations in full 3+1 dimensions, evolving the metric via the Baumgarte-Shapiro-Shibata-Nakamura formalism and the MHD and magnetic induction equations via a conservative, high-resolution shock-capturing scheme. The induction equations are recast as an evolution equation for the magnetic vector potential, which exists on a grid that is staggered with respect to the hydrodynamic and metric variables. The divergenceless constraint ∇·B = 0 is enforced automatically, since B is computed as the curl of the vector potential. Our MHD scheme is fully compatible with AMR, so that fluids at AMR refinement boundaries maintain ∇·B = 0. In simulations with uniform grid spacing, our MHD scheme is numerically equivalent to a commonly used, staggered-mesh constrained-transport scheme. We present code validation test results, both in Minkowski and curved spacetimes. They include magnetized shocks, nonlinear Alfvén waves, cylindrical explosions, cylindrical rotating disks, magnetized Bondi tests, and the collapse of a magnetized rotating star. Some of the more stringent tests involve black holes. We find good agreement between analytic and numerical solutions in these tests, and achieve convergence at the expected order.
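
    The key property of the vector-potential formulation, that the discrete divergence of B vanishes identically when B is taken as a staggered curl of A, can be checked in a few lines; the 2-D grid and random A_z field below are illustrative, whereas the actual scheme is 3-D, relativistic, and tied to the AMR hierarchy:

```python
# Hypothetical staggered-grid check: B = curl(0, 0, A_z) has exactly zero
# discrete divergence, independent of the values of A_z.
import numpy as np

nx, ny, dx, dy = 32, 32, 1.0 / 32, 1.0 / 32
rng = np.random.default_rng(0)
Az = rng.standard_normal((nx + 1, ny + 1))      # A_z at cell corners

# Face-centered field components:  Bx = dA_z/dy,  By = -dA_z/dx.
Bx = (Az[:, 1:] - Az[:, :-1]) / dy              # on x-faces, shape (nx+1, ny)
By = -(Az[1:, :] - Az[:-1, :]) / dx             # on y-faces, shape (nx, ny+1)

# Discrete divergence per cell: net flux through its four faces.
divB = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
print(np.abs(divB).max())                       # zero to round-off
```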

  19. Finite-difference lattice Boltzmann method with a block-structured adaptive-mesh-refinement technique.

    PubMed

    Fakhari, Abbas; Lee, Taehun

    2014-03-01

    An adaptive-mesh-refinement (AMR) algorithm for the finite-difference lattice Boltzmann method (FDLBM) is presented in this study. The idea behind the proposed AMR is to remove the need for a tree-type data structure. Instead, pointer attributes are used to determine the neighbors of a certain block via appropriate adjustment of its child identifications. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with an efficient algorithm that is easier to implement and use on parallel machines. To allow different mesh sizes in separate parts of the computational domain, the Eulerian formulation of the streaming process is invoked. As a result, there is no need for rescaling the distribution functions or using a temporal interpolation at the fine-coarse grid boundaries. The accuracy and efficiency of the proposed FDLBM AMR are extensively assessed by investigating a variety of vorticity-dominated flow fields, including Taylor-Green vortex flow, lid-driven cavity flow, thin shear layer flow, and the flow past a square cylinder.

  20. Efficient low-bit-rate adaptive mesh-based motion compensation technique

    NASA Astrophysics Data System (ADS)

    Mahmoud, Hanan A.; Bayoumi, Magdy A.

    2001-08-01

    This paper proposes a two-stage global motion estimation method using a novel quadtree block-based motion estimation technique and an active mesh model. In the first stage, motion parameters are estimated by fitting block-based motion vectors computed using a new efficient quadtree technique that divides a frame into equilateral triangle blocks using the quadtree structure. Arbitrary partition shapes are achieved by allowing 4-to-1, 3-to-1, and 2-to-1 merging of sibling blocks having the same motion vector. In the second stage, the mesh is constructed using an adaptive triangulation procedure that places more triangles over areas with high motion content; these areas are estimated during the first stage. Finally, motion compensation is achieved using a novel algorithm, carried out by both the encoder and the decoder, that determines the optimal triangulation of the resultant partitions, followed by affine mapping at the encoder. Computer simulation results show that the proposed method gives better performance than conventional ones in terms of the peak signal-to-noise ratio (PSNR) and the compression ratio (CR).

  1. White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification

    NASA Astrophysics Data System (ADS)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-01

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotational forces to the hydrodynamics. Standard techniques for coupling gravitational and rotational forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress, and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected across the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  2. The Impact of Different Levels of Adaptive Iterative Dose Reduction 3D on Image Quality of 320-Row Coronary CT Angiography: A Clinical Trial

    PubMed Central

    Feger, Sarah; Rief, Matthias; Zimmermann, Elke; Martus, Peter; Schuijf, Joanne Désirée; Blobel, Jörg; Richter, Felicitas; Dewey, Marc

    2015-01-01

    Purpose The aim of this study was the systematic image quality evaluation of coronary CT angiography (CTA), reconstructed with the 3 different levels of adaptive iterative dose reduction (AIDR 3D) and compared to filtered back projection (FBP) with quantum denoising software (QDS). Methods Standard-dose CTA raw data of 30 patients with a mean radiation dose of 3.2 ± 2.6 mSv were reconstructed using AIDR 3D mild, standard, and strong, and compared to FBP/QDS. Objective image quality comparison (signal, noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), contour sharpness) was performed using 21 measurement points per patient, including measurements in each coronary artery from proximal to distal. Results Objective image quality parameters improved with increasing levels of AIDR 3D. Noise was lowest in AIDR 3D strong (p≤0.001 at 20/21 measurement points; compared with FBP/QDS). Signal and contour sharpness analysis showed no significant difference between the reconstruction algorithms for most measurement points. Best coronary SNR and CNR were achieved with AIDR 3D strong. No loss of SNR or CNR in distal segments was seen with AIDR 3D as compared to FBP. Conclusions On standard-dose coronary CTA images, AIDR 3D strong showed higher objective image quality than FBP/QDS without reducing contour sharpness. Trial Registration Clinicaltrials.gov NCT00967876 PMID:25945924

  3. Multifluid adaptive-mesh simulation of the solar wind interaction with the local interstellar medium

    SciTech Connect

    Kryukov, I. A.; Borovikov, S. N.; Pogorelov, N. V.; Zank, G. P.

    2006-09-26

    DOE's SciDAC adaptive mesh refinement code Chombo has been modified for the solution of compressible MHD flows with the application of high-resolution, shock-capturing numerical schemes. The code developed is further extended to involve multiple fluids and applied to the problem of the solar wind interaction with the local interstellar medium. For this purpose, a set of MHD equations is solved together with a few sets of the Euler gas dynamics equations, depending on the number of neutral fluids included in the model. We present our first results, obtained in the framework of an axially symmetric multifluid model which is applicable to magnetic-field-aligned flows. Details are shown of the generation and development of Rayleigh-Taylor and Kelvin-Helmholtz instabilities of the heliopause. A comparison is given of the results obtained with two- and four-fluid models.

  4. Galaxy Mergers with Adaptive Mesh Refinement: Star Formation and Hot Gas Outflow

    SciTech Connect

    Kim, Ji-hoon; Wise, John H.; Abel, Tom; /KIPAC, Menlo Park /Stanford U., Phys. Dept.

    2011-06-22

    In hierarchical structure formation, merging of galaxies is frequent and known to dramatically affect their properties. To comprehend these interactions, high-resolution simulations are indispensable because of the nonlinear coupling between pc and Mpc scales. To this end, we present the first adaptive mesh refinement (AMR) simulation of two merging, low mass, initially gas-rich galaxies (1.8 × 10¹⁰ M☉ each), including star formation and feedback. With galaxies resolved by approximately 2 × 10⁷ total computational elements, we achieve unprecedented resolution of the multiphase interstellar medium, finding a widespread starburst in the merging galaxies via shock-induced star formation. The high dynamic range of AMR also allows us to follow the interplay between the galaxies and their embedding medium, depicting how galactic outflows and a hot metal-rich halo form. These results demonstrate that AMR provides a powerful tool in understanding interacting galaxies.

  5. Detached Eddy Simulation of the UH-60 Rotor Wake Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.; Ahmad, Jasim U.

    2012-01-01

    Time-dependent Navier-Stokes flow simulations have been carried out for a UH-60 rotor with simplified hub in forward flight and hover flight conditions. Flexible rotor blades and flight trim conditions are modeled and established by loosely coupling the OVERFLOW Computational Fluid Dynamics (CFD) code with the CAMRAD II helicopter comprehensive code. High order spatial differences, Adaptive Mesh Refinement (AMR), and Detached Eddy Simulation (DES) are used to obtain highly resolved vortex wakes, where the largest turbulent structures are captured. Special attention is directed towards ensuring the dual time accuracy is within the asymptotic range, and verifying the loose coupling convergence process using AMR. The AMR/DES simulation produced vortical worms for forward flight and hover conditions, similar to previous results obtained for the TRAM rotor in hover. AMR proved to be an efficient means to capture a rotor wake without a priori knowledge of the wake shape.

  6. A GPU implementation of adaptive mesh refinement to simulate tsunamis generated by landslides

    NASA Astrophysics Data System (ADS)

    de la Asunción, Marc; Castro, Manuel J.

    2016-04-01

    In this work we propose a CUDA implementation for the simulation of landslide-generated tsunamis using a two-layer Savage-Hutter type model and adaptive mesh refinement (AMR). The AMR method consists of dynamically increasing the spatial resolution of the regions of interest of the domain while keeping the rest of the domain at low resolution, thus obtaining better runtimes and similar results compared to increasing the spatial resolution of the entire domain. Our AMR implementation uses a patch-based approach, it supports up to three levels, power-of-two ratios of refinement, different refinement criteria and also several user parameters to control the refinement and clustering behaviour. A strategy based on the variation of the cell values during the simulation is used to interpolate and propagate the values of the fine cells. Several numerical experiments using artificial and realistic scenarios are presented.

  7. Numerical Relativistic Magnetohydrodynamics with ADER Discontinuous Galerkin methods on adaptively refined meshes.

    NASA Astrophysics Data System (ADS)

    Zanotti, O.; Dumbser, M.; Fambri, F.

    2016-05-01

    We describe a new method for the solution of the ideal MHD equations in special relativity which adopts the following strategy: (i) the main scheme is based on Discontinuous Galerkin (DG) methods, allowing for an arbitrary accuracy of order N+1, where N is the degree of the basis polynomials; (ii) in order to cope with oscillations at discontinuities, an "a posteriori" sub-cell limiter is activated, which scatters the DG polynomials of the previous time-step onto a set of 2N+1 sub-cells, over which the solution is recomputed by means of a robust finite volume scheme; (iii) a local spacetime Discontinuous-Galerkin predictor is applied both on the main grid of the DG scheme and on the sub-grid of the finite volume scheme; (iv) adaptive mesh refinement (AMR) with local time-stepping is used. We validate the new scheme and comment on its potential applications in high energy astrophysics.

  8. On the Computation of Integral Curves in Adaptive Mesh Refinement Vector Fields

    SciTech Connect

    Deines, Eduard; Weber, Gunther H.; Garth, Christoph; Van Straalen, Brian; Borovikov, Sergey; Martin, Daniel F.; Joy, Kenneth I.

    2011-06-27

    Integral curves, such as streamlines, streaklines, pathlines, and timelines, are an essential tool in the analysis of vector field structures, offering straightforward and intuitive interpretation of visualization results. While such curves have a long-standing tradition in vector field visualization, their application to Adaptive Mesh Refinement (AMR) simulation results poses unique problems. AMR is a highly effective discretization method for a variety of physical simulation problems and has recently been applied to the study of vector fields in flow and magnetohydrodynamic applications. The cell-centered nature of AMR data and discontinuities in the vector field representation arising from AMR level boundaries complicate the application of numerical integration methods to compute integral curves. In this paper, we propose a novel approach to alleviate these problems and show its application to streamline visualization in an AMR model of the magnetic field of the solar system as well as to a simulation of two incompressible viscous vortex rings merging.
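
    As a reminder of the basic integral-curve computation that the paper extends to AMR data, the sketch below traces a streamline with classical RK4 through an analytic 2-D vortex field; evaluating the velocity across AMR patch and level boundaries, which is the paper's actual difficulty, is not modeled:

```python
# Hypothetical RK4 streamline integration through an analytic vector field.
import numpy as np

def v(p):                                   # solid-body-like vortex field
    x, y = p
    return np.array([-y, x])

def rk4_streamline(p0, h=0.01, steps=700):
    pts = [np.asarray(p0, dtype=float)]
    for _ in range(steps):
        p = pts[-1]
        k1 = v(p)
        k2 = v(p + 0.5 * h * k1)
        k3 = v(p + 0.5 * h * k2)
        k4 = v(p + h * k3)
        pts.append(p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)

curve = rk4_streamline([1.0, 0.0])
print(np.linalg.norm(curve[-1]))            # radius stays ~1 for this field
```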

  9. Goal functional evaluations for phase-field fracture using PU-based DWR mesh adaptivity

    NASA Astrophysics Data System (ADS)

    Wick, Thomas

    2016-06-01

    In this study, a posteriori error estimation and goal-oriented mesh adaptivity are developed for phase-field fracture propagation. Goal functionals are computed with the dual-weighted residual (DWR) method, which is realized by a recently introduced novel localization technique based on a partition-of-unity (PU). This technique is straightforward to apply since the weak residual is used. The influence of neighboring cells is gathered by the PU. Consequently, neither strong residuals nor jumps over element edges are required. Therefore, this approach facilitates the application of the DWR method to coupled (nonlinear) multiphysics problems such as fracture propagation. These developments then allow for a systematic investigation of the discretization error for certain quantities of interest. Specifically, our focus on the relationship between the phase-field regularization and the spatial discretization parameter in terms of goal functional evaluations is novel.

  10. A Parallel Ocean Model With Adaptive Mesh Refinement Capability For Global Ocean Prediction

    SciTech Connect

    Herrnstein, Aaron R.

    2005-12-01

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No

  11. Feasibility of electrical impedance tomography in haemorrhagic stroke treatment using adaptive mesh

    NASA Astrophysics Data System (ADS)

    Nasehi Tehrani, J.; Anderson, C.; Jin, C.; van Schaik, A.; Holder, D.; McEwan, A.

    2010-04-01

    EIT has been proposed for acute stroke differentiation, specifically to determine the type of stroke, either ischaemia (clot) or haemorrhage (bleed), to allow the rapid use of clot-busting drugs in the former (Romsauerova et al 2006). This addresses an important medical need, although there is little treatment offered in the case of haemorrhage. Also, the demands on EIT are high, with usually no availability to take a 'before' measurement, ruling out time difference imaging. Recently a new treatment option for haemorrhage has been proposed and is being studied in an international randomised controlled trial: the early reduction of elevated blood pressure to attenuate the haematoma. This has been shown via CT to reduce bleeds by up to 1 mL (Anderson et al 2008). The use of EIT as a continuous measure is desirable here to monitor the effect of blood pressure reduction. A 1 mL increase of a haemorrhagic lesion located near the scalp on the right side of the head caused a boundary voltage change of less than 0.05% at 50 kHz. This could be visually observed in a time difference 3D reconstruction with no change in electrode positions, mesh, background conductivity or drift when baseline noise was less than 0.005%, but not when noise was increased to 0.01%. This useful result informs us that the EIT system must have noise of less than 0.005% at 50 kHz, including instrumentation, physiological and other biases.

  12. Efficient global wave propagation adapted to 3-D structural complexity: a pseudo-spectral/spectral-element approach

    NASA Astrophysics Data System (ADS)

    Leng, Kuangdai; Nissen-Meyer, Tarje; van Driel, Martin

    2016-09-01

    We present a new, computationally efficient numerical method to simulate global seismic wave propagation in realistic 3-D Earth models. We characterize the azimuthal dependence of 3-D wavefields in terms of Fourier series, such that the 3-D equations of motion reduce to an algebraic system of coupled 2-D meridian equations, which is then solved by a 2-D spectral element method (SEM). The computational efficiency of such a hybrid method stems from the lateral smoothness of 3-D Earth models and the axial singularity of seismic point sources, which jointly confine the Fourier modes of the wavefields to a few low orders. We show novel benchmarks of global wave solutions in 3-D structures between our method and an independent, fully discretized 3-D SEM, with remarkable agreement. Performance comparisons are carried out on three state-of-the-art tomography models, with seismic periods ranging from 34 s down to 11 s. Our method runs up to two orders of magnitude faster than the 3-D SEM, with a computational advantage that grows with seismic frequency.
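
    The core reduction, representing a smooth azimuthal dependence with only a few Fourier modes, can be illustrated with a short transform; the test field and truncation order below are assumptions chosen so that the truncated series is exact to round-off:

```python
# Hypothetical azimuthally smooth field: a handful of Fourier modes represent
# it exactly, so each retained order maps onto one 2-D meridian problem.
import numpy as np

nphi = 256
phi = 2.0 * np.pi * np.arange(nphi) / nphi
field = 1.0 + 0.3 * np.cos(phi) + 0.05 * np.cos(3 * phi - 0.4)

coeffs = np.fft.rfft(field)                 # azimuthal Fourier coefficients
nmax = 4                                    # keep orders 0..nmax only
truncated = np.zeros_like(coeffs)
truncated[:nmax + 1] = coeffs[:nmax + 1]
reconstruction = np.fft.irfft(truncated, n=nphi)
print(np.abs(reconstruction - field).max()) # ~1e-16 for this smooth field
```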

  13. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    SciTech Connect

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  14. Influence of electrospun fiber mesh size on hMSC oxygen metabolism in 3D collagen matrices: experimental and theoretical evidences.

    PubMed

    Guaccio, Angela; Guarino, Vincenzo; Perez, Marco A Alvarez-; Cirillo, Valentina; Netti, Paolo A; Ambrosio, Luigi

    2011-08-01

    The traditional tissue-engineering paradigm of regenerating tissue or organs in vitro by combining an artificial matrix with a cell population has progressively changed direction. The most recent concept is the realization of a fully functional biohybrid, in which both the artificial and the biotic phase contribute to the formation of the new organic matter. In this direction, interest is growing in approaches that exploit micro- and nanoscale control of cell-material interactions, based on the realization of elementary building blocks of cells and materials that constitute the starting point for the expansion into more complex 3D structures. Since a spontaneous assembly of all these components is expected, however, it becomes more important than ever to define the features influencing cellular behavior, whether material functional properties or material architecture. In this work, we investigated the direct effect of electrospun fiber size on the oxygen metabolism of hMSC cells while all other culture parameters were kept constant. To this aim, thin PCL electrospun membranes with micro- and nano-scale texturing were layered between two collagen slices to create a sandwich structure (µC-PCL-C and nC-PCL-C). Cells were seeded on the membranes, and the oxygen consumption was determined by a phosphorescence quenching technique. The results indicate a strong effect of scaffold architecture on cell metabolism, also revealed by increased HIF1-α gene expression in nC-PCL-C. These findings offer new insights into the role of materials in specific cell activities, and also imply the existence of very interesting criteria for the control of tissue growth through the tuning of scaffold architecture.

  15. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  16. Level-by-level artificial viscosity and visualization for MHD simulation with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Hatori, Tomoharu; Ito, Atsushi M.; Nunami, Masanori; Usui, Hideyuki; Miura, Hideaki

    2016-08-01

    We propose a numerical method to determine the artificial viscosity in magnetohydrodynamics (MHD) simulations with the adaptive mesh refinement (AMR) method, where the artificial viscosity is adaptively adjusted according to the resolution level of the AMR hierarchy. Although the suitable value of the artificial viscosity depends on the governing equations and the target problem, it can be determined by von Neumann stability analysis. By means of the new method, the "level-by-level artificial viscosity method," MHD simulations of the Rayleigh-Taylor instability (RTI) are carried out with the AMR method. The validity of the level-by-level artificial viscosity method is confirmed by comparing the linear growth rates of RTI between the AMR simulations and simple simulations with a uniform grid and uniform artificial viscosity whose resolution matches the highest level of the AMR simulation. Moreover, in the nonlinear phase of RTI, the secondary instability is clearly observed, and the hierarchical data structure of the AMR calculation is visualized as high-resolution regions float up like terraced fields. In applications of the method to general fluid simulations, the growth of small structures can be sufficiently reproduced while the divergence of numerical solutions is suppressed.
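
    As a rough sketch of the level-by-level idea, the snippet below assigns one artificial-viscosity coefficient per refinement level by scaling a base value with the local grid spacing and capping it with the explicit von Neumann stability limit for a diffusion-type term. The scaling exponent and safety factor are illustrative assumptions, not values from the paper.

        # Illustrative sketch, not the authors' exact prescription.
        def level_viscosities(dx_coarsest, n_levels, refine_ratio, dt,
                              nu_coarsest, scaling_exponent=1.0, safety=0.9):
            """Return one artificial-viscosity coefficient per AMR level."""
            nus = []
            for level in range(n_levels):
                dx = dx_coarsest / refine_ratio**level
                nu = nu_coarsest * (dx / dx_coarsest)**scaling_exponent  # assumed scaling
                nu_stable = safety * 0.5 * dx**2 / dt                    # von Neumann bound
                nus.append(min(nu, nu_stable))
            return nus

        # Example: 4 levels with refinement ratio 2
        print(level_viscosities(dx_coarsest=1.0, n_levels=4, refine_ratio=2,
                                dt=1e-3, nu_coarsest=1e-2))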

  17. Dynamically adaptive mesh refinement technique for image reconstruction in optical tomography.

    PubMed

    Soloviev, Vadim Y; Krasnosselskaia, Lada V

    2006-04-20

    A novel adaptive mesh technique is introduced for problems of image reconstruction in luminescence optical tomography. A dynamical adaptation of the three-dimensional scheme based on the finite-volume formulation reduces computational time and balances the ill-posed nature of the inverse problem. The arbitrary shape of the bounding surface is handled by an additional refinement of computational cells on the boundary. Dynamical shrinking of the search volume is introduced to improve computational performance and accuracy while locating the luminescence target. Light propagation in the medium is modeled by the telegraph equation, and the image-reconstruction algorithm is derived from the Fredholm integral equation of the first kind. Stability and computational efficiency of the introduced method are demonstrated for image reconstruction of one and two spherical luminescent objects embedded within a breastlike tissue phantom. Experimental measurements are simulated by the solution of the forward problem on a grid of 5x5 light guides attached to the surface of the phantom.

  18. Performance Characteristics of an Adaptive Mesh RefinementCalculation on Scalar and Vector Platforms

    SciTech Connect

    Welcome, Michael; Rendleman, Charles; Oliker, Leonid; Biswas, Rupak

    2006-01-31

    Adaptive mesh refinement (AMR) is a powerful technique that reduces the resources necessary to solve otherwise intractable problems in computational science. The AMR strategy solves the problem on a relatively coarse grid, and dynamically refines it in regions requiring higher resolution. However, AMR codes tend to be far more complicated than their uniform grid counterparts due to the software infrastructure necessary to dynamically manage the hierarchical grid framework. Despite this complexity, it is generally believed that future multi-scale applications will increasingly rely on adaptive methods to study problems at unprecedented scale and resolution. Recently, a new generation of parallel-vector architectures have become available that promise to achieve extremely high sustained performance for a wide range of applications, and are the foundation of many leadership-class computing systems worldwide. It is therefore imperative to understand the tradeoffs between conventional scalar and parallel-vector platforms for solving AMR-based calculations. In this paper, we examine the HyperCLaw AMR framework to compare and contrast performance on the Cray X1E, IBM Power3 and Power5, and SGI Altix. To the best of our knowledge, this is the first work that investigates and characterizes the performance of an AMR calculation on modern parallel-vector systems.

  19. A chimera grid scheme. [multiple overset body-conforming mesh system for finite difference adaptation to complex aircraft configurations

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Dougherty, F. C.; Benek, J. A.

    1983-01-01

    A mesh system composed of multiple overset body-conforming grids is described for adapting finite-difference procedures to complex aircraft configurations. In this so-called 'chimera mesh,' a major grid is generated about a main component of the configuration and overset minor grids are used to resolve all other features. Methods for connecting overset multiple grids and modifications of flow-simulation algorithms are discussed. Computational tests in two dimensions indicate that the use of multiple overset grids can simplify the task of grid generation without an adverse effect on flow-field algorithms and computer code complexity.

  20. Three-dimensional Wavelet-based Adaptive Mesh Refinement for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2013-12-01

    Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity, and a large number of reacting species. In our previous work we have shown that in order to achieve adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above, we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid-spacing throughout the entire domain. The method uses a multi-grid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. We have applied the WAMR method for numerical
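
    The wavelet-style refinement criterion can be illustrated in one dimension: a fine-grid point is flagged wherever its detail coefficient (its deviation from interpolation off the coarser grid) exceeds a threshold. This is a generic sketch of the idea, not the WAMR data structure or its parallel implementation.

        import numpy as np

        def refinement_flags(f_fine, threshold):
            """Flag fine-grid points whose interpolation detail exceeds 'threshold'."""
            flags = np.zeros(f_fine.size, dtype=bool)
            odd = np.arange(1, f_fine.size - 1, 2)   # points absent from the coarse grid
            detail = np.abs(f_fine[odd] - 0.5 * (f_fine[odd - 1] + f_fine[odd + 1]))
            flags[odd] = detail > threshold
            return flags

        x = np.linspace(0.0, 1.0, 257)
        f = np.tanh((x - 0.5) / 0.02)                # sharp front needing local refinement
        print(np.where(refinement_flags(f, 1e-3))[0])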

  1. Expectation maximization SPECT reconstruction with a content-adaptive singularity-based mesh-domain image model

    NASA Astrophysics Data System (ADS)

    Lu, Yao; Ye, Hongwei; Xu, Yuesheng; Hu, Xiaofei; Vogelsang, Levon; Shen, Lixin; Feiglin, David; Lipson, Edward; Krol, Andrzej

    2008-03-01

    To improve the speed and quality of ordered-subsets expectation-maximization (OSEM) SPECT reconstruction, we have implemented a content-adaptive singularity-based mesh-domain image model (CASMIM) with an accurate algorithm for estimation of the mesh-domain system matrix. A preliminary image, used to initialize CASMIM reconstruction, was obtained using pixel-domain OSEM. The mesh-domain representation of the image was produced by a 2D wavelet transform followed by Delaunay triangulation to obtain joint estimation of nodal locations and their activity values. A system matrix with attenuation compensation was investigated. Digital chest phantom SPECT was simulated and reconstructed. The quality of images reconstructed with OSEM-CASMIM is comparable to that from pixel-domain OSEM, but images are obtained five times faster by the CASMIM method.
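
    A simplified sketch of building a content-adaptive triangulated mesh for a 2D image: nodes are placed preferentially where the image has strong features and are then Delaunay-triangulated. A plain gradient magnitude stands in here for the wavelet-based singularity detection of CASMIM, so this is only an illustration of the idea, not the published algorithm.

        import numpy as np
        from scipy.spatial import Delaunay

        def content_adaptive_mesh(image, n_nodes=500, seed=0):
            rng = np.random.default_rng(seed)
            gy, gx = np.gradient(image.astype(float))
            weight = np.hypot(gx, gy) + 1e-6              # favour high-gradient pixels
            prob = (weight / weight.sum()).ravel()
            idx = rng.choice(image.size, size=n_nodes, replace=False, p=prob)
            rows, cols = np.unravel_index(idx, image.shape)
            corners = np.array([[0, 0], [0, image.shape[1] - 1],
                                [image.shape[0] - 1, 0],
                                [image.shape[0] - 1, image.shape[1] - 1]])
            nodes = np.vstack([np.column_stack([rows, cols]), corners])
            return nodes, Delaunay(nodes)

        phantom = np.zeros((128, 128))
        phantom[40:80, 50:90] = 1.0                       # toy "activity" region
        nodes, tri = content_adaptive_mesh(phantom)
        print(nodes.shape, tri.simplices.shape)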

  2. A new physical model with multilayer architecture for facial expression animation using dynamic adaptive mesh.

    PubMed

    Zhang, Yu; Prakash, Edmond C; Sung, Eric

    2004-01-01

    This paper presents a new physically based 3D facial model, built on anatomical knowledge, which provides high fidelity for facial expression animation while optimizing the computation. Our facial model has a multilayer biomechanical structure, incorporating a physically based approximation to facial skin tissue, a set of anatomically motivated facial muscle actuators, and an underlying skull structure. In contrast to existing mass-spring-damper (MSD) facial models, our dynamic skin model uses nonlinear springs to directly simulate the nonlinear visco-elastic behavior of soft tissue, and a new kind of edge repulsion spring is developed to prevent collapse of the skin model. Different types of muscle models have been developed to simulate the distribution of the muscle force applied on the skin due to muscle contraction. The presence of the skull advantageously constrains the skin movements, resulting in more accurate facial deformation, and also guides the interactive placement of facial muscles. The governing dynamics are computed using a local semi-implicit ODE solver. In the dynamic simulation, an adaptive refinement scheme automatically adjusts the local resolution, depending on the local deformation, wherever potential inaccuracies are detected. The method, in effect, ensures the required speedup by concentrating computational time only where needed, while ensuring realistic behavior within a predefined error threshold. This mechanism allows more pleasing animation results to be produced at a reduced computational cost.
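
    The combination of a strain-stiffening (nonlinear) spring with a repulsion term that grows as an edge shortens can be sketched as below; the functional forms and constants are illustrative assumptions, not the constitutive law of the paper.

        import numpy as np

        def edge_force(x_i, x_j, rest_length, k=1.0, stiffening=5.0, repulsion=0.05):
            """Force on node i from edge (i, j): nonlinear spring plus edge repulsion."""
            d = x_j - x_i
            length = np.linalg.norm(d)
            direction = d / length
            strain = (length - rest_length) / rest_length
            f_elastic = k * strain * (1.0 + stiffening * strain**2)  # strain-stiffening spring
            f_repulse = -repulsion / length**2                       # resists edge collapse
            return (f_elastic + f_repulse) * direction

        print(edge_force(np.zeros(3), np.array([0.9, 0.0, 0.0]), rest_length=1.0))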

  3. Relativistic Flows Using Spatial And Temporal Adaptive Structured Mesh Refinement. I. Hydrodynamics

    SciTech Connect

    Wang, Peng; Abel, Tom; Zhang, Weiqun; /KIPAC, Menlo Park

    2007-04-02

    Astrophysical relativistic flow problems require high resolution three-dimensional numerical simulations. In this paper, we describe a new parallel three-dimensional code for simulations of special relativistic hydrodynamics (SRHD) using both spatially and temporally structured adaptive mesh refinement (AMR). We used the method of lines to discretize the SRHD equations spatially and a total variation diminishing (TVD) Runge-Kutta scheme for time integration. For spatial reconstruction, we have implemented the piecewise linear method (PLM), the piecewise parabolic method (PPM), third-order convex essentially non-oscillatory (CENO), and third- and fifth-order weighted essentially non-oscillatory (WENO) schemes. Flux is computed using either direct flux reconstruction or approximate Riemann solvers including HLL, modified Marquina flux, local Lax-Friedrichs flux formulas, and HLLC. The AMR part of the code is built on top of the cosmological Eulerian AMR code enzo, which uses the Berger-Colella AMR algorithm and is parallelized, with dynamic load balancing, using the widely available Message Passing Interface library. We discuss the coupling of the AMR framework with the relativistic solvers and show its performance on eleven test problems.
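
    The TVD Runge-Kutta time integration mentioned above is commonly the third-order Shu-Osher scheme, sketched here for a generic semi-discrete system du/dt = L(u); the spatial operator L (e.g. built from a PLM/PPM/WENO reconstruction and a Riemann solver) is assumed to be supplied by the caller.

        import numpy as np

        def tvd_rk3_step(u, dt, L):
            """One third-order TVD Runge-Kutta (Shu-Osher) step for du/dt = L(u)."""
            u1 = u + dt * L(u)
            u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
            return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

        # Toy usage: linear advection u_t + u_x = 0 with a first-order upwind operator
        N = 200
        dx, dt = 1.0 / N, 0.4 / N
        x = np.linspace(0.0, 1.0, N, endpoint=False)
        u = np.exp(-200.0 * (x - 0.3) ** 2)
        L = lambda v: -(v - np.roll(v, 1)) / dx
        for _ in range(100):
            u = tvd_rk3_step(u, dt, L)
        print(u.max())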

  4. MASS AND MAGNETIC DISTRIBUTIONS IN SELF-GRAVITATING SUPER-ALFVENIC TURBULENCE WITH ADAPTIVE MESH REFINEMENT

    SciTech Connect

    Collins, David C.; Norman, Michael L.; Padoan, Paolo; Xu Hao

    2011-04-10

    In this work, we present the mass and magnetic distributions found in a recent adaptive mesh refinement magnetohydrodynamic simulation of supersonic, super-Alfvenic, self-gravitating turbulence. Power-law tails are found in both mass density and magnetic field probability density functions, with P(ρ) ∝ ρ^(-1.6) and P(B) ∝ B^(-2.7). A power-law relationship is also found between magnetic field strength and density, with B ∝ ρ^(0.5), throughout the collapsing gas. The mass distribution of gravitationally bound cores is shown to be in excellent agreement with recent observation of prestellar cores. The mass-to-flux distribution of cores is also found to be in excellent agreement with recent Zeeman splitting measurements. We also compare the relationship between velocity dispersion and density to the same cores, and find an increasing relationship between the two, with σ ∝ n^(0.25), also in agreement with the observations. We then estimate the potential effects of ambipolar diffusion in our cores and find that due to the weakness of the magnetic field in our simulation, the inclusion of ambipolar diffusion in our simulation will not cause significant alterations of the flow dynamics.

  5. Numerical simulation of current sheet formation in a quasiseparatrix layer using adaptive mesh refinement

    SciTech Connect

    Effenberger, Frederic; Thust, Kay; Grauer, Rainer; Dreher, Juergen; Arnold, Lukas

    2011-03-15

    The formation of a thin current sheet in a magnetic quasiseparatrix layer (QSL) is investigated by means of numerical simulation using a simplified ideal, low-β, MHD model. The initial configuration and driving boundary conditions are relevant to phenomena observed in the solar corona and were studied earlier by Aulanier et al. [Astron. Astrophys. 444, 961 (2005)]. In extension to that work, we use the technique of adaptive mesh refinement (AMR) to significantly enhance the local spatial resolution of the current sheet during its formation, which enables us to follow the evolution into a later stage. Our simulations are in good agreement with the results of Aulanier et al. up to the calculated time in that work. In a later phase, we observe a basically unarrested collapse of the sheet to length scales that are more than one order of magnitude smaller than those reported earlier. The current density attains correspondingly larger maximum values within the sheet. During this thinning process, which is finally limited by lack of resolution even in the AMR studies, the current sheet moves upward, following a global expansion of the magnetic structure during the quasistatic evolution. The sheet is locally one-dimensional and the plasma flow in its vicinity, when transformed into a comoving frame, qualitatively resembles a stagnation point flow. In conclusion, our simulations support the idea that extremely high current densities are generated in the vicinities of QSLs as a response to external perturbations, with no sign of saturation.

  6. ADAPTIVE MESH REFINEMENT SIMULATIONS OF GALAXY FORMATION: EXPLORING NUMERICAL AND PHYSICAL PARAMETERS

    SciTech Connect

    Hummels, Cameron B.; Bryan, Greg L.

    2012-04-20

    We carry out adaptive mesh refinement cosmological simulations of Milky Way mass halos in order to investigate the formation of disk-like galaxies in a Λ-dominated cold dark matter model. We evolve a suite of five halos to z = 0 and find a gas disk formation in each; however, in agreement with previous smoothed particle hydrodynamics simulations (that did not include a subgrid feedback model), the rotation curves of all halos are centrally peaked due to a massive spheroidal component. Our standard model includes radiative cooling and star formation, but no feedback. We further investigate this angular momentum problem by systematically modifying various simulation parameters including: (1) spatial resolution, ranging from 1700 to 212 pc; (2) an additional pressure component to ensure that the Jeans length is always resolved; (3) low star formation efficiency, going down to 0.1%; (4) fixed physical resolution as opposed to comoving resolution; (5) a supernova feedback model that injects thermal energy to the local cell; and (6) a subgrid feedback model which suppresses cooling in the immediate vicinity of a star formation event. Of all of these, we find that only the last (cooling suppression) has any impact on the massive spheroidal component. In particular, a simulation with cooling suppression and feedback results in a rotation curve that, while still peaked, is considerably reduced from our standard runs.

  7. HIGH-RESOLUTION SIMULATIONS OF CONVECTION PRECEDING IGNITION IN TYPE Ia SUPERNOVAE USING ADAPTIVE MESH REFINEMENT

    SciTech Connect

    Nonaka, A.; Aspden, A. J.; Almgren, A. S.; Bell, J. B.; Zingale, M.; Woosley, S. E.

    2012-01-20

    We extend our previous three-dimensional, full-star simulations of the final hours of convection preceding ignition in Type Ia supernovae to higher resolution using the adaptive mesh refinement capability of our low Mach number code, MAESTRO. We report the statistics of the ignition of the first flame at an effective 4.34 km resolution and general flow field properties at an effective 2.17 km resolution. We find that off-center ignition is likely, with a radius of 50 km most favored and a likely range of 40-75 km. This is consistent with our previous coarser (8.68 km resolution) simulations, implying that we have achieved sufficient resolution in our determination of likely ignition radii. The dynamics of the last few hot spots preceding ignition suggest that a multiple ignition scenario is not likely. With improved resolution, we can more clearly see the general flow pattern in the convective region, characterized by a strong outward plume with a lower speed recirculation. We show that the convective core is turbulent with a Kolmogorov spectrum and has a lower turbulent intensity and larger integral length scale than previously thought (on the order of 16 km s^-1 and 200 km, respectively), and we discuss the potential consequences for the first flames.

  8. GALAXY CLUSTER RADIO RELICS IN ADAPTIVE MESH REFINEMENT COSMOLOGICAL SIMULATIONS: RELIC PROPERTIES AND SCALING RELATIONSHIPS

    SciTech Connect

    Skillman, Samuel W.; Hallman, Eric J.; Burns, Jack O.; Smith, Britton D.; O'Shea, Brian W.; Turk, Matthew J.

    2011-07-10

    Cosmological shocks are a critical part of large-scale structure formation, and are responsible for heating the intracluster medium in galaxy clusters. In addition, they are capable of accelerating non-thermal electrons and protons. In this work, we focus on the acceleration of electrons at shock fronts, which is thought to be responsible for radio relics-extended radio features in the vicinity of merging galaxy clusters. By combining high-resolution adaptive mesh refinement/N-body cosmological simulations with an accurate shock-finding algorithm and a model for electron acceleration, we calculate the expected synchrotron emission resulting from cosmological structure formation. We produce synthetic radio maps of a large sample of galaxy clusters and present luminosity functions and scaling relationships. With upcoming long-wavelength radio telescopes, we expect to see an abundance of radio emission associated with merger shocks in the intracluster medium. By producing observationally motivated statistics, we provide predictions that can be compared with observations to further improve our understanding of magnetic fields and electron shock acceleration.

  9. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

    A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, the cell-based adaptive mesh refinement (AMR) is fully implemented on GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by GPUs agree very well with the exact or experimental results in the literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting Shared Memory based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes have achieved 2x speedup on GT9800 and 18x on Tesla C2050, which demonstrates that parallel running of the cell-based AMR method on GPU is feasible and efficient. Our results also indicate that new developments in GPU architecture benefit fluid dynamics computing significantly.

  10. Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2012-01-01

    Time-dependent Navier-Stokes simulations have been carried out for a rigid V22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V22. The normal force and pitching moment coefficients were in good agreement with flight-test data.

  11. An adaptive grid method for computing the high speed 3D viscous flow about a re-entry vehicle

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Smith, Robert E.

    1992-01-01

    An algebraic solution adaptive grid generation method that allows adapting the grid in all three coordinate directions is presented. Techniques are described that maintain the integrity of the original vehicle definition for grid point movement on the vehicle surface and that avoid grid cross over in the boundary layer portion of the grid lying next to the vehicle surface. The adaptive method is tested by computing the Mach 6 hypersonic three dimensional viscous flow about a proposed Martian entry vehicle.

  12. Adaptation of the three-dimensional wisdom scale (3D-WS) for the Korean cultural context.

    PubMed

    Kim, Seungyoun; Knight, Bob G

    2014-10-23

    Background: Previous research on wisdom has suggested that wisdom is comprised of cognitive, reflective, and affective components and has developed and validated wisdom measures based on samples from Western countries. To apply the measurement to Eastern cultures, the present study revised an existing wisdom scale, the three-dimensional wisdom scale (3D-WS, Ardelt, 2003) for the Korean cultural context. Methods: Participants included 189 Korean heritage adults (age range 19-96) living in Los Angeles. We added a culturally specific factor of wisdom to the 3D-WS: Modesty and Unobtrusiveness (Yang, 2001), which captures an Eastern aspect of wisdom. The structure and psychometrics of the scale were tested. By latent cluster analysis, we determined acculturation subgroups and examined group differences in the means of factors in the revised wisdom scale (3D-WS-K). Results: Three factors, Cognitive Flexibility, Viewpoint Relativism, and Empathic Modesty were found using confirmatory factor analysis. Respondents with high biculturalism were higher on Viewpoint Relativism and lower on Empathic Modesty. Conclusion: This study discovered that a revised wisdom scale had a distinct factor structure and item content in a Korean heritage sample. We also found acculturation influences on the meaning of wisdom.

  13. Parallel adaptive mesh refinement method based on WENO finite difference scheme for the simulation of multi-dimensional detonation

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang

    2015-10-01

    For numerical simulation of detonation, computational cost using uniform meshes is large due to the vast separation in both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for numerical investigation of multi-dimensional detonation. A well-designed AMR method based on the finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload balancing parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that AMR&WENO is accurate and has a high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide further insight into the high performance of the parallel AMR&WENO method.
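
    Space-filling-curve load balancing can be sketched as follows: order the blocks along the curve, then cut the ordered list into contiguous chunks of roughly equal work. A Morton (Z-order) key is used below as a simpler stand-in for the Hilbert ordering used in the paper.

        def morton_key_2d(ix, iy, bits=16):
            """Interleave the bits of the block indices to get a Z-order key."""
            key = 0
            for b in range(bits):
                key |= ((ix >> b) & 1) << (2 * b)
                key |= ((iy >> b) & 1) << (2 * b + 1)
            return key

        def partition_blocks(block_coords, block_work, n_ranks):
            """block_coords: list of (ix, iy); block_work: per-block cost estimates."""
            order = sorted(range(len(block_coords)),
                           key=lambda i: morton_key_2d(*block_coords[i]))
            target = sum(block_work) / n_ranks
            parts, current, acc = [[] for _ in range(n_ranks)], 0, 0.0
            for i in order:
                if acc >= target and current < n_ranks - 1:
                    current, acc = current + 1, 0.0
                parts[current].append(i)
                acc += block_work[i]
            return parts

        coords = [(x, y) for x in range(4) for y in range(4)]
        print(partition_blocks(coords, [1.0] * len(coords), n_ranks=4))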

  14. An adaptive computation mesh for the solution of singular perturbation problems

    NASA Technical Reports Server (NTRS)

    Brackbill, J. U.; Saltzman, J.

    1980-01-01

    In singular perturbation problems, control of zone size variation can affect the effort required to obtain accurate numerical solutions of finite difference equations. The mesh is generated by the solution of potential equations. Numerical results for a singular perturbation problem in two dimensions are presented. The mesh was used in calculations of resistive magnetohydrodynamic flow in two dimensions.
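
    A one-dimensional analogue of solution-adaptive mesh generation is equidistribution of a monitor function, so that every cell carries the same amount of the monitor. The sketch below illustrates that general idea only; it is not the two-dimensional potential-equation method of the abstract.

        import numpy as np

        def equidistribute(x_uniform, w, n_cells):
            """Place n_cells+1 nodes so each cell holds an equal integral of w(x)."""
            W = np.concatenate([[0.0],
                                np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x_uniform))])
            targets = np.linspace(0.0, W[-1], n_cells + 1)
            return np.interp(targets, W, x_uniform)

        x = np.linspace(0.0, 1.0, 1001)
        w = 1.0 + 50.0 * np.exp(-((x - 0.7) / 0.02) ** 2)   # boundary-layer-like monitor
        print(equidistribute(x, w, n_cells=40))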

  15. Temporal-adaptive Euler/Navier-Stokes algorithm for unsteady aerodynamic analysis of airfoils using unstructured dynamic meshes

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Batina, John T.; Williams, Marc H.

    1990-01-01

    A temporal adaptive algorithm for the time-integration of the two-dimensional Euler or Navier-Stokes equations is presented. The flow solver involves an upwind flux-split spatial discretization for the convective terms and central differencing for the shear-stress and heat flux terms on an unstructured mesh of triangles. The temporal adaptive algorithm is a time-accurate integration procedure which allows flows with high spatial and temporal gradients to be computed efficiently by advancing each grid cell near its maximum allowable time step. Results indicate that an appreciable computational savings can be achieved for both inviscid and viscous unsteady airfoil problems using unstructured meshes without degrading spatial or temporal accuracy.
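
    The temporal-adaptive idea of advancing each cell near its own allowable time step can be sketched by grouping cells into classes whose steps are power-of-two multiples of the global minimum; the grouping rule below is a generic illustration, not the reference algorithm.

        import numpy as np

        def time_step_classes(cell_size, wave_speed, cfl=0.8, max_classes=8):
            """Cell i is advanced with dt_min * 2**k[i], where k is its class."""
            dt_cell = cfl * cell_size / wave_speed          # local allowable step
            dt_min = dt_cell.min()
            k = np.floor(np.log2(dt_cell / dt_min)).astype(int)
            return dt_min, np.clip(k, 0, max_classes - 1)

        h = np.array([0.01, 0.01, 0.02, 0.04, 0.08, 0.08])  # local mesh sizes
        a = np.array([1.0, 2.0, 1.0, 1.0, 1.0, 0.5])        # local wave speeds
        print(time_step_classes(h, a))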

  16. Higher-order conservative interpolation between control-volume meshes: Application to advection and multiphase flow problems with dynamic mesh adaptivity

    NASA Astrophysics Data System (ADS)

    Adam, A.; Pavlidis, D.; Percival, J. R.; Salinas, P.; Xie, Z.; Fang, F.; Pain, C. C.; Muggeridge, A. H.; Jackson, M. D.

    2016-09-01

    A general, higher-order, conservative and bounded interpolation for the dynamic and adaptive meshing of control-volume fields dual to continuous and discontinuous finite element representations is presented. Existing techniques such as node-wise interpolation are not conservative and do not readily generalise to discontinuous fields, whilst conservative methods such as Grandy interpolation are often too diffusive. The new method uses control-volume Galerkin projection to interpolate between control-volume fields. Bounded solutions are ensured by using a post-interpolation diffusive correction. Example applications of the method to interface capturing during advection and also to the modelling of multiphase porous media flow are presented to demonstrate the generality and robustness of the approach.
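
    The conservative character of interpolation between control-volume meshes is easiest to see in one dimension, where new cell averages are assembled from exact overlaps with the old cells so that the integral of the field is preserved. The sketch below is only first order; the method of the paper is a higher-order Galerkin projection with a bounding correction.

        import numpy as np

        def conservative_remap(x_old, u_old, x_new):
            """Remap cell averages u_old from mesh x_old onto mesh x_new conservatively."""
            u_new = np.zeros(len(x_new) - 1)
            for j in range(len(x_new) - 1):
                lo, hi = x_new[j], x_new[j + 1]
                total = 0.0
                for i in range(len(x_old) - 1):
                    overlap = max(0.0, min(hi, x_old[i + 1]) - max(lo, x_old[i]))
                    total += u_old[i] * overlap
                u_new[j] = total / (hi - lo)
            return u_new

        x_old = np.linspace(0.0, 1.0, 11)
        u_old = np.sin(np.pi * 0.5 * (x_old[:-1] + x_old[1:]))
        x_new = np.concatenate([[0.0],
                                np.sort(np.random.default_rng(1).uniform(0.0, 1.0, 9)),
                                [1.0]])
        u_new = conservative_remap(x_old, u_old, x_new)
        # Conservation check: the integrals over the old and new meshes agree
        print(np.sum(u_old * np.diff(x_old)), np.sum(u_new * np.diff(x_new)))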

  17. SU-D-207-04: GPU-Based 4D Cone-Beam CT Reconstruction Using Adaptive Meshing Method

    SciTech Connect

    Zhong, Z; Gu, X; Iyengar, P; Mao, W; Wang, J; Guo, X

    2015-06-15

    Purpose: Due to the limited number of projections at each phase, the image quality of a four-dimensional cone-beam CT (4D-CBCT) is often degraded, which decreases the accuracy of subsequent motion modeling. One of the promising methods is the simultaneous motion estimation and image reconstruction (SMEIR) approach. The objective of this work is to enhance the computational speed of the SMEIR algorithm using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the tetrahedral mesh based on the features of a reference phase 4D-CBCT, so that the deformation can be well captured and accurately diffused from the mesh vertices to voxels of the image volume. After the mesh generation, the updated motion model and other phases of 4D-CBCT can be obtained by matching the 4D-CBCT projection images at each phase with the corresponding forward projections of the deformed reference phase of 4D-CBCT. The entire process of this 4D-CBCT reconstruction method is implemented on GPU, resulting in a significant increase in computational efficiency due to its tremendous parallel computing ability. Results: A 4D XCAT digital phantom was used to test the proposed mesh-based image reconstruction algorithm. The resulting images show that both bone structures and the inside of the lung are well preserved and that the tumor position is well captured. Compared to the previous voxel-based CPU implementation of SMEIR, the proposed method is about 157 times faster for reconstructing a 10-phase 4D-CBCT with dimension 256×256×150. Conclusion: The GPU-based parallel 4D-CBCT reconstruction method uses the feature-based mesh for estimating the motion model and demonstrates image quality equivalent to the previous voxel-based SMEIR approach, with significantly improved computational speed.

  18. Numerical Modelling of Volcanic Ash Settling in Water Using Adaptive Unstructured Meshes

    NASA Astrophysics Data System (ADS)

    Jacobs, C. T.; Collins, G. S.; Piggott, M. D.; Kramer, S. C.; Wilson, C. R.

    2011-12-01

    At the bottom of the world's oceans lies layer after layer of ash deposited from past volcanic eruptions. Correct interpretation of these layers can provide important constraints on the duration and frequency of volcanism, but requires a full understanding of the complex multi-phase settling and deposition process. Analogue experiments of tephra settling through a tank of water demonstrate that small ash particles can either settle individually, or collectively as a gravitationally unstable ash-laden plume. These plumes are generated when the concentration of particles exceeds a certain threshold such that the density of the tephra-water mixture is sufficiently large relative to the underlying particle-free water for a gravitational Rayleigh-Taylor instability to develop. These ash-laden plumes are observed to descend as a vertical density current at a velocity much greater than that of single particles, which has important implications for the emplacement of tephra deposits on the seabed. To extend the results of laboratory experiments to large scales and explore the conditions under which vertical density currents may form and persist, we have developed a multi-phase extension to Fluidity, a combined finite element / control volume CFD code that uses adaptive unstructured meshes. As a model validation, we present two- and three-dimensional simulations of tephra plume formation in a water tank that replicate laboratory experiments (Carey, 1997, doi:10.1130/0091-7613(1997)025<0839:IOCSOT>2.3.CO;2). An inflow boundary condition at the top of the domain allows particles to flux in at a constant rate of 0.472 g m^-2 s^-1, forming a near-surface layer of tephra particles, which initially settle individually at the predicted Stokes velocity of 1.7 mm s^-1. As more tephra enters the water and the particle concentration increases, the layer eventually becomes unstable and plumes begin to form, descending with velocities more than ten times greater than those of individual
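
    The single-particle reference speed quoted above is the Stokes settling velocity, v = 2 (rho_p - rho_f) g r^2 / (9 mu). The parameter values in the sketch below are assumptions chosen only to land in the mm/s range; they are not taken from the experiments.

        def stokes_velocity(radius, rho_particle, rho_fluid, mu, g=9.81):
            """Terminal settling speed of a small sphere in the Stokes regime (SI units)."""
            return 2.0 * (rho_particle - rho_fluid) * g * radius**2 / (9.0 * mu)

        v = stokes_velocity(radius=20e-6, rho_particle=2500.0, rho_fluid=1000.0, mu=1.0e-3)
        print(f"{v * 1000:.2f} mm/s")   # roughly 1.3 mm/s for these assumed values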

  19. A 3D front tracking method on a CPU/GPU system

    SciTech Connect

    Bo, Wurigen; Grove, John

    2011-01-21

    We describe the method to port a sequential 3D interface tracking code to a GPU with CUDA. The interface is represented as a triangular mesh. Interface geometry properties and point propagation are performed on a GPU. Interface mesh adaptation is performed on a CPU. The convergence of the method is assessed from the test problems with given velocity fields. Performance results show overall speedups from 11 to 14 for the test problems under mesh refinement. We also briefly describe our ongoing work to couple the interface tracking method with a hydro solver.

  20. Line relaxation methods for the solution of 2D and 3D compressible flows

    NASA Technical Reports Server (NTRS)

    Hassan, O.; Probert, E. J.; Morgan, K.; Peraire, J.

    1993-01-01

    An implicit finite element based algorithm for the compressible Navier-Stokes equations is outlined, and the solution of the resulting equation by a line relaxation on general meshes of triangles or tetrahedra is described. The problem of generating and adapting unstructured meshes for viscous flows is reexamined, and an approach for both 2D and 3D simulations is proposed. An efficient approach appears to be the use of an implicit/explicit procedure, with the implicit treatment being restricted to those regions of the mesh where viscous effects are known to be dominant. Numerical examples demonstrating the computational performance of the proposed techniques are given.

  1. GAMMA-RAY BURST DYNAMICS AND AFTERGLOW RADIATION FROM ADAPTIVE MESH REFINEMENT, SPECIAL RELATIVISTIC HYDRODYNAMIC SIMULATIONS

    SciTech Connect

    De Colle, Fabio; Ramirez-Ruiz, Enrico; Granot, Jonathan; Lopez-Camara, Diego

    2012-02-20

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(-k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the

  2. Parallel Computation of Three-Dimensional Flows using Overlapping Grids with Adaptive Mesh Refinement

    SciTech Connect

    Henshaw, W; Schwendeman, D

    2007-11-15

    This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.

  3. Gamma-Ray Burst Dynamics and Afterglow Radiation from Adaptive Mesh Refinement, Special Relativistic Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico

    2012-02-01

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(-k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.

  4. Cross-axis adaptation improves 3D vestibulo-ocular reflex alignment during chronic stimulation via a head-mounted multichannel vestibular prosthesis.

    PubMed

    Dai, Chenkai; Fridman, Gene Y; Chiang, Bryce; Davidovics, Natan S; Melvin, Thuy-Anh; Cullen, Kathleen E; Della Santina, Charles C

    2011-05-01

    By sensing three-dimensional (3D) head rotation and electrically stimulating the three ampullary branches of a vestibular nerve to encode head angular velocity, a multichannel vestibular prosthesis (MVP) can restore vestibular sensation to individuals disabled by loss of vestibular hair cell function. However, current spread to afferent fibers innervating non-targeted canals and otolith end organs can distort the vestibular nerve activation pattern, causing misalignment between the perceived and actual axis of head rotation. We hypothesized that over time, central neural mechanisms can adapt to correct this misalignment. To test this, we rendered five chinchillas vestibular deficient via bilateral gentamicin treatment and unilaterally implanted them with a head-mounted MVP. Comparison of 3D angular vestibulo-ocular reflex (aVOR) responses during 2 Hz, 50°/s peak horizontal sinusoidal head rotations in darkness on the first, third, and seventh days of continual MVP use revealed that eye responses about the intended axis remained stable (at about 70% of the normal gain) while misalignment improved significantly by the end of 1 week of prosthetic stimulation. A comparable time course of improvement was also observed for head rotations about the other two semicircular canal axes and at every stimulus frequency examined (0.2-5 Hz). In addition, the extent of disconjugacy between the two eyes progressively improved during the same time window. These results indicate that the central nervous system rapidly adapts to multichannel prosthetic vestibular stimulation to markedly improve 3D aVOR alignment within the first week after activation. Similar adaptive improvements are likely to occur in other species, including humans.

  5. Interface Reconstruction in Two-and Three-Dimensional Arbitrary Lagrangian-Euderian Adaptive Mesh Refinement Simulations

    SciTech Connect

    Masters, N D; Anderson, R W; Elliott, N S; Fisher, A C; Gunney, B T; Koniges, A E

    2007-08-28

    Modeling of high power laser and ignition facilities requires new techniques because of the higher energies and higher operational costs. We report on the development and application of a new interface reconstruction algorithm for a chamber modeling code that combines ALE (Arbitrary Lagrangian Eulerian) techniques with AMR (Adaptive Mesh Refinement). The code is used for the simulation of complex target elements in the National Ignition Facility (NIF) and other similar facilities. The interface reconstruction scheme is required to adequately describe the debris/shrapnel (including fragments or droplets) resulting from energized materials that could affect optics or diagnostic sensors. Traditional ICF modeling codes that choose to implement ALE + AMR techniques will also benefit from this new scheme. The ALE formulation requires material interfaces (including those of generated particles or droplets) to be tracked. We present the interface reconstruction scheme developed for NIF's ALE-AMR and discuss how it is affected by adaptive mesh refinement and the ALE mesh. Results of the code are shown for NIF and OMEGA target configurations.

  6. Large Eddy simulation of compressible flows with a low-numerical dissipation patch-based adaptive mesh refinement method

    NASA Astrophysics Data System (ADS)

    Pantano, Carlos

    2005-11-01

    We describe a hybrid finite difference method for large-eddy simulation (LES) of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). Numerical experiments and validation calculations are presented including a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability. The approach is a conservative flux-based SAMR formulation and as such, it utilizes refinement to computational advantage. The numerical method for the resolved scale terms encompasses the cases of scheme alternation and internal mesh interfaces resulting from SAMR. An explicit centered scheme that is consistent with a skew-symmetric finite difference formulation is used in turbulent flow regions while a weighted essentially non-oscillatory (WENO) scheme is employed to capture shocks. The subgrid stresses and transports are calculated by means of the stretched-vortex model of Misra & Pullin (1997)

  7. A low numerical dissipation patch-based adaptive mesh refinement method for large-eddy simulation of compressible flows

    NASA Astrophysics Data System (ADS)

    Pantano, C.; Deiterding, R.; Hill, D. J.; Pullin, D. I.

    2007-01-01

    We present a methodology for the large-eddy simulation of compressible flows with a low-numerical dissipation scheme and structured adaptive mesh refinement (SAMR). A description of a conservative, flux-based hybrid numerical method that uses both centered finite-difference and a weighted essentially non-oscillatory (WENO) scheme is given, encompassing the cases of scheme alternation and internal mesh interfaces resulting from SAMR. In this method, the centered scheme is used in turbulent flow regions while WENO is employed to capture shocks. One-, two- and three-dimensional numerical experiments and example simulations are presented including homogeneous shock-free turbulence, a turbulent jet and the strongly shock-driven mixing of a Richtmyer-Meshkov instability.
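
    The hybrid switching between a centered scheme and WENO requires a per-cell smoothness sensor. The Jameson-style pressure sensor below is a common, generic choice shown purely for illustration; it is not necessarily the sensor employed in the paper.

        import numpy as np

        def shock_flags(pressure, threshold=0.05):
            """True where the WENO flux should be used, False where the centered flux suffices."""
            p = np.asarray(pressure, dtype=float)
            sensor = np.zeros_like(p)
            sensor[1:-1] = (np.abs(p[2:] - 2.0 * p[1:-1] + p[:-2])
                            / (p[2:] + 2.0 * p[1:-1] + p[:-2]))
            return sensor > threshold

        p = np.concatenate([np.full(50, 1.0), np.full(50, 10.0)])   # a pressure jump at i = 50
        print(np.where(shock_flags(p))[0])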

  8. Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Modiano, David; Colella, Phillip

    1994-01-01

    A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.

  9. XML3D and Xflow: combining declarative 3D for the Web with generic data flows.

    PubMed

    Klein, Felix; Sons, Kristian; Rubinstein, Dmitri; Slusallek, Philipp

    2013-01-01

    Researchers have combined XML3D, which provides declarative, interactive 3D scene descriptions based on HTML5, with Xflow, a language for declarative, high-performance data processing. The result lets Web developers combine a 3D scene graph with data flows for dynamic meshes, animations, image processing, and postprocessing. PMID:24808080

  10. Adaptive Mesh Refinement Cosmological Simulations of Cosmic Rays in Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Skillman, Samuel William

    2013-12-01

    Galaxy clusters are unique astrophysical laboratories that contain many thermal and non-thermal phenomena. In particular, they are hosts to cosmic shocks, which propagate through the intracluster medium as a by-product of structure formation. It is believed that at these shock fronts, magnetic field inhomogeneities in a compressing flow may lead to the acceleration of cosmic ray electrons and ions. These relativistic particles decay and radiate through a variety of mechanisms, and have observational signatures in radio, hard X-ray, and Gamma-ray wavelengths. We begin this dissertation by developing a method to find shocks in cosmological adaptive mesh refinement simulations of structure formation. After describing the evolution of shock properties through cosmic time, we make estimates for the amount of kinetic energy processed and the total number of cosmic ray protons that could be accelerated at these shocks. We then use this method of shock finding and a model for the acceleration of and radio synchrotron emission from cosmic ray electrons to estimate the radio emission properties in large scale structures. By examining the time-evolution of the radio emission with respect to the X-ray emission during a galaxy cluster merger, we find that the relative timing of the enhancements in each are important consequences of the shock dynamics. By calculating the radio emission expected from a given mass galaxy cluster, we make estimates for future large-area radio surveys. Next, we use a state-of-the-art magnetohydrodynamic simulation to follow the electron acceleration in a massive merging galaxy cluster. We use the magnetic field information to calculate not only the total radio emission, but also create radio polarization maps that are compared to recent observations. We find that we can naturally reproduce Mpc-scale radio emission that resemble many of the known double radio relic systems. Finally, motivated by our previous studies, we develop and introduce a

  11. Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates

    SciTech Connect

    Lipnikov, Konstantin; Agouzal, Abdellatif; Vassilevski, Yuri

    2009-01-01

    We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^(-1) and the gradient of error is proportional to N_h^(-1/2), which are optimal asymptotics. The methodology is verified with numerical experiments.

  12. Potentially singular solutions of the 3D axisymmetric Euler equations

    PubMed Central

    Luo, Guo; Hou, Thomas Y.

    2014-01-01

    The question of finite-time blowup of the 3D incompressible Euler equations is numerically investigated in a periodic cylinder with solid boundaries. Using rotational symmetry, the equations are discretized in the (2D) meridian plane on an adaptive (moving) mesh and integrated in time with adaptively chosen time steps. The vorticity is observed to develop a ring-singularity on the solid boundary with a growth proportional to ∼(t_s − t)^(−2.46), where t_s ∼ 0.0035056 is the estimated singularity time. A local analysis also suggests the existence of a self-similar blowup. The simulations stop at τ_2 = 0.003505, at which time the vorticity amplifies by more than (3 × 10^8)-fold and the maximum mesh resolution exceeds (3 × 10^12)^2. The vorticity vector is observed to maintain four significant digits throughout the computations. PMID:25157172

  13. Potentially singular solutions of the 3D axisymmetric Euler equations.

    PubMed

    Luo, Guo; Hou, Thomas Y

    2014-09-01

    The question of finite-time blowup of the 3D incompressible Euler equations is numerically investigated in a periodic cylinder with solid boundaries. Using rotational symmetry, the equations are discretized in the (2D) meridian plane on an adaptive (moving) mesh and integrated in time with adaptively chosen time steps. The vorticity is observed to develop a ring-singularity on the solid boundary with a growth proportional to ∼(t_s - t)^(-2.46), where t_s ∼ 0.0035056 is the estimated singularity time. A local analysis also suggests the existence of a self-similar blowup. The simulations stop at τ_2 = 0.003505, at which time the vorticity amplifies by more than (3 × 10^8)-fold and the maximum mesh resolution exceeds (3 × 10^12)^2. The vorticity vector is observed to maintain four significant digits throughout the computations.

  14. Mesh adaptation on the sphere using optimal transport and the numerical solution of a Monge-Ampère type equation

    NASA Astrophysics Data System (ADS)

    Weller, Hilary; Browne, Philip; Budd, Chris; Cullen, Mike

    2016-03-01

    An equation of Monge-Ampère type has, for the first time, been solved numerically on the surface of the sphere in order to generate optimally transported (OT) meshes, equidistributed with respect to a monitor function. Optimal transport generates meshes that keep the same connectivity as the original mesh, making them suitable for r-adaptive simulations, in which the equations of motion can be solved in a moving frame of reference in order to avoid mapping the solution between old and new meshes and to avoid load balancing problems on parallel computers. The semi-implicit solution of the Monge-Ampère type equation involves a new linearisation of the Hessian term, and exponential maps are used to map from old to new meshes on the sphere. The determinant of the Hessian is evaluated as the change in volume between old and new mesh cells, rather than using numerical approximations to the gradients. OT meshes are generated to compare with centroidal Voronoi tessellations on the sphere and are found to have advantages and disadvantages; OT equidistribution is more accurate, the number of iterations to convergence is independent of the mesh size, face skewness is reduced and the connectivity does not change. However anisotropy is higher and the OT meshes are non-orthogonal. It is shown that optimal transport on the sphere leads to meshes that do not tangle. However, tangling can be introduced by numerical errors in calculating the gradient of the mesh potential. Methods for alleviating this problem are explored. Finally, OT meshes are generated using observed precipitation as a monitor function, in order to demonstrate the potential power of the technique.

  15. Three-Dimensional Parallel Adaptive Mesh Refinement Simulations of Shock-Driven Turbulent Mixing in Plane and Converging Geometries

    SciTech Connect

    Lombardini, Manuel; Deiterding, Ralf

    2010-01-01

    This paper presents the use of a dynamically adaptive mesh refinement strategy for the simulations of shock-driven turbulent mixing. Large-eddy simulations are necessary due to the high Reynolds number turbulent regime. In this approach, the large scales are simulated directly and the small scales at which the viscous dissipation occurs are modeled. A low-numerical-dissipation centered finite-difference scheme is used in turbulent flow regions, while a shock-capturing method is employed to capture shocks. Three-dimensional parallel simulations of the Richtmyer-Meshkov instability performed in plane and converging geometries are described.

  16. Robust hashing for 3D models

    NASA Astrophysics Data System (ADS)

    Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin

    2014-02-01

    3D models and applications are of utmost interest in both science and industry. As their usage increases, so do their number and, thereby, the challenge of correctly identifying them. Content identification is commonly done by cryptographic hashes. However, they fail as a solution in application scenarios such as computer aided design (CAD), scientific visualization or video games, because even the smallest alteration of the 3D model, e.g. conversion or compression operations, massively changes the cryptographic hash as well. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods. They are built to resist desired alterations of the model as well as malicious attacks intending to prevent correct allocation. The different bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security and runtime performance as well as False Acceptance Rate (FAR) and False Rejection Rate (FRR); a probability calculation of hash collisions is also included. The introduced hashing algorithm is kept adaptive, e.g. in hash length, to serve as a proper tool for all applications in practice.

  17. Evaluation of a prototype 3D ultrasound system for multimodality imaging of cervical nodes for adaptive radiation therapy

    NASA Astrophysics Data System (ADS)

    Fraser, Danielle; Fava, Palma; Cury, Fabio; Vuong, Te; Falco, Tony; Verhaegen, Frank

    2007-03-01

    Sonography has good topographic accuracy for superficial lymph node assessment in patients with head and neck cancers. It is therefore an ideal non-invasive tool for precise inter-fraction volumetric analysis of enlarged cervical nodes. In addition, when registered with computed tomography (CT) images, ultrasound information may improve target volume delineation and facilitate image-guided adaptive radiation therapy. A feasibility study was developed to evaluate the use of a prototype ultrasound system capable of three dimensional visualization and multi-modality image fusion for cervical node geometry. A ceiling-mounted optical tracking camera recorded the position and orientation of a transducer in order to synchronize the transducer's position with respect to the room's coordinate system. Tracking systems were installed in both the CT-simulator and radiation therapy treatment rooms. Serial images were collected at the time of treatment planning and at subsequent treatment fractions. Volume reconstruction was performed by generating surfaces around contours. The quality of the spatial reconstruction and semi-automatic segmentation was highly dependent on the system's ability to track the transducer throughout each scan procedure. The ultrasound information provided enhanced soft tissue contrast and facilitated node delineation. Manual segmentation was the preferred method to contour structures due to their sonographic topography.

  18. GEN3D Ver. 1.37

    SciTech Connect

    2012-01-04

    GEN3D is a three-dimensional mesh generation program. The three-dimensional mesh is generated by mapping a two-dimensional mesh into three dimensions according to one of four types of transformations: translating, rotating, mapping onto a spherical surface, and mapping onto a cylindrical surface. The generated three-dimensional mesh can then be reoriented by offsetting, reflecting about an axis, and revolving about an axis. GEN3D can be used to mesh geometries that are axisymmetric or planar, but, due to three-dimensional loading or boundary conditions, require a three-dimensional finite element mesh and analysis. More importantly, it can be used to mesh complex three-dimensional geometries composed of several sections when the sections can be defined in terms of transformations of two-dimensional geometries. The code GJOIN is then used to join the separate sections into a single body. GEN3D reads and writes two-dimensional and three-dimensional mesh databases in the GENESIS database format; therefore, it is compatible with the preprocessing, postprocessing, and analysis codes used by the Engineering Analysis Department at Sandia National Laboratories, Albuquerque, NM.
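
    The translation transformation can be illustrated with a minimal sketch (this is not GEN3D code and does not use the GENESIS database format): a 2-D quadrilateral mesh is swept through a number of layers along z, each quadrilateral becoming a stack of hexahedra. The function and argument names are assumptions made for the example.

      import numpy as np

      def translate_2d_to_3d(nodes2d, quads, n_layers, dz):
          """Sweep a 2-D quad mesh along z to build a 3-D hex mesh (translation map).

          nodes2d : (N, 2) array of x, y coordinates
          quads   : (M, 4) array of node indices for each quadrilateral
          Returns (nodes3d, hexes) where each hex lists its 8 corner nodes.
          """
          n = len(nodes2d)
          layers = [np.column_stack([nodes2d, np.full(n, k * dz)])
                    for k in range(n_layers + 1)]
          nodes3d = np.vstack(layers)
          hexes = []
          for k in range(n_layers):
              lo, hi = k * n, (k + 1) * n          # node index offsets of the two layers
              for q in quads:
                  hexes.append([lo + q[0], lo + q[1], lo + q[2], lo + q[3],
                                hi + q[0], hi + q[1], hi + q[2], hi + q[3]])
          return nodes3d, np.array(hexes)

      # Usage: a single unit square extruded into two stacked hexahedra.
      nodes2d = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
      quads = np.array([[0, 1, 2, 3]])
      nodes3d, hexes = translate_2d_to_3d(nodes2d, quads, n_layers=2, dz=0.5)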

  19. GEN3D Ver. 1.37

    2012-01-04

    GEN3D is a three-dimensional mesh generation program. The three-dimensional mesh is generated by mapping a two-dimensional mesh into three dimensions according to one of four types of transformations: translating, rotating, mapping onto a spherical surface, and mapping onto a cylindrical surface. The generated three-dimensional mesh can then be reoriented by offsetting, reflecting about an axis, and revolving about an axis. GEN3D can be used to mesh geometries that are axisymmetric or planar, but, due to three-dimensional loading or boundary conditions, require a three-dimensional finite element mesh and analysis. More importantly, it can be used to mesh complex three-dimensional geometries composed of several sections when the sections can be defined in terms of transformations of two-dimensional geometries. The code GJOIN is then used to join the separate sections into a single body. GEN3D reads and writes two-dimensional and three-dimensional mesh databases in the GENESIS database format; therefore, it is compatible with the preprocessing, postprocessing, and analysis codes used by the Engineering Analysis Department at Sandia National Laboratories, Albuquerque, NM.

  20. Application of adaptive mesh refinement to particle-in-cell simulations of plasmas and beams

    SciTech Connect

    Vay, J.-L.; Colella, P.; Kwan, J.W.; McCorquodale, P.; Serafini, D.B.; Friedman, A.; Grote, D.P.; Westenskow, G.; Adam, J.-C.; Heron, A.; Haber, I.

    2003-11-04

    Plasma simulations are often rendered challenging by the disparity of scales in time and in space which must be resolved. When these disparities are in distinctive zones of the simulation domain, a method which has proven to be effective in other areas (e.g. fluid dynamics simulations) is the mesh refinement technique. We briefly discuss the challenges posed by coupling this technique with plasma Particle-In-Cell simulations, and present examples of application in Heavy Ion Fusion and related fields which illustrate the effectiveness of the approach. We also report on the status of a collaboration under way at Lawrence Berkeley National Laboratory between the Applied Numerical Algorithms Group (ANAG) and the Heavy Ion Fusion group to upgrade ANAG's mesh refinement library Chombo to include the tools needed by Particle-In-Cell simulation codes.

  1. Development 3D model of adaptation of the Azerbaijan coastal zone at the various levels of Caspian Sea

    NASA Astrophysics Data System (ADS)

    Mammadov, Ramiz

    2013-04-01

    coastal areas at hydraulic engineering projects the sea level should be considered as a multistage process, which we have taken into account in developing the adaptation of the coastal zone. An exact three-dimensional map of the coastal zone has been created. For different scenario sea levels, for example -30.0, -29.0, -28.0, -27.0, -26.0, -25.0 and -24.0, the exact coastlines have been determined. Further, maps of the vegetation cover, ground, and social, economic and ecological conditions have been developed for the different levels, and the respective alterations have been determined. The most vulnerable coastal zones, flooded areas and socio-economic damage were estimated.

  2. SIMULATING MAGNETOHYDRODYNAMICAL FLOW WITH CONSTRAINED TRANSPORT AND ADAPTIVE MESH REFINEMENT: ALGORITHMS AND TESTS OF THE AstroBEAR CODE

    SciTech Connect

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2009-06-15

    A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collocating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code is demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.
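
    The divergence-preserving property of constrained transport can be demonstrated with a minimal 2-D sketch (not AstroBEAR's implementation): face-centred field components are updated from the curl of corner EMFs, so the discrete divergence is unchanged to round-off regardless of the EMF values, which are taken as random numbers purely for the demonstration.

      import numpy as np

      def ct_update(bx, by, ez, dt, dx, dy):
          """One constrained-transport induction update in 2-D.

          bx : (nx+1, ny) x-field on x-faces,  by : (nx, ny+1) y-field on y-faces,
          ez : (nx+1, ny+1) z-EMF on cell corners. Updating the faces from the
          curl of corner EMFs keeps the discrete div(B) unchanged to round-off.
          """
          bx = bx - dt / dy * (ez[:, 1:] - ez[:, :-1])
          by = by + dt / dx * (ez[1:, :] - ez[:-1, :])
          return bx, by

      def div_b(bx, by, dx, dy):
          """Cell-centred discrete divergence of the face-centred field."""
          return (bx[1:, :] - bx[:-1, :]) / dx + (by[:, 1:] - by[:, :-1]) / dy

      # Usage: start from a divergence-free field and apply an arbitrary EMF.
      nx, ny, dx, dy = 16, 16, 1.0, 1.0
      rng = np.random.default_rng(1)
      bx, by = np.ones((nx + 1, ny)), np.zeros((nx, ny + 1))
      ez = rng.normal(size=(nx + 1, ny + 1))
      bx, by = ct_update(bx, by, ez, dt=0.1, dx=dx, dy=dy)
      print(np.max(np.abs(div_b(bx, by, dx, dy))))   # remains at machine precision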

  3. Nematic liquid crystal around a spherical particle: Investigation of the defect structure and its stability using adaptive mesh refinement.

    PubMed

    Fukuda, Jun-Ichi; Yoneya, Makoto; Yokoyama, Hiroshi

    2004-01-01

    We investigate the orientation profile and the structure of topological defects of a nematic liquid crystal around a spherical particle using an adaptive mesh refinement scheme developed by us previously. The previous work [J. Fukuda et al., Phys. Rev. E 65, 041709 (2002)] was devoted to the investigation of the fine structure of the hyperbolic hedgehog defect that accompanies the particle, and in this paper we present the equilibrium profile of the Saturn ring configuration. The radius of the Saturn ring r_d in units of the particle radius R_0 increases weakly with the increase of ε, the ratio of the nematic coherence length to R_0. Next we discuss the energetic stability of a hedgehog and a Saturn ring. The use of the adaptive mesh refinement scheme together with a tensor orientational order parameter Q_αβ allows us to calculate the elastic energy of a nematic liquid crystal without any assumptions about the structure and energy of the defect core, unlike previous similar studies. The reduced free energy of a nematic liquid crystal, F/(L_1 R_0), with L_1 being the elastic constant, is almost independent of ε in the hedgehog configuration, while it shows a logarithmic dependence on ε in the Saturn ring configuration. This result clearly indicates that the energetic stability of a hedgehog relative to a Saturn ring for a large particle is attributed to the large defect energy of a Saturn ring with a large radius.

  4. Optimization of multiple turbine arrays in a channel with tidally reversing flow by numerical modelling with adaptive mesh.

    PubMed

    Divett, T; Vennell, R; Stevens, C

    2013-02-28

    At tidal energy sites, large arrays of hundreds of turbines will be required to generate economically significant amounts of energy. Owing to wake effects within the array, the placement of turbines within the array will be vital to capturing the maximum energy from the resource. This study presents preliminary results using Gerris, an adaptive mesh flow solver, to investigate the flow through four different arrays of 15 turbines each. The goal is to optimize the position of turbines within an array in an idealized channel. The turbines are represented as areas of increased bottom friction in an adaptive mesh model so that the flow and power capture in tidally reversing flow through large arrays can be studied. The effect of oscillating tides is studied, with interesting dynamics generated as the tidal current reverses direction, forcing turbulent flow through the array. The energy removed from the flow by each of the four arrays is compared over a tidal cycle. A staggered array is found to extract 54 per cent more energy than a non-staggered array. Furthermore, an array positioned to one side of the channel is found to remove a similar amount of energy compared with an array in the centre of the channel. PMID:23319710
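
    The representation of turbines as areas of increased bottom friction can be illustrated with a crude 1-D momentum balance (this is not the Gerris adaptive-mesh set-up used in the study): raising the drag coefficient over a patch of the channel lowers the local velocity obtained from a balance between the driving surface slope and quadratic bottom friction. All parameter values below are illustrative assumptions.

      import numpy as np

      def velocity_with_turbine_drag(x, forcing, h, cd_bed=0.0025, cd_turbine=0.05,
                                     turbine_extent=(45.0, 55.0)):
          """Steady depth-averaged balance  g*S = (cd/h) * u*|u|  solved for u.

          Turbines are modelled as a region of raised drag coefficient, so the
          local velocity (and hence the kinetic power ~ u^3) drops inside the array.
          """
          cd = np.full_like(x, cd_bed)
          lo, hi = turbine_extent
          cd[(x >= lo) & (x <= hi)] += cd_turbine   # turbine patch adds drag
          u = np.sqrt(forcing * h / cd)             # slope forcing balances drag
          return u, cd

      # Usage: a 100 m stretch of channel with uniform surface-slope forcing g*S.
      x = np.linspace(0.0, 100.0, 201)
      u, cd = velocity_with_turbine_drag(x, forcing=9.81 * 1e-4, h=40.0)
      print(u.min(), u.max())   # slower flow over the turbine patch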

  5. From Monotonous Hop-and-Sink Swimming to Constant Gliding via Chaotic Motions in 3D: Is There Adaptive Behavior in Planktonic Micro-Crustaceans?

    NASA Astrophysics Data System (ADS)

    Strickler, J. R.

    2007-12-01

    Planktonic micro-crustaceans, such as Daphnia, copepods, and Cyclops, swim in the 3D environment of water and feed on suspended material, mostly algae and bacteria. Their mechanisms for swimming differ; some use their swimming legs to produce one hop per second, resulting in a speed of one body-length per second, while others scan water volumes with their mouthparts and glide through the water column at 1 to 10 body-lengths per second. However, our observations show that these speeds are modulated. The question to be discussed will be whether or not these modulations show adaptive behavior, taking food quality and food abundance as criteria for the swimming performances. Additionally, we investigated the degree to which these temporal motion patterns are dependent on the sizes, and therefore on the Reynolds numbers, of the animals.

  6. An adaptive-mesh finite-difference solution method for the Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Luchini, Paolo

    1987-02-01

    An adjustable variable-spacing grid is presented which permits the addition or deletion of single points during iterative solutions of the Navier-Stokes equations by finite difference methods. The grid is designed for application to two-dimensional steady-flow problems which can be described by partial differential equations whose second derivatives are constrained to the Laplacian operator. An explicit Navier-Stokes equations solution technique defined for use with the grid incorporates a hybrid form of the convective terms. Three methods are developed for automatic modifications of the mesh during calculations.

  7. 3-D transient analysis of pebble-bed HTGR by TORT-TD/ATTICA3D

    SciTech Connect

    Seubert, A.; Sureda, A.; Lapins, J.; Buck, M.; Bader, J.; Laurien, E.

    2012-07-01

    As most of the acceptance criteria are local core parameters, application of transient 3-D fine mesh neutron transport and thermal hydraulics coupled codes is mandatory for best estimate evaluations of safety margins. This also applies to high-temperature gas cooled reactors (HTGR). Application of 3-D fine-mesh transient transport codes using few energy groups coupled with 3-D thermal hydraulics codes becomes feasible in view of increasing computing power. This paper describes the discrete ordinates based coupled code system TORT-TD/ATTICA3D that has recently been extended by a fine-mesh diffusion solver. Based on transient analyses for the PBMR-400 design, the transport/diffusion capabilities are demonstrated and 3-D local flux and power redistribution effects during a partial control rod withdrawal are shown. (authors)

  8. A novel adaptive biogeochemical model, and its 3-D application for a decadal hindcast simulation of the biogeochemistry of the southern North Sea

    NASA Astrophysics Data System (ADS)

    Kerimoglu, Onur; Hofmeister, Richard; Wirtz, Kai

    2016-04-01

    Adaptation and acclimation processes are often ignored in ecosystem-scale model implementations, despite the long-standing recognition of their importance. Here we present a novel adaptive phytoplankton growth model where acclimation of the community to the changes in external resource ratios is accounted for, using optimality principles and dynamic physiological traits. We show that the model can reproduce the internal stoichiometries obtained at marginal supply ratios in chemostat experiments. The model is applied in a decadal hindcast simulation of the southern North Sea, where it is coupled to a 2-D benthic model and a 3-D hydrodynamic model at approximately 1.5 km horizontal resolution along the German Bight coast. The model is shown to have good skill in capturing the steep, coastal gradients in the German Bight, as suggested by the match between the estimated and observed dissolved nutrient and chlorophyll concentrations. We then analyze the differential sensitivity of the coastal and off-shore zones to major drivers of the system, such as riverine nutrient loads. We demonstrate that the relevance of phytoplankton acclimation varies across coastal gradients and can become particularly significant in terms of summer nutrient depletion.

  9. Adaptation of an unstructured-mesh, finite-element ocean model to the simulation of ocean circulation beneath ice shelves

    NASA Astrophysics Data System (ADS)

    Kimura, Satoshi; Candy, Adam S.; Holland, Paul R.; Piggott, Matthew D.; Jenkins, Adrian

    2013-07-01

    Several different classes of ocean model are capable of representing floating glacial ice shelves. We describe the incorporation of ice shelves into Fluidity-ICOM, a nonhydrostatic finite-element ocean model with the capacity to utilize meshes that are unstructured and adaptive in three dimensions. This geometric flexibility offers several advantages over previous approaches. The model represents melting and freezing on all ice-shelf surfaces including vertical faces, treats the ice shelf topography as continuous rather than stepped, and does not require any smoothing of the ice topography or any of the additional parameterisations of the ocean mixed layer used in isopycnal or z-coordinate models. The model can also represent a water column that decreases to zero thickness at the 'grounding line', where the floating ice shelf is joined to its tributary ice streams. The model is applied to idealised ice-shelf geometries in order to demonstrate these capabilities. In these simple experiments, arbitrarily coarsening the mesh outside the ice-shelf cavity has little effect on the ice-shelf melt rate, while the mesh resolution within the cavity is found to be highly influential. Smoothing the vertical ice front results in faster flow along the smoothed ice front, allowing greater exchange with the ocean than in simulations with a realistic ice front. A vanishing water-column thickness at the grounding line has little effect in the simulations studied. We also investigate the response of ice shelf basal melting to variations in deep water temperature in the presence of salt stratification.

  10. Extraction and tracking of MRI tagging sheets using a 3D Gabor filter bank.

    PubMed

    Qian, Zhen; Metaxas, Dimitris N; Axel, Leon

    2006-01-01

    In this paper, we present a novel method for automatically extracting the tagging sheets in tagged cardiac MR images, and tracking their displacement during the heart cycle, using a tunable 3D Gabor filter bank. Tagged MRI is a non-invasive technique for the study of myocardial deformation. We design the 3D Gabor filter bank based on the geometric characteristics of the tagging sheets. The tunable parameters of the Gabor filter bank are used to adapt to the myocardium deformation. The whole 3D image dataset is convolved with each Gabor filter in the filter bank, in the Fourier domain. Then we impose a set of deformable meshes onto the extracted tagging sheets and track them over time. Dynamic estimation of the filter parameters and the mesh internal smoothness are used to help the tracking. Some very encouraging results are shown.
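
    A single 3D Gabor filter applied in the Fourier domain might look like the minimal sketch below; the authors use a tuned bank of such filters matched to the tag-sheet spacing and orientation, whereas the wavelength, bandwidth and orientation here are illustrative assumptions.

      import numpy as np

      def gabor_filter_3d(shape, sigma, wavelength, direction):
          """Real-valued 3D Gabor kernel: Gaussian envelope times a cosine carrier."""
          zz, yy, xx = np.meshgrid(*[np.arange(s) - s // 2 for s in shape], indexing="ij")
          r2 = xx**2 + yy**2 + zz**2
          d = np.asarray(direction, dtype=float)
          d /= np.linalg.norm(d)
          phase = 2.0 * np.pi * (xx * d[0] + yy * d[1] + zz * d[2]) / wavelength
          return np.exp(-r2 / (2.0 * sigma**2)) * np.cos(phase)

      def apply_in_fourier(volume, kernel):
          """Convolve the image volume with the kernel via FFT (circular boundaries)."""
          return np.real(np.fft.ifftn(np.fft.fftn(volume) *
                                      np.fft.fftn(np.fft.ifftshift(kernel))))

      # Usage: respond to planes spaced about 8 voxels apart along x (tag-sheet-like).
      vol = np.cos(2 * np.pi * np.arange(64)[None, None, :] / 8.0) * np.ones((64, 64, 64))
      g = gabor_filter_3d(vol.shape, sigma=6.0, wavelength=8.0, direction=(1, 0, 0))
      response = apply_in_fourier(vol, g)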

  11. 3D reconstruction of SEM images by use of optical photogrammetry software.

    PubMed

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science and many biological questions require information about their true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscope reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close range photogrammetry. In optical close range photogrammetry a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaption. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching.

  12. Adaptive-optics SLO imaging combined with widefield OCT and SLO enables precise 3D localization of fluorescent cells in the mouse retina.

    PubMed

    Zawadzki, Robert J; Zhang, Pengfei; Zam, Azhar; Miller, Eric B; Goswami, Mayank; Wang, Xinlei; Jonnal, Ravi S; Lee, Sang-Hyuck; Kim, Dae Yu; Flannery, John G; Werner, John S; Burns, Marie E; Pugh, Edward N

    2015-06-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has recently been used to achieve exquisite subcellular resolution imaging of the mouse retina. Wavefront sensing-based AO typically restricts the field of view to a few degrees of visual angle. As a consequence the relationship between AO-SLO data and larger scale retinal structures and cellular patterns can be difficult to assess. The retinal vasculature affords a large-scale 3D map on which cells and structures can be located during in vivo imaging. Phase-variance OCT (pv-OCT) can efficiently image the vasculature with near-infrared light in a label-free manner, allowing 3D vascular reconstruction with high precision. We combined widefield pv-OCT and SLO imaging with AO-SLO reflection and fluorescence imaging to localize two types of fluorescent cells within the retinal layers: GFP-expressing microglia, the resident macrophages of the retina, and GFP-expressing cone photoreceptor cells. We describe in detail a reflective afocal AO-SLO retinal imaging system designed for high resolution retinal imaging in mice. The optical performance of this instrument is compared to other state-of-the-art AO-based mouse retinal imaging systems. The spatial and temporal resolution of the new AO instrumentation was characterized with angiography of retinal capillaries, including blood-flow velocity analysis. Depth-resolved AO-SLO fluorescent images of microglia and cone photoreceptors are visualized in parallel with 469 nm and 663 nm reflectance images of the microvasculature and other structures. Additional applications of the new instrumentation are discussed.

  13. Adaptive-optics SLO imaging combined with widefield OCT and SLO enables precise 3D localization of fluorescent cells in the mouse retina

    PubMed Central

    Zawadzki, Robert J.; Zhang, Pengfei; Zam, Azhar; Miller, Eric B.; Goswami, Mayank; Wang, Xinlei; Jonnal, Ravi S.; Lee, Sang-Hyuck; Kim, Dae Yu; Flannery, John G.; Werner, John S.; Burns, Marie E.; Pugh, Edward N.

    2015-01-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has recently been used to achieve exquisite subcellular resolution imaging of the mouse retina. Wavefront sensing-based AO typically restricts the field of view to a few degrees of visual angle. As a consequence the relationship between AO-SLO data and larger scale retinal structures and cellular patterns can be difficult to assess. The retinal vasculature affords a large-scale 3D map on which cells and structures can be located during in vivo imaging. Phase-variance OCT (pv-OCT) can efficiently image the vasculature with near-infrared light in a label-free manner, allowing 3D vascular reconstruction with high precision. We combined widefield pv-OCT and SLO imaging with AO-SLO reflection and fluorescence imaging to localize two types of fluorescent cells within the retinal layers: GFP-expressing microglia, the resident macrophages of the retina, and GFP-expressing cone photoreceptor cells. We describe in detail a reflective afocal AO-SLO retinal imaging system designed for high resolution retinal imaging in mice. The optical performance of this instrument is compared to other state-of-the-art AO-based mouse retinal imaging systems. The spatial and temporal resolution of the new AO instrumentation was characterized with angiography of retinal capillaries, including blood-flow velocity analysis. Depth-resolved AO-SLO fluorescent images of microglia and cone photoreceptors are visualized in parallel with 469 nm and 663 nm reflectance images of the microvasculature and other structures. Additional applications of the new instrumentation are discussed. PMID:26114038

  14. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  15. 3D Printing: 3D Printing of Highly Stretchable and Tough Hydrogels into Complex, Cellularized Structures.

    PubMed

    Hong, Sungmin; Sycks, Dalton; Chan, Hon Fai; Lin, Shaoting; Lopez, Gabriel P; Guilak, Farshid; Leong, Kam W; Zhao, Xuanhe

    2015-07-15

    X. Zhao and co-workers develop on page 4035 a new biocompatible hydrogel system that is extremely tough and stretchable and can be 3D printed into complex structures, such as the multilayer mesh shown. Cells encapsulated in the tough and printable hydrogel maintain high viability. 3D-printed structures of the tough hydrogel can sustain high mechanical loads and deformations.

  16. A more efficient anisotropic mesh adaptation for the computation of Lagrangian coherent structures

    NASA Astrophysics Data System (ADS)

    Fortin, A.; Briffard, T.; Garon, A.

    2015-03-01

    The computation of Lagrangian coherent structures is more and more used in fluid mechanics to determine subtle fluid flow structures. We present in this paper a new adaptive method for the efficient computation of Finite Time Lyapunov Exponent (FTLE) from which the coherent Lagrangian structures can be obtained. This new adaptive method considerably reduces the computational burden without any loss of accuracy on the FTLE field.
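
    For reference, the FTLE field targeted by the adaptive method is obtained from the gradient of the flow map; a minimal fixed-grid sketch, without the paper's anisotropic mesh adaptation, is given below. The steady double-gyre-like velocity field and the grid and integration parameters are illustrative assumptions.

      import numpy as np

      A = 0.1   # gyre strength (illustrative)

      def velocity(x, y):
          """Steady double-gyre velocity field on [0, 2] x [0, 1]."""
          u = -np.pi * A * np.sin(np.pi * x) * np.cos(np.pi * y)
          v = np.pi * A * np.cos(np.pi * x) * np.sin(np.pi * y)
          return u, v

      def flow_map(x0, y0, T, n_steps=200):
          """Advect seed points to time T with explicit Euler steps."""
          x, y = x0.copy(), y0.copy()
          dt = T / n_steps
          for _ in range(n_steps):
              u, v = velocity(x, y)
              x, y = x + dt * u, y + dt * v
          return x, y

      def ftle(nx=200, ny=100, T=10.0):
          """FTLE field: largest stretching rate of the flow-map gradient."""
          xs, ys = np.linspace(0, 2, nx), np.linspace(0, 1, ny)
          x0, y0 = np.meshgrid(xs, ys, indexing="ij")
          xT, yT = flow_map(x0, y0, T)
          dx, dy = xs[1] - xs[0], ys[1] - ys[0]
          # Flow-map gradient F by centred differences (interior points only).
          f11 = (xT[2:, 1:-1] - xT[:-2, 1:-1]) / (2 * dx)
          f12 = (xT[1:-1, 2:] - xT[1:-1, :-2]) / (2 * dy)
          f21 = (yT[2:, 1:-1] - yT[:-2, 1:-1]) / (2 * dx)
          f22 = (yT[1:-1, 2:] - yT[1:-1, :-2]) / (2 * dy)
          # Largest eigenvalue of the Cauchy-Green tensor C = F^T F.
          a, b, c = f11**2 + f21**2, f11 * f12 + f21 * f22, f12**2 + f22**2
          lam_max = 0.5 * (a + c + np.sqrt((a - c) ** 2 + 4 * b**2))
          return np.log(np.sqrt(lam_max)) / abs(T)

      field = ftle()   # ridges of this field mark the Lagrangian coherent structures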

  17. Digital relief generation from 3D models

    NASA Astrophysics Data System (ADS)

    Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian

    2016-09-01

    It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.
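
    The smoothing, unsharp-masking and nonlinear scaling steps can be sketched on a simple height field (the method in the paper operates on full 3D meshes viewed from arbitrary positions, which is not reproduced here); the smoothing weight, gain and compression constant are illustrative assumptions.

      import numpy as np

      def laplacian_smooth(h, iterations=10, lam=0.5):
          """Umbrella-operator Laplacian smoothing of a periodic height field."""
          for _ in range(iterations):
              nbr = 0.25 * (np.roll(h, 1, 0) + np.roll(h, -1, 0) +
                            np.roll(h, 1, 1) + np.roll(h, -1, 1))
              h = h + lam * (nbr - h)
          return h

      def unsharp_mask(h, gain=2.0, **smooth_kwargs):
          """Enhance detail by adding back the high-frequency part h - smooth(h)."""
          return h + gain * (h - laplacian_smooth(h, **smooth_kwargs))

      def nonlinear_scale(h, alpha=5.0):
          """Saturating compression of the height range, as used for bas-reliefs."""
          return np.log1p(alpha * (h - h.min())) / np.log1p(alpha * (h.max() - h.min()))

      # Usage: enhance and then compress a synthetic height field.
      x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
      height = np.sin(6 * x) * np.cos(4 * y)
      relief = nonlinear_scale(unsharp_mask(height))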

  18. Polyhedral shape model for terrain correction of gravity and gravity gradient data based on an adaptive mesh

    NASA Astrophysics Data System (ADS)

    Guo, Zhikui; Chen, Chao; Tao, Chunhui

    2016-04-01

    Since 2007, four China Dayang cruises (CDCs) have been carried out to investigate polymetallic sulfides on the southwest Indian ridge (SWIR) and have acquired both gravity and bathymetry data along the corresponding survey lines (Tao et al., 2014). Sandwell et al. (2014) published a new global marine gravity model including free-air gravity data and its first-order vertical gradient (Vzz). Gravity data and its gradient can be used to extract unknown density structure information (e.g. crustal thickness) beneath the surface of the Earth, but they contain the effect of all mass below the observation point. Therefore, accurately computing the gravity and gravity-gradient effect of known density structure (e.g. terrain) is a key issue. Using the bathymetry data or the ETOPO1 (http://www.ngdc.noaa.gov/mgg/global/global.html) model at full resolution to calculate the terrain effect would require too much computation time. We expect to develop an effective method that takes less time but can still yield the desired accuracy. In this study, a constant-density polyhedral model based on the work of Tsoulis (2012) is used to calculate the gravity field and its vertical gradient. Based on the attenuation of the gravity field with distance and the variance of the bathymetry, we present adaptive mesh refinement and coarsening strategies to merge global topography data with multi-beam bathymetry data. The local coarsening, or mesh size, depends on the user-defined accuracy and the terrain variation (Davis et al., 2011). To depict the terrain better, triangular surface elements are used in the fine mesh and rectangular surface elements in the coarse mesh. This strategy can also be applied in spherical coordinates at regional and global scales. Finally, we applied this method to calculate the Bouguer gravity anomaly (BGA), the mantle Bouguer anomaly (MBA) and their vertical gradients in the SWIR. Further, we compared the results with previous results in the literature. Both synthetic model

  19. Lyapunov exponents and adaptive mesh refinement for high-speed flows using a discontinuous Galerkin scheme

    NASA Astrophysics Data System (ADS)

    Moura, R. C.; Silva, A. F. C.; Bigarella, E. D. V.; Fazenda, A. L.; Ortega, M. A.

    2016-08-01

    This paper proposes two important improvements to shock-capturing strategies using a discontinuous Galerkin scheme, namely, accurate shock identification via finite-time Lyapunov exponent (FTLE) operators and efficient shock treatment through a point-implicit discretization of a PDE-based artificial viscosity technique. The advocated approach is based on the FTLE operator, originally developed in the context of dynamical systems theory to identify certain types of coherent structures in a flow. We propose the application of FTLEs in the detection of shock waves and demonstrate the operator's ability to identify strong and weak shocks equally well. The detection algorithm is coupled with a mesh refinement procedure and applied to transonic and supersonic flows. While the proposed strategy can be used potentially with any numerical method, a high-order discontinuous Galerkin solver is used in this study. In this context, two artificial viscosity approaches are employed to regularize the solution near shocks: an element-wise constant viscosity technique and a PDE-based smooth viscosity model. As the latter approach is more sophisticated and preferable for complex problems, a point-implicit discretization in time is proposed to reduce the extra stiffness introduced by the PDE-based technique, making it more competitive in terms of computational cost.

  20. Beam Optics Analysis - An Advanced 3D Trajectory Code

    SciTech Connect

    Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark

    2006-01-03

    Calabazas Creek Research, Inc. has completed initial development of an advanced, 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged particle code using adaptive, finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data is inputted using an intuitive, graphical user interface (GUI), which also provides control of convergence, accuracy, and post processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress as well as implementation of computer optimization of both the geometry and operating parameters. The principal features of the program and its capabilities are presented.

  1. Beam Optics Analysis — An Advanced 3D Trajectory Code

    NASA Astrophysics Data System (ADS)

    Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark

    2006-01-01

    Calabazas Creek Research, Inc. has completed initial development of an advanced, 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged particle code using adaptive, finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data is inputted using an intuitive, graphical user interface (GUI), which also provides control of convergence, accuracy, and post processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress as well as implementation of computer optimization of both the geometry and operating parameters. The principal features of the program and its capabilities are presented.

  2. Total enthalpy-based lattice Boltzmann method with adaptive mesh refinement for solid-liquid phase change

    NASA Astrophysics Data System (ADS)

    Huang, Rongzong; Wu, Huiying

    2016-06-01

    A total enthalpy-based lattice Boltzmann (LB) method with adaptive mesh refinement (AMR) is developed in this paper to efficiently simulate solid-liquid phase change problems, in which variables vary significantly near the phase interface and a finer grid is therefore required there. For the total enthalpy-based LB method, the velocity field is solved by an incompressible LB model with a multiple-relaxation-time (MRT) collision scheme, and the temperature field is solved by a total enthalpy-based MRT LB model with the phase interface effects considered and the deviation term eliminated. With a kinetic assumption that the density distribution function for the solid phase is at its equilibrium state, a volumetric LB scheme is proposed to accurately realize the nonslip velocity condition on the diffusive phase interface and in the solid phase. Compared with previous schemes, this scheme can avoid nonphysical flow in the solid phase. The AMR approach is developed based on multiblock grids. An indicator function is introduced to control the adaptive generation of multiblock grids, which guarantees the existence of an overlap area between adjacent blocks for information exchange. Since MRT collision schemes are used, the information exchange is directly carried out in the moment space. Numerical tests are first performed to validate the strict satisfaction of the nonslip velocity condition, and then melting problems in a square cavity with different Prandtl numbers and Rayleigh numbers are simulated, which demonstrate that the present method can handle solid-liquid phase change problems with high efficiency and accuracy.
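
    The total-enthalpy bookkeeping underlying such phase-change solvers can be sketched with a simple 1-D finite-difference melting problem (this is not the lattice Boltzmann scheme of the paper): the total enthalpy is advanced by conduction and then split into temperature and liquid fraction. The material properties and boundary conditions are illustrative assumptions.

      import numpy as np

      def enthalpy_to_temperature(H, cp=1.0, L=1.0, Tm=0.0):
          """Split total enthalpy into temperature and liquid fraction."""
          Hs, Hl = cp * Tm, cp * Tm + L                 # enthalpy at solidus / liquidus
          fl = np.clip((H - Hs) / L, 0.0, 1.0)          # liquid fraction
          T = np.where(H < Hs, H / cp,
              np.where(H > Hl, (H - L) / cp, Tm))       # mushy cells stay at Tm
          return T, fl

      def melt_step(H, dx, dt, k=1.0, T_hot=1.0):
          """One explicit conduction step for a 1-D melting problem (rho = 1)."""
          T, _ = enthalpy_to_temperature(H)
          T = np.concatenate(([T_hot], T, [T[-1]]))     # hot left wall, insulated right
          lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
          return H + dt * k * lap

      # Usage: a solid bar melting inward from the left boundary.
      n, dx = 200, 1.0 / 200
      dt = 0.2 * dx**2
      H = np.full(n, -0.5)                              # initially solid at T = -0.5
      for _ in range(5000):
          H = melt_step(H, dx, dt)
      T, liquid_fraction = enthalpy_to_temperature(H)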

  3. Medical case-based retrieval: integrating query MeSH terms for query-adaptive multi-modal fusion

    NASA Astrophysics Data System (ADS)

    Seco de Herrera, Alba G.; Foncubierta-Rodríguez, Antonio; Müller, Henning

    2015-03-01

    Advances in medical knowledge give clinicians more objective information for a diagnosis. Therefore, there is an increasing need for bibliographic search engines that can help facilitate faster information search. The ImageCLEFmed benchmark proposes a medical case-based retrieval task. This task aims at retrieving articles from the biomedical literature that are relevant for differential diagnosis of query cases including a textual description and several images. In the context of this campaign, many approaches have been investigated, showing that the fusion of visual and text information can improve the precision of the retrieval. However, fusion does not always lead to better results. In this paper, a new query-adaptive fusion criterion to decide when to use multi-modal (text and visual) or only text approaches is presented. The proposed method integrates text information contained in extracted MeSH (Medical Subject Headings) terms and visual features of the images to find synonym relations between them. Given a text query, the query-adaptive fusion criterion decides when it is suitable to also use visual information for the retrieval. Results show that this approach can decide whether a text-only or multi-modal approach should be used with an accuracy of 77.15%.

  4. Beowulf 3D: a case study

    NASA Astrophysics Data System (ADS)

    Engle, Rob

    2008-02-01

    This paper discusses the creative and technical challenges encountered during the production of "Beowulf 3D," director Robert Zemeckis' adaptation of the Old English epic poem and the first film to be simultaneously released in IMAX 3D and digital 3D formats.

  5. woptic: Optical conductivity with Wannier functions and adaptive k-mesh refinement

    NASA Astrophysics Data System (ADS)

    Assmann, E.; Wissgott, P.; Kuneš, J.; Toschi, A.; Blaha, P.; Held, K.

    2016-05-01

    We present an algorithm for the adaptive tetrahedral integration over the Brillouin zone of crystalline materials, and apply it to compute the optical conductivity, dc conductivity, and thermopower. For these quantities, whose contributions are often localized in small portions of the Brillouin zone, adaptive integration is especially relevant. Our implementation, the woptic package, is tied into the WIEN2WANNIER framework and allows including a local many-body self energy, e.g. from dynamical mean-field theory (DMFT). Wannier functions and dipole matrix elements are computed with the DFT package WIEN2k and Wannier90. For illustration, we show DFT results for fcc-Al and DMFT results for the correlated metal SrVO3.

  6. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
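
    The recursive advance-and-synchronize pattern described here can be sketched independently of any particular PDE (this is not the authors' code): each level takes one step, finer levels subcycle to catch up, and adjacent levels are then synchronized. The stand-in advance and synchronize functions and the refinement ratio of 2 are illustrative assumptions.

      def advance_hierarchy(levels, advance, synchronize, level=0, dt=1.0, ratio=2):
          """Recursive AMR time stepping: a level takes one step of size dt, the
          next finer level takes `ratio` steps of size dt/ratio to reach the same
          time, and the two levels are then synchronized (e.g. flux fix-up and
          averaging down in a real solver).

          levels      : list of per-level grid data, coarsest first
          advance     : advance(grid_data, dt) -> new grid_data
          synchronize : synchronize(coarse, fine) -> (coarse, fine)
          """
          levels[level] = advance(levels[level], dt)
          if level + 1 < len(levels):
              for _ in range(ratio):
                  advance_hierarchy(levels, advance, synchronize,
                                    level + 1, dt / ratio, ratio)
              levels[level], levels[level + 1] = synchronize(levels[level],
                                                             levels[level + 1])
          return levels

      # Usage with trivial stand-ins: each "grid" just accumulates elapsed time.
      advance = lambda t, dt: t + dt
      synchronize = lambda coarse, fine: (coarse, fine)
      print(advance_hierarchy([0.0, 0.0, 0.0], advance, synchronize, dt=1.0))
      # -> [1.0, 1.0, 1.0]: every level reaches the same time after one coarse step.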

  7. A Parallel Second-Order Adaptive Mesh Algorithm for Incompressible Flow in Porous Media

    SciTech Connect

    Pau, George Shu Heng; Almgren, Ann S.; Bell, John B.; Lijewski, Michael J.

    2008-04-01

    In this paper we present a second-order accurate adaptive algorithm for solving multiphase, incompressible flows in porous media. We assume a multiphase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting the total velocity, defined to be the sum of the phase velocities, is divergence-free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behavior of the method.

  8. Impossible expectations: fMRI adaptation in the lateral occipital complex (LOC) is modulated by the statistical regularities of 3D structural information.

    PubMed

    Freud, Erez; Ganel, Tzvi; Avidan, Galia

    2015-11-15

    fMRI adaptation (fMRIa), the attenuation of fMRI signal which follows repeated presentation of a stimulus, is a well-documented phenomenon. Yet, the underlying neural mechanisms supporting this effect are not fully understood. Recently, short-term perceptual expectations, induced by specific experimental settings, were shown to play an important modulating role in fMRIa. Here we examined the role of long-term expectations, based on 3D structural statistical regularities, in the modulation of fMRIa. To this end, human participants underwent fMRI scanning while performing a same-different task on pairs of possible (regular, expected) objects and spatially impossible (irregular, unexpected) objects. We hypothesized that given the spatial irregularity of impossible objects in relation to real-world visual experience, the visual system would always generate a prediction which is biased to the possible version of the objects. Consistently, fMRIa effects in the lateral occipital cortex (LOC) were found for possible, but not for impossible objects. Additionally, in alternating trials the order of stimulus presentation modulated LOC activity. That is, reduced activation was observed in trials in which the impossible version of the object served as the prime object (i.e. first object) and was followed by the possible version compared to the reverse order. These results were also supported by the behavioral advantage observed for trials that were primed by possible objects. Together, these findings strongly emphasize the importance of perceptual expectations in object representation and provide novel evidence for the role of real-world statistical regularities in eliciting fMRIa.

  9. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  10. GrACE-PPM: A distributed dynamic adaptive mesh CFD Environment for accelerated inhomogeneous compressible flows

    NASA Astrophysics Data System (ADS)

    Zhang, Shuang; Parashar, Manish; Zabusky, Norman

    2001-11-01

    We merge the PPM compressible algorithm (VH-1; J. M. Blondin and J. Hawley, Virginia Hydrodynamics Code, http://wonka.physics.ncsu.edu/pub/VH-1/index.html) with the new Grid Adaptive Computational Engine (GrACE; M. Parashar, Grid Adaptive Computational Engine, 2001, http://www.caip.rutgers.edu/~parashar/TASSL/Projects/GrACE/Gmain.html). The latter environment uses the Berger-Oliger AMR algorithm and has many high-performance computation features such as data parallelism and data and computation locality. We discuss the performance (scaling) resulting from examining the space of four parameters: top coarse-level resolution, number of refinement levels, number of processors, and duration of the calculation. We validate the new code by applying it to the 2D shock-curtain interaction problem (N. J. Zabusky and S. Zhang, "Shock - planar curtain interactions in 2D: Emergence of vortex double layers, vortex projectiles and decaying stratified turbulence," revised version submitted to Physics of Fluids, July 2001). We discuss the visualization and quantification of AMR data sets.

  11. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is currently realising this in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. The first is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge of how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects into physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-faceted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  12. An Improved Version of TOPAZ 3D

    SciTech Connect

    Krasnykh, Anatoly

    2003-07-29

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results.

  13. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  14. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  15. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need to simulate near-shore tsunami waves from the 2011 Tohoku event, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we will show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantage of each of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capabilities of running GeoClaw efficiently on many-core systems. We will also show a novel simulation of the 2011 Tohoku tsunami waves inundating the Sendai airport and Fukushima Nuclear Power Plants, for which the finest grid spacing of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel times of the tsunami waves. © 2011 IEEE.

  16. The formation of entropy cores in non-radiative galaxy cluster simulations: smoothed particle hydrodynamics versus adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Power, C.; Read, J. I.; Hobbs, A.

    2014-06-01

    We simulate cosmological galaxy cluster formation using three different approaches to solving the equations of non-radiative hydrodynamics - classic smoothed particle hydrodynamics (SPH), novel SPH with a higher order dissipation switch (SPHS), and an adaptive mesh refinement (AMR) method. Comparing spherically averaged entropy profiles, we find that SPHS and AMR approaches result in a well-defined entropy core that converges rapidly with increasing mass and force resolution. In contrast, the central entropy profile in the SPH approach is sensitive to the cluster's assembly history and shows poor numerical convergence. We trace this disagreement to the known artificial surface tension in SPH that appears at phase boundaries. Varying systematically numerical dissipation in SPHS, we study the contributions of numerical and physical dissipation to the entropy core and argue that numerical dissipation is required to ensure single-valued fluid quantities in converging flows. However, provided it occurs only at the resolution limit and does not propagate errors to larger scales, its effect is benign - there is no requirement to build `sub-grid' models of unresolved turbulence for galaxy cluster simulations. We conclude that entropy cores in non-radiative galaxy cluster simulations are physical, resulting from entropy generation in shocked gas during cluster assembly.

  17. Temperature Structure of the Intracluster Medium from Smoothed-particle Hydrodynamics and Adaptive-mesh Refinement Simulations

    NASA Astrophysics Data System (ADS)

    Rasia, Elena; Lau, Erwin T.; Borgani, Stefano; Nagai, Daisuke; Dolag, Klaus; Avestruz, Camille; Granato, Gian Luigi; Mazzotta, Pasquale; Murante, Giuseppe; Nelson, Kaylea; Ragone-Figueroa, Cinthia

    2014-08-01

    Analyses of cosmological hydrodynamic simulations of galaxy clusters suggest that X-ray masses can be underestimated by 10%-30%. The largest bias originates from both violation of hydrostatic equilibrium (HE) and an additional temperature bias caused by inhomogeneities in the X-ray-emitting intracluster medium (ICM). To elucidate this large dispersion among theoretical predictions, we evaluate the degree of temperature structures in cluster sets simulated either with smoothed-particle hydrodynamics (SPH) or adaptive-mesh refinement (AMR) codes. We find that the SPH simulations produce larger temperature variations connected to the persistence of both substructures and their stripped cold gas. This difference is more evident in nonradiative simulations, whereas it is reduced in the presence of radiative cooling. We also find that the temperature variation in radiative cluster simulations is generally in agreement with that observed in the central regions of clusters. Around R_500, the temperature inhomogeneities of the SPH simulations can generate twice the typical HE mass bias of the AMR sample. We emphasize that a detailed understanding of the physical processes responsible for the complex thermal structure in ICM requires improved resolution and high-sensitivity observations in order to extend the analysis to higher temperature systems and larger cluster-centric radii.

  18. Temperature structure of the intracluster medium from smoothed-particle hydrodynamics and adaptive-mesh refinement simulations

    SciTech Connect

    Rasia, Elena; Lau, Erwin T.; Nagai, Daisuke; Avestruz, Camille; Borgani, Stefano; Dolag, Klaus; Granato, Gian Luigi; Murante, Giuseppe; Ragone-Figueroa, Cinthia; Mazzotta, Pasquale; Nelson, Kaylea

    2014-08-20

    Analyses of cosmological hydrodynamic simulations of galaxy clusters suggest that X-ray masses can be underestimated by 10%-30%. The largest bias originates from both violation of hydrostatic equilibrium (HE) and an additional temperature bias caused by inhomogeneities in the X-ray-emitting intracluster medium (ICM). To elucidate this large dispersion among theoretical predictions, we evaluate the degree of temperature structure in cluster sets simulated either with smoothed-particle hydrodynamics (SPH) or adaptive-mesh refinement (AMR) codes. We find that the SPH simulations produce larger temperature variations, connected to the persistence of both substructures and their stripped cold gas. This difference is more evident in nonradiative simulations, whereas it is reduced in the presence of radiative cooling. We also find that the temperature variation in radiative cluster simulations is generally in agreement with that observed in the central regions of clusters. Around R_500 the temperature inhomogeneities of the SPH simulations can generate twice the typical HE mass bias of the AMR sample. We emphasize that a detailed understanding of the physical processes responsible for the complex thermal structure of the ICM requires improved resolution and high-sensitivity observations in order to extend the analysis to higher temperature systems and larger cluster-centric radii.

  19. Io's Plasma Environment During the Galileo Flyby: Global Three-Dimensional MHD Modeling with Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Combi, M. R.; Kabin, K.; Gombosi, T. I.; DeZeeuw, D. L.; Powell, K. G.

    1998-01-01

    The first results of applying a three-dimensional multiscale ideal MHD model to the mass-loaded flow of Jupiter's corotating magnetospheric plasma past Io are presented. The model is able to consider simultaneously physically realistic conditions for ion mass loading, ion-neutral drag, and intrinsic magnetic field in a full global calculation without imposing artificial dissipation. Io is modeled with an extended neutral atmosphere which loads the corotating plasma torus flow with mass, momentum, and energy. The governing equations are solved using adaptive mesh refinement on an unstructured Cartesian grid with an upwind scheme for MHD. For the work described in this paper we explored a range of models without an intrinsic magnetic field for Io. We compare our results with particle and field measurements made during the December 7, 1995, flyby of Io, as published by the Galileo Orbiter experiment teams. For two extreme cases of lower boundary conditions at Io, our model can quantitatively explain the variation of density along the spacecraft trajectory and can reproduce the general appearance of the variations of magnetic field and ion pressure and temperature. The net fresh ion mass-loading rates are in the range of approximately 300-650 kg/s, and the equivalent charge exchange mass-loading rates are in the range of approximately 540-1150 kg/s in the vicinity of Io.
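
    The mass, momentum, and energy loading described above enters a single-fluid MHD solver as source terms applied cell by cell. The sketch below is a rough illustration of that bookkeeping for one cell and one time step, not the model actually used in the paper; the local loading rate, neutral bulk velocity, pickup-ion temperature, and the unit constant k_over_m are all assumed inputs.

      import numpy as np

      def apply_mass_loading(rho, mom, E, mdot_per_vol, u_neutral, T_pickup, dt,
                             gamma=5.0 / 3.0, k_over_m=1.0):
          """Add mass, momentum, and energy of freshly created ions to one cell."""
          d_rho = mdot_per_vol * dt                     # mass added this step
          rho_new = rho + d_rho
          mom_new = mom + d_rho * u_neutral             # new ions carry the neutral velocity
          thermal = d_rho * k_over_m * T_pickup / (gamma - 1.0)
          kinetic = 0.5 * d_rho * float(np.dot(u_neutral, u_neutral))
          return rho_new, mom_new, E + thermal + kinetic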

  20. Thickness distribution of a cooling pyroclastic flow deposit: Optimization using InSAR, FEMs, and an adaptive mesh algorithm

    NASA Astrophysics Data System (ADS)

    Masterlark, T.; Lu, Z.; Rykhus, R.

    2003-12-01

    We construct finite element models (FEMs) of a pyroclastic flow deposit (PFD) emplaced during the 1986 eruption of Augustine volcano, Alaska. Interferometric synthetic aperture radar (InSAR) imagery documents the consistent contraction of the PFD during 1992-2000. The three-dimensional problem domains of the FEMs include an elastic substrate overlain by a thermoelastic material representing the PFD. The geometry of the substrate is determined from a digital elevation model (DEM) and bathymetry data. The thickness of the PFD is initially determined from the difference between post- and pre-eruptive DEMs. Systematic prediction errors suggest that the PFD thickness distribution estimated from the DEM difference is inaccurate. We combine InSAR images, FEMs, and an adaptive mesh algorithm to re-estimate the PFD geometry and optimize its thickness distribution. Prediction errors from the FEM that includes the optimized PFD geometry are reduced by 20% with respect to those from an FEM whose PFD geometry is derived from the DEM difference.
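
    The optimization described above amounts to adjusting the PFD thickness distribution until the FEM-predicted deformation matches the InSAR observations. The sketch below shows a generic misfit-minimization loop of that kind; run_fem, a wrapper that rebuilds and solves the FEM for a candidate thickness parameterization and returns predicted line-of-sight displacements, is hypothetical, as is the choice of optimizer.

      import numpy as np
      from scipy.optimize import minimize

      def optimize_thickness(h0, los_observed, run_fem):
          """Find thickness parameters h that minimize the InSAR misfit."""
          def misfit(h):
              los_predicted = run_fem(h)               # FEM forward model (hypothetical)
              return float(np.sum((los_predicted - los_observed) ** 2))
          result = minimize(misfit, h0, method="Nelder-Mead")  # derivative-free search
          return result.x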