Science.gov

Sample records for 3d adaptive mesh

  1. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction of the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel
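
    As a rough illustration of the two-step idea described above, the sketch below projects nodal data on one 2-D element face onto a mortar one direction at a time, with the result of the first step playing the role of the intermediate mortar. The node sets and the interpolation-based 1-D operator are hypothetical stand-ins; the paper's actual mortar projection is an L2 projection between spectral-element faces.

      import numpy as np

      def projection_matrix_1d(src_nodes, dst_nodes):
          # Lagrange interpolation matrix mapping values at src_nodes to dst_nodes.
          # (Hypothetical helper; a real mortar projection uses an L2 projection with
          # quadrature, but any per-direction linear operator illustrates the idea.)
          P = np.empty((len(dst_nodes), len(src_nodes)))
          for j, xj in enumerate(src_nodes):
              others = [x for k, x in enumerate(src_nodes) if k != j]
              denom = np.prod([xj - x for x in others])
              P[:, j] = np.prod([dst_nodes - x for x in others], axis=0) / denom
          return P

      def two_step_face_projection(face_values, P_xi, P_eta):
          # Step 1: project along xi, producing the "intermediate mortar".
          intermediate = P_xi @ face_values
          # Step 2: project the intermediate result along eta.
          return intermediate @ P_eta.T

      src = np.linspace(-1.0, 1.0, 4)              # element-face nodes (illustrative)
      dst = np.linspace(-1.0, 1.0, 6)              # mortar nodes (illustrative)
      P = projection_matrix_1d(src, dst)
      face = np.outer(np.cos(src), np.sin(src))    # fake nodal data on one face
      mortar = two_step_face_projection(face, P, P)
      print(mortar.shape)                          # (6, 6): only 1-D/2-D operators were formed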

  2. Content-Adaptive Finite Element Mesh Generation of 3-D Complex MR Volumes for Bioelectromagnetic Problems.

    PubMed

    Lee, W; Kim, T-S; Cho, M; Lee, S

    2005-01-01

    In studying bioelectromagnetic problems, the finite element method offers several advantages over other conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropy. Mesh generation is the first requirement in the finite element analysis and there are many different approaches in mesh generation. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes, resulting in numerous elements in the smaller volume regions, thereby increasing computational load and demand. In this work, we present an improved content-adaptive mesh generation scheme that is efficient and fast, along with options to change the contents of meshes. For demonstration, mesh models of the head from a volume MRI are presented in 2-D and 3-D.

  3. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    NASA Astrophysics Data System (ADS)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution
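
    The random, dynamic down-sampling of soundings can be pictured with the short sketch below. The subset-size schedule tied to the regularization parameter is a made-up stand-in for the paper's adaptive rule; only the reselection of a fresh random subset at every iteration reflects the description above.

      import numpy as np

      def random_sounding_subset(n_soundings, n_subset, rng):
          # Draw a fresh random subset of sounding indices for one inversion iteration.
          return rng.choice(n_soundings, size=n_subset, replace=False)

      def subset_size(beta, beta0, n_min, n_max):
          # Hypothetical schedule: use more soundings as the regularization (beta) is cooled.
          frac = min(max(beta / beta0, 0.0), 1.0)
          return int(round(n_max - (n_max - n_min) * frac))

      rng = np.random.default_rng(0)
      n_soundings = 10000
      for it, beta in enumerate([1.0, 0.5, 0.25, 0.1]):
          idx = random_sounding_subset(n_soundings, subset_size(beta, 1.0, 200, 2000), rng)
          # ... forward-model on local meshes and assemble sensitivities only for idx ...
          print(f"iteration {it}: {idx.size} soundings")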

  4. Scalable and Adaptive Streaming of 3D Mesh to Heterogeneous Devices

    NASA Astrophysics Data System (ADS)

    Abderrahim, Zeineb; Bouhlel, Mohamed Salim

    2016-12-01

    This article presents a web platform for the diffusion and visualization of compressed 3D data on the web. The major goal of this work is to adapt the transfer of three-dimensional data to the available resources (network bandwidth, type of visualization terminal, display resolution, user preferences, ...). It also aims to provide an effective consultation adapted to the user's request (preferences, levels of requested detail, etc.). The platform can adapt the levels of detail to changes in the bandwidth and in the rendering time when loading the mesh on the client side. In addition, the levels of detail are adapted to the distance between the object and the camera. These features minimize the latency and make real-time interaction possible. The experiments, as well as the comparison with existing solutions, show promising results in terms of latency, scalability and the quality of experience offered to users.
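
    A minimal sketch of this kind of adaptation policy: the chosen level of detail is capped by what the current bandwidth can deliver within a latency budget and then lowered for objects far from the camera. The thresholds, sizes and the policy itself are hypothetical; they only illustrate combining bandwidth and camera distance in the decision.

      def choose_lod(distance, bandwidth_mbps, lod_sizes_mb, max_latency_s=2.0,
                     near=1.0, far=50.0):
          # Highest level of detail whose transfer time fits the latency budget.
          affordable = 0
          for lvl, size in enumerate(lod_sizes_mb):
              if size * 8.0 / bandwidth_mbps <= max_latency_s:
                  affordable = lvl
          # Reduce the requested level with the object-to-camera distance.
          t = min(max((distance - near) / (far - near), 0.0), 1.0)
          wanted = int(round((len(lod_sizes_mb) - 1) * (1.0 - t)))
          return min(affordable, wanted)

      sizes = [0.2, 0.8, 3.0, 12.0]   # hypothetical cumulative LOD sizes in megabytes
      print(choose_lod(distance=5.0, bandwidth_mbps=8.0, lod_sizes_mb=sizes))    # 1 (bandwidth-limited)
      print(choose_lod(distance=10.0, bandwidth_mbps=50.0, lod_sizes_mb=sizes))  # 2 (distance-limited)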

  5. A mesh adaptivity scheme on the Landau-de Gennes functional minimization case in 3D, and its driving efficiency

    NASA Astrophysics Data System (ADS)

    Bajc, Iztok; Hecht, Frédéric; Žumer, Slobodan

    2016-09-01

    This paper presents a 3D mesh adaptivity strategy on unstructured tetrahedral meshes by a posteriori error estimates based on metrics derived from the Hessian of a solution. The study is made on the case of a nonlinear finite element minimization scheme for the Landau-de Gennes free energy functional of nematic liquid crystals. Newton's iteration for tensor fields is employed, with the steepest descent method possibly stepping in. Aspects relating to the driving of mesh adaptivity within the nonlinear scheme are considered. The algorithmic performance is found to depend on at least two factors: when to trigger each single mesh adaptation, and the precision of the correlated remeshing. Each factor is represented by a parameter, with its values possibly varying for every new mesh adaptation. We empirically show that the time of the overall algorithm convergence can vary considerably when different sequences of parameters are used, thus posing a question about optimality. The extensive testing and debugging done within this work on the simulation of systems of nematic colloids contributed substantially to upgrading the 3D meshing capabilities of an open source finite element-oriented programming language, as well as an external 3D remeshing module.
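
    A common recipe for a Hessian-based metric of the kind mentioned above is sketched here: the metric's eigenvalues are the scaled, clipped absolute eigenvalues of the solution Hessian, so the target edge length shrinks in directions of strong curvature. The scaling and size bounds are generic choices, not the exact estimator used in the paper.

      import numpy as np

      def hessian_metric(H, eps, h_min, h_max):
          # Symmetrize the Hessian, then build M = V diag(lam) V^T with
          # lam_i = clip(|w_i| / eps, 1/h_max^2, 1/h_min^2).
          w, V = np.linalg.eigh(0.5 * (H + H.T))
          lam = np.clip(np.abs(w) / eps, 1.0 / h_max**2, 1.0 / h_min**2)
          return V @ np.diag(lam) @ V.T

      H = np.array([[40.0, 2.0, 0.0],
                    [ 2.0, 1.0, 0.0],
                    [ 0.0, 0.0, 0.1]])
      M = hessian_metric(H, eps=0.01, h_min=1e-3, h_max=1.0)
      # Target edge length along metric eigenvector i is 1/sqrt(lam_i):
      print(1.0 / np.sqrt(np.linalg.eigvalsh(M)))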

  6. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    SciTech Connect

    B. Philip; Z. Wang; M.A. Berrill; M. Birke; M. Pernice

    2014-04-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton–Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
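
    The Jacobian-Free Newton-Krylov ingredient mentioned above rests on a simple fact: a Krylov solver only ever needs Jacobian-vector products, which can be approximated by a finite difference of the nonlinear residual, so the Jacobian is never assembled. A generic sketch (not the AMR-specific implementation of the paper):

      import numpy as np

      def jfnk_matvec(F, u, v, eps=1e-7):
          # Matrix-free product J(u) v ~ (F(u + h v) - F(u)) / h.
          norm_v = np.linalg.norm(v)
          if norm_v == 0.0:
              return np.zeros_like(v)
          h = eps * max(1.0, np.linalg.norm(u)) / norm_v
          return (F(u + h * v) - F(u)) / h

      # Check against the analytic Jacobian of F(u) = u**3 - 1, i.e. J = diag(3 u**2).
      F = lambda u: u**3 - 1.0
      u = np.array([2.0, 0.5, 1.5])
      v = np.array([1.0, -1.0, 2.0])
      print(jfnk_matvec(F, u, v))   # ~ [12.0, -0.75, 13.5]
      print(3.0 * u**2 * v)         # exact product for comparison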

  7. Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion

    NASA Astrophysics Data System (ADS)

    Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.

    2014-04-01

    The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.

  8. Adaptive optimal quantization for 3D mesh representation in the spherical coordinate system

    NASA Astrophysics Data System (ADS)

    Ahn, Jeong-Hwan; Ho, Yo-Sung

    1998-12-01

    In recent years, applications using 3D models have been increasing. Since the 3D model contains a huge amount of information, compression of the 3D model data is necessary for efficient storage or transmission. In this paper, we propose an adaptive encoding scheme to compress the geometry information of the 3D model. Using the Levinson-Durbin algorithm, the encoder first predicts vertex positions along a vertex spanning tree. After each prediction error is normalized, the prediction error vector of each vertex point is represented in the spherical coordinate system (r, θ, φ). Each r is then quantized by an optimal uniform quantizer. Each pair (θ, φ) is also successively encoded by partitioning the surface of the sphere according to the quantized value of r. The proposed scheme demonstrates improved coding efficiency by exploiting the statistical properties of r and (θ, φ).
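
    The encoding chain described above can be pictured with the sketch below: a prediction-error vector is converted to spherical coordinates, r is quantized uniformly, and the angular pair is quantized with a resolution that grows with the quantized r. The particular bit allocation and ranges are made up for illustration; only the overall structure follows the abstract.

      import numpy as np

      def to_spherical(err):
          # Prediction-error vector (x, y, z) -> (r, theta, phi).
          x, y, z = err
          r = np.sqrt(x * x + y * y + z * z)
          theta = np.arccos(z / r) if r > 0 else 0.0   # polar angle in [0, pi]
          phi = np.arctan2(y, x)                        # azimuth in (-pi, pi]
          return r, theta, phi

      def quantize(value, v_min, v_max, levels):
          # Uniform scalar quantizer returning an integer index.
          step = (v_max - v_min) / levels
          return int(np.clip((value - v_min) / step, 0, levels - 1))

      def encode_error(err, r_max, r_levels=64, base_angle_levels=4):
          r, theta, phi = to_spherical(err)
          qr = quantize(r, 0.0, r_max, r_levels)
          angle_levels = base_angle_levels * (qr + 1)   # finer sphere partition for larger r
          qtheta = quantize(theta, 0.0, np.pi, angle_levels)
          qphi = quantize(phi, -np.pi, np.pi, 2 * angle_levels)
          return qr, qtheta, qphi

      print(encode_error(np.array([0.02, -0.01, 0.005]), r_max=0.1))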

  9. 3D Adaptive Mesh Refinement Simulations of Pellet Injection in Tokamaks

    SciTech Connect

    R. Samtaney; S.C. Jardin; P. Colella; D.F. Martin

    2003-10-20

    We present results of Adaptive Mesh Refinement (AMR) simulations of the pellet injection process, a proven method of refueling tokamaks. AMR is a computationally efficient way to provide the resolution required to simulate realistic pellet sizes relative to device dimensions. The mathematical model comprises single-fluid MHD equations with source terms in the continuity equation along with a pellet ablation rate model. The numerical method developed is an explicit unsplit upwinding treatment of the 8-wave formulation, coupled with a MAC projection method to enforce the solenoidal property of the magnetic field. The Chombo framework is used for AMR. The role of the E x B drift in mass redistribution during inside and outside pellet injections is emphasized.

  10. 3D Boltzmann Simulation of the Io's Plasma Environment with Adaptive Mesh and Particle Refinement

    NASA Astrophysics Data System (ADS)

    Lipatov, A. S.; Combi, M. R.

    2002-12-01

    The global dynamics of the ionized and neutral components in the environment of Io plays an important role in the interaction of Jupiter's corotating magnetospheric plasma with Io [Combi et al., 2002; 1998; Kabin et al., 2001]. The stationary simulation of this problem was done in the MHD [Combi et al., 1998; Linker et al., 1998; Kabin et al., 2001] and the electrodynamic [Saur et al., 1999] approaches. In this report, we develop a method of kinetic ion-neutral simulation, which is based on a multiscale adaptive mesh, particle and algorithm refinement. This method employs the fluid description for electrons whereas for ions the drift-kinetic and particle approaches are used. This method takes into account charge-exchange and photoionization processes. The first results of such a simulation of the dynamics of ions in Io's environment are discussed in this report. M R Combi et al., J. Geophys. Res., 103, 9071, 1998. M R Combi, T I Gombosi, K Kabin, Atmospheres in the Solar System: Comparative Aeronomy, Geophys. Monograph Series, 130, 151, 2002. K Kabin et al., Planetary and Space Sci., 49, 337, 2001. J A Linker et al., J. Geophys. Res., 103(E9), 19867, 1998. J Saur et al., J. Geophys. Res., 104, 25105, 1999.

  11. 3-D grid refinement using the University of Michigan adaptive mesh library for a pure advective test

    NASA Astrophysics Data System (ADS)

    Oehmke, R.; Vandenberg, D.; Andronova, N.; Penner, J.; Stout, Q.; Zubov, V.; Jablonowski, C.

    2008-05-01

    The numerical representation of the partial differential equations (PDE) for high resolution atmospheric dynamical and physical features requires division of the atmospheric volume into a set of 3D grids, each of which has a not quite rectangular form. Each location on the grid contains multiple data that together represent the state of Earth's atmosphere. For successful numerical integration of the PDEs the size of each grid box is used to define the Courant-Friedrichs-Lewy criterion in setting the time step. 3D adaptive representations of a sphere are needed to represent the evolution of clouds. In this paper we present the University of Michigan adaptive mesh library, a library that supports the production of parallel codes with use of adaptation on a sphere. The library manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits blocks as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells, the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. Users provide data manipulation functions for performing interpolation of user data when refining blocks. We rigorously test the library using refinement of the modeled vertical transport of a tracer with prescribed atmospheric sources and sinks. It is both a 2-D and a 3-D test, and bridges the performance of the model's dynamics and physics needed for inclusion of cloud formation.

  12. The 2D and 3D hypersonic flows with unstructured meshes

    NASA Technical Reports Server (NTRS)

    Thareja, Rajiv

    1993-01-01

    Viewgraphs on 2D and 3D hypersonic flows with unstructured meshes are presented. Topics covered include: mesh generation, mesh refinement, shock-shock interaction, velocity contours, mesh movement, vehicle bottom surface, and adapted meshes.

  13. Vertical Scan (V-SCAN) for 3-D Grid Adaptive Mesh Refinement for an atmospheric Model Dynamical Core

    NASA Astrophysics Data System (ADS)

    Andronova, N. G.; Vandenberg, D.; Oehmke, R.; Stout, Q. F.; Penner, J. E.

    2009-12-01

    One of the major building blocks of a rigorous representation of cloud evolution in global atmospheric models is a parallel adaptive grid MPI-based communication library (an Adaptive Blocks for Locally Cartesian Topologies library -- ABLCarT), which manages the block-structured data layout, handles ghost cell updates among neighboring blocks and splits a block as refinements occur. The library has several modules that provide a layer of abstraction for adaptive refinement: blocks, which contain individual cells of user data; shells, the global geometry for the problem, including a sphere, reduced sphere, and now a 3D sphere; a load balancer for placement of blocks onto processors; and a communication support layer which encapsulates all data movement. A major performance concern with adaptive mesh refinement is how to represent calculations that need to be sequenced in a particular order in a direction, such as calculating integrals along a specific path (e.g. atmospheric pressure or geopotential in the vertical dimension). This concern is compounded if the blocks have varying levels of refinement, or are scattered across different processors, as can be the case in parallel computing. In this paper we describe an implementation in ABLCarT of a vertical scan operation, which allows computing along vertical paths in the correct order across blocks, transparently with respect to their resolution and processor location. We test this functionality on a 2D and a 3D advection problem, which tests the performance of the model's dynamics (transport) and physics (sources and sinks) for different model resolutions needed for inclusion of cloud formation.
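
    In spirit, the vertical scan is a cumulative operation threaded through all blocks of one vertical column in bottom-to-top order, carrying a running value across block boundaries regardless of how the column is split. The single-process sketch below (a plain cumulative sum over a column stored as two blocks) only illustrates that ordering requirement; the library's parallel, multi-resolution version is not reproduced.

      import numpy as np

      def vertical_scan(column_blocks):
          # Cumulative sum over one vertical column split into blocks (bottom block
          # first), threading the running total across block boundaries.
          carry = 0.0
          scanned = []
          for blk in column_blocks:
              part = carry + np.cumsum(blk)
              scanned.append(part)
              carry = part[-1]
          return scanned

      bottom = np.array([1.0, 1.0, 0.5])   # layer contributions in the lowest block
      top = np.array([0.5, 0.25])          # next block up the same column
      for piece in vertical_scan([bottom, top]):
          print(piece)                     # [1. 2. 2.5] then [3. 3.25]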

  14. 3-D Mesh Generation Nonlinear Systems

    SciTech Connect

    Christon, M. A.; Dovey, D.; Stillman, D. W.; Hallquist, J. O.; Rainsberger, R. B

    1994-04-07

    INGRID is a general-purpose, three-dimensional mesh generator developed for use with finite element, nonlinear, structural dynamics codes. INGRID generates the large and complex input data files for DYNA3D, NIKE3D, FACET, and TOPAZ3D. One of the greatest advantages of INGRID is that virtually any shape can be described without resorting to wedge elements, tetrahedrons, triangular elements or highly distorted quadrilateral or hexahedral elements. Other capabilities available are in the areas of geometry and graphics. Exact surface equations and surface intersections considerably improve the ability to deal with accurate models, and a hidden line graphics algorithm is included which is efficient on the most complicated meshes. The primary new capability is associated with the boundary conditions, loads, and material properties required by nonlinear mechanics programs. Commands have been designed for each case to minimize user effort. This is particularly important since special processing is almost always required for each load or boundary condition.

  15. 3D Meshes from Medical Volume Data

    NASA Astrophysics Data System (ADS)

    Zelzer, Sascha; Meinzer, Hans-Peter

    This work describes a template-based method for generating adaptive hexahedral meshes from volume data that may exhibit complicated concave structures. A complete set of templates is generated that allows the boundaries of concave regions to be subdivided more finely than adjacent areas, thereby reducing the total number of hexahedra. The algorithm works on arbitrary labeled volume data and produces an adaptive, conforming, all-hexahedral mesh.

  16. An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform

    NASA Astrophysics Data System (ADS)

    Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra

    2011-06-01

    The compression and watermarking of 3D meshes are very important in many areas of activity including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression are done independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. In fact, the compression method used is decomposed into two stages: geometric encoding and topological encoding. The proposed approach consists of inserting a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, the geometric encoding is done on these two components. The obtained coarse mesh is marked using a robust mesh watermarking scheme. This insertion into the coarse mesh provides high robustness to several attacks. Finally, the topological encoding is applied to the marked coarse mesh to obtain the compressed mesh. The combination of compression and watermarking makes it possible to detect the presence of the signature after a compression of the marked mesh. In addition, it allows protected 3D meshes to be transferred with minimum size. The experiments and evaluations show that the proposed approach gives efficient results in terms of compression gain, invisibility and robustness of the signature against many attacks.

  17. 3D Structured Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Banks, D. W.; Hafez, M. M.

    1996-01-01

    Grid adaptation for structured meshes is the art of using information from an existing, but poorly resolved, solution to automatically redistribute the grid points in such a way as to improve the resolution in regions of high error, and thus the quality of the solution. This involves: (1) generating a grid via some standard algorithm, (2) calculating a solution on this grid, (3) adapting the grid to this solution, (4) recalculating the solution on this adapted grid, and (5) repeating steps 3 and 4 to satisfaction. Steps 3 and 4 can be repeated until some 'optimal' grid is converged to, but typically this is not worth the effort and just two or three repeat calculations are necessary. They also may be repeated every 5-10 time steps for unsteady calculations.
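
    The five-step cycle above can be condensed into a few lines. The sketch below uses a 1-D stand-in problem and a standard equidistribution rule to redistribute points toward regions of large error; it illustrates the generate/solve/adapt/resolve loop rather than any particular 3-D structured-grid adaptation scheme.

      import numpy as np

      def adapt_grid(x, error, n_points):
          # Redistribute points by equidistributing the error monitor: cells with
          # larger error attract more points (standard redistribution recipe).
          w = np.maximum(error, 1e-12)
          cum = np.concatenate(([0.0], np.cumsum(0.5 * (w[:-1] + w[1:]) * np.diff(x))))
          targets = np.linspace(0.0, cum[-1], n_points)
          return np.interp(targets, cum, x)

      def solve(x):
          # Stand-in "solver": a steep tanh front; its second difference acts as the error.
          u = np.tanh(40.0 * (x - 0.5))
          err = np.abs(np.gradient(np.gradient(u, x), x))
          return u, err

      x = np.linspace(0.0, 1.0, 41)          # (1) generate an initial grid
      for cycle in range(3):                  # (5) two or three repeats usually suffice
          u, err = solve(x)                   # (2)/(4) solution on the current grid
          x = adapt_grid(x, err, x.size)      # (3) move points toward the front
      print(x.size, np.diff(x).min(), np.diff(x).max())   # fine near x = 0.5, coarse elsewhere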

  18. Advanced numerical methods in mesh generation and mesh adaptation

    SciTech Connect

    Lipnikov, Konstantine; Danilov, A; Vassilevski, Y; Agonzal, A

    2010-01-01

    -based error estimates. We conclude that the quasi-optimal mesh must be quasi-uniform in this metric. All numerical experiments are based on the publicly available Ani3D package, the collection of advanced numerical instruments.

  19. Unstructured mesh generation and adaptivity

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1995-01-01

    An overview of current unstructured mesh generation and adaptivity techniques is given. Basic building blocks taken from the field of computational geometry are first described. Various practical mesh generation techniques based on these algorithms are then constructed and illustrated with examples. Issues of adaptive meshing and stretched mesh generation for anisotropic problems are treated in subsequent sections. The presentation is organized in an educational manner, for readers familiar with computational fluid dynamics who wish to learn more about current unstructured mesh techniques.

  20. 3D model retrieval method based on mesh segmentation

    NASA Astrophysics Data System (ADS)

    Gan, Yuanchao; Tang, Yan; Zhang, Qingchen

    2012-04-01

    In the process of feature description and extraction, current 3D model retrieval algorithms focus on the global features of 3D models but ignore the combination of global and local features of the model. For this reason, they perform less effectively on models with similar global shape but different local shape. This paper proposes a novel algorithm for 3D model retrieval based on mesh segmentation. The key idea is to extract the structure feature and the local shape feature of 3D models, and then to compare the similarities of the two characteristics and the total similarity between the models. A system that realizes this approach was built and tested on a database of 200 objects and achieves the expected results. The results show that the proposed algorithm improves the precision and the recall rate effectively.

  1. 3D unstructured mesh discontinuous finite element hydro

    SciTech Connect

    Prasad, M.K.; Kershaw, D.S.; Shaw, M.J.

    1995-07-01

    The authors present detailed features of the ICF3D hydrodynamics code used for inertial fusion simulations. This code is intended to be a state-of-the-art upgrade of the well-known fluid code, LASNEX. ICF3D employs discontinuous finite elements on a discrete unstructured mesh consisting of a variety of 3D polyhedra including tetrahedra, prisms, and hexahedra. The authors discuss details of how the Roe-averaged second-order convection was applied on the discrete elements, and how the C++ coding interface has helped to simplify implementing the many physics and numerics modules within the code package. The authors emphasize the virtues of object-oriented design in large scale projects such as ICF3D.

  2. 3D Adaptive Mesh Refinement Simulations of the Gas Cloud G2 Born within the Disks of Young Stars in the Galactic Center

    NASA Astrophysics Data System (ADS)

    Schartmann, M.; Ballone, A.; Burkert, A.; Gillessen, S.; Genzel, R.; Pfuhl, O.; Eisenhauer, F.; Plewa, P. M.; Ott, T.; George, E. M.; Habibi, M.

    2015-10-01

    The dusty, ionized gas cloud G2 is currently passing the massive black hole in the Galactic Center at a distance of roughly 2400 Schwarzschild radii. We explore the possibility of a starting point of the cloud within the disks of young stars. We make use of the large amount of new observations in order to put constraints on G2's origin. Interpreting the observations as a diffuse cloud of gas, we employ three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations with the PLUTO code and do a detailed comparison with observational data. The simulations presented in this work update our previously obtained results in multiple ways: (1) high resolution three-dimensional hydrodynamical AMR simulations are used, (2) the cloud follows the updated orbit based on the Brackett-γ data, (3) a detailed comparison to the observed high-quality position-velocity (PV) diagrams and the evolution of the total Brackett-γ luminosity is done. We concentrate on two unsolved problems of the diffuse cloud scenario: the unphysical formation epoch only shortly before the first detection and the too steep Brackett-γ light curve obtained in simulations, whereas the observations indicate a constant Brackett-γ luminosity between 2004 and 2013. For a given atmosphere and cloud mass, we find a consistent model that can explain both the observed Brackett-γ light curve and the PV diagrams of all epochs. Assuming initial pressure equilibrium with the atmosphere, this can be reached for a starting date earlier than roughly 1900, which is close to apo-center and well within the disks of young stars.

  3. 3D ADAPTIVE MESH REFINEMENT SIMULATIONS OF THE GAS CLOUD G2 BORN WITHIN THE DISKS OF YOUNG STARS IN THE GALACTIC CENTER

    SciTech Connect

    Schartmann, M.; Ballone, A.; Burkert, A.; Gillessen, S.; Genzel, R.; Pfuhl, O.; Eisenhauer, F.; Plewa, P. M.; Ott, T.; George, E. M.; Habibi, M.

    2015-10-01

    The dusty, ionized gas cloud G2 is currently passing the massive black hole in the Galactic Center at a distance of roughly 2400 Schwarzschild radii. We explore the possibility of a starting point of the cloud within the disks of young stars. We make use of the large amount of new observations in order to put constraints on G2's origin. Interpreting the observations as a diffuse cloud of gas, we employ three-dimensional hydrodynamical adaptive mesh refinement (AMR) simulations with the PLUTO code and do a detailed comparison with observational data. The simulations presented in this work update our previously obtained results in multiple ways: (1) high resolution three-dimensional hydrodynamical AMR simulations are used, (2) the cloud follows the updated orbit based on the Brackett-γ data, (3) a detailed comparison to the observed high-quality position–velocity (PV) diagrams and the evolution of the total Brackett-γ luminosity is done. We concentrate on two unsolved problems of the diffuse cloud scenario: the unphysical formation epoch only shortly before the first detection and the too steep Brackett-γ light curve obtained in simulations, whereas the observations indicate a constant Brackett-γ luminosity between 2004 and 2013. For a given atmosphere and cloud mass, we find a consistent model that can explain both the observed Brackett-γ light curve and the PV diagrams of all epochs. Assuming initial pressure equilibrium with the atmosphere, this can be reached for a starting date earlier than roughly 1900, which is close to apo-center and well within the disks of young stars.

  4. Hough transform-based 3D mesh retrieval

    NASA Astrophysics Data System (ADS)

    Zaharia, Titus; Preteux, Francoise J.

    2001-11-01

    This paper addresses the issue of 3D mesh indexation by using shape descriptors (SDs) under constraints of geometric and topological invariance. A new shape descriptor, the Optimized 3D Hough Transform Descriptor (O3DHTD), is here proposed. Intrinsically topologically stable, the O3DHTD is not invariant to geometric transformations. Nevertheless, we show mathematically how the O3DHTD can be optimally associated (in terms of compactness of representation and computational complexity) with a spatial alignment procedure which leads to a geometric invariant behavior. Experimental results have been carried out upon the MPEG-7 3D model database consisting of about 1300 meshes in VRML 2.0 format. Objective retrieval results, based upon the definition of a categorized ground truth subset, are reported in terms of Bull's Eye Percentage (BEP) score and compared to those obtained by applying the MPEG-7 3D SD. It is shown that the O3DHTD outperforms the MPEG-7 3D SD by up to 28%.

  5. A Mechanistic Study of Wetting Superhydrophobic Porous 3D Meshes.

    PubMed

    Yohe, Stefan T; Freedman, Jonathan D; Falde, Eric J; Colson, Yolonda L; Grinstaff, Mark W

    2013-08-07

    Superhydrophobic, porous, 3D materials composed of poly(ε-caprolactone) (PCL) and the hydrophobic polymer dopant poly(glycerol monostearate-co-ε-caprolactone) (PGC-C18) are fabricated using the electrospinning technique. These 3D materials are distinct from 2D superhydrophobic surfaces, with maintenance of air at the surface as well as within the bulk of the material. These superhydrophobic materials float in water, and when held underwater and pressed, an air bubble is released and will rise to the surface. By changing the PGC-C18 doping concentration in the meshes and/or the fiber size from the micro- to nanoscale, the long-term stability of the entrapped air layer is controlled. The rate of water infiltration into the meshes, and the resulting displacement of the entrapped air, is quantitatively measured using X-ray computed tomography. The properties of the meshes are further probed using surfactants and solvents of different surface tensions. Finally, the application of hydraulic pressure is used to quantify the breakthrough pressure to wet the meshes. The tools for fabrication and analysis of these superhydrophobic materials as well as the ability to control the robustness of the entrapped air layer are highly desirable for a number of existing and emerging applications.

  6. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the

  7. A skinning prediction scheme for dynamic 3D mesh compression

    NASA Astrophysics Data System (ADS)

    Mamou, Khaled; Zaharia, Titus; Prêteux, Françoise

    2006-08-01

    This paper presents a new prediction-based compression technique for dynamic 3D meshes with constant connectivity and time-varying geometry. The core of the proposed algorithm is a skinning model used for motion compensation. The mesh is first partitioned into vertex clusters that can each be described by a single affine motion model. The proposed segmentation technique automatically determines the number of clusters and relies on a decimation strategy privileging the simplification of vertices exhibiting the same affine motion over the whole animation sequence. The residual prediction errors are finally compressed using a temporal-DCT representation. The performance of our encoder is objectively evaluated on a data set of eight animation sequences with various sizes, geometries and topologies, exhibiting both rigid and elastic motions. The experimental evaluation shows that the proposed compression scheme outperforms state-of-the-art techniques such as the MPEG-4/AFX, Dynapack, RT, GV, MCGV, TDCT, PCA and RT compression schemes.
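
    The motion-compensation step described above amounts to fitting one affine model per vertex cluster and coding only the residuals. A minimal sketch of that per-cluster fit follows (the clustering, the skinning weights and the temporal-DCT residual coding are not reproduced here).

      import numpy as np

      def fit_affine_motion(ref, cur):
          # Least-squares affine model (A, t) with cur ~ ref @ A.T + t for one cluster
          # of vertices given as Nx3 arrays.
          X = np.hstack([ref, np.ones((ref.shape[0], 1))])   # homogeneous coordinates
          M, *_ = np.linalg.lstsq(X, cur, rcond=None)         # 4x3 solution matrix
          return M[:3].T, M[3]

      rng = np.random.default_rng(1)
      ref = rng.normal(size=(50, 3))
      A_true = np.array([[1.0, 0.1, 0.0], [0.0, 0.9, 0.0], [0.0, 0.0, 1.1]])
      cur = ref @ A_true.T + np.array([0.2, -0.1, 0.05]) + 0.001 * rng.normal(size=(50, 3))
      A, t = fit_affine_motion(ref, cur)
      residual = cur - (ref @ A.T + t)       # this is what would be DCT-coded over time
      print(np.abs(residual).max())          # small: the cluster moves almost affinely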

  8. A tetrahedral mesh generation approach for 3D marine controlled-source electromagnetic modeling

    NASA Astrophysics Data System (ADS)

    Um, Evan Schankee; Kim, Seung-Sep; Fu, Haohuan

    2017-03-01

    3D finite-element (FE) mesh generation is a major hurdle for marine controlled-source electromagnetic (CSEM) modeling. In this paper, we present a FE discretization operator (FEDO) that automatically converts a 3D finite-difference (FD) model into reliable and efficient tetrahedral FE meshes for CSEM modeling. FEDO sets up wireframes of a background seabed model that precisely honors the seafloor topography. The wireframes are then partitioned into multiple regions. Outer regions of the wireframes are discretized with coarse tetrahedral elements whose maximum size is as large as a skin depth of the regions. We demonstrate that such coarse meshes can produce accurate FE solutions because numerical dispersion errors of tetrahedral meshes do not accumulate but oscillate. In contrast, central regions of the wireframes are discretized with fine tetrahedral elements to describe complex geology in detail. The conductivity distribution is mapped from FD to FE meshes in a volume-averaged sense. To avoid excessive mesh refinement around receivers, we introduce an effective receiver size. Major advantages of FEDO are summarized as follows. First, FEDO automatically generates reliable and economic tetrahedral FE meshes without adaptive meshing or interactive CAD workflows. Second, FEDO produces FE meshes that precisely honor the boundaries of the seafloor topography. Third, FEDO derives multiple sets of FE meshes from a given FD model. Each FE mesh is optimized for a different set of sources and receivers and is fed to a subgroup of processors on a parallel computer. This divide-and-conquer approach improves the parallel scalability of the FE solution. Both accuracy and effectiveness of FEDO are demonstrated with various CSEM examples.
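
    The "element size up to a skin depth" rule of thumb mentioned above follows directly from the plane-wave EM skin depth formula, delta = sqrt(2*rho/(mu0*omega)) ~ 503*sqrt(rho/f) metres. The snippet below only evaluates that textbook formula as a sizing guide; FEDO's actual sizing criteria are not reproduced.

      import math

      def skin_depth_m(resistivity_ohm_m, frequency_hz):
          # Plane-wave EM skin depth: delta ~ 503.3 * sqrt(rho / f) metres.
          return 503.3 * math.sqrt(resistivity_ohm_m / frequency_hz)

      # A 1 ohm-m background at 0.25 Hz gives a skin depth of roughly 1 km, suggesting
      # outer-region tetrahedra of up to about that size in such regions.
      print(skin_depth_m(1.0, 0.25))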

  9. Conservative Patch Algorithm and Mesh Sequencing for PAB3D

    NASA Technical Reports Server (NTRS)

    Pao, S. P.; Abdol-Hamid, K. S.

    2005-01-01

    A mesh-sequencing algorithm and a conservative patched-grid-interface algorithm (hereafter Patch Algorithm) have been incorporated into the PAB3D code, which is a computer program that solves the Navier-Stokes equations for the simulation of subsonic, transonic, or supersonic flows surrounding an aircraft or other complex aerodynamic shapes. These algorithms are efficient, flexible, and have added tremendously to the capabilities of PAB3D. The mesh-sequencing algorithm makes it possible to perform preliminary computations using only a fraction of the grid cells (provided the original cell count is divisible by an integer) along any grid coordinate axis, independently of the other axes. The patch algorithm addresses another critical need in multi-block grid situations where the cell faces of adjacent grid blocks may not coincide, leading to errors in calculating fluxes of conserved physical quantities across interfaces between the blocks. The patch algorithm, based on the Stokes integral formulation of the applicable conservation laws, effectively matches each of the interfacial cells on one side of the block interface to the corresponding fractional cell area pieces on the other side. This approach is comprehensive and unified such that all interface topology is automatically processed without user intervention. This algorithm is implemented in a preprocessing code that creates a cell-by-cell database that will maintain flux conservation at any level of full or reduced grid density as the user may choose by way of the mesh-sequencing algorithm. These two algorithms have enhanced the numerical accuracy of the code, reduced the time and effort for grid preprocessing, and provided users with the flexibility of performing computations at any desired full or reduced grid resolution to suit their specific computational requirements.
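
    The mesh-sequencing idea, taking only a fraction of the cells along each grid coordinate axis independently, can be illustrated with a simple strided view of a block-structured field (the actual PAB3D preprocessing and flux-conservation bookkeeping are, of course, far more involved):

      import numpy as np

      def sequence_mesh(field, factors):
          # Keep every k-th cell along each axis independently; each original cell
          # count must be divisible by its sequencing factor.
          slices = []
          for n, k in zip(field.shape, factors):
              if n % k != 0:
                  raise ValueError("cell count must be divisible by the sequencing factor")
              slices.append(slice(None, None, k))
          return field[tuple(slices)]

      field = np.arange(4 * 6 * 8, dtype=float).reshape(4, 6, 8)
      coarse = sequence_mesh(field, (2, 3, 1))   # coarsen i by 2, j by 3, keep k full
      print(coarse.shape)                        # (2, 2, 8)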

  10. 3D meshes of carbon nanotubes guide functional reconnection of segregated spinal explants.

    PubMed

    Usmani, Sadaf; Aurand, Emily Rose; Medelin, Manuela; Fabbro, Alessandra; Scaini, Denis; Laishram, Jummi; Rosselli, Federica B; Ansuini, Alessio; Zoccolan, Davide; Scarselli, Manuela; De Crescenzi, Maurizio; Bosi, Susanna; Prato, Maurizio; Ballerini, Laura

    2016-07-01

    In modern neuroscience, significant progress in developing structural scaffolds integrated with the brain is provided by the increasing use of nanomaterials. We show that a multiwalled carbon nanotube self-standing framework, consisting of a three-dimensional (3D) mesh of interconnected, conductive, pure carbon nanotubes, can guide the formation of neural webs in vitro where the spontaneous regrowth of neurite bundles is molded into a dense random net. This morphology of the fiber regrowth shaped by the 3D structure supports the successful reconnection of segregated spinal cord segments. We further observed in vivo the adaptability of these 3D devices in a healthy physiological environment. Our study shows that 3D artificial scaffolds may drive local rewiring in vitro and hold great potential for the development of future in vivo interfaces.

  11. 3D meshes of carbon nanotubes guide functional reconnection of segregated spinal explants

    PubMed Central

    Usmani, Sadaf; Aurand, Emily Rose; Medelin, Manuela; Fabbro, Alessandra; Scaini, Denis; Laishram, Jummi; Rosselli, Federica B.; Ansuini, Alessio; Zoccolan, Davide; Scarselli, Manuela; De Crescenzi, Maurizio; Bosi, Susanna; Prato, Maurizio; Ballerini, Laura

    2016-01-01

    In modern neuroscience, significant progress in developing structural scaffolds integrated with the brain is provided by the increasing use of nanomaterials. We show that a multiwalled carbon nanotube self-standing framework, consisting of a three-dimensional (3D) mesh of interconnected, conductive, pure carbon nanotubes, can guide the formation of neural webs in vitro where the spontaneous regrowth of neurite bundles is molded into a dense random net. This morphology of the fiber regrowth shaped by the 3D structure supports the successful reconnection of segregated spinal cord segments. We further observed in vivo the adaptability of these 3D devices in a healthy physiological environment. Our study shows that 3D artificial scaffolds may drive local rewiring in vitro and hold great potential for the development of future in vivo interfaces. PMID:27453939

  12. Mesh saliency with adaptive local patches

    NASA Astrophysics Data System (ADS)

    Nouri, Anass; Charrier, Christophe; Lézoray, Olivier

    2015-03-01

    3D object shapes (represented by meshes) include both areas that attract the visual attention of human observers and other areas that are less attractive or not attractive at all. This visual attention depends on the degree of saliency exposed by these areas. In this paper, we propose a technique for detecting salient regions in meshes. To do so, we define a local surface descriptor based on local patches of adaptive size and filled with a local height field. The saliency of a mesh vertex is then defined as its degree measure, with edge weights computed from adaptive patch similarities. Our approach is compared to the state-of-the-art and presents competitive results. A study evaluating the influence of the parameters establishing this approach is also carried out. The strength and the stability of our approach with respect to noise and simplification are also studied.

  13. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.

  14. Adaptive Mesh Refinement in CTH

    SciTech Connect

    Crawford, David

    1999-05-04

    This paper reports progress on implementing a new capability of adaptive mesh refinement into the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable resolution non-adaptive calculations has been demonstrated for a number of problems.

  15. LayTracks3D: A new approach for meshing general solids using medial axis transform

    SciTech Connect

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  16. Improving segmentation of 3D touching cell nuclei using flow tracking on surface meshes.

    PubMed

    Li, Gang; Guo, Lei

    2012-01-01

    Automatic segmentation of touching cell nuclei in 3D microscopy images is of great importance in bioimage informatics and computational biology. This paper presents a novel method for improving 3D touching cell nuclei segmentation. Given binary touching nuclei by the method in Li et al. (2007), our method herein consists of several steps: surface mesh reconstruction and curvature information estimation; direction field diffusion on surface meshes; flow tracking on surface meshes; and projection of surface mesh segmentation to volumetric images. The method is validated on both synthesised and real 3D touching cell nuclei images, demonstrating its validity and effectiveness.

  17. 3D Mesh Segmentation Based on Markov Random Fields and Graph Cuts

    NASA Astrophysics Data System (ADS)

    Shi, Zhenfeng; Le, Dan; Yu, Liyang; Niu, Xiamu

    3D mesh segmentation has become an important research field in computer graphics during the past few decades. Many geometry-based and semantics-oriented approaches for 3D mesh segmentation have been presented. However, only a few algorithms based on Markov Random Fields (MRF) have been presented for 3D object segmentation. In this letter, we present a definition of mesh segmentation according to the labeling problem. Inspired by the capability of MRF to combine the geometric information and the topology information of a 3D mesh, we propose a novel 3D mesh segmentation model based on MRF and Graph Cuts. Experimental results show that our MRF-based scheme achieves effective segmentation.

  18. Hex-dominant mesh generation using 3D constrained triangulation

    SciTech Connect

    OWEN,STEVEN J.

    2000-05-30

    A method for decomposing a volume with a prescribed quadrilateral surface mesh into a hexahedral-dominated mesh is proposed. With this method, known as Hex-Morphing (H-Morph), an initial tetrahedral mesh is provided. Tetrahedra are transformed and combined starting from the boundary and working towards the interior of the volume. The quadrilateral faces of the hexahedra are treated as internal surfaces, which can be recovered using constrained triangulation techniques. Implementation details of the edge and face recovery process are included. Examples and performance of the H-Morph algorithm are also presented.

  19. A 3-D upwind Euler solver for unstructured meshes

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    1991-01-01

    A three-dimensional finite-volume upwind Euler solver is developed for unstructured meshes. The finite-volume scheme solves for solution variables at vertices of the mesh and satisfies the integral conservation law on nonoverlapping polyhedral control volumes surrounding vertices of the mesh. The scheme achieves improved solution accuracy by assuming a piecewise linear variation of the solution in each control volume. This improved spatial accuracy hinges heavily upon the calculation of the solution gradient in each control volume given pointwise values of the solution at vertices of the mesh. Several algorithms are discussed for obtaining these gradients. Details concerning implementation procedures and data structures are discussed. Sample calculations for inviscid Euler flow about isolated aircraft wings at subsonic and transonic speeds are compared with established Euler solvers as well as experiment.
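
    One of the gradient-reconstruction options alluded to above is a least-squares fit over the edge-connected neighbours of a vertex; a generic sketch follows. The paper discusses several such constructions (including Green-Gauss type formulas), and this snippet is not tied to any one of them.

      import numpy as np

      def least_squares_gradient(x0, u0, neighbors, u_neighbors):
          # Solve dX @ grad = du in the least-squares sense, where dX holds the edge
          # vectors to neighbouring vertices and du the corresponding value differences.
          dX = neighbors - x0
          du = u_neighbors - u0
          grad, *_ = np.linalg.lstsq(dX, du, rcond=None)
          return grad

      x0 = np.array([0.0, 0.0, 0.0])
      nbrs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
      u = lambda p: 2.0 * p[..., 0] - 3.0 * p[..., 1] + 0.5 * p[..., 2]
      print(least_squares_gradient(x0, u(x0), nbrs, u(nbrs)))   # ~ [ 2. -3.  0.5]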

  20. Adaptive Mesh Refinement for ICF Calculations

    NASA Astrophysics Data System (ADS)

    Fyfe, David

    2005-10-01

    This paper describes our use of the package PARAMESH to create an Adaptive Mesh Refinement (AMR) version of NRL's FASTRAD3D code. PARAMESH was designed to create an MPI-based AMR code from a block structured serial code such as FASTRAD3D. FASTRAD3D is a compressible hydrodynamics code containing the physical effects relevant for the simulation of high-temperature plasmas including inertial confinement fusion (ICF) Rayleigh-Taylor unstable direct drive laser targets. These effects include inverse bremsstrahlung laser energy absorption, classical flux-limited Spitzer thermal conduction, real (table look-up) equation-of-state with either separate or identical electron and ion temperatures, multi-group variable Eddington radiation transport, and multi-group alpha particle transport and thermonuclear burn. Numerically, this physics requires an elliptic solver and a ray tracing approach on the AMR grid, which is the main subject of this paper. A sample ICF calculation will be presented. MacNeice et al., "PARAMESH: A parallel adaptive mesh refinement community tool," Computer Physics Communications, 126 (2000), pp. 330-354.

  1. Shape design sensitivities using fully automatic 3-D mesh generation

    NASA Technical Reports Server (NTRS)

    Botkin, M. E.

    1990-01-01

    Previous work in three dimensional shape optimization involved specifying design variables by associating parameters directly with mesh points. More recent work has shown the use of fully-automatic mesh generation based upon a parameterized geometric representation. Design variables have been associated with a mathematical model of the part rather than the discretized representation. The mesh generation procedure uses a nonuniform grid intersection technique to place nodal points directly on the surface geometry. Although there exists an associativity between the mesh and the geometrical/topological entities, there is no mathematical functional relationship. This poses a problem during certain steps in the optimization process in which geometry modification is required. For the large geometrical changes which occur at the beginning of each optimization step, a completely new mesh is created. However, for gradient calculations many small changes must be made and it would be too costly to regenerate the mesh for each design variable perturbation. For that reason, a local remeshing procedure has been implemented which operates only on the specific edges and faces associated with the design variable being perturbed. Two realistic design problems are presented which show the efficiency of this process and test the accuracy of the gradient computations.

  2. A hierarchical structure for automatic meshing and adaptive FEM analysis

    NASA Technical Reports Server (NTRS)

    Kela, Ajay; Saxena, Mukul; Perucchio, Renato

    1987-01-01

    A new algorithm for generating automatically, from solid models of mechanical parts, finite element meshes that are organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work) is discussed. Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized and some results from an experimental closed loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively are presented. The implementation of 3-D work is briefly discussed.

  3. Mesh Convolutional Restricted Boltzmann Machines for Unsupervised Learning of Features With Structure Preservation on 3-D Meshes.

    PubMed

    Han, Zhizhong; Liu, Zhenbao; Han, Junwei; Vong, Chi-Man; Bu, Shuhui; Chen, Chun Long Philip

    2016-06-30

    Discriminative features of 3-D meshes are significant to many 3-D shape analysis tasks. However, handcrafted descriptors and traditional unsupervised 3-D feature learning methods suffer from several significant weaknesses: 1) extensive human intervention is involved; 2) the local and global structure information of 3-D meshes cannot be preserved, which is in fact an important source of discriminability; 3) the irregular vertex topology and arbitrary resolution of 3-D meshes do not allow the direct application of the popular deep learning models; 4) the orientation is ambiguous on the mesh surface; and 5) the effect of rigid and nonrigid transformations on 3-D meshes cannot be eliminated. As a remedy, we propose a deep learning model with a novel irregular model structure, called mesh convolutional restricted Boltzmann machines (MCRBMs). MCRBM aims to simultaneously learn structure-preserving local and global features from a novel raw representation, the local function energy distribution. In addition, multiple MCRBMs can be stacked into a deeper model, called mesh convolutional deep belief networks (MCDBNs). MCDBN employs a novel local structure preserving convolution (LSPC) strategy to convolve the geometry and the local structure learned by the lower MCRBM to the upper MCRBM. LSPC facilitates resolving the challenging issue of the orientation ambiguity on the mesh surface in MCDBN. Experiments using the proposed MCRBM and MCDBN were conducted on three common aspects: global shape retrieval, partial shape retrieval, and shape correspondence. Results show that the features learned by the proposed methods outperform the other state-of-the-art 3-D shape features.

  4. 3D unstructured-mesh radiation transport codes

    SciTech Connect

    Morel, J.

    1997-12-31

    Three unstructured-mesh radiation transport codes are currently being developed at Los Alamos National Laboratory. The first code is ATTILA, which uses an unstructured tetrahedral mesh in conjunction with standard Sn (discrete-ordinates) angular discretization, standard multigroup energy discretization, and linear-discontinuous spatial differencing. ATTILA solves the standard first-order form of the transport equation using source iteration in conjunction with diffusion-synthetic acceleration of the within-group source iterations. ATTILA is designed to run primarily on workstations. The second code is DANTE, which uses a hybrid finite-element mesh consisting of arbitrary combinations of hexahedra, wedges, pyramids, and tetrahedra. DANTE solves several second-order self-adjoint forms of the transport equation including the even-parity equation, the odd-parity equation, and a new equation called the self-adjoint angular flux equation. DANTE also offers three angular discretization options: Sn (discrete-ordinates), Pn (spherical harmonics), and SPn (simplified spherical harmonics). DANTE is designed to run primarily on massively parallel message-passing machines, such as the ASCI-Blue machines at LANL and LLNL. The third code is PERICLES, which uses the same hybrid finite-element mesh as DANTE, but solves the standard first-order form of the transport equation rather than a second-order self-adjoint form. PERICLES uses a standard Sn discretization in angle in conjunction with trilinear-discontinuous spatial differencing, and diffusion-synthetic acceleration of the within-group source iterations. PERICLES was initially designed to run on workstations, but a version for massively parallel message-passing machines will be built. The three codes will be described in detail and computational results will be presented.

  5. Isoparametric 3-D Finite Element Mesh Generation Using Interactive Computer Graphics

    NASA Technical Reports Server (NTRS)

    Kayrak, C.; Ozsoy, T.

    1985-01-01

    An isoparametric 3-D finite element mesh generator was developed with direct interface to an interactive geometric modeler program called POLYGON. POLYGON defines the model geometry in terms of boundaries and mesh regions for the mesh generator. The mesh generator controls the mesh flow through the 2-dimensional spans of regions by using the topological data and defines the connectivity between regions. The program is menu driven, and the user has control of element density and biasing through the spans and can also apply boundary conditions and loads interactively.

  6. A 3D moving mesh Finite Element Method for two-phase flows

    NASA Astrophysics Data System (ADS)

    Anjos, G. R.; Borhani, N.; Mangiavacchi, N.; Thome, J. R.

    2014-08-01

    A 3D ALE Finite Element Method is developed to study two-phase flow phenomena using a new discretization method to compute the surface tension forces. The computational method is based on the Arbitrary Lagrangian-Eulerian formulation (ALE) and the Finite Element Method (FEM), creating a two-phase method with an improved model for the liquid-gas interface. An adaptive mesh update procedure is also proposed for effective management of the mesh to remove, add and repair elements, since the computational mesh nodes move according to the flow. The ALE description explicitly defines the two-phase interface position by a set of interconnected nodes which ensures a sharp representation of the boundary, including the role of the surface tension. The proposed methodology for computing the curvature leads to accurate results with moderate programming effort and computational cost. Static and dynamic tests have been carried out to validate the method and the results have compared well to analytical solutions and experimental results found in the literature, demonstrating that the new proposed methodology provides good accuracy to describe the interfacial forces and bubble dynamics. This paper focuses on the description of the proposed methodology, with particular emphasis on the discretization of the surface tension force, the new remeshing technique, and the validation results. Additionally, a microchannel simulation in complex geometry is presented for two elongated bubbles.
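
    As a rough illustration of how a curvature-based surface tension force can be assembled on a triangulated interface, the Python sketch below uses the standard cotangent Laplace-Beltrami identity (force per vertex ≈ -σ L x applied to the vertex coordinates). This is a common textbook discretization offered only as a sketch; it is not claimed to be the discretization proposed in the paper.

      import numpy as np

      def surface_tension_forces(verts, faces, sigma=0.072):
          """Per-vertex surface tension forces via the cotangent Laplacian.

          verts: (n, 3) float array, faces: (m, 3) int array of triangles.
          """
          force = np.zeros_like(verts)
          for tri in faces:
              for k in range(3):
                  i, j, o = tri[k], tri[(k + 1) % 3], tri[(k + 2) % 3]
                  u, v = verts[i] - verts[o], verts[j] - verts[o]
                  # cotangent of the angle opposite edge (i, j)
                  cot = np.dot(u, v) / (np.linalg.norm(np.cross(u, v)) + 1e-12)
                  contrib = 0.5 * sigma * cot * (verts[j] - verts[i])
                  force[i] += contrib
                  force[j] -= contrib
          return force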

  7. Joint synchronization and high capacity data hiding for 3D meshes

    NASA Astrophysics Data System (ADS)

    Itier, Vincent; Puech, William; Gesquière, Gilles; Pedeboy, Jean-Pierre

    2015-03-01

    Three-dimensional (3-D) meshes are already profusely used in many domains. In this paper, we propose a new high capacity data hiding scheme for vertex clouds. Our approach is based on very small displacements of vertices, which produce very low distortion of the mesh. Moreover, this method can embed three bits per vertex relying only on the geometry of the mesh. As an application, we show how we embed a large binary logo for copyright purposes.
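
    A generic way to hide bits through sub-resolution vertex displacements is quantization index modulation (QIM) of the coordinates, sketched below in Python. This is only an illustrative stand-in for the idea of three bits per vertex via tiny geometric displacements; it is not the authors' scheme, and the step size is a hypothetical parameter.

      import numpy as np

      def embed_bits(verts, bits, step=1e-4):
          """Embed one bit per coordinate (up to 3 bits per vertex) by snapping
          each coordinate to the nearest quantizer of matching parity."""
          flat = verts.astype(float).reshape(-1)
          for k, b in enumerate(bits):
              q = np.round(flat[k] / step)
              if int(q) % 2 != int(b):
                  q += 1.0 if flat[k] / step >= q else -1.0
              flat[k] = q * step
          return flat.reshape(verts.shape)

      def extract_bits(verts, n_bits, step=1e-4):
          flat = verts.reshape(-1)
          return [int(np.round(v / step)) % 2 for v in flat[:n_bits]]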

  8. Iterative Mesh Transformation for 3D Segmentation of Livers with Cancers in CT Images

    PubMed Central

    Lu, Difei; Wu, Yin; Harris, Gordon; Cai, Wenli

    2015-01-01

    Segmentation of diseased liver remains a challenging task in clinical applications due to the high inter-patient variability in liver shapes, sizes and pathologies caused by cancers or other liver diseases. In this paper, we present a multi-resolution mesh segmentation algorithm for 3D segmentation of livers, called iterative mesh transformation, that deforms the mesh of a region-of-interest (ROI) in a progressive manner by iterations between mesh transformation and contour optimization. Mesh transformation deforms the 3D mesh based on the deformation transfer model that searches the optimal mesh based on the affine transformation subjected to a set of constraints of targeting vertices. In addition, contour optimization searches the optimal transversal contours of the ROI by applying the dynamic-programming algorithm to the intersection polylines of the 3D mesh on 2D transversal image planes. The initial constraint set for mesh transformation can be defined by a very small number of targeting vertices, namely landmarks, and progressively updated by adding the targeting vertices selected from the optimal transversal contours calculated in contour optimization. This iterative 3D mesh transformation constrained by 2D optimal transversal contours provides an efficient solution to a progressive approximation of the mesh of the targeting ROI. Based on this iterative mesh transformation algorithm, we developed a semi-automated scheme for segmentation of diseased livers with cancers using as few as five user-identified landmarks. The evaluation study demonstrates that this semi-automated liver segmentation scheme can achieve accurate and reliable segmentation results with significant reduction of interaction time and effort when dealing with diseased liver cases. PMID:25728595

  9. Iterative mesh transformation for 3D segmentation of livers with cancers in CT images.

    PubMed

    Lu, Difei; Wu, Yin; Harris, Gordon; Cai, Wenli

    2015-07-01

    Segmentation of diseased liver remains a challenging task in clinical applications due to the high inter-patient variability in liver shapes, sizes and pathologies caused by cancers or other liver diseases. In this paper, we present a multi-resolution mesh segmentation algorithm for 3D segmentation of livers, called iterative mesh transformation, that deforms the mesh of a region-of-interest (ROI) in a progressive manner by iterations between mesh transformation and contour optimization. Mesh transformation deforms the 3D mesh based on the deformation transfer model that searches the optimal mesh based on the affine transformation subjected to a set of constraints of targeting vertices. In addition, contour optimization searches the optimal transversal contours of the ROI by applying the dynamic-programming algorithm to the intersection polylines of the 3D mesh on 2D transversal image planes. The initial constraint set for mesh transformation can be defined by a very small number of targeting vertices, namely landmarks, and progressively updated by adding the targeting vertices selected from the optimal transversal contours calculated in contour optimization. This iterative 3D mesh transformation constrained by 2D optimal transversal contours provides an efficient solution to a progressive approximation of the mesh of the targeting ROI. Based on this iterative mesh transformation algorithm, we developed a semi-automated scheme for segmentation of diseased livers with cancers using as few as five user-identified landmarks. The evaluation study demonstrates that this semi-automated liver segmentation scheme can achieve accurate and reliable segmentation results with significant reduction of interaction time and effort when dealing with diseased liver cases.
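
    The "affine transformation subjected to a set of constraints of targeting vertices" can be pictured as a small least-squares problem. The Python sketch below fits a 3-D affine map from a handful of landmark correspondences; it is a hedged illustration of that single ingredient, not of the full deformation transfer model or the dynamic-programming contour optimization.

      import numpy as np

      def fit_affine(src_pts, dst_pts):
          """Least-squares affine map (A, t) with A @ src + t ~= dst."""
          src = np.asarray(src_pts, float)
          dst = np.asarray(dst_pts, float)
          X = np.hstack([src, np.ones((len(src), 1))])     # (n, 4)
          P, *_ = np.linalg.lstsq(X, dst, rcond=None)      # (4, 3)
          return P[:3].T, P[3]

      # Applying it to every mesh vertex (verts is an (n, 3) array):
      # A, t = fit_affine(landmark_sources, landmark_targets)   # hypothetical inputs
      # new_verts = verts @ A.T + t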

  10. LayTracks3D: A new approach for meshing general solids using medial axis transform

    DOE PAGES

    Quadros, William Roshan

    2015-08-22

    This study presents an extension of the all-quad meshing algorithm called LayTracks to generate high quality hex-dominant meshes of general solids. LayTracks3D uses the mapping between the Medial Axis (MA) and the boundary of the 3D domain to decompose complex 3D domains into simpler domains called Tracks. Tracks in 3D have no branches and are symmetric, non-intersecting, orthogonal to the boundary, and the shortest path from the MA to the boundary. These properties of tracks result in desired meshes with near cube shape elements at the boundary, structured mesh along the boundary normal with any irregular nodes restricted to the MA, and sharp boundary feature preservation. The algorithm has been tested on a few industrial CAD models and hex-dominant meshes are shown in the Results section. Work is underway to extend LayTracks3D to generate all-hex meshes.

  11. An Adaptive Mesh Algorithm: Mesh Structure and Generation

    SciTech Connect

    Scannapieco, Anthony J.

    2016-06-21

    The purpose of Adaptive Mesh Refinement is to minimize spatial errors over the computational space, not to minimize the number of computational elements. The additional result of the technique is that it may reduce the number of computational elements needed to retain a given level of spatial accuracy. Adaptive mesh refinement is a computational technique used to dynamically select, over a region of space, a set of computational elements designed to minimize spatial error in the computational model of a physical process. The fundamental idea is to increase the mesh resolution in regions where the physical variables are represented by a broad spectrum of modes in k-space, hence increasing the effective global spectral coverage of those physical variables. In addition, the selection of the spatially distributed elements is done dynamically by cyclically adjusting the mesh to follow the spectral evolution of the system. Over the years three types of AMR schemes have evolved: block, patch and locally refined AMR. In block and patch AMR, logical blocks of various grid sizes are overlaid to span the physical space of interest, whereas in locally refined AMR no logical blocks are employed but locally nested mesh levels are used to span the physical space. The distinction between block and patch AMR is that in block AMR the original blocks refine and coarsen entirely in time, whereas in patch AMR the patches change location and zone size with time. The type of AMR described herein is a locally refined AMR. In the algorithm described, at any point in physical space only one zone exists at whatever level of mesh is appropriate for that physical location. The dynamic creation of a locally refined computational mesh is made practical by a judicious selection of mesh rules. With these rules the mesh is evolved via a mesh potential designed to concentrate the finest mesh in regions where the physics is modally dense, and coarsen zones in regions where the physics is modally
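
    The core refine-where-the-solution-is-rich idea can be caricatured in a few lines. The Python sketch below flags 1-D cells with large solution jumps and bisects them; it is a generic illustration only, since the criterion described above is spectral (modal density in k-space) and is driven by a mesh potential, neither of which is reproduced here.

      import numpy as np

      def flag_cells(x, u, rel_tol=0.1):
          """Flag intervals whose solution jump exceeds a fraction of the range."""
          return np.abs(np.diff(u)) > rel_tol * (np.ptp(u) + 1e-300)

      def refine(x, flags):
          """One level of h-refinement: insert a midpoint in each flagged cell."""
          new_x = [x[0]]
          for i, f in enumerate(flags):
              if f:
                  new_x.append(0.5 * (x[i] + x[i + 1]))
              new_x.append(x[i + 1])
          return np.array(new_x)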

  12. Adaptive mesh refinement in titanium

    SciTech Connect

    Colella, Phillip; Wen, Tong

    2005-01-21

    In this paper, we evaluate Titanium's usability as a high-level parallel programming language through a case study, where we implement a subset of Chombo's functionality in Titanium. Chombo is a software package applying the Adaptive Mesh Refinement methodology to numerical Partial Differential Equations at the production level. In Chombo, a library approach to parallel programming is used (C++ and Fortran, with MPI), whereas Titanium is a Java dialect designed for high-performance scientific computing. The performance of our implementation is studied and compared with that of Chombo in solving Poisson's equation based on two grid configurations from a real application. Also provided are the counts of lines of code from both sides.

  13. Parallel tetrahedral mesh adaptation with dynamic load balancing

    SciTech Connect

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    2000-06-28

    The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D-TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D-TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  14. Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.

    1999-01-01

    The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.

  15. Floating shock fitting via Lagrangian adaptive meshes

    NASA Technical Reports Server (NTRS)

    Vanrosendale, John

    1995-01-01

    In recent work we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM), is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence.

  16. Hybrid Surface Mesh Adaptation for Climate Modeling

    SciTech Connect

    Ahmed Khamayseh; Valmor de Almeida; Glen Hansen

    2008-10-01

    Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called “mesh motion” (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.

  17. Adaptive fuzzy system for 3-D vision

    NASA Technical Reports Server (NTRS)

    Mitra, Sunanda

    1993-01-01

    An adaptive fuzzy system using the concept of the Adaptive Resonance Theory (ART) type neural network architecture and incorporating fuzzy c-means (FCM) system equations for reclassification of cluster centers was developed. The Adaptive Fuzzy Leader Clustering (AFLC) architecture is a hybrid neural-fuzzy system which learns on-line in a stable and efficient manner. The system uses a control structure similar to that found in the Adaptive Resonance Theory (ART-1) network to identify the cluster centers initially. The initial classification of an input takes place in a two-stage process: a simple competitive stage and a distance-metric comparison stage. The cluster prototypes are then incrementally updated by relocating the centroid positions using the Fuzzy c-Means (FCM) system equations for the centroids and the membership values. The operational characteristics of AFLC and the critical parameters involved in its operation are discussed. The performance of the AFLC algorithm is presented through application of the algorithm to the Anderson Iris data and laser-luminescent fingerprint image data. The AFLC algorithm successfully classifies features extracted from real data, discrete or continuous, indicating the potential strength of this new clustering algorithm in analyzing complex data sets. The hybrid neuro-fuzzy AFLC algorithm will enhance analysis of a number of difficult recognition and control problems involved with Tethered Satellite Systems and on-orbit space shuttle attitude controllers.
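
    The FCM system equations referenced above have a compact closed form. The Python sketch below performs one standard Bezdek-style update (memberships from distances, then fuzzily weighted centroids); it shows only those equations, not the ART-like control structure of AFLC.

      import numpy as np

      def fcm_update(X, centers, m=2.0):
          """One fuzzy c-means update step.

          X: (n, d) samples, centers: (c, d) current centroids, m: fuzzifier.
          """
          d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
          # membership u[s, j] = 1 / sum_k (d[s, j] / d[s, k]) ** (2 / (m - 1))
          u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
          w = u ** m
          new_centers = (w.T @ X) / w.sum(axis=0)[:, None]
          return u, new_centers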

  18. A structured multi-block solution-adaptive mesh algorithm with mesh quality assessment

    NASA Technical Reports Server (NTRS)

    Ingram, Clint L.; Laflin, Kelly R.; Mcrae, D. Scott

    1995-01-01

    The dynamic solution adaptive grid algorithm, DSAGA3D, is extended to automatically adapt 2-D structured multi-block grids, including adaption of the block boundaries. The extension is general, requiring only input data concerning block structure, connectivity, and boundary conditions. Imbedded grid singular points are permitted, but must be prevented from moving in space. Solutions for workshop cases 1 and 2 are obtained on multi-block grids and illustrate both increased resolution of and alignment with the solution. A mesh quality assessment criterion is proposed to determine how well a given mesh resolves and aligns with the solution obtained upon it. The criterion is used to evaluate the grid quality for solutions of workshop case 6 obtained on both static and dynamically adapted grids. The results indicate that this criterion shows promise as a means of evaluating resolution.

  19. Blind robust watermarking schemes for copyright protection of 3D mesh objects.

    PubMed

    Zafeiriou, Stefanos; Tefas, Anastasios; Pitas, Ioannis

    2005-01-01

    In this paper, two novel methods suitable for blind 3D mesh object watermarking applications are proposed. The first method is robust against 3D rotation, translation, and uniform scaling. The second one is robust against both geometric and mesh simplification attacks. A pseudorandom watermarking signal is cast in the 3D mesh object by deforming its vertices geometrically, without altering the vertex topology. Prior to watermark embedding and detection, the object is rotated and translated so that its center of mass and its principal component coincide with the origin and the z-axis of the Cartesian coordinate system. This geometrical transformation ensures watermark robustness to translation and rotation. Robustness to uniform scaling is achieved by restricting the vertex deformations to occur only along the r coordinate of the corresponding (r, theta, phi) spherical coordinate system. In the first method, a set of vertices that correspond to specific angles theta is used for watermark embedding. In the second method, the samples of the watermark sequence are embedded in a set of vertices that correspond to a range of angles in the theta domain in order to achieve robustness against mesh simplifications. Experimental results indicate the ability of the proposed method to deal with the aforementioned attacks.
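
    The key geometric ingredient, restricting the perturbation to the radial coordinate of an aligned model, can be sketched as follows in Python. The vertex selection, detector and strength used here are hypothetical placeholders; only the Cartesian-to-spherical round trip and the r-only displacement reflect the idea described above.

      import numpy as np

      def embed_in_radius(verts, bits, strength=1e-3):
          """Perturb only r of the (r, theta, phi) representation of selected
          vertices; assumes the model is already centered and aligned."""
          x, y, z = verts.T
          r = np.sqrt(x**2 + y**2 + z**2)
          theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))
          phi = np.arctan2(y, x)
          idx = np.arange(len(bits))                     # hypothetical vertex selection
          r[idx] *= 1.0 + strength * (2.0 * np.asarray(bits, float) - 1.0)
          return np.column_stack([r * np.sin(theta) * np.cos(phi),
                                  r * np.sin(theta) * np.sin(phi),
                                  r * np.cos(theta)])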

  20. Triangular mesh establishment of 3D laser scanning data based on ellipsoidal projection

    NASA Astrophysics Data System (ADS)

    Zheng, De-hua; Xu, Jia; Li, Jia; Wang, Xin-sen

    2011-10-01

    The establishment of a high quality triangular mesh is one of the key steps in 3D laser scanning data processing. Traditional triangulation algorithms have been proposed directly on the basis of adjacency relations between points in 3D space. However, when the point density is non-uniform or noise is present, problems such as surface holes, overlapping facets and inconsistent normals easily appear. In this paper, a triangular mesh establishing algorithm based on ellipsoidal projection is proposed. After comparing the theory of ellipsoidal projection and cylindrical projection, the proposed triangular mesh establishing algorithm is analyzed in detail, including the basic idea and the implementation method. To evaluate the performance and efficiency of the proposed algorithm, two experiments are then carried out on the 3D point cloud data of a foundation pit. The results indicate that though the computational efficiency of the proposed algorithm is slightly inferior to that of the algorithm based on cylindrical projection, the proposed algorithm is more effective for meshing point clouds covering both the top and the bottom of the object, and the original topological relations of the 3D scanned points are better maintained.

  1. Polymer-based mesh as supports for multi-layered 3D cell culture and assays.

    PubMed

    Simon, Karen A; Park, Kyeng Min; Mosadegh, Bobak; Subramaniam, Anand Bala; Mazzeo, Aaron D; Ngo, Philip M; Whitesides, George M

    2014-01-01

    Three-dimensional (3D) culture systems can mimic certain aspects of the cellular microenvironment found in vivo, but generation, analysis and imaging of current model systems for 3D cellular constructs and tissues remain challenging. This work demonstrates a 3D culture system, Cells-in-Gels-in-Mesh (CiGiM), that uses stacked sheets of polymer-based mesh to support cells embedded in gels to form tissue-like constructs; the stacked sheets can be disassembled by peeling the sheets apart to analyze cultured cells, layer by layer, within the construct. The mesh sheets leave openings large enough for light to pass through with minimal scattering, and thus allow multiple options for analysis: (i) straightforward analysis by optical light microscopy, (ii) high-resolution analysis with fluorescence microscopy, or (iii) analysis with a fluorescence gel scanner. The sheets can be patterned into separate zones with paraffin film-based decals in order to conduct multiple experiments in parallel; the paraffin-based decal films also block lateral diffusion of oxygen effectively. CiGiM simplifies the generation and analysis of 3D culture without compromising throughput and quality of the data collected: it is especially useful in experiments that require control of oxygen levels and isolation of adjacent wells in a multi-zone format.

  2. Polymer-Based Mesh as Supports for Multi-layered 3D Cell Culture and Assays

    PubMed Central

    Simon, Karen A.; Park, Kyeng Min; Mosadegh, Bobak; Subramaniam, Anand Bala; Mazzeo, Aaron; Ngo, Phil M.; Whitesides, George M.

    2013-01-01

    Three-dimensional (3D) culture systems can mimic certain aspects of the cellular microenvironment found in vivo, but generation, analysis and imaging of current model systems for 3D cellular constructs and tissues remain challenging. This work demonstrates a 3D culture system – Cells-in-Gels-in-Mesh (CiGiM) – that uses stacked sheets of polymer-based mesh to support cells embedded in gels to form tissue-like constructs; the stacked sheets can be disassembled by peeling the sheets apart to analyze cultured cells—layer-by-layer—within the construct. The mesh sheets leave openings large enough for light to pass through with minimal scattering, and thus allowing multiple options for analysis—(i) using straightforward analysis by optical light microscopy, (ii) by high-resolution analysis with fluorescence microscopy, or (iii) with a fluorescence gel scanner. The sheets can be patterned into separate zones with paraffin film-based decals, in order to conduct multiple experiments in parallel; the paraffin-based decal films also block lateral diffusion of oxygen effectively. CiGiM simplifies the generation and analysis of 3D culture without compromising throughput, and quality of the data collected: it is especially useful in experiments that require control of oxygen levels, and isolation of adjacent wells in a multi-zone format. PMID:24095253

  3. Fruit bruise detection based on 3D meshes and machine learning technologies

    NASA Astrophysics Data System (ADS)

    Hu, Zilong; Tang, Jinshan; Zhang, Ping

    2016-05-01

    This paper studies bruise detection in apples using 3-D imaging. Bruise detection based on 3-D imaging overcomes many limitations of bruise detection based on 2-D imaging, such as low accuracy and sensitivity to lighting conditions. In this paper, apple bruise detection is divided into two parts: feature extraction and classification. For feature extraction, we use a framework that can directly extract local binary patterns from mesh data. For classification, we study support vector machines. Bruise detection using 3-D imaging is compared with bruise detection using 2-D imaging. 10-fold cross validation is used to evaluate the performance of the two systems. Experimental results show that bruise detection using 3-D imaging can achieve better classification accuracy than bruise detection based on 2-D imaging.
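
    The classification stage described above (support vector machine evaluated by 10-fold cross validation) can be sketched with scikit-learn as below. The feature matrix here is a random placeholder standing in for the mesh local-binary-pattern histograms; the actual feature extraction from 3-D mesh data is not reproduced.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      features = rng.normal(size=(200, 59))        # placeholder LBP histograms
      labels = rng.integers(0, 2, size=200)        # placeholder bruised / sound labels

      clf = SVC(kernel="rbf", C=1.0, gamma="scale")
      scores = cross_val_score(clf, features, labels, cv=10)   # 10-fold cross validation
      print("mean accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))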

  4. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    PubMed Central

    Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes. PMID:27019849

  5. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    PubMed

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.
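
    The cost being minimized can be written down compactly. The Python sketch below expresses the objective described above, i.e. the squared difference between DRRs of the deformed volume and the measured projections plus a smoothness penalty on the DVF; here drr and deform stand for user-supplied ray-casting and warping routines and are assumptions, not an existing API.

      import numpy as np

      def registration_objective(dvf, ct, projections, drr, deform, lam=0.01):
          """dvf: (..., 3) displacement field; projections: iterable of
          (angle, 2-D image) pairs; drr/deform: user-supplied callables."""
          warped = deform(ct, dvf)
          data = sum(np.sum((drr(warped, angle) - proj) ** 2)
                     for angle, proj in projections)
          smooth = sum(np.sum(np.gradient(dvf[..., c], axis=a) ** 2)
                       for c in range(dvf.shape[-1])
                       for a in range(dvf.ndim - 1))
          return data + lam * smooth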

  6. 3D unstructured mesh ALE hydrodynamics with the upwind discontinuous galerkin method

    SciTech Connect

    Kershaw, D S; Milovich, J L; Prasad, M K; Shaw, M J; Shestakov, A I

    1999-05-07

    The authors describe a numerical scheme to solve 3D Arbitrary Lagrangian-Eulerian (ALE) hydrodynamics on an unstructured mesh using a discontinuous Galerkin method (DGM) and an explicit Runge-Kutta time discretization. Upwinding is achieved through Roe's linearized Riemann solver with the Harten-Hyman entropy fix. For stabilization, a 3D quadratic programming generalization of van Leer's 1D minmod slope limiter is used along with a Lapidus type artificial viscosity. This DGM scheme has been tested on a variety of hydrodynamic test problems and appears to be robust, making it the basis for the integrated 3D inertial confinement fusion modeling code (ICF3D). For efficient code development, they use C++ object oriented programming to easily separate the complexities of an unstructured mesh from the basic physics modules. ICF3D is fully parallelized using domain decomposition and the MPI message passing library. It is fully portable. It runs on uniprocessor workstations and massively parallel platforms with distributed and shared memory.
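
    For reference, van Leer's 1-D minmod limiter mentioned above has a one-line form, sketched below in Python; the 3-D quadratic-programming generalization used in the paper is a different, more involved construction and is not shown.

      import numpy as np

      def minmod(a, b):
          """Return the smaller-magnitude slope when a and b share a sign, else 0."""
          return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)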

  7. Rare meshes FEM scheme for quasi-stationary electromagnetic fields determination 3D problems

    NASA Astrophysics Data System (ADS)

    Chekmarev, D. T.; Kalinin, A. V.; Sadovsky, V. V.; Tiukhtina, A. A.

    2016-11-01

    The initial-boundary value problem for the quasi-stationary magnetic approximation of the Maxwell equations in inhomogeneous media is studied. The considered problem is reduced to the variational problem of determining the vector magnetic potential. A special gauge for the vector magnetic and scalar electric potentials is used. The well-posedness of the problems is established under general conditions on the coefficients, and the applicability of projection methods for these problems is validated. For the numerical solution, an efficient rare-mesh FEM scheme for 3D problems is used. This scheme is well proven in solving 3D elasticity and plasticity problems.

  8. Semantic segmentation of 3D textured meshes for urban scene analysis

    NASA Astrophysics Data System (ADS)

    Rouhani, Mohammad; Lafarge, Florent; Alliez, Pierre

    2017-01-01

    Classifying 3D measurement data has become a core problem in photogrammetry and 3D computer vision, since the rise of modern multiview geometry techniques, combined with affordable range sensors. We introduce a Markov Random Field-based approach for segmenting textured meshes generated via multi-view stereo into urban classes of interest. The input mesh is first partitioned into small clusters, referred to as superfacets, from which geometric and photometric features are computed. A random forest is then trained to predict the class of each superfacet as well as its similarity with the neighboring superfacets. Similarity is used to assign the weights of the Markov Random Field pairwise-potential and to account for contextual information between the classes. The experimental results illustrate the efficacy and accuracy of the proposed framework.
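
    The per-superfacet classification step can be sketched with scikit-learn as below; the resulting class probabilities would then serve as the unary potentials of the Markov Random Field. Feature values, label set and array shapes are placeholders, and neither the superfacet construction nor the pairwise term is reproduced.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(1)
      superfacet_features = rng.normal(size=(500, 20))    # geometric + photometric features
      superfacet_labels = rng.integers(0, 4, size=500)    # e.g. ground / facade / roof / vegetation

      rf = RandomForestClassifier(n_estimators=100, random_state=0)
      rf.fit(superfacet_features, superfacet_labels)
      unary = rf.predict_proba(superfacet_features)       # MRF unary potentials per superfacet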

  9. Metal-mesh based transparent electrode on a 3-D curved surface by electrohydrodynamic jet printing

    NASA Astrophysics Data System (ADS)

    Seong, Baekhoon; Yoo, Hyunwoong; Dat Nguyen, Vu; Jang, Yonghee; Ryu, Changkook; Byun, Doyoung

    2014-09-01

    Invisible Ag mesh transparent electrodes (TEs), with a width of 7 μm, were prepared on a curved glass surface by electrohydrodynamic (EHD) jet printing. With a 100 μm pitch, the EHD-jet-printed Ag mesh on the convex glass had a sheet resistance of 1.49 Ω/□. The printing speed was 30 cm s-1 using Ag ink, which had a 10 000 cPs viscosity and a 70 wt% Ag nanoparticle concentration. We further showed the performance of a 3-D transparent heater using the Ag mesh transparent electrode. The EHD jet printed an invisible Ag grid transparent electrode with good electrical and optical properties, with promising applications in printed optoelectronic devices.

  10. Unstructured 3D Delaunay mesh generation applied to planes, trains and automobiles

    NASA Technical Reports Server (NTRS)

    Blake, Kenneth R.; Spragle, Gregory S.

    1993-01-01

    Technical issues associated with domain-tessellation production, including initial boundary node triangulation and volume mesh refinement, are presented for the 'TGrid' 3D Delaunay unstructured grid generation program. The approach employed is noted to be capable of preserving predefined triangular surface facets in the final tessellation. The capabilities of the approach are demonstrated by generating grids about an entire fighter aircraft configuration, a train, and a wind tunnel model of an automobile.

  11. An adaptive learning approach for 3-D surface reconstruction from point clouds.

    PubMed

    Junior, Agostinho de Medeiros Brito; Neto, Adrião Duarte Dória; de Melo, Jorge Dantas; Goncalves, Luiz Marcos Garcia

    2008-06-01

    In this paper, we propose a multiresolution approach for surface reconstruction from clouds of unorganized points representing an object surface in 3-D space. The proposed method uses a set of mesh operators and simple rules for selective mesh refinement, with a strategy based on Kohonen's self-organizing map (SOM). Basically, a self-adaptive scheme is used for iteratively moving vertices of an initial simple mesh in the direction of the set of points, ideally the object boundary. Successive refinement and motion of vertices are applied, leading to a more detailed surface in a multiresolution, iterative scheme. Reconstruction was tested with several point sets of different shapes and sizes. Results show generated meshes very close to the final object shapes. We include measures of performance and discuss robustness.
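
    The basic SOM-style vertex motion can be sketched as below in Python: each sample point pulls its nearest mesh vertex (and, more weakly, that vertex's neighbours) toward itself. This is only the generic Kohonen update, not the paper's specific mesh operators or refinement rules; the learning rate, neighbourhood width and adjacency structure are assumptions.

      import numpy as np

      def som_pull_step(verts, points, adjacency, lr=0.1, sigma=2.0):
          """One pass over the point cloud; adjacency maps vertex -> neighbour list."""
          neigh_gain = lr * np.exp(-1.0 / (2.0 * sigma ** 2))
          for p in points:
              w = np.argmin(np.linalg.norm(verts - p, axis=1))   # winning vertex
              verts[w] += lr * (p - verts[w])
              for n in adjacency.get(w, []):
                  verts[n] += neigh_gain * (p - verts[n])
          return verts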

  12. Adaptive Mesh Refinement for Microelectronic Device Design

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Lou, John; Norton, Charles

    1999-01-01

    Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of

  13. Grid adaption using Chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
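
    The trilinear interpolation used to pass data between the overset grids is the standard eight-point formula, sketched below in Python for a unit-spaced donor grid; locating the donor cell and handling grid boundaries are omitted.

      import numpy as np

      def trilinear(field, x, y, z):
          """Interpolate field at fractional index coordinates (x, y, z)."""
          i, j, k = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
          fx, fy, fz = x - i, y - j, z - k
          c = field[i:i + 2, j:j + 2, k:k + 2]     # the 8 surrounding donor values
          c = c[0] * (1 - fx) + c[1] * fx          # collapse x
          c = c[0] * (1 - fy) + c[1] * fy          # collapse y
          return c[0] * (1 - fz) + c[1] * fz       # collapse z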

  14. Grid adaptation using chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1994-01-01

    The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Application to the Euler equations for shock reflections and to shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.

  15. Grid adaptation using Chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.

  16. Dubai 3d Textured Mesh Using High Quality Resolution Vertical/oblique Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Tayeb Madani, Adib; Ziad Ahmad, Abdullateef; Christoph, Lueken; Hammadi, Zamzam; Manal Abdullah Sabeal, Manal Abdullah x.

    2016-06-01

    Providing 3D data of high quality at reasonable cost has always been essential, as such data form the core and foundation for developing information-based decision-making tools for urban environments, capable of providing decision makers, stakeholders, professionals, and public users with 3D views and 3D analysis tools of spatial information that enable real-world views. Such tools help improve users' orientation and increase their efficiency in performing tasks related to city planning, inspection, infrastructure, roads, and cadastre management. In this paper, the capability of multi-view Vexcel UltraCam Osprey camera images is examined to provide a 3D model of building façades using an efficient image-based modeling workflow adopted by commercial software. The main steps of this work include specification, point cloud generation, and 3D modeling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the images to generate a point cloud. Then, a mesh model of the points is calculated and refined to obtain an accurate model of the buildings. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient LoD2 detail of the buildings based on visual assessment. The objective of this paper is neither to compare nor to promote a specific technique over another, nor to promote one sensor-based system over other systems or mechanisms presented in existing work. The idea is to share experience.

  17. TRANSL8GDECIM8. Data Translation and Filtering for Large 3D Triangle Mesh Models

    SciTech Connect

    Janucik, F.X.; Ross, D.M.

    1993-09-01

    The TRANSL8GDECIM8 system consists of two programs: TRANSL8G and DECIM8. The TRANSL8G program facilitates the interchange, topology generation, error checking, and enhancement of large 3D triangle meshes. Such data is frequently used to represent conceptual designs, scientific visualization volume modeling, or discrete sample data. Interchange is provided between several popular commercial and de facto standard geometry formats. Error checking is included to identify duplicate and zero area triangles. Model enhancement features include common vertex joining, consistent triangle vertex ordering, vertex normal vector averaging, and triangle strip generation. Many of the traditional O(n²) algorithms required to provide the above features have been recast and are O(n), which supports large mesh sizes. The DECIM8 program is based on a data filter algorithm that significantly reduces the number of triangles required to represent three dimensional (3D) models of geometry, scientific visualization results, and discretely sampled data. The algorithm uses a combined incremental and iterative strategy. It eliminates local patches of triangles whose geometries are not appreciably different and replaces them with fewer larger triangles. The algorithm has been used to reduce triangles in large conceptual design models to facilitate virtual walk-throughs and to enable interactive viewing of large 3D iso-surface volume visualizations.

  18. An Adaptive Mesh Algorithm: Mapping the Mesh Variables

    SciTech Connect

    Scannapieco, Anthony J.

    2016-07-25

    Both thermodynamic and kinematic variables must be mapped. The kinematic variables are defined on a separate kinematic mesh; it is the dual mesh to the thermodynamic mesh. The map of the kinematic variables is done by calculating the contributions of kinematic variables on the old thermodynamic mesh, mapping the kinematic variable contributions onto the new thermodynamic mesh and then synthesizing the mapped kinematic variables on the new kinematic mesh. In this document the map of the thermodynamic variables will be described.

  19. An overset mesh approach for 3D mixed element high-order discretizations

    NASA Astrophysics Data System (ADS)

    Brazell, Michael J.; Sitaraman, Jayanarayanan; Mavriplis, Dimitri J.

    2016-10-01

    A parallel high-order Discontinuous Galerkin (DG) method is used to solve the compressible Navier-Stokes equations in an overset mesh framework. The DG solver has many capabilities including: hp-adaption, curved cells, support for hybrid, mixed-element meshes, and moving meshes. Combining these capabilities with overset grids allows the DG solver to be used in problems with bodies in relative motion and in a near-body off-body solver strategy. The overset implementation is constructed to preserve the design accuracy of the baseline DG discretization. Multiple simulations are carried out to validate the accuracy and performance of the overset DG solver. These simulations demonstrate the capability of the high-order DG solver to handle complex geometry and large scale parallel simulations in an overset framework.

  20. An efficient 3D traveltime calculation using coarse-grid mesh for shallow-depth source

    NASA Astrophysics Data System (ADS)

    Son, Woohyun; Pyun, Sukjoon; Lee, Ho-Young; Koo, Nam-Hyung; Shin, Changsoo

    2016-10-01

    3D Kirchhoff pre-stack depth migration requires an efficient algorithm to compute first-arrival traveltimes. In this paper, we exploited a wave-equation-based traveltime calculation algorithm, called the suppressed wave equation estimation of traveltime (SWEET), and the equivalent source distribution (ESD) algorithm. The motivation for using the SWEET algorithm is to solve the Laplace-domain wave equation using coarse grid spacing to calculate first-arrival traveltimes. However, if a real source is located at shallow depth close to the free surface, we cannot accurately calculate the wavefield using coarse grid spacing. So, we need an additional algorithm to correctly simulate the shallow source even on a coarse grid mesh. The ESD algorithm is a method to define a set of distributed nodal sources that approximate a point source at an inter-nodal location in a velocity model with large grid spacing. Thanks to the ESD algorithm, we can efficiently calculate the first-arrival traveltimes of waves emitted from a shallow source point even when we solve the Laplace-domain wave equation using a coarse-grid mesh. The proposed algorithm is applied to the SEG/EAGE 3D salt model. From the result, we note that the combination of the SWEET and ESD algorithms can be successfully used for traveltime calculation under the condition of a shallow-depth source. We also confirmed that our algorithm using a coarse-grid mesh requires less computational time than the conventional SWEET algorithm using a relatively fine-grid mesh.

  1. Refining 3D Earth models by unifying geological and geophysical information on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Lelièvre, P. G.; Carter-McAuslan, A.; Tycholiz, C.; Farquharson, C. G.; Hurich, C. A.

    2012-04-01

    Earth models used for mineral exploration or other subsurface investigations should be consistent with all available geological and geophysical information. Geophysical inversion provides the means to integrate geological information, geophysical survey data, and physical property measurements taken on rock samples. Incorporation of geological information into inversions is always an iterative process. One begins with the geologists' best guess about the Earth (i.e. the geological model) and the models recovered from geophysical inversion may indicate that the geological model should be changed slightly prior to the next iteration of the procedure. In this way, geological and geophysical data can be combined through inversion and we can move towards the creation of a common Earth model consistent with all the available data. As more information is incorporated, the inherent non-uniqueness of the inverse problem is reduced, yielding a higher potential to resolve deeper features that are less well-constrained by the geophysical data alone. Geological ore deposit models are commonly created during delineation drilling. The accuracy of these models is crucial when used to determine if a deposit is economic. 3D geological Earth models typically comprise wireframe surfaces that represent the geological contacts between different rock units. The contacts may be known at points from down-hole intersections and surface mapping, and can be interpolated between boreholes and extrapolated outwards. Contacts may also be interpreted from seismic traces. Wireframe surfaces, comprising tessellated triangular facets, are sufficiently flexible to allow the representation of arbitrarily complicated geological structures. These surfaces can be honoured exactly within fully unstructured 3D volumetric tetrahedral meshes. In contrast, geophysical forward modelling and inversion algorithms typically work with rectilinear meshes when parameterizing the subsurface because this simplifies

  2. Adapting 3D Equilibrium Reconstruction to Reconstruct Weakly 3D H-mode Tokamaks

    NASA Astrophysics Data System (ADS)

    Cianciosa, M. R.; Hirshman, S. P.; Seal, S. K.; Unterberg, E. A.; Wilcox, R. S.; Wingen, A.; Hanson, J. D.

    2015-11-01

    The application of resonant magnetic perturbations for edge localized mode (ELM) mitigation breaks the toroidal symmetry of tokamaks. In these scenarios, the axisymmetric assumptions of the Grad-Shafranov equation no longer apply. By extension, equilibrium reconstruction tools, built around these axisymmetric assumptions, are insufficient to fully reconstruct a 3D perturbed equilibrium. 3D reconstruction tools typically work on systems where the 3D components of signals are a significant component of the input signals. In nominally axisymmetric systems, applied field perturbations can be on the order of 1% of the main field or less. To reconstruct these equilibria, the 3D component of signals must be isolated from the axisymmetric portions to provide the necessary information for reconstruction. This presentation will report on the adaptation to V3FIT for application on DIII-D H-mode discharges with applied resonant magnetic perturbations (RMPs). Newly implemented motional stark effect signals and modeling of electric field effects will also be discussed. Work supported under U.S. DOE Cooperative Agreement DE-AC05-00OR22725.

  3. 3-D adaptive nonlinear complex-diffusion despeckling filter.

    PubMed

    Rodrigues, Pedro; Bernardes, Rui

    2012-12-01

    This work aims to improve the process of speckle noise reduction while preserving edges and other relevant features through filter expansion from 2-D to 3-D. Despeckling is very important for data visual inspection and as a preprocessing step for other algorithms, as they are usually notably influenced by speckle noise. To that end, a 3-D approach is proposed for the adaptive complex-diffusion filter. This 3-D iterative filter was applied to spectral-domain optical coherence tomography medical imaging volumes of the human retina, and a quantitative evaluation of the results was performed to demonstrate the better performance of the 3-D over the 2-D filtering and to choose the best total diffusion time. In addition, we propose a fast graphical processing unit parallel implementation so that the filter can be used in a clinical setting.
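
    A minimal 3-D version of a nonlinear complex-diffusion iteration (in the common Gilboa-style form) is sketched below with NumPy. The parameter values are hypothetical, the explicit update is the simplest possible time stepping, and no claim is made that this matches the adaptive variant or the GPU implementation of the paper.

      import numpy as np

      def complex_diffusion_3d(vol, n_iter=20, dt=0.1, k=2.0, theta=np.pi / 30):
          """Explicit nonlinear complex diffusion on a 3-D volume (float array)."""
          I = vol.astype(np.complex128)
          for _ in range(n_iter):
              # diffusivity shrinks where the imaginary part (an edge detector) is large
              d = np.exp(1j * theta) / (1.0 + (np.imag(I) / (k * theta)) ** 2)
              grads = np.gradient(I)                       # list of 3 arrays
              div = sum(np.gradient(d * g, axis=a) for a, g in enumerate(grads))
              I = I + dt * div
          return np.real(I)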

  4. Unstructured Adaptive Meshes: Bad for Your Memory?

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob

    2003-01-01

    This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error source benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.

  5. Multigrid solution strategies for adaptive meshing problems

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1995-01-01

    This paper discusses the issues which arise when combining multigrid strategies with adaptive meshing techniques for solving steady-state problems on unstructured meshes. A basic strategy is described, and demonstrated by solving several inviscid and viscous flow cases. Potential inefficiencies in this basic strategy are exposed, and various alternate approaches are discussed, some of which are demonstrated with an example. Although each particular approach exhibits certain advantages, all methods have particular drawbacks, and the formulation of a completely optimal strategy is considered to be an open problem.

  6. Robust and Blind 3D Mesh Watermarking in Spatial Domain Based on Faces Categorization and Sorting

    NASA Astrophysics Data System (ADS)

    Molaei, Amir Masoud; Ebrahimnezhad, Hossein; Sedaaghi, Mohammad Hossein

    2016-06-01

    In this paper, a 3D watermarking algorithm in the spatial domain with blind detection is presented. In the proposed method, negligible visual distortion is observed in the host model. Initially, a preprocessing is applied on the 3D model to make it robust against geometric transformation attacks. Then, a number of triangle faces are determined as mark triangles using a novel systematic approach in which faces are categorized and sorted robustly. In order to enhance the capability of retrieving the information after attacks, block watermarks are encoded using a Reed-Solomon block error-correcting code before embedding into the mark triangles. Next, the encoded watermarks are embedded in spherical coordinates. The proposed method is robust against additive noise, mesh smoothing and quantization attacks. It is also robust against geometric transformation and vertex and face reordering attacks. Moreover, the proposed algorithm is designed so that it is robust against the cropping attack. Simulation results confirm that the watermarked models exhibit very low distortion if the control parameters are selected properly. Comparison with other methods demonstrates that the proposed method has good performance against mesh smoothing attacks.

  7. Robust Detection of Round Shaped Pits Lying on 3D Meshes: Application to Impact Crater Recognition

    NASA Astrophysics Data System (ADS)

    Schmidt, Martin-Pierre; Muscato, Jennifer; Viseur, Sophie; Jorda, Laurent; Bouley, Sylvain; Mari, Jean-Luc

    2015-04-01

    Most celestial bodies display impacts of collisions with asteroids and meteoroids. These traces are called craters. The possibility of observing and identifying these craters and their characteristics (radius, depth and morphology) is the only method available to measure the age of different units at the surface of the body, which in turn allows its conditions of formation to be constrained. Interplanetary space probes always carry at least one imaging instrument on board. The visible images of the target are used to reconstruct high-resolution 3D models of its surface as a cloud of points in the case of multi-image dense stereo, or as a triangular mesh in the case of stereo and shape-from-shading. The goal of this work is to develop a methodology to automatically detect the craters lying on these 3D models. The robust extraction of feature areas on surface objects embedded in 3D, like circular pits, is a challenging problem. Classical approaches generally rely on image processing and template matching on a 2D flat projection of the 3D object (i.e. a high-resolution photograph). In this work, we propose a full-3D method that mainly relies on curvature analysis. Mean and Gaussian curvatures are estimated on the surface. They are used to label vertices that belong to concave parts corresponding to specific pits on the surface. The surface is thus transformed into a binary map distinguishing potential crater features from other types of features. Centers are located in the targeted surface regions corresponding to potential crater features. Concentric rings are then built around the found centers. They consist of closed circular lines composed exclusively of edges of the initial mesh. The first built ring represents the nearest vertex neighborhood of the found center. The ring is then optimally expanded using a circularity constraint and the curvature values of the ring vertices. This method has been tested on a 3D model of the asteroid Lutetia observed by the ROSETTA (ESA
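
    The vertex-labelling step (keep the concave, pit-like vertices) can be sketched with the trimesh library as below. The input file name, ball radius and threshold are assumptions, the sign convention of the curvature measure should be checked for the mesh at hand, and the later ring-growing stage is not reproduced.

      import numpy as np
      import trimesh
      from trimesh.curvature import discrete_mean_curvature_measure

      mesh = trimesh.load("asteroid.ply")                      # hypothetical input model
      radius = 2.0 * mesh.edges_unique_length.mean()           # local averaging radius
      H = discrete_mean_curvature_measure(mesh, mesh.vertices, radius)
      concave = H < -np.std(H)                                 # candidate pit vertices
      print("candidate crater vertices:", int(concave.sum()))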

  8. Cartesian-cell based grid generation and adaptive mesh refinement

    NASA Technical Reports Server (NTRS)

    Coirier, William J.; Powell, Kenneth G.

    1993-01-01

    Viewgraphs on Cartesian-cell based grid generation and adaptive mesh refinement are presented. Topics covered include: grid generation; cell cutting; data structures; flow solver formulation; adaptive mesh refinement; and viscous flow.

  9. GRChombo: Numerical relativity with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Clough, Katy; Figueras, Pau; Finkel, Hal; Kunesch, Markus; Lim, Eugene A.; Tunyasuvunakool, Saran

    2015-12-01

    In this work, we introduce {\\mathtt{GRChombo}}: a new numerical relativity code which incorporates full adaptive mesh refinement (AMR) using block structured Berger-Rigoutsos grid generation. The code supports non-trivial ‘many-boxes-in-many-boxes’ mesh hierarchies and massive parallelism through the message passing interface. {\\mathtt{GRChombo}} evolves the Einstein equation using the standard BSSN formalism, with an option to turn on CCZ4 constraint damping if required. The AMR capability permits the study of a range of new physics which has previously been computationally infeasible in a full 3 + 1 setting, while also significantly simplifying the process of setting up the mesh for these problems. We show that {\\mathtt{GRChombo}} can stably and accurately evolve standard spacetimes such as binary black hole mergers and scalar collapses into black holes, demonstrate the performance characteristics of our code, and discuss various physics problems which stand to benefit from the AMR technique.

  10. Floating shock fitting via Lagrangian adaptive meshes

    NASA Technical Reports Server (NTRS)

    Vanrosendale, John

    1994-01-01

    In recent works we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM) is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence. Shock-capturing algorithms like this, which warp the mesh to yield shock-fitted accuracy, are new and relatively untried. However, their potential is clear. In the context of sonic booms, accurate calculation of near-field sonic boom signatures is critical to the design of the High Speed Civil Transport (HSCT). SLAM should allow computation of accurate N-wave pressure signatures on comparatively coarse meshes, significantly enhancing our ability to design low-boom configurations for high-speed aircraft.

  11. Details of tetrahedral anisotropic mesh adaptation

    NASA Astrophysics Data System (ADS)

    Jensen, Kristian Ejlebjerg; Gorman, Gerard

    2016-04-01

    We have implemented tetrahedral anisotropic mesh adaptation using the local operations of coarsening, swapping, refinement and smoothing in MATLAB without the use of any for-loops, i.e. the script is fully vectorised. In the process of doing so, we have made three observations related to details of the implementation: 1. restricting refinement to a single edge split per element not only simplifies the code, it also improves mesh quality; 2. face-to-edge swapping is unnecessary; and 3. optimising for the Vassilevski functional tends to give a slightly higher value of the mean condition number functional than optimising for the condition number functional directly. These observations have been made for a uniform and a radial shock metric field, both starting from a structured mesh in a cube. Finally, we compare two coarsening techniques and demonstrate the importance of applying smoothing in the mesh adaptation loop. The results pertain to a unit cube geometry, but we also show the effect of corners and edges by applying the implementation in a spherical geometry.
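
    A flavour of the fully vectorised style can be given with a small sketch that evaluates a condition-number quality for every tetrahedron at once (in NumPy rather than MATLAB). The reference-element matrix and the sign convention are conventional choices, not taken from the paper, and exactly singular elements are assumed absent.

        import numpy as np

        # Edge matrix of the reference equilateral tetrahedron (columns are edge vectors);
        # a conventional choice, not taken from the paper.
        W = np.array([[1.0, 0.5, 0.5],
                      [0.0, np.sqrt(3.0) / 2.0, np.sqrt(3.0) / 6.0],
                      [0.0, 0.0, np.sqrt(2.0 / 3.0)]])
        WINV = np.linalg.inv(W)

        def tet_condition_quality(verts, tets):
            # Condition-number quality in (0, 1] for every tetrahedron at once;
            # values <= 0 flag inverted elements. No Python loop over elements.
            p = verts[tets]                                     # (n, 4, 3)
            A = np.stack((p[:, 1] - p[:, 0],
                          p[:, 2] - p[:, 0],
                          p[:, 3] - p[:, 0]), axis=2)           # (n, 3, 3), columns = edges
            S = A @ WINV                                        # map from the reference element
            kappa = np.linalg.norm(S, axis=(1, 2)) * np.linalg.norm(np.linalg.inv(S), axis=(1, 2))
            return np.sign(np.linalg.det(S)) * 3.0 / kappa

        verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.],
                          [1., 1., 1.]])
        tets = np.array([[0, 1, 2, 3], [1, 2, 3, 4]])
        print(tet_condition_quality(verts, tets))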

  12. Layout consistent segmentation of 3-D meshes via conditional random fields and spatial ordering constraints.

    PubMed

    Zouhar, Alexander; Baloch, Sajjad; Tsin, Yanghai; Fang, Tong; Fuchs, Siegfried

    2010-01-01

    We address the problem of 3-D Mesh segmentation for categories of objects with known part structure. Part labels are derived from a semantic interpretation of non-overlapping subsurfaces. Our approach models the label distribution using a Conditional Random Field (CRF) that imposes constraints on the relative spatial arrangement of neighboring labels, thereby ensuring semantic consistency. To this end, each label variable is associated with a rich shape descriptor that is intrinsic to the surface. Randomized decision trees and cross validation are employed for learning the model, which is eventually applied using graph cuts. The method is flexible enough for segmenting even geometrically less structured regions and is robust to local and global shape variations.

  13. Integration of Mesh Optimization with 3D All-Hex Mesh Generation, LDRD Subcase 3504340000, Final Report

    SciTech Connect

    KNUPP,PATRICK; MITCHELL,SCOTT A.

    1999-11-01

    In an attempt to automatically produce high-quality all-hex meshes, we investigated a mesh improvement strategy: given an initial poor-quality all-hex mesh, we iteratively changed the element connectivity, adding and deleting elements and nodes, and optimized the node positions. We found a set of hex reconnection primitives. We improved the optimization algorithms so they can untangle a negative-Jacobian mesh, even considering Jacobians on the boundary, and subsequently optimize the condition number of elements in an untangled mesh. However, even after applying both the primitives and optimization we were unable to produce high-quality meshes in certain regions. Our experiences suggest that many boundary configurations of quadrilaterals admit no hexahedral mesh with positive Jacobians, although we have no proof of this.
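
    The negative-Jacobian test mentioned above can be sketched for a single hexahedron: the scaled Jacobian is evaluated at each corner from the three edge vectors that meet there, and a non-positive value flags a tangled corner. The node ordering and the adjacency table below follow the common VTK/Exodus convention, assumed rather than taken from the report.

        import numpy as np

        # Edge neighbours of each corner in VTK/Exodus hexahedron node ordering (assumed),
        # ordered so that a right-handed element yields positive determinants.
        HEX_CORNER_EDGES = [(1, 3, 4), (2, 0, 5), (3, 1, 6), (0, 2, 7),
                            (7, 5, 0), (4, 6, 1), (5, 7, 2), (6, 4, 3)]

        def hex_corner_jacobians(nodes):
            # Scaled Jacobian at the 8 corners of a hexahedron given as an (8, 3) array.
            jac = np.empty(8)
            for c, (i, j, k) in enumerate(HEX_CORNER_EDGES):
                e = np.column_stack((nodes[i] - nodes[c],
                                     nodes[j] - nodes[c],
                                     nodes[k] - nodes[c]))
                norms = np.prod(np.linalg.norm(e, axis=0))
                jac[c] = np.linalg.det(e) / max(norms, 1e-30)
            return jac

        unit_cube = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                              [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
        print(hex_corner_jacobians(unit_cube))   # all ones for the unit cube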

  14. Adaptive radial basis function mesh deformation using data reduction

    NASA Astrophysics Data System (ADS)

    Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.

    2016-09-01

    Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time and 2) how to use/automate the explicit boundary correction, while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method, which ensures the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting when needed. Compared to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient for the cases considered. Secondly, the analysis of a single high aspect ratio cell is used to formulate an equation for the correction radius needed, depending on the characteristics of the correction function used, maximum aspect ratio, minimum first cell height and boundary error. Based on the analysis, two new radial basis correction functions are derived and proposed. The proposed automated procedure is verified while varying the correction function, Reynolds number (and thus first cell height and aspect ratio) and boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement, for both the CPU and the memory formulation, with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and a deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBF's), but the parallel efficiency reduces due to the limited
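
    A minimal sketch of the core RBF interpolation step (without the greedy point selection or the boundary correction discussed above) is given below; the Wendland C2 basis, the support radius and the toy control points are illustrative assumptions.

        import numpy as np

        def wendland_c2(r, radius):
            # Wendland C2 compactly supported radial basis function (illustrative choice).
            x = np.clip(r / radius, 0.0, 1.0)
            return (1.0 - x) ** 4 * (4.0 * x + 1.0)

        def rbf_deform(control_pts, control_disp, mesh_pts, radius):
            # Interpolate boundary displacements to interior mesh nodes with RBFs.
            d = np.linalg.norm(control_pts[:, None, :] - control_pts[None, :, :], axis=-1)
            M = wendland_c2(d, radius)                       # interpolation matrix
            weights = np.linalg.solve(M, control_disp)       # one column per coordinate
            d_eval = np.linalg.norm(mesh_pts[:, None, :] - control_pts[None, :, :], axis=-1)
            return wendland_c2(d_eval, radius) @ weights

        # Toy usage: prescribe displacements on four control points and move interior nodes.
        ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
        disp = np.array([[0.0, 0.0], [0.0, 0.05], [-0.05, 0.0], [0.0, 0.0]])
        interior = np.random.default_rng(1).uniform(0.2, 0.8, size=(50, 2))
        new_interior = interior + rbf_deform(ctrl, disp, interior, radius=2.0)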

  15. On Adaptive Mesh Generation in Two-Dimensions

    SciTech Connect

    D'Azevedo, E.

    1999-10-11

    This work considers the effectiveness of using an anisotropic coordinate transformation in adaptive mesh generation. The anisotropic coordinate transformation is derived by interpreting the Hessian matrix of the data function as a metric tensor that measures the local approximation error. The Hessian matrix contains information about the local curvature of the surface and gives guidance on the aspect ratio and orientation for mesh generation. Since, theoretically, an asymptotically optimally efficient mesh can be produced by transforming a regular mesh of optimally shaped elements, it is interesting to compare this approach with existing techniques for solution-adaptive meshes. PLTMG, a general elliptic solver, is used to generate solution-adapted triangular meshes for comparison. The solver can perform a posteriori error estimation and carries out longest-edge refinement, vertex unrefinement and mesh smoothing. Numerical experiments on three simple problems suggest the methodology employed in PLTMG is effective in generating near-optimally efficient meshes.
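
    The Hessian-to-metric construction described above can be sketched as follows; the eigenvalue floor, the maximum edge length and the example data function are illustrative assumptions rather than choices from the report.

        import numpy as np

        def hessian_metric(H, eps=1e-6, h_max=1.0):
            # Turn a (possibly indefinite) Hessian into an SPD metric tensor by taking
            # absolute eigenvalues, with a floor so the metric stays positive definite.
            # eps and h_max are illustrative regularisation parameters.
            evals, evecs = np.linalg.eigh(H)
            lam = np.maximum(np.abs(evals), 1.0 / h_max ** 2)
            lam = np.maximum(lam, eps)
            return (evecs * lam) @ evecs.T

        # Example: an anisotropic data function f(x, y) = x**2 + 100*y**2.
        H = np.array([[2.0, 0.0], [0.0, 200.0]])
        M = hessian_metric(H)
        # Desired edge lengths along the eigen-directions scale like lambda**-0.5,
        # giving elements stretched along the low-curvature direction.
        h = 1.0 / np.sqrt(np.linalg.eigvalsh(M))
        print(M, h)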

  16. Adaptive 3D Face Reconstruction from Unconstrained Photo Collections.

    PubMed

    Roth, Joseph; Tong, Yiying; Liu, Xiaoming

    2016-12-07

    Given a photo collection of "unconstrained" face images of one individual captured under a variety of unknown pose, expression, and illumination conditions, this paper presents a method for reconstructing a 3D face surface model of the individual along with albedo information. Unlike prior work on face reconstruction that requires large photo collections, we formulate an approach that adapts to photo collections with a high diversity in both the number of images and the image quality. To achieve this, we incorporate prior knowledge about face shape by fitting a 3D morphable model to form a personalized template, followed by a novel photometric stereo formulation to recover the fine details under a coarse-to-fine scheme. Our scheme incorporates a structural-similarity-based local selection step to help identify a common expression for reconstruction while discarding occluded portions of faces. Reconstruction performance is evaluated through a novel quality measure, in the absence of ground-truth 3D scans. Superior large-scale experimental results are reported on synthetic, Internet, and personal photo collections.

  17. Reduced order modelling of an unstructured mesh air pollution model and application in 2D/3D urban street canyons

    NASA Astrophysics Data System (ADS)

    Fang, F.; Zhang, T.; Pavlidis, D.; Pain, C. C.; Buchan, A. G.; Navon, I. M.

    2014-10-01

    A novel reduced order model (ROM) based on proper orthogonal decomposition (POD) has been developed for a finite-element (FE) adaptive mesh air pollution model. A quadratic expansion of the non-linear terms is employed to ensure the method remains efficient. This is the first time such an approach has been applied to air pollution LES turbulent simulation through three-dimensional landscapes. The novelty of this work also includes POD's application within an FE-LES turbulence model that uses adaptive resolution. The accuracy of the reduced order model is assessed and validated for a range of 2D and 3D urban street canyon flow problems. Comparing the POD solutions against the fine-detail solutions obtained from the full FE model shows that accuracy is maintained and fine details of the air flows are captured, whilst the computational requirements are reduced. In the examples presented, the size of the reduced order models is reduced by factors of up to 2400 in comparison to the full FE model, while the CPU time is reduced by up to 98% of that required by the full model.
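
    A minimal snapshot-POD sketch is shown below to illustrate how such a reduced basis is typically extracted; the energy criterion, the toy snapshot matrix and the projection step are illustrative assumptions and do not reproduce the paper's quadratic treatment of the non-linear terms.

        import numpy as np

        def pod_basis(snapshots, energy=0.9999):
            # Snapshot POD: columns of `snapshots` are solution states; returns the mean
            # and the reduced basis capturing the requested fraction of snapshot energy.
            mean = snapshots.mean(axis=1, keepdims=True)
            U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
            cum = np.cumsum(s ** 2) / np.sum(s ** 2)
            r = int(np.searchsorted(cum, energy)) + 1
            return mean, U[:, :r]

        # Toy usage: 2000 degrees of freedom, 50 snapshots (random rank-3 stand-in data).
        rng = np.random.default_rng(2)
        X = rng.normal(size=(2000, 3)) @ rng.normal(size=(3, 50))
        mean, Phi = pod_basis(X)
        state = X[:, [0]]
        coeffs = Phi.T @ (state - mean)          # reduced coordinates
        reconstruction = mean + Phi @ coeffs
        print(Phi.shape, np.linalg.norm(state - reconstruction))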

  18. Simulation of metal forming processes with a 3D adaptive remeshing procedure

    NASA Astrophysics Data System (ADS)

    Zeramdini, Bessam; Robert, Camille; Germain, Guenael; Pottier, Thomas

    2016-10-01

    In this paper, a fully adaptive 3D numerical methodology based on a tetrahedral element was proposed in order to improve the finite element simulation of any metal forming process. This automatic methodology was implemented in a computational platform which integrates a finite element solver, 3D mesh generation and a field transfer algorithm. The proposed remeshing method was developed in order to solve problems associated with the severe distortion of elements subject to large deformations, to concentrate the elements where the error is large and to coarsen the mesh where the error is small. This leads to a significant reduction in the computation times while maintaining simulation accuracy. In addition, in order to enhance the contact conditions, this method has been coupled with a specific operator to maintain the initial contact between the workpiece nodes and the rigid tool after each remeshing step. In this paper special attention is paid to the data transfer methods and the necessary adaptive remeshing steps are given. Finally, a numerical example is detailed to demonstrate the efficiency of the approach and to compare the results for the different field transfer strategies.

  19. Current sheets, reconnection and adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Marliani, Christiane

    1998-11-01

    Adaptive structured mesh refinement methods have proved to be an appropriate tool for the numerical study of a variety of problems where largely separated length scales are involved, e.g. [R. Grauer, C. Marliani, K. Germaschewski, PRL, 80, 4177, (1998)]. A typical example in plasma physics are the current sheets in magnetohydrodynamic flows. Their dynamics is investigated in the framework of incompressible MHD. We present simulations of the ideal and inviscid dynamics in two and three dimensions. In addition, we show numerical simulations for the resistive case in two dimensions. Specifically, we show simulations for the case of the doubly periodic coalescence instability. At the onset of the reconnection process the kinetic energy rises and drops rapidly and afterwards settles into an oscillatory phase. The timescale of the magnetic reconnection process is not affected by these fast events but consistent with the Sweet-Parker model of stationary reconnection. Taking into account the electron inertia terms in the generalized Ohm's law the electron skin depth is introduced as an additional parameter. The modified equations allow for magnetic reconnection in the collisionless regime. Current density and vorticity concentrate in extremely long and thin sheets. Their dynamics becomes numerically accessible by means of adaptive mesh refinement.

  20. Adaptive Meshing Techniques for Viscous Flow Calculations on Mixed Element Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1997-01-01

    An adaptive refinement strategy based on hierarchical element subdivision is formulated and implemented for meshes containing arbitrary mixtures of tetrahedra, hexahedra, prisms and pyramids. Special attention is given to keeping memory overheads as low as possible. This procedure is coupled with an algebraic multigrid flow solver which operates on mixed-element meshes. Inviscid as well as viscous flows are computed on adaptively refined tetrahedral, hexahedral, and hybrid meshes. The efficiency of the method is demonstrated by generating an adapted hexahedral mesh containing 3 million vertices on a relatively inexpensive workstation.

  1. EM modelling of arbitrary shaped anisotropic dielectric objects using an efficient 3D leapfrog scheme on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Gansen, A.; Hachemi, M. El; Belouettar, S.; Hassan, O.; Morgan, K.

    2016-09-01

    The standard Yee algorithm is widely used in computational electromagnetics because of its simplicity and divergence free nature. A generalization of the classical Yee scheme to 3D unstructured meshes is adopted, based on the use of a Delaunay primal mesh and its high quality Voronoi dual. This allows the problem of accuracy losses, which are normally associated with the use of the standard Yee scheme and a staircased representation of curved material interfaces, to be circumvented. The 3D dual mesh leapfrog-scheme which is presented has the ability to model both electric and magnetic anisotropic lossy materials. This approach enables the modelling of problems, of current practical interest, involving structured composites and metamaterials.
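
    For orientation, the sketch below shows the classical structured-grid Yee leapfrog update in one dimension (vacuum, normalised units); the unstructured Delaunay-Voronoi generalisation described above is not reproduced here, and the grid size, time step and source are illustrative assumptions.

        import numpy as np

        # Classical 1-D Yee/leapfrog FDTD update on a structured grid (illustrative parameters).
        nx, nt = 200, 400
        c, dx = 1.0, 1.0
        dt = 0.99 * dx / c                     # CFL-limited time step
        Ez = np.zeros(nx)
        Hy = np.zeros(nx - 1)                  # staggered half a cell to the right

        for n in range(nt):
            Hy += dt / dx * (Ez[1:] - Ez[:-1])               # H update at half time levels
            Ez[1:-1] += dt / dx * (Hy[1:] - Hy[:-1])         # E update at whole time levels
            Ez[nx // 4] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source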

  2. Simulation of nonpoint source contamination based on adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Kourakos, G.; Harter, T.

    2014-12-01

    Contamination of groundwater aquifers from nonpoint sources is a worldwide problem. Typical agricultural groundwater basins receive contamination from a large array (on the order of 10^5-10^6) of spatially and temporally heterogeneous sources such as fields, crops and dairies, while the received contaminants emerge, after significantly uncertain time lags, at a large array of discharge surfaces such as public supply, domestic and irrigation wells and streams. To support decision making in such complex regimes, several approaches have been developed, which can be grouped into three categories: i) index methods, ii) regression methods and iii) physically based methods. Among the three, physically based methods are considered more accurate, but at the cost of computational demand. In this work we present a physically based simulation framework which exploits the latest hardware and software developments to simulate large (>>1,000 km2) groundwater basins. First we simulate groundwater flow using a sufficiently detailed mesh to capture the spatial heterogeneity. To achieve optimal mesh quality we combine adaptive mesh refinement with the nonlinear solution for unconfined flow. Starting from a coarse grid, the mesh is refined iteratively in the parts of the domain where the flow heterogeneity appears higher, resulting in an optimal grid. Secondly we simulate the nonpoint source pollution based on the detailed velocity field computed in the previous step. In our approach we use the streamline model, where the 3D transport problem is decomposed into multiple 1D transport problems. The proposed framework is applied to simulate nonpoint source pollution in the Central Valley aquifer system, California.

  3. Nonhydrostatic adaptive mesh dynamics for multiscale climate models (Invited)

    NASA Astrophysics Data System (ADS)

    Collins, W.; Johansen, H.; McCorquodale, P.; Colella, P.; Ullrich, P. A.

    2013-12-01

    Many of the atmospheric phenomena with the greatest potential impact in future warmer climates are inherently multiscale. Such meteorological systems include hurricanes and tropical cyclones, atmospheric rivers, and other types of hydrometeorological extremes. These phenomena are challenging to simulate in conventional climate models due to the relatively coarse uniform model resolutions relative to the native nonhydrostatic scales of the phenomenological dynamics. To enable studies of these systems with sufficient local resolution for the multiscale dynamics, yet with sufficient speed for climate-change studies, we have adapted existing adaptive mesh dynamics for the DOE-NSF Community Atmosphere Model (CAM). In this talk, we present an adaptive, conservative finite volume approach for moist non-hydrostatic atmospheric dynamics. The approach is based on the compressible Euler equations on 3D thin spherical shells, where the radial direction is treated implicitly (using a fourth-order Runge-Kutta IMEX scheme) to eliminate time step constraints from vertical acoustic waves. Refinement is performed only in the horizontal directions. The spatial discretization is the equiangular cubed-sphere mapping, with a fourth-order accurate discretization to compute flux averages on faces. By using both space- and time-adaptive mesh refinement, the solver allocates computational effort only where greater accuracy is needed. The resulting method is demonstrated to be fourth-order accurate for model problems, robust at solution discontinuities and stable for large aspect ratios. We present comparisons using a simplified physics package for dynamical core (dycore) comparisons of moist physics, including a Hadley cell lifting an advected tracer into the upper atmosphere with horizontal adaptivity.

  4. Visualization of Scalar Adaptive Mesh Refinement Data

    SciTech Connect

    VACET; Weber, Gunther; Weber, Gunther H.; Beckner, Vince E.; Childs, Hank; Ligocki, Terry J.; Miller, Mark C.; Van Straalen, Brian; Bethel, E. Wes

    2007-12-06

    Adaptive Mesh Refinement (AMR) is a highly effective computation method for simulations that span a large range of spatiotemporal scales, such as astrophysical simulations, which must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR grids as a first-class data type, and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Scientific Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is currently working on extending VisIt, an open source visualization tool, to accommodate AMR as a first-class data type. These efforts will bridge the gap between general-purpose visualization applications and highly specialized AMR visual analysis applications. Here, we give an overview of the state of the art in AMR scalar data visualization research.

  5. Adaptive superposition of finite element meshes in linear and nonlinear dynamic analysis

    NASA Astrophysics Data System (ADS)

    Yue, Zhihua

    2005-11-01

    The numerical analysis of transient phenomena in solids, for instance, wave propagation and structural dynamics, is a very important and active area of study in engineering. Despite the current evolutionary state of modern computer hardware, practical analysis of large scale, nonlinear transient problems requires the use of adaptive methods where computational resources are locally allocated according to the interpolation requirements of the solution form. Adaptive analysis of transient problems involves obtaining solutions at many different time steps, each of which requires a sequence of adaptive meshes. Therefore, the execution speed of the adaptive algorithm is of paramount importance. In addition, transient problems require that the solution must be passed from one adaptive mesh to the next adaptive mesh with a bare minimum of solution-transfer error since this form of error compromises the initial conditions used for the next time step. A new adaptive finite element procedure (s-adaptive) is developed in this study for modeling transient phenomena in both linear elastic solids and nonlinear elastic solids caused by progressive damage. The adaptive procedure automatically updates the time step size and the spatial mesh discretization in transient analysis, achieving the accuracy and the efficiency requirements simultaneously. The novel feature of the s-adaptive procedure is the original use of finite element mesh superposition to produce spatial refinement in transient problems. The use of mesh superposition enables the s-adaptive procedure to completely avoid the need for cumbersome multipoint constraint algorithms and mesh generators, which makes the s-adaptive procedure extremely fast. Moreover, the use of mesh superposition enables the s-adaptive procedure to minimize the solution-transfer error. In a series of different solid mechanics problem types including 2-D and 3-D linear elastic quasi-static problems, 2-D material nonlinear quasi-static problems

  6. Elliptic Solvers for Adaptive Mesh Refinement Grids

    SciTech Connect

    Quinlan, D.J.; Dendy, J.E., Jr.; Shapira, Y.

    1999-06-03

    We are developing multigrid methods that will efficiently solve elliptic problems with anisotropic and discontinuous coefficients on adaptive grids. The final product will be a library that provides for the simplified solution of such problems. This library will directly benefit the efforts of other Laboratory groups. The focus of this work is research on serial and parallel elliptic algorithms and the inclusion of our black-box multigrid techniques into this new setting. The approach applies the Los Alamos object-oriented class libraries that greatly simplify the development of serial and parallel adaptive mesh refinement applications. In the final year of this LDRD, we focused on putting the software together; in particular we completed the final AMR++ library, we wrote tutorials and manuals, and we built example applications. We implemented the Fast Adaptive Composite Grid method as the principal elliptic solver. We presented results at the Overset Grid Conference and other more AMR specific conferences. We worked on optimization of serial and parallel performance and published several papers on the details of this work. Performance remains an important issue and is the subject of continuing research work.

  7. Tetrahedral and Hexahedral Mesh Adaptation for CFD Problems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Strawn, Roger C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    This paper presents two unstructured mesh adaptation schemes for problems in computational fluid dynamics. The procedures allow localized grid refinement and coarsening to efficiently capture aerodynamic flow features of interest. The first procedure is for purely tetrahedral grids; unfortunately, repeated anisotropic adaptation may significantly deteriorate the quality of the mesh. Hexahedral elements, on the other hand, can be subdivided anisotropically without mesh quality problems. Furthermore, hexahedral meshes yield more accurate solutions than their tetrahedral counterparts for the same number of edges. Both the tetrahedral and hexahedral mesh adaptation procedures use edge-based data structures that facilitate efficient subdivision by allowing individual edges to be marked for refinement or coarsening. However, for hexahedral adaptation, pyramids, prisms, and tetrahedra are used as buffer elements between refined and unrefined regions to eliminate hanging vertices. Computational results indicate that the hexahedral adaptation procedure is a viable alternative to adaptive tetrahedral schemes.

  8. Carpet: Adaptive Mesh Refinement for the Cactus Framework

    NASA Astrophysics Data System (ADS)

    Schnetter, Erik; Hawley, Scott; Hawke, Ian

    2016-11-01

    Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as driver layer providing adaptive mesh refinement, multi-patch capability, as well as parallelization and efficient I/O.

  9. Parallel, Gradient-Based Anisotropic Mesh Adaptation for Re-entry Vehicle Configurations

    NASA Technical Reports Server (NTRS)

    Bibb, Karen L.; Gnoffo, Peter A.; Park, Michael A.; Jones, William T.

    2006-01-01

    Two gradient-based adaptation methodologies have been implemented into the Fun3d refine GridEx infrastructure. A spring-analogy adaptation, which provides for nodal movement to cluster mesh nodes in the vicinity of strong shocks, has been extended for general use within Fun3d, and is demonstrated for a 70-degree sphere cone at Mach 2. A more general feature-based adaptation metric has been developed for use with the adaptation mechanics available in Fun3d, and is applicable to any unstructured, tetrahedral flow solver. The basic functionality of general adaptation is explored through a case of flow over the forebody of a 70-degree sphere cone at Mach 6. A practical application of Mach 10 flow over an Apollo capsule, computed with the Felisa flow solver, is given to compare adaptive mesh refinement with uniform mesh refinement. The examples of the paper demonstrate that the gradient-based adaptation capability as implemented can give an improvement in solution quality.

  10. A parallel adaptive mesh refinement algorithm

    NASA Technical Reports Server (NTRS)

    Quirk, James J.; Hanebutte, Ulf R.

    1993-01-01

    Over recent years, Adaptive Mesh Refinement (AMR) algorithms which dynamically match the local resolution of the computational grid to the numerical solution being sought have emerged as powerful tools for solving problems that contain disparate length and time scales. In particular, several workers have demonstrated the effectiveness of employing an adaptive, block-structured hierarchical grid system for simulations of complex shock wave phenomena. Unfortunately, from the parallel algorithm developer's viewpoint, this class of scheme is quite involved; these schemes cannot be distilled down to a small kernel upon which various parallelizing strategies may be tested. However, because of their block-structured nature such schemes are inherently parallel, so all is not lost. In this paper we describe the method by which Quirk's AMR algorithm has been parallelized. This method is built upon just a few simple message passing routines and so it may be implemented across a broad class of MIMD machines. Moreover, the method of parallelization is such that the original serial code is left virtually intact, and so we are left with just a single product to support. The importance of this fact should not be underestimated given the size and complexity of the original algorithm.

  11. DISCO: A 3D Moving-mesh Magnetohydrodynamics Code Designed for the Study of Astrophysical Disks

    NASA Astrophysics Data System (ADS)

    Duffell, Paul C.

    2016-09-01

    This work presents the publicly available moving-mesh magnetohydrodynamics (MHD) code DISCO. DISCO is efficient and accurate at evolving orbital fluid motion in two and three dimensions, especially at high Mach numbers. DISCO employs a moving-mesh approach utilizing a dynamic cylindrical mesh that can shear azimuthally to follow the orbital motion of the gas. The moving mesh removes diffusive advection errors and allows for longer time-steps than a static grid. MHD is implemented in DISCO using an HLLD Riemann solver and a novel constrained transport (CT) scheme that is compatible with the mesh motion. DISCO is tested against a wide variety of problems, which are designed to test its stability, accuracy, and scalability. In addition, several MHD tests are performed which demonstrate the accuracy and stability of the new CT approach, including two tests of the magneto-rotational instability, one testing the linear growth rate and the other following the instability into the fully turbulent regime.

  12. Functional response of osteoblasts in functionally gradient titanium alloy mesh arrays processed by 3D additive manufacturing.

    PubMed

    Nune, K C; Kumar, A; Misra, R D K; Li, S J; Hao, Y L; Yang, R

    2017-02-01

    We elucidate here the osteoblast functions and cellular activity in the 3D printed interconnected porous architecture of functionally gradient Ti-6Al-4V alloy mesh structures in terms of cell proliferation and growth, distribution of cell nuclei, synthesis of proteins (actin, vinculin, and fibronectin), and calcium deposition. Cell culture studies with pre-osteoblasts indicated that the interconnected porous architecture of functionally gradient mesh arrays was conducive to osteoblast functions. However, there were statistically significant differences in the cellular response depending on the pore size in the functionally gradient structure. The interconnected porous architecture contributed to the distribution of cells from the large pore size (G1) to the small pore size (G3), with consequent synthesis of extracellular matrix and calcium precipitation. The gradient mesh structure significantly impacted cell adhesion and influenced the proliferation stage, such that there was a high distribution of cells on the struts of the gradient mesh structure. Actin and vinculin showed a significant difference in normalized expression level of protein per cell, which was absent in the case of fibronectin. Osteoblasts present on mesh struts formed a confluent sheet, bridging the pores through numerous cytoplasmic extensions. The gradient mesh structure fabricated by electron beam melting was explored to obtain fundamental insights into cellular activity with respect to osteoblast functions.

  13. Frozen Rotor and Sliding Mesh Models Applied to the 3D Simulation of the Francis-99 Tokke Turbine with Code_Saturne

    NASA Astrophysics Data System (ADS)

    Tonello, N.; Eude, Y.; de Laage de Meux, B.; Ferrand, M.

    2017-01-01

    The steady-state operation of the Francis-99 Tokke turbine [1-3] has been simulated numerically at different loads using the open source CAD and CFD software SALOME [4] and Code_Saturne [5]. The full 3D mesh of the Tokke turbine provided for the Second Francis-99 Workshop has been adapted and modified to work with the solver. Results are compared for the frozen-rotor and the unsteady, conservative sliding-mesh approach over three operating points, showing that good agreement with the experimental data is obtained with both models without having to tune the CFD models for each operating point. Approaches to the simulation of transient operation are also presented, with results of work in progress.

  14. High resolution finite volume parallel simulations of mould filling and binary alloy solidification on unstructured 3-D meshes

    SciTech Connect

    Reddy, A.V.; Kothe, D.B.; Lam, K.L.

    1997-06-01

    The Los Alamos National Laboratory (LANL) is currently developing a new casting simulation tool (known as Telluride) that employs robust, high-resolution finite volume algorithms for incompressible fluid flow, volume tracking of interfaces, and solidification physics on three-dimensional (3-D) unstructured meshes. Their finite volume algorithms are based on colocated cell-centered schemes that are formally second order in time and space. The flow algorithm is a 3-D extension of recent work on projection method solutions of the Navier-Stokes (NS) equations. Their volume tracking algorithm can accurately track topologically complex interfaces by approximating the interface geometry as piecewise planar. Coupled to their fluid flow algorithm is a comprehensive binary alloy solidification model that incorporates macroscopic descriptions of heat transfer, solute redistribution, and melt convection as well as a microscopic description of segregation. The finite volume algorithms, which are efficient, parallel, and robust, can yield high-fidelity solutions on a variety of meshes, ranging from those that are structured orthogonal to fully unstructured (finite element). The authors discuss key computer science issues that have enabled them to efficiently parallelize their unstructured mesh algorithms on both distributed and shared memory computing platforms. These include their functionally object-oriented use of Fortran 90 and new parallel libraries for gather/scatter functions (PGSLib) and solutions of linear systems of equations (JTpack90). Examples of their current capabilities are illustrated with simulations of mold filling and solidification of complex 3-D components currently being poured in LANL foundries.

  15. Recent Enhancements To The FUN3D Flow Solver For Moving-Mesh Applications

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Thomas, James L.

    2009-01-01

    An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids has been extended to handle general mesh movement involving rigid, deforming, and overset meshes. Mesh deformation is achieved through analogy to elastic media by solving the linear elasticity equations. A general method for specifying the motion of moving bodies within the mesh has been implemented that allows for inherited motion through parent-child relationships, enabling simulations involving multiple moving bodies. Several example calculations are shown to illustrate the range of potential applications. For problems in which an isolated body is rotating with a fixed rate, a noninertial reference-frame formulation is available. An example calculation for a tilt-wing rotor is used to demonstrate that the time-dependent moving grid and noninertial formulations produce the same results in the limit of zero time-step size.

  16. Compressible magma/mantle dynamics: 3-D, adaptive simulations in ASPECT

    NASA Astrophysics Data System (ADS)

    Dannberg, Juliane; Heister, Timo

    2016-12-01

    Melt generation and migration are an important link between surface processes and the thermal and chemical evolution of the Earth's interior. However, their vastly different timescales make it difficult to study mantle convection and melt migration in a unified framework, especially for 3-D global models. And although experiments suggest an increase in melt volume of up to 20 per cent from the depth of melt generation to the surface, previous computations have neglected the individual compressibilities of the solid and the fluid phase. Here, we describe our extension of the finite element mantle convection code ASPECT that adds melt generation and migration. We use the original compressible formulation of the McKenzie equations, augmented by an equation for the conservation of energy. Applying adaptive mesh refinement to this type of problems is particularly advantageous, as the resolution can be increased in areas where melt is present and viscosity gradients are high, whereas a lower resolution is sufficient in regions without melt. Together with a high-performance, massively parallel implementation, this allows for high-resolution, 3-D, compressible, global mantle convection simulations coupled with melt migration. We evaluate the functionality and potential of this method using a series of benchmarks and model setups, compare results of the compressible and incompressible formulation, and show the effectiveness of adaptive mesh refinement when applied to melt migration. Our model of magma dynamics provides a framework for modelling processes on different scales and investigating links between processes occurring in the deep mantle and melt generation and migration. This approach could prove particularly useful applied to modelling the generation of komatiites or other melts originating in greater depths. The implementation is available in the Open Source ASPECT repository.

  17. Adaptive mesh fluid simulations on GPU

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Abel, Tom; Kaehler, Ralf

    2010-10-01

    We describe an implementation of compressible inviscid fluid solvers with block-structured adaptive mesh refinement on Graphics Processing Units using NVIDIA's CUDA. We show that a class of high resolution shock capturing schemes can be mapped naturally on this architecture. Using the method of lines approach with the second order total variation diminishing Runge-Kutta time integration scheme, piecewise linear reconstruction, and a Harten-Lax-van Leer Riemann solver, we achieve an overall speedup of approximately 10 times faster execution on one graphics card as compared to a single core on the host computer. We attain this speedup in uniform grid runs as well as in problems with deep AMR hierarchies. Our framework can readily be applied to more general systems of conservation laws and extended to higher order shock capturing schemes. This is shown directly by an implementation of a magneto-hydrodynamic solver and comparing its performance to the pure hydrodynamic case. Finally, we also combined our CUDA parallel scheme with MPI to make the code run on GPU clusters. Close to ideal speedup is observed on up to four GPUs.

  18. Procedure for Adapting Direct Simulation Monte Carlo Meshes

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.

    1992-01-01

    A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.

  19. Adaptive mesh refinement for stochastic reaction-diffusion processes

    SciTech Connect

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2011-01-01

    We present an algorithm for adaptive mesh refinement applied to mesoscopic stochastic simulations of spatially evolving reaction-diffusion processes. The transition rates for the diffusion process are derived on adaptive, locally refined structured meshes. Convergence of the diffusion process is presented and the fluctuations of the stochastic process are verified. Furthermore, a refinement criterion is proposed for the evolution of the adaptive mesh. The method is validated in simulations of reaction-diffusion processes as described by the Fisher-Kolmogorov and Gray-Scott equations.
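
    For context, the deterministic continuum limit of one of the validation problems mentioned above (the Gray-Scott system) can be sketched on a uniform periodic grid as follows; this is a plain explicit finite-difference reference with standard textbook parameters, not the paper's stochastic, adaptively refined algorithm.

        import numpy as np

        # Deterministic Gray-Scott reaction-diffusion on a uniform periodic grid
        # (standard textbook parameters, dx = dt = 1; illustrative only).
        n, steps = 128, 5000
        Du, Dv, F, k, dt = 0.16, 0.08, 0.035, 0.065, 1.0
        u = np.ones((n, n)); v = np.zeros((n, n))
        u[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5      # small perturbed square seeds the pattern
        v[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

        def lap(a):
            # 5-point periodic Laplacian
            return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                    np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

        for _ in range(steps):
            uvv = u * v * v
            u += dt * (Du * lap(u) - uvv + F * (1 - u))
            v += dt * (Dv * lap(v) + uvv - (F + k) * v)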

  20. GAMER: GPU-accelerated Adaptive MEsh Refinement code

    NASA Astrophysics Data System (ADS)

    Schive, Hsi-Yu; Tsai, Yu-Chih; Chiueh, Tzihong

    2016-12-01

    GAMER (GPU-accelerated Adaptive MEsh Refinement) is a general-purpose adaptive mesh refinement + GPU framework. The code supports adaptive mesh refinement (AMR), hydrodynamics with self-gravity, and a variety of GPU-accelerated hydrodynamic and Poisson solvers. It also supports hybrid OpenMP/MPI/GPU parallelization, concurrent CPU/GPU execution for performance optimization, and a Hilbert space-filling curve for load balance. Although the code is designed for simulating galaxy formation, it can easily be modified to solve a variety of applications with different governing equations. All optimization strategies implemented in the code can be inherited straightforwardly.

  1. Adaptive Meshing of Ship Air-Wake Flowfields

    DTIC Science & Technology

    2014-03-03

    this code are currently generated using Pointwise.[2] This code also uses a second order spatial finite-volume scheme with first order explicit...simulated with the two codes and is shown below. The surface mesh from the 3D mesh generated by Pointwise serves as the geometry for the OctFlow code. A...Geometries", AIAA-2000-1006, 2000. 2. "Pointwise." Pointwise, Inc., http://www.pointwise.com. 3. O'Connell, M., and Karman, S., "Mesh Rupturing: A

  2. Large-Scale Parallel Unstructured Mesh Computations for 3D High-Lift Analysis

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.; Pirzadeh, S.

    1999-01-01

    A complete "geometry to drag-polar" analysis capability for three-dimensional high-lift configurations is described. The approach is based on the use of unstructured meshes in order to enable rapid turnaround for complicated geometries which arise in high-lift con gurations. Special attention is devoted to creating a capability for enabling analyses on highly resolved grids. Unstructured meshes of several million vertices are initially generated on a work-station, and subsequently refined on a supercomputer. The flow is solved on these refined meshes on large parallel computers using an unstructured agglomeration multigrid algorithm. Good prediction of lift and drag throughout the range of incidences is demonstrated on a transport take-off configuration using up to 24.7 million grid points. The feasibility of using this approach in a production environment on existing parallel machines is demonstrated, as well as the scalability of the solver on machines using up to 1450 processors.

  3. Large-scale Parallel Unstructured Mesh Computations for 3D High-lift Analysis

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Pirzadeh, S.

    1999-01-01

    A complete "geometry to drag-polar" analysis capability for the three-dimensional high-lift configurations is described. The approach is based on the use of unstructured meshes in order to enable rapid turnaround for complicated geometries that arise in high-lift configurations. Special attention is devoted to creating a capability for enabling analyses on highly resolved grids. Unstructured meshes of several million vertices are initially generated on a work-station, and subsequently refined on a supercomputer. The flow is solved on these refined meshes on large parallel computers using an unstructured agglomeration multigrid algorithm. Good prediction of lift and drag throughout the range of incidences is demonstrated on a transport take-off configuration using up to 24.7 million grid points. The feasibility of using this approach in a production environment on existing parallel machines is demonstrated, as well as the scalability of the solver on machines using up to 1450 processors.

  5. Serial and parallel dynamic adaptation of general hybrid meshes

    NASA Astrophysics Data System (ADS)

    Kavouklis, Christos

    The Navier-Stokes equations are a standard mathematical representation of viscous fluid flow. Their numerical solution in three dimensions remains a computationally intensive and challenging task, despite recent advances in computer speed and memory. A strategy to increase the accuracy of Navier-Stokes simulations, while keeping computing resources to a minimum, is local refinement of the associated computational mesh in regions of large solution gradients and coarsening in regions where the solution does not vary appreciably. In this work we consider adaptation of general hybrid meshes for Computational Fluid Dynamics (CFD) applications. Hybrid meshes are composed of four types of elements (hexahedra, prisms, pyramids and tetrahedra) and have proven to be a promising technology for accurately resolving fluid flow for complex geometries. The first part of this dissertation is concerned with the design and implementation of a serial scheme for the adaptation of general three-dimensional hybrid meshes. We have defined 29 refinement types covering all four kinds of elements. The core of the present adaptation scheme is an iterative algorithm that flags mesh edges for refinement so that the adapted mesh is conformal. The design of a suitable dynamic data structure that facilitates refinement and coarsening operations and minimizes memory requirements is considered of primary importance. A special dynamic list is defined for mesh elements, in contrast with the usual tree structures. It contains only elements of the current adaptation step and minimal information that is used to reconstruct parent elements when the mesh is coarsened. In the second part of this work, a new parallel dynamic mesh adaptation and load balancing algorithm for general hybrid meshes is presented. Partitioning of a hybrid mesh reduces to partitioning of the corresponding dual graph. Communication among processors is based on the faces of the interpartition boundary. The distributed

  6. White Dwarf Mergers on Adaptive Meshes

    NASA Astrophysics Data System (ADS)

    Katz, Maximilian Peter

    The mergers of binary white dwarf systems are potential progenitors of astrophysical explosions such as Type Ia supernovae. These white dwarfs can merge either by orbital decay through the emission of gravitational waves or by direct collisions as a result of orbital perturbations. The coalescence of the stars may ignite nuclear fusion, resulting in the destruction of both stars through a thermonuclear runaway and ensuing detonation. The goal of this dissertation is to simulate binary white dwarf systems using the techniques of computational fluid dynamics and therefore to understand what numerical techniques are necessary to obtain accurate dynamical evolution of the system, as well as to learn what conditions are necessary to enable a realistic detonation. For this purpose I have used software that solves the relevant fluid equations, the Poisson equation for self-gravity, and the systems governing nuclear reactions between atomic species. These equations are modeled on a computational domain that uses the technique of adaptive mesh refinement to have the highest spatial resolution in the areas of the domain that are most sensitive to the need for accurate numerical evolution. I have identified that the most important obstacles to accurate evolution are the numerical violation of conservation of energy and angular momentum in the system, and the development of numerically seeded thermonuclear detonations that do not bear resemblance to physically correct detonations. I then developed methods for ameliorating these problems, and determined what metrics can be used for judging whether a given white dwarf merger simulation is trustworthy. This involved the development of a number of algorithmic improvements to the simulation software, which I describe. Finally, I performed high-resolution simulations of typical cases of white dwarf mergers and head-on collisions to demonstrate the impacts of these choices. The results of these simulations and the corresponding

  7. A two-dimensional adaptive mesh generation method

    NASA Astrophysics Data System (ADS)

    Altas, Irfan; Stephenson, John W.

    1991-05-01

    The present two-dimensional adaptive mesh-generation method allows selective modification of a small portion of the mesh without affecting large areas of adjacent mesh points, and is applicable with or without boundary-fitted coordinate-generation procedures. Discretization of the differential equations, on the one hand with classical difference formulas designed for uniform meshes and on the other with the present difference formulas, is illustrated by applying the method to the Hiemenz flow, for which an exact solution of the Navier-Stokes equations is known, as well as to a two-dimensional viscous internal flow problem.

  8. Drag Prediction for the NASA CRM Wing-Body-Tail Using CFL3D and OVERFLOW on an Overset Mesh

    NASA Technical Reports Server (NTRS)

    Sclafani, Anthony J.; DeHaan, Mark A.; Vassberg, John C.; Rumsey, Christopher L.; Pulliam, Thomas H.

    2010-01-01

    In response to the fourth AIAA CFD Drag Prediction Workshop (DPW-IV), the NASA Common Research Model (CRM) wing-body and wing-body-tail configurations are analyzed using the Reynolds-averaged Navier-Stokes (RANS) flow solvers CFL3D and OVERFLOW. Two families of structured, overset grids are built for DPW-IV. Grid Family 1 (GF1) consists of a coarse (7.2 million), medium (16.9 million), fine (56.5 million), and extra-fine (189.4 million) mesh. Grid Family 2 (GF2) is an extension of the first and includes a superfine (714.2 million) and an ultra-fine (2.4 billion) mesh. The medium grid anchors both families with an established build process for accurate cruise drag prediction studies. This base mesh is coarsened and enhanced to form a set of parametrically equivalent grids that increase in size by a factor of roughly 3.4 from one level to the next denser level. Both CFL3D and OVERFLOW are run on GF1 using a consistent numerical approach. Additional OVERFLOW runs are made to study effects of differencing scheme and turbulence model on GF1 and to obtain results for GF2. All CFD results are post-processed using Richardson extrapolation, and approximate grid-converged values of drag are compared. The medium grid is also used to compute a trimmed drag polar for both codes.
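
    As an illustration of the Richardson-extrapolation post-processing mentioned above, the sketch below estimates an observed order of accuracy and a grid-converged drag value from three grid levels; the refinement ratio follows the stated factor of roughly 3.4 in grid size, and the drag coefficients are arbitrary placeholders, not DPW-IV results.

        import numpy as np

        # Representative cell sizes for coarse/medium/fine grids whose sizes grow by ~3.4x
        # per level, so the linear refinement ratio is 3.4**(1/3).
        h = np.array([3.4 ** (2.0 / 3.0), 3.4 ** (1.0 / 3.0), 1.0])
        cd = np.array([0.02710, 0.02665, 0.02648])   # placeholder drag values, not workshop data

        r = h[0] / h[1]                                              # constant refinement ratio
        p = np.log((cd[0] - cd[1]) / (cd[1] - cd[2])) / np.log(r)    # observed order of accuracy
        cd_extrap = cd[2] + (cd[2] - cd[1]) / (r ** p - 1.0)         # grid-converged estimate
        print(p, cd_extrap)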

  9. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.

  10. 3D active shape models of human brain structures: application to patient-specific mesh generation

    NASA Astrophysics Data System (ADS)

    Ravikumar, Nishant; Castro-Mateos, Isaac; Pozo, Jose M.; Frangi, Alejandro F.; Taylor, Zeike A.

    2015-03-01

    The use of biomechanics-based numerical simulations has attracted growing interest in recent years for computer-aided diagnosis and treatment planning. With this in mind, a method for automatic mesh generation of brain structures of interest, using statistical models of shape (SSM) and appearance (SAM), for personalised computational modelling is presented. SSMs are constructed as point distribution models (PDMs) while SAMs are trained using intensity profiles sampled from a training set of T1-weighted magnetic resonance images. The brain structures of interest are the cortical surface (cerebrum, cerebellum & brainstem), the lateral ventricles and the falx cerebri membrane. Two methods for establishing correspondences across the training set of shapes are investigated and compared (based on SSM quality): the Coherent Point Drift (CPD) point-set registration method and a B-spline mesh-to-mesh registration method. The MNI-305 (Montreal Neurological Institute) average brain atlas is used to generate the template mesh, which is deformed and registered to each training case to establish correspondence over the training set of shapes. T1-weighted MR images of 18 healthy patients form the training set used to generate the SSM and SAM. Both model training and model fitting are performed over multiple brain structures simultaneously. Compactness and generalisation errors of the B-spline SSM and the CPD SSM are evaluated and used to quantitatively compare the SSMs. Leave-one-out cross-validation is used to evaluate SSM quality in terms of these measures. The mesh-based SSM is found to generalise better and to be more compact than the CPD-based SSM. The quality of the best-fit model instances from the trained SSMs on test cases is evaluated using the Hausdorff distance (HD) and mean absolute surface distance (MASD) metrics.
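
    The point-distribution-model construction mentioned above amounts to a PCA over corresponded landmark vectors; a minimal sketch is given below. The number of subjects and landmarks, the variance threshold and the random training data are illustrative stand-ins, not the paper's data or pipeline, and correspondence is assumed to have been established beforehand.

        import numpy as np

        def build_pdm(shapes, var_kept=0.98):
            # Point distribution model from corresponded shapes: each row is a flattened
            # (x1, y1, z1, x2, ...) landmark vector, one row per training subject.
            mean = shapes.mean(axis=0)
            U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
            var = s ** 2 / (len(shapes) - 1)
            m = int(np.searchsorted(np.cumsum(var) / var.sum(), var_kept)) + 1
            return mean, Vt[:m].T, var[:m]          # mean shape, modes, mode variances

        def sample_shape(mean, modes, var, b):
            # New shape instance from mode weights b (in units of standard deviations).
            return mean + modes @ (b * np.sqrt(var))

        rng = np.random.default_rng(3)
        train = rng.normal(size=(18, 3000))          # random stand-in: 18 subjects, 1000 3-D points
        mean, modes, var = build_pdm(train)
        inst = sample_shape(mean, modes, var, b=np.zeros(len(var)))   # the mean shape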

  11. 3D cinema to 3DTV content adaptation

    NASA Astrophysics Data System (ADS)

    Yasakethu, L.; Blondé, L.; Doyen, D.; Huynh-Thu, Q.

    2012-03-01

    3D cinema and 3DTV have grown in popularity in recent years. Filmmakers have a significant opportunity in front of them given the recent success of 3D films. In this paper we investigate whether this opportunity could be extended to the home in a meaningful way. "3D" perceived from viewing stereoscopic content depends on the viewing geometry. This implies that the stereoscopic-3D content should be captured for a specific viewing geometry in order to provide a satisfactory 3D experience. However, although it would be possible, it is clearly not viable, to produce and transmit multiple streams of the same content for different screen sizes. In this study to solve the above problem, we analyze the performance of six different disparity-based transformation techniques, which could be used for cinema-to-3DTV content conversion. Subjective tests are performed to evaluate the effectiveness of the algorithms in terms of depth effect, visual comfort and overall 3D quality. The resultant 3DTV experience is also compared to that of cinema. We show that by applying the proper transformation technique on the content originally captured for cinema, it is possible to enhance the 3DTV experience. The selection of the appropriate transformation is highly dependent on the content characteristics.

  12. Adaptive Mesh and Algorithm Refinement Using Direct Simulation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Garcia, Alejandro L.; Bell, John B.; Crutchfield, William Y.; Alder, Berni J.

    1999-09-01

    Adaptive mesh and algorithm refinement (AMAR) embeds a particle method within a continuum method at the finest level of an adaptive mesh refinement (AMR) hierarchy. The coupling between the particle region and the overlaying continuum grid is algorithmically equivalent to that between the fine and coarse levels of AMR. Direct simulation Monte Carlo (DSMC) is used as the particle algorithm embedded within a Godunov-type compressible Navier-Stokes solver. Several examples are presented and compared with purely continuum calculations.

  13. Parallel adaptive mesh refinement for electronic structure calculations

    SciTech Connect

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1996-12-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradients with multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.

  14. Meshing Preprocessor for the Mesoscopic 3D Finite Element Simulation of 2D and Interlock Fabric Deformation

    NASA Astrophysics Data System (ADS)

    Wendling, A.; Daniel, J. L.; Hivet, G.; Vidal-Sallé, E.; Boisse, P.

    2015-12-01

    Numerical simulation is a powerful tool to predict the mechanical behavior and the feasibility of composite parts. Among the available numerical approaches, as far as woven reinforced composites are concerned, 3D finite element simulation at the mesoscopic scale leads to a good compromise between realism and complexity. At this scale, the fibrous reinforcement is modeled by an interlacement of yarns, assumed to be homogeneous, that have to be accurately represented. Among the numerous issues raised by these simulations, the first consists in providing a representative meshed geometrical model of the unit cell at the mesoscopic scale. The second consists in enabling fast data input in the finite element software (contact definitions, boundary conditions, element reorientation, etc.) so as to obtain results within a reasonable time. Building on a previously developed parameterized 3D CAD modeling tool for unit cells of dry fabrics, this paper presents an efficient strategy that permits automated meshing of the models with 3D hexahedral elements and accelerates the simulation data input by several orders of magnitude. Finally, the overall modeling strategy is illustrated by examples of finite element simulation of the mechanical behavior of fabrics.

  15. Compression of 3D Point Clouds Using a Region-Adaptive Hierarchical Transform.

    PubMed

    De Queiroz, Ricardo; Chou, Philip A

    2016-06-01

    In free-viewpoint video, there is a recent trend to represent scene objects as solids rather than using multiple depth maps. Point clouds have been used in computer graphics for a long time, and with the recent possibility of real-time capturing and rendering, point clouds have been favored over meshes in order to save computation. Each point in the cloud is associated with its 3D position and its color. We devise a method to compress the colors in point clouds which is based on a hierarchical transform and arithmetic coding. The transform is a hierarchical sub-band transform that resembles an adaptive variation of a Haar wavelet. The arithmetic encoding of the coefficients assumes Laplace distributions, one per sub-band. The Laplace parameter for each distribution is transmitted to the decoder using a custom method. The geometry of the point cloud is encoded using the well-established octree scanning. Results show that the proposed solution performs comparably to the current state of the art, on many occasions outperforming it, while being much more computationally efficient. We believe this work represents the state of the art in intra-frame compression of point clouds for real-time 3D video.
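
    As a hedged illustration (the paper's exact transform and entropy coder are not reproduced here), a region-adaptive hierarchical transform of this kind is built from weighted two-point Haar steps applied wherever two occupied octree children merge:

        import numpy as np

        def haar_merge(c1, w1, c2, w2):
            """Weighted Haar butterfly for two occupied child nodes.

            c1, c2 : colour (coefficient) vectors of the children
            w1, w2 : number of original points behind each child
            Returns the low-pass coefficient carried up to the parent, the
            high-pass coefficient handed to the entropy coder, and the new weight.
            """
            a, b = np.sqrt(w1), np.sqrt(w2)
            norm = np.sqrt(w1 + w2)
            low = (a * np.asarray(c1) + b * np.asarray(c2)) / norm
            high = (-b * np.asarray(c1) + a * np.asarray(c2)) / norm
            return low, high, w1 + w2

    Repeating this bottom-up over the octree yields one DC coefficient plus detail coefficients whose distributions are close to Laplacian, which is what the arithmetic coder described above assumes.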

  16. 3D reconstruction method from biplanar radiography using non-stereocorresponding points and elastic deformable meshes.

    PubMed

    Mitton, D; Landry, C; Véron, S; Skalli, W; Lavaste, F; De Guise, J A

    2000-03-01

    Standard 3D reconstruction of bones using stereoradiography is limited by the number of anatomical landmarks visible in more than one projection. The proposed technique enables the 3D reconstruction of additional landmarks that can be identified in only one of the radiographs. The principle of this method is the deformation of an elastic object that respects both stereocorresponding and non-stereocorresponding observations available in the different projections. The technique is based on the principle that any non-stereocorresponding point belongs to a line joining the X-ray source and the projection of the point in one view. The aim is to determine the 3D position of these points on their line of projection when submitted to geometrical and topological constraints. The technique is used to obtain the 3D geometry of 18 cadaveric upper cervical vertebrae. The reconstructed geometry is compared with direct measurements using a magnetic digitiser. The precision, determined as the point-to-surface distance between the reconstruction obtained with this technique and the reference measurements, is about 1 mm, depending on the vertebra studied. The comparison results indicate that the obtained reconstruction is close to the actual vertebral geometry. This method can therefore be proposed to obtain the 3D geometry of vertebrae.
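
    The geometric constraint the record relies on, that a landmark visible in only one radiograph lies on the ray joining the X-ray source and its single projection, can be written directly; a minimal sketch (names are illustrative):

        import numpy as np

        def point_on_projection_ray(source, projection, t):
            """3D position of a non-stereocorresponding landmark.

            source     : 3D position of the X-ray source
            projection : 3D position of the landmark's projection on the detector
            t          : scalar parameter along the ray; in the method summarised
                         above it is determined by deforming the elastic model under
                         geometric and topological constraints.
            """
            source = np.asarray(source, float)
            projection = np.asarray(projection, float)
            return source + t * (projection - source)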

  17. Vehicle Surveillance with a Generic, Adaptive, 3D Vehicle Model.

    PubMed

    Leotta, Matthew J; Mundy, Joseph L

    2011-07-01

    In automated surveillance, one is often interested in tracking road vehicles, measuring their shape in 3D world space, and determining vehicle classification. To address these tasks simultaneously, an effective approach is the constrained alignment of a prior model of 3D vehicle shape to images. Previous 3D vehicle models are either generic but overly simple or rigid and overly complex. Rigid models represent exactly one vehicle design, so a large collection is needed. A single generic model can deform to a wide variety of shapes, but those shapes have been far too primitive. This paper uses a generic 3D vehicle model that deforms to match a wide variety of passenger vehicles. It is adjustable in complexity between the two extremes. The model is aligned to images by predicting and matching image intensity edges. Novel algorithms are presented for fitting models to multiple still images and simultaneous tracking while estimating shape in video. Experiments compare the proposed model to simple generic models in accuracy and reliability of 3D shape recovery from images and tracking in video. Standard techniques for classification are also used to compare the models. The proposed model outperforms the existing simple models at each task.

  18. 3D High Resolution Mesh Deformation Based on Multi Library Wavelet Neural Network Architecture

    NASA Astrophysics Data System (ADS)

    Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Amar, Chokri Ben

    2016-12-01

    This paper presents a novel technique for large Laplacian boundary deformations using estimated rotations. The proposed method is based on a Multi Library Wavelet Neural Network structure founded on several mother wavelet families (MLWNN). The objective is to align mesh features and minimize distortion with respect to a fixed feature, minimizing the sum of the distances between all corresponding vertices. The new mesh deformation method works in the domain of a Region of Interest (ROI). Our approach computes the deformed ROI, then updates and optimizes it to align mesh features based on the MLWNN and a spherical parameterization configuration. This structure has the advantage of constructing the network from several mother wavelets, so that high-dimensional problems can be solved using the mother wavelet that models the signal best. Simulation tests confirm the robustness and speed of the approach. The mean-square error and the deformation ratio are low compared to other works from the state of the art. Our approach minimizes distortion with respect to fixed features, yielding a well-reconstructed object.

  19. Arbitrary-level hanging nodes for adaptive hp-FEM approximations in 3D

    SciTech Connect

    Pavel Kus; Pavel Solin; David Andrs

    2014-11-01

    In this paper we discuss constrained approximation with arbitrary-level hanging nodes in adaptive higher-order finite element methods (hp-FEM) for three-dimensional problems. This technique enables the use of highly irregular meshes and greatly simplifies the design of adaptive algorithms, as it prevents refinements from propagating recursively through the finite element mesh. The technique makes it possible to design efficient adaptive algorithms for purely hexahedral meshes. We present a detailed mathematical description of the method and illustrate it with numerical examples.
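
    As a hedged, lowest-order illustration of the constrained approximation described (the paper treats arbitrary-level, higher-order constraints), a hanging node is eliminated by tying its degree of freedom to the unconstrained parent nodes of the refined edge or face:

        import numpy as np

        def hanging_node_constraint(n_dofs, hanging, parents, weights):
            """Constraint matrix C that eliminates one hanging DOF.

            For linear elements a hanging node at an edge midpoint uses
            parents = (i, j) and weights = (0.5, 0.5). Assembling the reduced
            system as C.T @ K @ C and expanding with u = C @ u_free keeps the
            approximation continuous across the irregular interface.
            """
            free = [i for i in range(n_dofs) if i != hanging]
            C = np.zeros((n_dofs, len(free)))
            for col, i in enumerate(free):
                C[i, col] = 1.0
            for p, w in zip(parents, weights):
                C[hanging, free.index(p)] = w
            return C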

  20. Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes

    SciTech Connect

    Parsons, I D; Solberg, J M

    2006-02-03

    This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.

  1. A Geometric Processing Workflow for Transforming Reality-Based 3D Models in Volumetric Meshes Suitable for FEA

    NASA Astrophysics Data System (ADS)

    Gonizzi Barsanti, S.; Guidi, G.

    2017-02-01

    Conservation of Cultural Heritage is a key issue, and structural changes and damage can influence the mechanical behaviour of artefacts and buildings. Finite Element Methods (FEM) are widely used for mechanical analysis and the modelling of stress behaviour. The typical workflow involves the use of CAD 3D models made of Non-Uniform Rational B-Splines (NURBS) surfaces, representing the ideal shape of the object to be simulated. Nowadays, 3D documentation of CH has been widely developed through reality-based approaches, but the resulting models are not suitable for direct use in FEA: the mesh has to be converted to a volumetric one, and its density has to be reduced since the computational complexity of a FEA grows exponentially with the number of nodes. The focus of this paper is to present a new method aiming to generate the most accurate 3D representation of a real artefact from highly accurate 3D digital models derived from reality-based techniques, maintaining the accuracy of the high-resolution polygonal models in the solid ones. The proposed approach is based on a judicious use of retopology procedures and a transformation of this model into a mathematical one made of NURBS surfaces, suitable for being processed by the volumetric meshers typically embedded in standard FEM packages. The strong simplification with little loss of consistency made possible by the retopology step is used to maintain as much coherence as possible between the original acquired mesh and the simplified model, creating in the meantime a topology that is more favourable for the automatic NURBS conversion.

  2. Parallel adaptive mesh refinement techniques for plasticity problems

    SciTech Connect

    Barry, W.J.; Jones, M.T.; Plassmann, P.E.

    1997-12-31

    The accurate modeling of the nonlinear properties of materials can be computationally expensive. Parallel computing offers an attractive way to solve such problems; however, the efficient use of these systems requires the vertical integration of a number of very different software components. In this work we explore the solution of two- and three-dimensional, small-strain plasticity problems. We consider a finite-element formulation of the problem with adaptive refinement of an unstructured mesh to accurately model plastic transition zones. We present a framework for the parallel implementation of such complex algorithms. This framework, using libraries from the SUMAA3d project, allows a user to build a parallel finite-element application without writing any parallel code. To demonstrate the effectiveness of this approach on widely varying parallel architectures, we present experimental results from an IBM SP parallel computer and an ATM-connected network of Sun UltraSparc workstations. The results detail the parallel performance of the computational phases of the application while the material is incrementally loaded.

  3. Parallel adaptive mesh refinement techniques for plasticity problems

    NASA Technical Reports Server (NTRS)

    Barry, W. J.; Jones, M. T.; Plassmann, P. E.

    1997-01-01

    The accurate modeling of the nonlinear properties of materials can be computationally expensive. Parallel computing offers an attractive way to solve such problems; however, the efficient use of these systems requires the vertical integration of a number of very different software components. In this work we explore the solution of two- and three-dimensional, small-strain plasticity problems. We consider a finite-element formulation of the problem with adaptive refinement of an unstructured mesh to accurately model plastic transition zones. We present a framework for the parallel implementation of such complex algorithms. This framework, using libraries from the SUMAA3d project, allows a user to build a parallel finite-element application without writing any parallel code. To demonstrate the effectiveness of this approach on widely varying parallel architectures, we present experimental results from an IBM SP parallel computer and an ATM-connected network of Sun UltraSparc workstations. The results detail the parallel performance of the computational phases of the application while the material is incrementally loaded.

  4. Parallel Implementation of an Adaptive Scheme for 3D Unstructured Grids on the SP2

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Strawn, Roger C.

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10% of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  5. Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2

    NASA Technical Reports Server (NTRS)

    Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.

  6. Adaptive upscaling with the dual mesh method

    SciTech Connect

    Guerillot, D.; Verdiere, S.

    1997-08-01

    The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity of considering different average relative permeability values depending on the direction in space. Moreover, these values can differ for the same average saturation. This proves that an a priori upscaling cannot be the answer, even in homogeneous cases, because of the "dynamical heterogeneity" created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to a heterogeneous medium and to an actual field case in South America.

  7. Parallel adaptation of general three-dimensional hybrid meshes

    SciTech Connect

    Kavouklis, Christos; Kallinderis, Yannis

    2010-05-01

    A new parallel dynamic mesh adaptation and load balancing algorithm for general hybrid grids has been developed. The meshes considered in this work are composed of four kinds of elements: tetrahedra, prisms, hexahedra and pyramids, which poses a challenge to parallel mesh adaptation. The additional complexity imposed by the presence of multiple element types especially affects data migration, updates of local data structures and interpartition data structures. Efficient partitioning of hybrid meshes has been accomplished by transforming them into suitable graphs and using serial graph partitioning algorithms. Communication among processors is based on the faces of the interpartition boundary, and the termination detection algorithm of Dijkstra is employed to ensure proper flagging of edges for refinement. An inexpensive dynamic load balancing strategy is introduced to redistribute the work load among processors after adaptation. In particular, only the initial coarse mesh, with proper weighting, is balanced, which yields savings in computation time and a relatively simple implementation of mesh quality preservation rules, while facilitating coarsening of refined elements. Special algorithms are employed for (i) data migration and dynamic updates of the local data structures, (ii) determination of the resulting interpartition boundary and (iii) identification of the communication pattern of processors. Several representative applications are included to evaluate the method.

  8. 3-D adaptive grid Navier-Stokes rocket plume calculations

    NASA Astrophysics Data System (ADS)

    Holcomb, J. Eric

    1991-01-01

    Three-dimensional adaptive-grid full Navier-Stokes calculations performed for the base region and plume of the Minuteman first stage and a simplified version of the Titan first stage are used to demonstrate the applicability of the Navier-Stokes flow solver, EAGLE adaptive grid generator, and k-epsilon turbulence model to rocket plume flowfields. The calculations include realistic exhaust gas thermodynamic properties, with frozen chemistry.

  9. MHD simulations on an unstructured mesh

    SciTech Connect

    Strauss, H.R.; Park, W.; Belova, E.; Fu, G.Y.; Longcope, D.W.; Sugiyama, L.E.

    1998-12-31

    Two reasons for using an unstructured computational mesh are adaptivity, and alignment with arbitrarily shaped boundaries. Two codes which use finite element discretization on an unstructured mesh are described. FEM3D solves 2D and 3D RMHD using an adaptive grid. MH3D++, which incorporates methods of FEM3D into the MH3D generalized MHD code, can be used with shaped boundaries, which might be 3D.

  10. Registration of 3D point clouds and meshes: a survey from rigid to nonrigid.

    PubMed

    Tam, Gary K L; Cheng, Zhi-Quan; Lai, Yu-Kun; Langbein, Frank C; Liu, Yonghuai; Marshall, David; Martin, Ralph R; Sun, Xian-Fang; Rosin, Paul L

    2013-07-01

    Three-dimensional surface registration transforms multiple three-dimensional data sets into the same coordinate system so as to align overlapping components of these sets. Recent surveys have covered different aspects of either rigid or nonrigid registration, but seldom discuss them as a whole. Our study serves two purposes: 1) To give a comprehensive survey of both types of registration, focusing on three-dimensional point clouds and meshes and 2) to provide a better understanding of registration from the perspective of data fitting. Registration is closely related to data fitting in which it comprises three core interwoven components: model selection, correspondences and constraints, and optimization. Study of these components 1) provides a basis for comparison of the novelties of different techniques, 2) reveals the similarity of rigid and nonrigid registration in terms of problem representations, and 3) shows how overfitting arises in nonrigid registration and the reasons for increasing interest in intrinsic techniques. We further summarize some practical issues of registration which include initializations and evaluations, and discuss some of our own observations, insights and foreseeable research trends.

  11. A goal-oriented adaptive finite-element approach for plane wave 3-D electromagnetic modelling

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Kalscheuer, Thomas; Greenhalgh, Stewart; Maurer, Hansruedi

    2013-08-01

    We have developed a novel goal-oriented adaptive mesh refinement approach for finite-element methods to model plane wave electromagnetic (EM) fields in 3-D earth models based on the electric field differential equation. To handle complicated models of arbitrary conductivity, magnetic permeability and dielectric permittivity involving curved boundaries and surface topography, we employ an unstructured grid approach. The electric field is approximated by linear curl-conforming shape functions which guarantee the divergence-free condition of the electric field within each tetrahedron and continuity of the tangential component of the electric field across the interior boundaries. Based on the non-zero residuals of the approximated electric field and the yet to be satisfied boundary conditions of continuity of both the normal component of the total current density and the tangential component of the magnetic field strength across the interior interfaces, three a posteriori error estimators are proposed as a means to drive the goal-oriented adaptive refinement procedure. The first a posteriori error estimator relies on a combination of the residual of the electric field, the discontinuity of the normal component of the total current density and the discontinuity of the tangential component of the magnetic field strength across the interior faces shared by tetrahedra. The second a posteriori error estimator is expressed in terms of the discontinuity of the normal component of the total current density (conduction plus displacement current). The discontinuity of the tangential component of the magnetic field forms the third a posteriori error estimator. Analytical solutions for magnetotelluric (MT) and radiomagnetotelluric (RMT) fields impinging on a homogeneous half-space model are used to test the performance of the newly developed goal-oriented algorithms using the above three a posteriori error estimators. A trapezoidal topographical model, using normally incident EM waves
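
    A hedged illustration (not the paper's exact definition): the second estimator, driven by the jump of the normal component of the total current density across an interior face f shared by two tetrahedra, could be written as

        \eta_f^{(2)} \;=\; \Big\| \big[\!\big[\, \hat{\mathbf{n}} \cdot \big( \sigma \mathbf{E} + \mathrm{i}\omega\varepsilon\,\mathbf{E} \big) \,\big]\!\big] \Big\|_{L^2(f)},

    where the double bracket denotes the jump across f; the first estimator additionally combines the element residual of the electric field with the jump of the tangential magnetic field, and the third uses the tangential magnetic-field jump alone.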

  12. Multigrid solution of internal flows using unstructured solution adaptive meshes

    NASA Technical Reports Server (NTRS)

    Smith, Wayne A.; Blake, Kenneth R.

    1992-01-01

    This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.

  13. Biologic response of inguinal hernia prosthetics: a comparative study of conventional static meshes versus 3D dynamic implants.

    PubMed

    Amato, Giuseppe; Romano, Giorgio; Agrusa, Antonino; Marasa, Salvatore; Cocorullo, Gianfranco; Gulotta, Gaspare; Goetze, Thorsten; Puleio, Roberto

    2015-01-01

    Despite improvements in prosthetics and surgical techniques, the rate of complications following inguinal hernia repair remains high. Among these, discomfort and chronic pain have become a source of increasing concern among surgeons. Poor quality of tissue ingrowth, such as thin scar plates or shrinking scars (typical results with conventional static implants and plugs), may contribute to these adverse events. Recently, a new type of 3D dynamically responsive implant was introduced to the market. This device, designed to be placed fixation-free, seems to induce ingrowth of viable and structured tissue instead of regressive fibrotic scarring. To elucidate the differences in biologic response between the conventional static meshes and this 3D dynamically responsive implant, a histological comparison was planned. The aim of this study was to determine the quality of tissue incorporation in both types of implants excised after short, medium, and long periods post-implantation. The results showed large differences in the biologic responses between the two implant types. Histologically, the 3D dynamic implant showed development of tissue elements more similar to natural abdominal wall structures, such as the ingrowth of loose and well-hydrated connective tissue, well-formed vascular structures, elastic fibers, and mature nerves, with negligible or absent inflammatory response. All these characteristics were completely absent in the conventional static implants, where a persistent inflammatory reaction was associated with thin, hardened, and shrunken fibrotic scar formation. Consequently, as herniation is a degenerative process, the 3D dynamic implants, which induce regeneration of the typical groin components, seem to address its pathogenesis.

  14. Parallelized 3D CSEM modeling using edge-based finite element with total field formulation and unstructured mesh

    NASA Astrophysics Data System (ADS)

    Cai, Hongzhu; Hu, Xiangyun; Li, Jianhui; Endo, Masashi; Xiong, Bin

    2017-02-01

    We solve the 3D controlled-source electromagnetic (CSEM) problem using the edge-based finite element method. The modeling domain is discretized using an unstructured tetrahedral mesh. We adopt the total field formulation for the quasi-static variant of Maxwell's equations, which saves the computational cost of calculating the primary field. We adopt a new boundary condition which approximates the total field on the boundary by the primary field corresponding to the layered-earth approximation of the complicated conductivity model. The primary field on the modeling boundary is calculated using a fast Hankel transform. By using this new type of boundary condition, the computational cost can be reduced significantly and the modeling accuracy can be improved. The conductivity is allowed to be anisotropic. We solve the finite element system of equations using a parallelized multifrontal solver, which works efficiently for multiple sources and large-scale electromagnetic modeling.

  15. 3D Moving-Mesh Simulations of Galactic Center Cloud G2

    NASA Astrophysics Data System (ADS)

    Wilson, Julia; Fragile, P. C.; Anninos, P.; Murray, S. D.

    2013-01-01

    Using three-dimensional, moving-mesh simulations, we investigate the future evolution of the recently discovered gas cloud G2 traveling through the galactic center. We consider the case of a spherical cloud initially in pressure equilibrium with the background. Our suite of simulations explores the following parameters: the equation of state, radial profiles of the background gas, and start times for the evolution. Our primary focus is on how the fate of this cloud will affect the future activity of Sgr A*. From our simulations we expect an average feeding rate in the range of 5-19 × 10^-8 M⊙ yr^-1 beginning in 2013 and lasting for at least 7 years (our simulations stop in year 2020). The accretion varies by less than a factor of three on timescales ≤ 1 month, and shows no more than a factor of 10 difference between the maximum and minimum observed rates within any given model. These rates are comparable to the current estimated accretion rate in the immediate vicinity of Sgr A*, although they represent only a small (≤ 5%) increase over the current expected feeding rate at the effective inner boundary of our simulations (r = 750 R_S ≈ 10^15 cm), where R_S is the Schwarzschild radius of the black hole. Therefore, the break up of cloud G2 may have only a minimal effect on the brightness and variability of Sgr A* over the next decade. This is because current models of the galactic center predict that most of the gas will be caught up in outflows. However, if the accreted G2 material can remain cold, it may not mix well with the hot, diffuse background gas, and instead accrete efficiently onto Sgr A*. Further observations of G2 will give us an unprecedented opportunity to test this idea. The break up of the cloud itself may also be observable. By tracking the amount of cloud energy that is dissipated during our simulations, we are able to get a rough estimate of the luminosity associated with its tidal disruption; we find values of a few 10^36 erg s^-1.

  16. 3-D Adaptive Sparsity Based Image Compression with Applications to Optical Coherence Tomography

    PubMed Central

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A.; Farsiu, Sina

    2015-01-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  17. Adaptive mesh refinement for shocks and material interfaces

    SciTech Connect

    Dai, William Wenlong

    2010-01-01

    There are three kinds of adaptive mesh refinement (AMR) in structured meshes. Block-based AMR sometimes over-refines meshes. Cell-based AMR treats the mesh cell by cell and thus loses the advantage of the nature of structured meshes. Patch-based AMR is intended to combine the advantages of block- and cell-based AMR, i.e., the nature of structured meshes and sharp regions of refinement. But patch-based AMR has its own difficulties; for example, it typically cannot preserve the symmetries of physics problems. In this paper, we present an approach for patch-based AMR for hydrodynamics simulations. The approach consists of clustering, symmetry preserving, mesh continuity, flux correction, communications, management of patches, and load balance. The special features of this patch-based AMR include symmetry preserving, efficiency of refinement across shock fronts and material interfaces, a special implementation of flux correction, and patch management in parallel computing environments. To demonstrate the capability of the AMR framework, we show both two- and three-dimensional hydrodynamics simulations with many levels of refinement.

  18. Optimal 3D Viewing with Adaptive Stereo Displays for Advanced Telemanipulation

    NASA Technical Reports Server (NTRS)

    Lee, S.; Lakshmanan, S.; Ro, S.; Park, J.; Lee, C.

    1996-01-01

    A method of optimal 3D viewing based on adaptive displays of stereo images is presented for advanced telemanipulation. The method provides the viewer with the capability of accurately observing a virtual 3D object or local scene of his/her choice with minimum distortion.

  19. Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.

    2015-12-01

    Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume - finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.
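
    As a hedged illustration of the Galerkin projection used for mesh-to-mesh interpolation (notation is mine, not the paper's), the donor field u_old is mapped to the adapted mesh by requiring

        \int_\Omega u_{\mathrm{new}}\, v_i \,\mathrm{d}x \;=\; \int_\Omega u_{\mathrm{old}}\, v_i \,\mathrm{d}x \qquad \text{for every basis function } v_i \text{ of the new mesh},

    which is equivalent to solving the new-mesh mass-matrix system M u_new = b. Because the basis functions sum to one, the integral of the field is conserved by construction, unlike with consistent interpolation.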

  20. Advanced 3D mesh manipulation in stereolithographic files and post-print processing for the manufacturing of patient-specific vascular flow phantoms

    NASA Astrophysics Data System (ADS)

    O'Hara, Ryan P.; Chand, Arpita; Vidiyala, Sowmya; Arechavala, Stacie M.; Mitsouras, Dimitrios; Rudin, Stephen; Ionita, Ciprian N.

    2016-03-01

    Complex vascular anatomies can cause the failure of image-guided endovascular procedures. 3D printed patient-specific vascular phantoms provide clinicians and medical device companies the ability to preemptively plan surgical treatments, test the likelihood of device success, and determine potential operative setbacks. This research aims to present advanced mesh manipulation techniques for stereolithographic (STL) files segmented from medical imaging, together with post-print surface optimization to match physiological vascular flow resistance. For phantom design, we developed three mesh manipulation techniques. The first method allows outlet 3D mesh manipulations to merge superfluous vessels into a single junction, decreasing the number of flow outlets and making it feasible to include smaller vessels. Next, we introduced Boolean operations to eliminate the need to manually merge mesh layers and to eliminate the errors from mesh self-intersections that previously occurred. Finally, we optimized support addition to preserve the patient's anatomical geometry. For post-print surface optimization, we investigated various solutions and methods to remove support material and smooth the inner vessel surface. Solutions of chloroform, alcohol and sodium hydroxide were used to process various phantoms, and hydraulic resistance was measured and compared with values reported in the literature. The new mesh manipulation methods decrease the phantom design time by 30-80% and allow rapid development of accurate vascular models. We have created 3D printed vascular models with vessel diameters less than 0.5 mm. The methods presented in this work could lead to shorter design times for patient-specific phantoms and better physiological simulations.

  1. PLUM: Parallel Load Balancing for Adaptive Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak; Saini, Subhash (Technical Monitor)

    1998-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method called PLUM to dynamically balance the processor workloads with a global view. This paper presents the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. A data redistribution model is also presented that predicts the remapping cost on the SP2. This model is required to determine whether the gain from a balanced workload distribution offsets the cost of data movement. Results presented in this paper demonstrate that PLUM is an effective dynamic load balancing strategy which remains viable on a large number of processors.

  2. Adjoint Methods for Guiding Adaptive Mesh Refinement in Tsunami Modeling

    NASA Astrophysics Data System (ADS)

    Davis, B. N.; LeVeque, R. J.

    2016-12-01

    One difficulty in developing numerical methods for tsunami modeling is the fact that solutions contain time-varying regions where much higher resolution is required than elsewhere in the domain, particularly when tracking a tsunami propagating across the ocean. The open source GeoClaw software deals with this issue by using block-structured adaptive mesh refinement to selectively refine around propagating waves. For problems where only a target area of the total solution is of interest (e.g., one coastal community), a method that allows identifying and refining the grid only in regions that influence this target area would significantly reduce the computational cost of finding a solution. In this work, we show that solving the time-dependent adjoint equation and using a suitable inner product with the forward solution allows more precise refinement of the relevant waves. We present the adjoint methodology first in one space dimension for illustration and in a broad context since it could also be used in other adaptive software, and potentially for other tsunami applications beyond adaptive refinement. We then show how this adjoint method has been integrated into the adaptive mesh refinement strategy of the open source GeoClaw software and present tsunami modeling results showing that the accuracy of the solution is maintained and the computational time required is significantly reduced through the integration of the adjoint method into adaptive mesh refinement.
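
    Schematically (a hedged sketch, not the exact GeoClaw criterion), a grid cell K is flagged for refinement at time t when the inner product of the forward solution q and the adjoint solution q-hat, propagated backwards in time from the target area, is large:

        \left| \int_K \hat{q}(x, t) \cdot q(x, t)\, \mathrm{d}x \right| \;>\; \mathrm{tol},

    so that only those waves that will later influence the target location trigger additional resolution.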

  3. Adaptive mesh strategies for the spectral element method

    NASA Technical Reports Server (NTRS)

    Mavriplis, Catherine

    1992-01-01

    An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.

  4. Parallel Block Structured Adaptive Mesh Refinement on Graphics Processing Units

    SciTech Connect

    Beckingsale, D. A.; Gaudin, W. P.; Hornung, R. D.; Gunney, B. T.; Gamblin, T.; Herdman, J. A.; Jarvis, S. A.

    2014-11-17

    Block-structured adaptive mesh refinement is a technique that can be used when solving partial differential equations to reduce the number of zones necessary to achieve the required accuracy in areas of interest. These areas (shock fronts, material interfaces, etc.) are recursively covered with finer mesh patches that are grouped into a hierarchy of refinement levels. Despite the potential for large savings in computational requirements and memory usage without a corresponding reduction in accuracy, AMR adds overhead in managing the mesh hierarchy, adding complex communication and data movement requirements to a simulation. In this paper, we describe the design and implementation of a native GPU-based AMR library, including: the classes used to manage data on a mesh patch, the routines used for transferring data between GPUs on different nodes, and the data-parallel operators developed to coarsen and refine mesh data. We validate the performance and accuracy of our implementation using three test problems and two architectures: an eight-node cluster, and over four thousand nodes of Oak Ridge National Laboratory’s Titan supercomputer. Our GPU-based AMR hydrodynamics code performs up to 4.87× faster than the CPU-based implementation, and has been scaled to over four thousand GPUs using a combination of MPI and CUDA.
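
    For orientation (illustrative only; the library described implements these as GPU kernels), the data-parallel coarsen and refine operators for a cell-centred 2D patch reduce to a conservative average and a piecewise-constant copy:

        import numpy as np

        def coarsen(fine):
            """2x restriction: each coarse cell is the mean of the 2x2 fine cells it covers."""
            return 0.25 * (fine[0::2, 0::2] + fine[1::2, 0::2]
                           + fine[0::2, 1::2] + fine[1::2, 1::2])

        def refine(coarse):
            """2x prolongation: copy each coarse value into its 2x2 fine cells
            (production AMR codes typically use a limited linear reconstruction instead)."""
            return np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)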

  5. GENSURF: A mesh generator for 3D finite element analysis of surface and corner cracks in finite thickness plates subjected to mode-1 loadings

    NASA Technical Reports Server (NTRS)

    Raju, I. S.

    1992-01-01

    A computer program that generates three-dimensional (3D) finite element models for cracked 3D solids was written. This computer program, gensurf, uses minimal input data to generate 3D finite element models for isotropic solids with elliptic or part-elliptic cracks. These models can be used with a 3D finite element program called surf3d. This report documents this mesh generator. In this manual the capabilities, limitations, and organization of gensurf are described. The procedures used to develop 3D finite element models and the input for and the output of gensurf are explained. Several examples are included to illustrate the use of this program. Several input data files are included with this manual so that the users can edit these files to conform to their crack configuration and use them with gensurf.

  6. Anisotropic norm-oriented mesh adaptation for a Poisson problem

    NASA Astrophysics Data System (ADS)

    Brèthes, Gautier; Dervieux, Alain

    2016-10-01

    We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output, with the consequence that, as the mesh is refined (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem in the form of an optimization, over a well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, for the minimum of a norm of the approximation error. The norm is prescribed by the user and the method allows addressing the case of multi-objective adaptation, like, for example in aerodynamics, adapting the mesh for drag, lift and moment in one shot. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.
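
    In hedged schematic form (symbols are mine), the norm-oriented problem searches the set of metrics of prescribed complexity N for the one minimising the user-chosen norm of the approximation error,

        \mathcal{M}^\star \;=\; \operatorname*{arg\,min}_{\mathcal{C}(\mathcal{M}) \,=\, N} \; \big\| u - u_{\mathcal{M}} \big\|_{L^2(\Omega)},

    where u_{\mathcal{M}} is the discrete solution on a mesh conforming to the metric \mathcal{M} and \mathcal{C}(\mathcal{M}) is its complexity; the goal-oriented variant instead minimises the error in a single scalar output of interest.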

  7. Electro-bending characterization of adaptive 3D fiber reinforced plastics based on shape memory alloys

    NASA Astrophysics Data System (ADS)

    Ashir, Moniruddoza; Hahn, Lars; Kluge, Axel; Nocke, Andreas; Cherif, Chokri

    2016-03-01

    The industrial importance of fiber reinforced plastics (FRPs), which are mostly used in niche products, has been growing steadily in recent years. The integration of sensors and actuators in FRP is potentially valuable for creating innovative applications, and therefore the market acceptance of adaptive FRP is increasing. In particular, in the field of highly stressed FRP, structurally integrated systems for continuous monitoring of component parts play an important role. This work focuses on the electro-mechanical characterization of adaptive three-dimensional (3D) FRP with integrated textile-based actuators. Here, a friction-spun hybrid yarn, with a shape memory alloy (SMA) wire as its core, serves as the actuator. Because of the shape memory effect, the SMA hybrid yarn returns to its original shape upon heating, which in turn deforms the adaptive 3D FRP. To investigate the deformation behavior of the adaptive 3D FRP, structural parameters such as the radius of curvature of the adaptive 3D FRP, the fabric type and the number of fabric layers in the composite are varied. Results show that reproducible deformations can be realized with adaptive 3D FRP and that the structural parameters have a significant impact on the deformation capability.

  8. Model-Based Nonrigid Motion Analysis Using Natural Feature Adaptive Mesh

    SciTech Connect

    Zhang, Y.; Goldgof, D.B.; Sarkar, S.; Tsap, L.V.

    2000-04-25

    The success of nonrigid motion analysis using a physical finite element model depends on the mesh that characterizes the object's geometric structure. We suggest a deformable mesh adapted to the natural features of images. The adaptive mesh requires far fewer nodes than the fixed mesh used in our previous work. We demonstrate the higher efficiency of the adaptive mesh in the context of estimating burn scar elasticity relative to normal skin elasticity using an observed 2D image sequence. Our results show that the scar assessment method based on the physical model using a natural-feature adaptive mesh can be applied to images which do not have artificial markers.

  9. Adaptive 3D single-block grids for the computation of viscous flows around wings

    SciTech Connect

    Hagmeijer, R.; Kok, J.C.

    1996-12-31

    A robust algorithm for the adaption of a 3D single-block structured grid suitable for the computation of viscous flows around a wing is presented and demonstrated by application to the ONERA M6 wing. The effects of grid adaption on the flow solution and the resulting accuracy improvements are analyzed. Reynolds number variations are studied.

  10. AN ADAPTIVE PARTICLE-MESH GRAVITY SOLVER FOR ENZO

    SciTech Connect

    Passy, Jean-Claude; Bryan, Greg L.

    2014-11-01

    We describe and implement an adaptive particle-mesh algorithm to solve the Poisson equation for grid-based hydrodynamics codes with nested grids. The algorithm is implemented and extensively tested within the astrophysical code Enzo against the multigrid solver available by default. We find that while both algorithms show similar accuracy for smooth mass distributions, the adaptive particle-mesh algorithm is more accurate for the case of point masses, and is generally less noisy. We also demonstrate that the two-body problem can be solved accurately in a configuration with nested grids. In addition, we discuss the effect of subcycling, and demonstrate that evolving all the levels with the same timestep yields even greater precision.
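
    For orientation (not the Enzo implementation itself), the deposition step at the heart of any particle-mesh gravity solver spreads each particle's mass over nearby cells before Poisson's equation is solved; a minimal 1D cloud-in-cell sketch with periodic boundaries:

        import numpy as np

        def deposit_cic_1d(positions, masses, n_cells, box_size):
            """Cloud-in-cell mass deposition onto a periodic 1D grid.

            Each particle's mass is split between the two nearest cell centres in
            proportion to its distance from them (second-order assignment).
            Returns the density field (mass per unit length).
            """
            density = np.zeros(n_cells)
            dx = box_size / n_cells
            for x, m in zip(positions, masses):
                s = x / dx - 0.5                    # cell centres sit at (i + 0.5) * dx
                i = int(np.floor(s)) % n_cells
                frac = s - np.floor(s)
                density[i] += m * (1.0 - frac) / dx
                density[(i + 1) % n_cells] += m * frac / dx
            return density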

  11. Reconstruction of defects of maxillary sinus wall after removal of a huge odontogenic lesion using prebended 3D titanium-mesh and CAD/CAM technique

    PubMed Central

    2011-01-01

    A 63-year-old male with a huge odontogenic lesion of the sinus maxillaris was treated with computer-assisted surgery. After resection of the odontogenic lesion, the sinus wall was reconstructed with a prebent 3D titanium mesh using the CAD/CAM technique. This work provides a new treatment device for maxillary reconstruction via rapid prototyping procedures. PMID:22070833

  12. Boltzmann Solver with Adaptive Mesh in Velocity Space

    SciTech Connect

    Kolobov, Vladimir I.; Arslanbekov, Robert R.; Frolova, Anna A.

    2011-05-20

    We describe the implementation of direct Boltzmann solver with Adaptive Mesh in Velocity Space (AMVS) using quad/octree data structure. The benefits of the AMVS technique are demonstrated for the charged particle transport in weakly ionized plasmas where the collision integral is linear. We also describe the implementation of AMVS for the nonlinear Boltzmann collision integral. Test computations demonstrate both advantages and deficiencies of the current method for calculations of narrow-kernel distributions.

  13. AMR++: Object-Oriented Parallel Adaptive Mesh Refinement

    SciTech Connect

    Quinlan, D.; Philip, B.

    2000-02-02

    Adaptive mesh refinement (AMR) computations are complicated by their dynamic nature. The development of solvers for realistic applications is complicated by both the complexity of the AMR and the geometry of realistic problem domains. The additional complexity of distributed memory parallelism within such AMR applications most commonly exceeds the level of complexity that can be reasonably maintained with traditional approaches toward software development. This paper will present the details of our object-oriented work on simplifying the use of adaptive mesh refinement in applications with complex geometries, for both serial and distributed memory parallel computation. We will present an independent set of object-oriented abstractions (C++ libraries) well suited to the development of such seemingly intractable scientific computations. As an example of the use of this object-oriented approach, we will present recent results of an application modeling fluid flow in the eye. In this example, the geometry is too complicated for a single curvilinear coordinate grid, and so a set of overlapping curvilinear coordinate grids is used. Adaptive mesh refinement and the grid generation work required to support the refinement process are coupled together in the solution of essentially elliptic equations within this domain. This paper will focus on the management of complexity within the development of the AMR++ library, which forms a part of the Overture object-oriented framework for the solution of partial differential equations within scientific computing.

  14. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    DOE PAGES

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  15. Advances in Patch-Based Adaptive Mesh Refinement Scalability

    SciTech Connect

    Gunney, Brian T.N.; Anderson, Robert W.

    2015-12-18

    Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.

  16. 3D motion adapted gating (3D MAG): a new navigator technique for accelerated acquisition of free breathing navigator gated 3D coronary MR-angiography.

    PubMed

    Hackenbroch, M; Nehrke, K; Gieseke, J; Meyer, C; Tiemann, K; Litt, H; Dewald, O; Naehle, C P; Schild, H; Sommer, T

    2005-08-01

    This study aimed to evaluate the influence of a new navigator technique (3D MAG) on navigator efficiency, total acquisition time, image quality and diagnostic accuracy. Fifty-six patients with suspected coronary artery disease underwent free breathing navigator gated coronary MRA (Intera, Philips Medical Systems, 1.5 T, spatial resolution 0.9×0.9×3 mm³) with and without 3D MAG. Evaluation of both sequences included: 1) navigator scan efficiency, 2) total acquisition time, 3) assessment of image quality and 4) detection of stenoses >50%. Average navigator efficiencies of the LCA and RCA were 43±12% and 42±12% with and 36±16% and 35±16% without 3D MAG (P<0.01). Scan time was reduced from 12 min 7 s without to 8 min 55 s with 3D MAG for the LCA and from 12 min 19 s to 9 min 7 s with 3D MAG for the RCA (P<0.01). The average scores of image quality of the coronary MRAs with and without 3D MAG were 3.5±0.79 and 3.46±0.84 (P>0.05). There was no significant difference in the sensitivity and specificity in the detection of coronary artery stenoses between coronary MRAs with and without 3D MAG (P>0.05). 3D MAG provides accelerated acquisition of navigator gated coronary MRA by about 19% while maintaining image quality and diagnostic accuracy.

  17. Binary 3D image interpolation algorithm based global information and adaptive curves fitting

    NASA Astrophysics Data System (ADS)

    Zhang, Tian-yi; Zhang, Jin-hao; Guan, Xiang-chen; Li, Qiu-ping; He, Meng

    2013-08-01

    Interpolation is a necessary processing step in 3-D reconstruction because of the non-uniform resolution. Conventional interpolation methods simply use two slices to obtain the missing slices between them; when a key slice is missing, those methods may fail to recover it because they employ only local information. Moreover, the surface of a 3D object, especially for medical tissues, may be highly complicated, so a single interpolation can hardly produce a high-quality 3D image. We propose a novel binary 3D image interpolation algorithm. The proposed algorithm takes advantage of global information. It adaptively chooses the best curve from a large set of candidate curves based on the complexity of the surface of the 3D object. The results of this algorithm are compared with other interpolation methods on artificial objects and a real breast cancer tumor to demonstrate its excellent performance.

  18. Block-structured adaptive mesh refinement - theory, implementation and application

    SciTech Connect

    Deiterding, Ralf

    2011-01-01

    Structured adaptive mesh refinement (SAMR) techniques can enable cutting-edge simulations of problems governed by conservation laws. Focusing on the strictly hyperbolic case, these notes explain all algorithmic and mathematical details of a technically relevant implementation tailored for distributed memory computers. An overview of the background of commonly used finite volume discretizations for gas dynamics is included and typical benchmarks to quantify accuracy and performance of the dynamically adaptive code are discussed. Large-scale simulations of shock-induced realistic combustion in non-Cartesian geometry and shock-driven fluid-structure interaction with fully coupled dynamic boundary motion demonstrate the applicability of the discussed techniques for complex scenarios.

  19. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  20. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often not able to run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a pace of 10 MVoxels/sec.

  1. Elliptic Solvers with Adaptive Mesh Refinement on Complex Geometries

    SciTech Connect

    Phillip, B.

    2000-07-24

    Adaptive Mesh Refinement (AMR) is a numerical technique for locally tailoring the resolution of computational grids. Multilevel algorithms for solving elliptic problems on adaptive grids include the Fast Adaptive Composite grid method (FAC) and its parallel variants (AFAC and AFACx). Theory that confirms the independence of the convergence rates of FAC and AFAC on the number of refinement levels exists under certain ellipticity and approximation property conditions. Similar theory needs to be developed for AFACx. The effectiveness of multigrid-based elliptic solvers such as FAC, AFAC, and AFACx on adaptively refined overlapping grids is not clearly understood. Finally, a non-trivial eye model problem will be solved by combining the power of using overlapping grids for complex moving geometries, AMR, and multilevel elliptic solvers.

  2. Adaptive Shape Functions and Internal Mesh Adaptation for Modelling Progressive Failure in Adhesively Bonded Joints

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott; Gries, Thomas; Waas, Anthony M.; Pineda, Evan J.

    2014-01-01

    Enhanced finite elements are elements with an embedded analytical solution that can capture detailed local fields, enabling more efficient, mesh independent finite element analysis. The shape functions are determined based on the analytical model rather than prescribed. This method was applied to adhesively bonded joints to model joint behavior with one element through the thickness. This study demonstrates two methods of maintaining the fidelity of such elements during adhesive non-linearity and cracking without increasing the mesh needed for an accurate solution. The first method uses adaptive shape functions, where the shape functions are recalculated at each load step based on the softening of the adhesive. The second method is internal mesh adaption, where cracking of the adhesive within an element is captured by further discretizing the element internally to represent the partially cracked geometry. By keeping mesh adaptations within an element, a finer mesh can be used during the analysis without affecting the global finite element model mesh. Examples are shown which highlight when each method is most effective in reducing the number of elements needed to capture adhesive nonlinearity and cracking. These methods are validated against analogous finite element models utilizing cohesive zone elements.

  3. A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method

    NASA Astrophysics Data System (ADS)

    Bush, I. J.; Todorov, I. T.; Smith, W.

    2006-09-01

    The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowtz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe software adaptations undertaken to import this functionality and provide a review of its performance.

  4. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. Adaptive mesh refinement (AMR) techniques allow local mesh refinement wherever high resolution is needed, while leaving other regions at relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (i.e. 7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (i.e. 14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results of van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (i.e. 7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems that require high resolution to resolve the evolution of chemical boundaries, such as the entrainment problems [Sleep, 1988].

  5. Grid-Adapted FUN3D Computations for the Second High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Rumsey, C. L.; Park, M. A.

    2014-01-01

    Contributions of the unstructured Reynolds-averaged Navier-Stokes code FUN3D to the 2nd AIAA CFD High Lift Prediction Workshop are described, and detailed comparisons are made with experimental data. Using workshop-supplied grids, results for the clean wing configuration are compared with results from the structured code CFL3D. Using the same turbulence model, both codes compare reasonably well in terms of total forces and moments, and the maximum lift is similarly over-predicted for both codes compared to experiment. By including more representative geometry features such as slat and flap brackets and slat pressure tube bundles, FUN3D captures the general effects of the Reynolds number variation, but under-predicts maximum lift on workshop-supplied grids in comparison with the experimental data, due to excessive separation. However, when output-based, off-body grid adaptation in FUN3D is employed, results improve considerably. In particular, when the geometry includes both brackets and the pressure tube bundles, grid adaptation results in a more accurate prediction of lift near stall in comparison with the wind-tunnel data. Furthermore, a rotation-corrected turbulence model shows improved pressure predictions on the outboard span when using adapted grids.

  6. Free Tools and Strategies for the Generation of 3D Finite Element Meshes: Modeling of the Cardiac Structures

    PubMed Central

    Pavarino, E.; Neves, L. A.; Machado, J. M.; de Godoy, M. F.; Shiyou, Y.; Momente, J. C.; Zafalon, G. F. D.; Pinto, A. R.; Valêncio, C. R.

    2013-01-01

    The Finite Element Method is a well-known technique, being extensively applied in different areas. Studies using the Finite Element Method (FEM) are targeted to improve cardiac ablation procedures. For such simulations, the finite element meshes should consider the size and histological features of the target structures. However, it is possible to verify that some methods or tools used to generate meshes of human body structures are still limited, due to nondetailed models, nontrivial preprocessing, or mainly limitation in the use condition. In this paper, alternatives are demonstrated to solid modeling and automatic generation of highly refined tetrahedral meshes, with quality compatible with other studies focused on mesh generation. The innovations presented here are strategies to integrate Open Source Software (OSS). The chosen techniques and strategies are presented and discussed, considering cardiac structures as a first application context. PMID:23762031

  7. Dynamic Load Balancing for Adaptive Meshes using Symmetric Broadcast Networks

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Saini, Subhash (Technical Monitor)

    1998-01-01

    Many scientific applications involve grids that lack a uniform underlying structure. These applications are often dynamic in the sense that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing inter-processor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view across processors. In this paper, we compare a novel load balancer that utilizes symmetric broadcast networks (SBN) to a successful global load balancing environment (PLUM) created to handle adaptive unstructured applications. Our experimental results on the IBM SP2 demonstrate that performance of the proposed SBN load balancer is comparable to results achieved under PLUM.

  8. Parallel Processing of Adaptive Meshes with Load Balancing

    NASA Technical Reports Server (NTRS)

    Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Many scientific applications involve grids that lack a uniform underlying structure. These applications are often also dynamic in nature in that the grid structure significantly changes between successive phases of execution. In parallel computing environments, mesh adaptation of unstructured grids through selective refinement/coarsening has proven to be an effective approach. However, achieving load balance while minimizing interprocessor communication and redistribution costs is a difficult problem. Traditional dynamic load balancers are mostly inadequate because they lack a global view of system loads across processors. In this paper, we propose a novel and general-purpose load balancer that utilizes symmetric broadcast networks (SBN) as the underlying communication topology, and compare its performance with a successful global load balancing environment, called PLUM, specifically created to handle adaptive unstructured applications. Our experimental results on an IBM SP2 demonstrate that the SBN-based load balancer achieves lower redistribution costs than that under PLUM by overlapping processing and data migration.

  9. A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model

    SciTech Connect

    Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A

    2009-03-03

    Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR, which relaxes the requirement that material boundaries must lie along mesh boundaries.

  10. An adaptive mesh-moving and refinement procedure for one-dimensional conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Flaherty, Joseph E.; Arney, David C.

    1993-01-01

    We examine the performance of an adaptive mesh-moving and/or local mesh refinement procedure for the finite difference solution of one-dimensional hyperbolic systems of conservation laws. Adaptive motion of a base mesh is designed to isolate spatially distinct phenomena, and recursive local refinement of the time step and cells of the stationary or moving base mesh is performed in regions where a refinement indicator exceeds a prescribed tolerance. These adaptive procedures are incorporated into a computer code that includes a MacCormack finite difference scheme with Davis' artificial viscosity model and a discretization error estimate based on Richardson's extrapolation. Experiments are conducted on three problems in order to qualify the advantages of adaptive techniques relative to uniform mesh computations and the relative benefits of mesh moving and refinement. Key results indicate that local mesh refinement, with and without mesh moving, can provide reliable solutions at much lower computational cost than possible on uniform meshes; that mesh motion can be used to improve the results of uniform mesh solutions for a modest computational effort; that the cost of managing the tree data structure associated with refinement is small; and that a combination of mesh motion and refinement reliably produces solutions for the least cost per unit accuracy.
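
    For context, a standard form of the Richardson-extrapolation error estimate mentioned above compares solutions on two nested meshes; the exact estimator and constants used in the paper may differ. For a scheme of formal order p with solutions u_h and u_{2h} on meshes of spacing h and 2h,

        E_h(x) \approx \frac{u_h(x) - u_{2h}(x)}{2^{p} - 1},

    and cells where a norm of E_h exceeds the prescribed tolerance are flagged for local refinement (or, in the mesh-moving case, attract mesh points).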

  11. Visualization of Octree Adaptive Mesh Refinement (AMR) in Astrophysical Simulations

    NASA Astrophysics Data System (ADS)

    Labadens, M.; Chapon, D.; Pomaréde, D.; Teyssier, R.

    2012-09-01

    Computer simulations are important in current cosmological research. These simulations run in parallel on thousands of processors and produce huge amounts of data. Adaptive mesh refinement is used to reduce the computing cost while keeping good numerical accuracy in regions of interest. RAMSES is a cosmological code developed by the Commissariat à l'énergie atomique et aux énergies alternatives (English: Atomic Energy and Alternative Energies Commission) which uses octree adaptive mesh refinement. Compared to grid-based AMR, octree AMR has the advantage of fitting the adaptive resolution of the grid very precisely to the local problem complexity. However, this specific octree data type needs dedicated software to be visualized, as generic visualization tools work on Cartesian grid data types. This is why the PYMSES software has also been developed by our team. It relies on the Python scripting language to ensure modular and easy access for exploring these specific data. In order to take advantage of the high-performance computer which runs the RAMSES simulation, it also uses MPI and multiprocessing to run parallel code. We present our PYMSES software in more detail together with some performance benchmarks. PYMSES currently has two visualization techniques which work directly on the AMR. The first is a splatting technique, and the second is a custom ray-tracing technique. Both have their own advantages and drawbacks. We have also compared two parallel programming techniques: the Python multiprocessing library versus the use of MPI runs. The load balancing strategy has to be defined carefully in order to achieve a good speedup in our computation. Results obtained with this software are illustrated in the context of a massive, 9000-processor parallel simulation of a Milky Way-like galaxy.

  12. A novel adaptive 3D medical image interpolation method based on shape

    NASA Astrophysics Data System (ADS)

    Chen, Jiaxin; Ma, Wei

    2013-03-01

    Image interpolation of cross-sections is one of the key steps of medical visualization. To address the fuzzy boundaries and the large amount of computation introduced by traditional interpolation, a novel adaptive 3-D medical image interpolation method is proposed in this paper. First, the contour is obtained by edge interpolation, and the corresponding points are found according to the relation between the contour and points on the original images. Second, the algorithm uses volume correlation to find the best point pair adaptively. Finally, the grey value of the interpolated pixel is obtained by matching-point interpolation. The experimental results show that the method presented in this paper not only meets the requirements for interpolation accuracy but also can be used effectively in medical image 3D reconstruction.

  13. Adaptive mesh refinement and adjoint methods in geophysics simulations

    NASA Astrophysics Data System (ADS)

    Burstedde, Carsten

    2013-04-01

    It is an ongoing challenge to increase the resolution that can be achieved by numerical geophysics simulations. This applies to considering sub-kilometer mesh spacings in global-scale mantle convection simulations as well as to using frequencies up to 1 Hz in seismic wave propagation simulations. One central issue is the numerical cost, since for three-dimensional space discretizations, possibly combined with time stepping schemes, a doubling of resolution can lead to an increase in storage requirements and run time by factors between 8 and 16. A related challenge lies in the fact that an increase in resolution also increases the dimensionality of the model space that is needed to fully parametrize the physical properties of the simulated object (a.k.a. earth). Systems that exhibit a multiscale structure in space are candidates for employing adaptive mesh refinement, which varies the resolution locally. An example that we found well suited is the mantle, where plate boundaries and fault zones require a resolution on the km scale, while deeper areas can be treated with 50 or 100 km mesh spacings. This approach effectively reduces the number of computational variables by several orders of magnitude. While in this case it is possible to derive the local adaptation pattern from known physical parameters, it is often unclear what are the most suitable criteria for adaptation. We will present the goal-oriented error estimation procedure, where such criteria are derived from an objective functional that represents the observables to be computed most accurately. Even though this approach is well studied, it is rarely used in the geophysics community. A related strategy to make finer resolution manageable is to design methods that automate the inference of model parameters. Tweaking more than a handful of numbers and judging the quality of the simulation by ad hoc comparisons to known facts and observations is a tedious task and fundamentally limited by the turnaround times
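
    The goal-oriented (dual-weighted residual) error estimation mentioned above is commonly written in the following textbook form, given here as an illustration of the idea rather than the specific estimator used in any particular geophysics code. With J the goal functional (the observable of interest), u_h the discrete solution, R the PDE residual, and z (z_h) the exact (discrete) adjoint solution associated with J,

        J(u) - J(u_h) \approx \sum_{K} \big\langle R(u_h),\, z - z_h \big\rangle_{K},

    so each element K is refined according to the size of its residual weighted by the local adjoint sensitivity, concentrating resolution where it most improves the targeted observable.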

  14. A 3D agglomeration multigrid solver for the Reynolds-averaged Navier-Stokes equations on unstructured meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.; Venkatakrishnan, V.

    1995-01-01

    An agglomeration multigrid strategy is developed and implemented for the solution of three-dimensional steady viscous flows. The method enables convergence acceleration with minimal additional memory overheads, and is completely automated, in that it can deal with grids of arbitrary construction. The multigrid technique is validated by comparing the delivered convergence rates with those obtained by a previously developed overset-mesh multigrid approach, and by demonstrating grid independent convergence rates for aerodynamic problems on very large grids. Prospects for further increases in multigrid efficiency for high-Reynolds number viscous flows on highly stretched meshes are discussed.

  15. Improved Simulation of Electrodiffusion in the Node of Ranvier by Mesh Adaptation.

    PubMed

    Dione, Ibrahima; Deteix, Jean; Briffard, Thomas; Chamberland, Eric; Doyon, Nicolas

    2016-01-01

    In neural structures with complex geometries, numerical resolution of the Poisson-Nernst-Planck (PNP) equations is necessary to accurately model electrodiffusion. This formalism allows one to describe ionic concentrations and the electric field (even away from the membrane) with arbitrary spatial and temporal resolution which is impossible to achieve with models relying on cable theory. However, solving the PNP equations on complex geometries involves handling intricate numerical difficulties related either to the spatial discretization, temporal discretization or the resolution of the linearized systems, often requiring large computational resources which have limited the use of this approach. In the present paper, we investigate the best ways to use the finite element method (FEM) to solve the PNP equations on domains with discontinuous properties (such as occur at the membrane-cytoplasm interface). 1) Using a simple 2D geometry to allow comparison with an analytical solution, we show that mesh adaptation is a very (if not the most) efficient way to obtain accurate solutions while limiting the computational efforts, 2) We use mesh adaptation in a 3D model of a node of Ranvier to reveal details of the solution which are nearly impossible to resolve with other modelling techniques. For instance, we exhibit a nonlinear distribution of the electric potential within the membrane due to the non-uniform width of the myelin and investigate its impact on the spatial profile of the electric field in the Debye layer.
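
    For reference, one common statement of the Poisson-Nernst-Planck system solved in this kind of study is given below; the symbols and the fixed-charge term are generic, and the paper's exact formulation and interface conditions may differ. With c_i the concentration of ion species i, D_i its diffusion coefficient, z_i its valence, \phi the electric potential, \varepsilon the permittivity, \rho_0 any fixed background charge, and F, R, T the Faraday constant, gas constant and temperature,

        \frac{\partial c_i}{\partial t} = \nabla \cdot \Big[ D_i \Big( \nabla c_i + \frac{z_i F}{R T}\, c_i \nabla \phi \Big) \Big], \qquad -\nabla \cdot ( \varepsilon \nabla \phi ) = F \sum_i z_i c_i + \rho_0.

    The steep Debye layers near the membrane are exactly the features that make mesh adaptation so effective for this system.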

  16. Improved Simulation of Electrodiffusion in the Node of Ranvier by Mesh Adaptation

    PubMed Central

    Dione, Ibrahima; Briffard, Thomas; Doyon, Nicolas

    2016-01-01

    In neural structures with complex geometries, numerical resolution of the Poisson-Nernst-Planck (PNP) equations is necessary to accurately model electrodiffusion. This formalism allows one to describe ionic concentrations and the electric field (even away from the membrane) with arbitrary spatial and temporal resolution which is impossible to achieve with models relying on cable theory. However, solving the PNP equations on complex geometries involves handling intricate numerical difficulties related either to the spatial discretization, temporal discretization or the resolution of the linearized systems, often requiring large computational resources which have limited the use of this approach. In the present paper, we investigate the best ways to use the finite element method (FEM) to solve the PNP equations on domains with discontinuous properties (such as occur at the membrane-cytoplasm interface). 1) Using a simple 2D geometry to allow comparison with an analytical solution, we show that mesh adaptation is a very (if not the most) efficient way to obtain accurate solutions while limiting the computational efforts, 2) We use mesh adaptation in a 3D model of a node of Ranvier to reveal details of the solution which are nearly impossible to resolve with other modelling techniques. For instance, we exhibit a nonlinear distribution of the electric potential within the membrane due to the non-uniform width of the myelin and investigate its impact on the spatial profile of the electric field in the Debye layer. PMID:27548674

  17. Adaptive clutter rejection for 3D color Doppler imaging: preliminary clinical study.

    PubMed

    Yoo, Yang Mo; Sikdar, Siddhartha; Karadayi, Kerem; Kolokythas, Orpheus; Kim, Yongmin

    2008-08-01

    In three-dimensional (3D) ultrasound color Doppler imaging (CDI), effective rejection of flash artifacts caused by tissue motion (clutter) is important for improving sensitivity in visualizing blood flow in vessels. Since clutter characteristics can vary significantly during volume acquisition, a clutter rejection technique that can adapt to the underlying clutter conditions is desirable for 3D CDI. We have previously developed an adaptive clutter rejection (ACR) method, in which an optimum filter is dynamically selected from a set of predesigned clutter filters based on the measured clutter characteristics. In this article, we evaluated the ACR method with 3D in vivo data acquired from 37 kidney transplant patients clinically indicated for a duplex ultrasound examination. We compared ACR against a conventional clutter rejection method, down-mixing (DM), using a commonly-used flow signal-to-clutter ratio (SCR) and a new metric called fractional residual clutter area (FRCA). The ACR method was more effective in removing the flash artifacts while providing higher sensitivity in detecting blood flow in the arcuate arteries and veins in the parenchyma of transplanted kidneys. ACR provided 3.4 dB improvement in SCR over the DM method (11.4 +/- 1.6 dB versus 8.0 +/- 2.0 dB, p < 0.001) and had lower average FRCA values compared with the DM method (0.006 +/- 0.003 versus 0.036 +/- 0.022, p < 0.001) for all study subjects. These results indicate that the new ACR method is useful for removing nonstationary tissue motion while improving the image quality for visualizing 3D vascular structure in 3D CDI.
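
    As a minimal sketch of the two evaluation metrics named above, the snippet below uses the standard decibel definition of a signal-to-clutter ratio and reads the fractional residual clutter area as the fraction of the color box still occupied by clutter after filtering; the exact estimators used in the study may differ, and the inputs are hypothetical.

      import numpy as np

      def scr_db(flow_power, clutter_power):
          # Flow signal-to-clutter ratio in decibels (standard definition).
          return 10.0 * np.log10(flow_power / clutter_power)

      def frca(residual_clutter_mask, color_box_mask):
          # Fractional residual clutter area: share of the color box that still
          # shows clutter after filtering (our reading of the metric).
          return residual_clutter_mask.sum() / color_box_mask.sum()

      # Sanity check against the averages reported above: ~11.4 dB (ACR) vs ~8.0 dB (DM).
      print(scr_db(13.8, 1.0), scr_db(6.3, 1.0))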

  18. Production-quality Tools for Adaptive Mesh Refinement Visualization

    SciTech Connect

    Weber, Gunther H.; Childs, Hank; Bonnell, Kathleen; Meredith, Jeremy; Miller, Mark; Whitlock, Brad; Bethel, E. Wes

    2007-10-25

    Adaptive Mesh Refinement (AMR) is a highly effective simulation method for spanning a large range of spatiotemporal scales, such as astrophysical simulations that must accommodate ranges from interstellar to sub-planetary. Most mainstream visualization tools still lack support for AMR as a first-class data type and AMR code teams use custom-built applications for AMR visualization. The Department of Energy's (DOE's) Science Discovery through Advanced Computing (SciDAC) Visualization and Analytics Center for Enabling Technologies (VACET) is extending and deploying VisIt, an open source visualization tool that accommodates AMR as a first-class data type, for use as production-quality, parallel-capable AMR visual data analysis infrastructure. This effort will help science teams that use AMR-based simulations and who develop their own AMR visual data analysis software to realize cost and labor savings.

  19. Efficient Plasma Ion Source Modeling With Adaptive Mesh Refinement (Abstract)

    SciTech Connect

    Kim, J.S.; Vay, J.L.; Friedman, A.; Grote, D.P.

    2005-03-15

    Ion beam drivers for high energy density physics and inertial fusion energy research require high brightness beams, so there is little margin of error allowed for aberration at the emitter. Thus, accurate plasma ion source computer modeling is required to model the plasma sheath region and time-dependent effects correctly. A computer plasma source simulation module that can be used with a powerful heavy ion fusion code, WARP, or as a standalone code, is being developed. In order to treat the plasma sheath region accurately and efficiently, the module will have the capability of handling multiple spatial scale problems by using Adaptive Mesh Refinement (AMR). We will report on our progress on the project.

  20. 3D dynamic rupture with anelastic wave propagation using an hp-adaptive Discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Tago, J.; Cruz-Atienza, V. M.; Etienne, V.; Virieux, J.; Benjemaa, M.; Sanchez-Sesma, F. J.

    2010-12-01

    Simulating any realistic seismic scenario requires incorporating a physical basis into the model. Considering both the dynamics of the rupture process and the anelastic attenuation of seismic waves is essential to this purpose and, therefore, we choose to extend the hp-adaptive Discontinuous Galerkin finite-element method to integrate these physical aspects. The 3D elastodynamic equations in an unstructured tetrahedral mesh are solved with a second-order time marching approach in a high-performance computing environment. The first extension incorporates the viscoelastic rheology so that the intrinsic attenuation of the medium is considered in terms of frequency dependent quality factors (Q). On the other hand, the extension related to dynamic rupture is integrated through explicit boundary conditions over the crack surface. For this visco-elastodynamic formulation, we introduce an original discrete scheme that preserves the optimal code performance of the elastodynamic equations. A set of relaxation mechanisms describes the behavior of a generalized Maxwell body. We approximate almost constant Q in a wide frequency range by selecting both suitable relaxation frequencies and anelastic coefficients characterizing these mechanisms. In order to do so, we solve an optimization problem which is critical for minimizing the number of relaxation mechanisms. Two strategies are explored: 1) a least squares method and 2) a genetic algorithm (GA). We found that the improvement provided by the heuristic GA method is negligible. Both optimization strategies yield Q values within 5% of the target constant Q mechanism. Anelastic functions (i.e. memory variables) are introduced to efficiently evaluate the time convolution terms involved in the constitutive equations and thus to minimize the computational cost. The incorporation of anelastic functions implies new terms with ordinary differential equations in the mathematical formulation. We solve these equations using the same order

  1. Novel adaptation of the demodulation technology for gear damage detection to variable amplitudes of mesh harmonics

    NASA Astrophysics Data System (ADS)

    Combet, F.; Gelman, L.

    2011-04-01

    In this paper, a novel adaptive demodulation technique including a new diagnostic feature is proposed for gear diagnosis in conditions of variable amplitudes of the mesh harmonics. This vibration technique employs the time synchronous average (TSA) of vibration signals. The new adaptive diagnostic feature is defined as the ratio of the sum of the sideband components of the envelope spectrum of a mesh harmonic to the measured power of the mesh harmonic. The proposed adaptation of the technique is justified theoretically and experimentally by the high level of the positive covariance between amplitudes of the mesh harmonics and the sidebands in conditions of variable amplitudes of the mesh harmonics. It is shown that the adaptive demodulation technique preserves effectiveness of local fault detection of gears operating in conditions of variable mesh amplitudes.
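
    A minimal sketch of how such a sideband-sum-to-mesh-harmonic-power feature could be computed from a time synchronous average is given below. The band-limiting, envelope estimation, number of sidebands and the synthetic signal are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def sideband_ratio(tsa, fs, f_mesh, f_shaft, n_sb=3):
          # Illustrative feature: sum of envelope-spectrum sideband powers around a
          # mesh harmonic divided by the measured power of that mesh harmonic.
          n = len(tsa)
          freqs = np.fft.fftfreq(n, d=1.0 / fs)
          spec = np.fft.fft(tsa)

          # Measured power of the mesh harmonic (closest positive-frequency bin).
          k_mesh = np.argmin(np.abs(freqs - f_mesh))
          p_mesh = (np.abs(spec[k_mesh]) / n) ** 2

          # Complex demodulation: keep only the positive band around the mesh
          # harmonic, inverse-transform, and take the magnitude as the envelope.
          band = (freqs > f_mesh - (n_sb + 0.5) * f_shaft) & (freqs < f_mesh + (n_sb + 0.5) * f_shaft)
          env = np.abs(np.fft.ifft(np.where(band, spec, 0.0)))

          # Envelope spectrum; sidebands sit at multiples of the shaft frequency.
          env_spec = np.abs(np.fft.fft(env - env.mean())) / n
          p_side = sum((env_spec[np.argmin(np.abs(freqs - m * f_shaft))]) ** 2
                       for m in range(1, n_sb + 1))
          return p_side / p_mesh

      # Synthetic example: 25 Hz shaft, 500 Hz mesh, amplitude-modulated by a local fault.
      fs = 10000.0
      t = np.arange(0, 1.0, 1.0 / fs)
      tsa = (1 + 0.2 * np.cos(2 * np.pi * 25 * t)) * np.cos(2 * np.pi * 500 * t)
      print(sideband_ratio(tsa, fs, f_mesh=500.0, f_shaft=25.0))

    Normalizing by the measured mesh-harmonic power is what makes the feature insensitive to the varying mesh amplitudes discussed above.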

  2. Unstructured and adaptive mesh generation for high Reynolds number viscous flows

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1991-01-01

    A method for generating and adaptively refining a highly stretched unstructured mesh suitable for the computation of high-Reynolds-number viscous flows about arbitrary two-dimensional geometries was developed. The method is based on the Delaunay triangulation of a predetermined set of points and employs a local mapping in order to achieve the high stretching rates required in the boundary-layer and wake regions. The initial mesh-point distribution is determined in a geometry-adaptive manner which clusters points in regions of high curvature and sharp corners. Adaptive mesh refinement is achieved by adding new points in regions of large flow gradients, and locally retriangulating; thus, obviating the need for global mesh regeneration. Initial and adapted meshes about complex multi-element airfoil geometries are shown and compressible flow solutions are computed on these meshes.

  3. Adaptive local grid refinement for the compressible 3-D Euler equations

    NASA Astrophysics Data System (ADS)

    Schoenfeld, Thilo

    A method is presented based on a three-dimensional Euler code, using the explicit finite volume technique and a Runge-Kutta scheme, and applied in an adaptive version for the transonic flow around wings. The method allows embedded subgrids at two levels of refinement. Computations are performed with both various fixed refined grids and in an adaptive version applying a pressure or density gradient sensor. When comparing the results of embedded grid computations with calculations on only a total coarse or fine mesh, it can be stated that the local grid refinement technique is an effective framework to obtain well-resolved solutions with, at the same time, a minimum of grid points.

  4. Adaptive FEM with coarse initial mesh guarantees optimal convergence rates for compactly perturbed elliptic problems

    NASA Astrophysics Data System (ADS)

    Bespalov, Alex; Haberl, Alexander; Praetorius, Dirk

    2017-04-01

    We prove that for compactly perturbed elliptic problems, where the corresponding bilinear form satisfies a Gårding inequality, adaptive mesh-refinement is capable of overcoming the preasymptotic behavior and eventually leads to convergence with optimal algebraic rates. As an important consequence of our analysis, one does not have to deal with the a priori assumption that the underlying meshes are sufficiently fine. Hence, the overall conclusion of our results is that adaptivity has stabilizing effects and can overcome possibly pessimistic restrictions on the meshes. In particular, our analysis covers adaptive mesh-refinement for the finite element discretization of the Helmholtz equation, from which our interest originated.
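
    For context, a bilinear form a(·,·) on a space V continuously embedded in H (for instance V = H¹(Ω) and H = L²(Ω) in the Helmholtz setting) satisfies a Gårding inequality if there are constants α > 0 and c ≥ 0 such that

        a(v, v) \;\ge\; \alpha \,\| v \|_{V}^{2} \;-\; c\,\| v \|_{H}^{2} \qquad \text{for all } v \in V,

    i.e. coercivity holds up to a compact perturbation. This is the structural assumption behind the optimal-rate result summarized above; the notation here is generic and not taken verbatim from the paper.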

  5. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSP) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a pace of 10 MVoxels/sec with an Ultrasound 3D probe. Relative performance and power is addressed between a reference PC (Quad Core CPU) and a TMS320C6678 DSP from Texas Instruments.
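
    A back-of-the-envelope check of the stated workload (our arithmetic, not taken from the paper) illustrates the throughput the filter must sustain:

      nx, ny, nz = 512, 256, 128
      voxels = nx * ny * nz          # 16,777,216 voxels per volume
      rate = 10e6                    # acquisition pace of 10 MVoxels/sec
      print(voxels, voxels / rate)   # a new volume is acquired roughly every 1.7 s, so the
                                     # filter must process about 10 Mvoxel/s to keep up in real time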

  6. FUN3D Grid Refinement and Adaptation Studies for the Ares Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Bartels, Robert E.; Vatsa, Veer; Carlson, Jan-Renee; Park, Mike; Mineck, Raymond E.

    2010-01-01

    This paper presents grid refinement and adaptation studies performed in conjunction with computational aeroelastic analyses of the Ares crew launch vehicle (CLV). The unstructured grids used in this analysis were created with GridTool and VGRID while the adaptation was performed using the Computational Fluid Dynamic (CFD) code FUN3D with a feature based adaptation software tool. GridTool was developed by ViGYAN, Inc. while the last three software suites were developed by NASA Langley Research Center. The feature based adaptation software used here operates by aligning control volumes with shock and Mach line structures and by refining/de-refining where necessary. It does not redistribute node points on the surface. This paper assesses the sensitivity of the complex flow field about a launch vehicle to grid refinement. It also assesses the potential of feature based grid adaptation to improve the accuracy of CFD analysis for a complex launch vehicle configuration. The feature based adaptation shows the potential to improve the resolution of shocks and shear layers. Further development of the capability to adapt the boundary layer and surface grids of a tetrahedral grid is required for significant improvements in modeling the flow field.

  7. 3-D diffusion tensor MRI anisotropy content-adaptive finite element head model generation for bioelectromagnetic imaging.

    PubMed

    Lee, W H; Kim, T S; Kim, Andrew T; Lee, S Y

    2008-01-01

    Realistic finite element (FE) head models have been successfully applied to bioelectromagnetic problems due to a realistic representation of arbitrary head geometry with inclusion of anisotropic material properties. In this paper, we propose a new automatic FE mesh generation scheme to generate a diffusion tensor MRI (DT-MRI) white matter anisotropy content-adaptive FE head model. We term this kind of mesh a wMesh. With this meshing technique, the anisotropic electrical conductivities derived from DT-MRIs can be best incorporated into the model. The influence of the white matter anisotropy on the EEG forward solutions has been studied via our wMesh head models. The scalp potentials computed from the anisotropic wMesh models have been compared against those of the isotropic models. The results show that there are substantial changes in the scalp electrical potentials between the isotropic and anisotropic models, indicating that the inclusion of the white matter anisotropy is critical for accurate computation of E/MEG forward and inverse solutions. This fully automatic anisotropy-adaptive wMesh meshing scheme could be useful for building individual-specific FE head models that better incorporate the anisotropic properties of white matter for bioelectromagnetic imaging.

  8. THREE-DIMENSIONAL ADAPTIVE MESH REFINEMENT SIMULATIONS OF LONG-DURATION GAMMA-RAY BURST JETS INSIDE MASSIVE PROGENITOR STARS

    SciTech Connect

    Lopez-Camara, D.; Lazzati, Davide; Morsony, Brian J.; Begelman, Mitchell C.

    2013-04-10

    We present the results of special relativistic, adaptive mesh refinement, 3D simulations of gamma-ray burst jets expanding inside a realistic stellar progenitor. Our simulations confirm that relativistic jets can propagate and break out of the progenitor star while remaining relativistic. This result is independent of the resolution, even though the amount of turbulence and variability observed in the simulations is greater at higher resolutions. We find that the propagation of the jet head inside the progenitor star is slightly faster in 3D simulations compared to 2D ones at the same resolution. This behavior seems to be due to the fact that the jet head in 3D simulations can wobble around the jet axis, finding the spot of least resistance to proceed. Most of the average jet properties, such as density, pressure, and Lorentz factor, are only marginally affected by the dimensionality of the simulations and therefore results from 2D simulations can be considered reliable.

  9. Adaptive Mesh Refinement in Reactive Transport Modeling of Subsurface Environments

    NASA Astrophysics Data System (ADS)

    Molins, S.; Day, M.; Trebotich, D.; Graves, D. T.

    2015-12-01

    Adaptive mesh refinement (AMR) is a numerical technique for locally adjusting the resolution of computational grids. AMR makes it possible to superimpose levels of finer grids on the global computational grid in an adaptive manner allowing for more accurate calculations locally. AMR codes rely on the fundamental concept that the solution can be computed in different regions of the domain with different spatial resolutions. AMR codes have been applied to a wide range of problems, including (but not limited to): fully compressible hydrodynamics, astrophysical flows, cosmological applications, combustion, blood flow, heat transfer in nuclear reactors, and land ice and atmospheric models for climate. In subsurface applications, in particular, reactive transport modeling, AMR may be particularly useful in accurately capturing concentration gradients (hence, reaction rates) that develop in localized areas of the simulation domain. Accurate evaluation of reaction rates is critical in many subsurface applications. In this contribution, we will discuss recent applications that bring to bear AMR capabilities on reactive transport problems from the pore scale to the flood plain scale.

  10. Aeroacoustic Simulation of Nose Landing Gear on Adaptive Unstructured Grids With FUN3D

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.; Khorrami, Mehdi R.; Park, Michael A.; Lockhard, David P.

    2013-01-01

    Numerical simulations have been performed for a partially-dressed, cavity-closed nose landing gear configuration that was tested in NASA Langley's closed-wall Basic Aerodynamic Research Tunnel (BART) and in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D, developed at NASA Langley Research Center, is used to compute the unsteady flow field for this configuration. Starting with a coarse grid, a series of successively finer grids were generated using the adaptive gridding methodology available in the FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence model is used for these computations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. In general, the correlation with the experimental data improves with grid refinement. A similar trend is observed for sound pressure levels obtained by using these CFD solutions as input to a Ffowcs Williams-Hawkings noise propagation code to compute the farfield noise levels. In general, the numerical solutions obtained on adapted grids compare well with the hand-tuned enriched fine grid solutions and experimental data. In addition, the grid adaption strategy discussed here simplifies the grid generation process, and results in improved computational efficiency of CFD simulations.

  11. Context-adaptive based CU processing for 3D-HEVC

    PubMed Central

    Shen, Liquan; An, Ping; Liu, Zhi

    2017-01-01

    The 3D High Efficiency Video Coding (3D-HEVC) standard aims to code 3D videos that usually contain multi-view texture videos and its corresponding depth information. It inherits the same quadtree prediction structure of HEVC to code both texture videos and depth maps. Each coding unit (CU) allows recursively splitting into four equal sub-CUs. At each CU depth level, it enables 10 types of inter modes and 35 types of intra modes in inter frames. Furthermore, the inter-view prediction tools are applied to each view in the test model of 3D-HEVC (HTM), which uses variable size disparity-compensated prediction to exploit inter-view correlation within neighbor views. It also exploits redundancies between a texture video and its associated depth using inter-component coding tools. These achieve the highest coding efficiency to code 3D videos but require a very high computational complexity. In this paper, we propose a context-adaptive based fast CU processing algorithm to jointly optimize the most complex components of HTM including CU depth level decision, mode decision, motion estimation (ME) and disparity estimation (DE) processes. It is based on the hypothesis that the optimal CU depth level, prediction mode and motion vector of a CU are correlated with those from spatiotemporal, inter-view and inter-component neighboring CUs. We analyze the video content based on coding information from neighboring CUs and early predict each CU into one of five categories i.e., DE-omitted CU, ME-DE-omitted CU, SPLIT CU, Non-SPLIT CU and normal CU, and then each type of CU adaptively adopts different processing strategies. Experimental results show that the proposed algorithm saves 70% encoder runtime on average with only a 0.1% BD-rate increase on coded views and 0.8% BD-rate increase on synthesized views. Our algorithm outperforms the state-of-the-art algorithms in terms of coding time saving or with better RD performance. PMID:28182719
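
    A deliberately simplified sketch of the context-adaptive categorisation idea is given below. The five category labels follow the abstract, but the neighbour attributes and every decision threshold are invented for illustration and are not the paper's decision rules or the HTM/JMVC API.

      def classify_cu(cur_depth, neighbors):
          # 'neighbors' holds hypothetical coding results (depth, mode, used_disparity)
          # gathered from spatiotemporal, inter-view and inter-component neighbouring CUs.
          depths = [n["depth"] for n in neighbors]
          any_dcp = any(n["used_disparity"] for n in neighbors)
          all_skip = all(n["mode"] == "SKIP" for n in neighbors)

          if not any_dcp:
              return "DE-omitted CU"     # disparity estimation skipped
          if all_skip:
              return "ME-DE-omitted CU"  # both motion and disparity estimation skipped
          if min(depths) > cur_depth:
              return "SPLIT CU"          # all neighbours finer: split without testing this depth
          if max(depths) < cur_depth:
              return "Non-SPLIT CU"      # all neighbours coarser: stop splitting early
          return "normal CU"             # fall back to the full search

      print(classify_cu(1, [{"depth": 2, "mode": "INTER", "used_disparity": True},
                            {"depth": 2, "mode": "SKIP",  "used_disparity": True}]))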

  12. Joint detection of anatomical points on surface meshes and color images for visual registration of 3D dental models

    NASA Astrophysics Data System (ADS)

    Destrez, Raphaël.; Albouy-Kissi, Benjamin; Treuillet, Sylvie; Lucas, Yves

    2015-04-01

    Computer aided planning for orthodontic treatment requires knowing the occlusion of separately scanned dental casts. A visually guided registration is conducted, starting by extracting corresponding features in both photographs and 3D scans. To achieve this, the dental neck and occlusion surface are first extracted by image segmentation and 3D curvature analysis. Then, an iterative registration process is conducted during which feature positions are refined, guided by previously found anatomic edges. The occlusal edge image detection is improved by an original algorithm which follows Canny's poorly detected edges using a priori knowledge of tooth shapes. Finally, the influence of feature extraction and position optimization is evaluated in terms of the quality of the induced registration. The best combination of feature detection and optimization leads to an average positioning error of 1.10 mm and 2.03°.

  13. Adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients

    PubMed Central

    Xia, Kelin; Zhan, Meng; Wan, Decheng; Wei, Guo-Wei

    2011-01-01

    Mesh deformation methods are a versatile strategy for solving partial differential equations (PDEs) with a vast variety of practical applications. However, these methods break down for elliptic PDEs with discontinuous coefficients, namely, elliptic interface problems. For this class of problems, the additional interface jump conditions are required to maintain the well-posedness of the governing equation. Consequently, in order to achieve high accuracy and high order convergence, additional numerical algorithms are required to enforce the interface jump conditions in solving elliptic interface problems. The present work introduces an interface technique based adaptively deformed mesh strategy for resolving elliptic interface problems. We take the advantages of the high accuracy, flexibility and robustness of the matched interface and boundary (MIB) method to construct an adaptively deformed mesh based interface method for elliptic equations with discontinuous coefficients. The proposed method generates deformed meshes in the physical domain and solves the transformed governed equations in the computational domain, which maintains regular Cartesian meshes. The mesh deformation is realized by a mesh transformation PDE, which controls the mesh redistribution by a source term. The source term consists of a monitor function, which builds in mesh contraction rules. Both interface geometry based deformed meshes and solution gradient based deformed meshes are constructed to reduce the L∞ and L2 errors in solving elliptic interface problems. The proposed adaptively deformed mesh based interface method is extensively validated by many numerical experiments. Numerical results indicate that the adaptively deformed mesh based interface method outperforms the original MIB method for dealing with elliptic interface problems. PMID:22586356
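
    One widely used way to build such a monitor function is the gradient-based (arc-length) form shown below; it is given as an illustration of the mesh-contraction idea, and the specific transformation PDE and monitor used in the paper may differ. With u_h the current discrete solution and \beta a user parameter,

        w(\mathbf{x}) = \sqrt{1 + \beta\,|\nabla u_h(\mathbf{x})|^{2}}, \qquad w\,\Delta x \approx \text{const (equidistribution, 1-D picture)},

    so the transformation PDE redistributes mesh points until, roughly, the monitor times the local cell size is equidistributed: cells shrink where the solution (or, in the interface-geometry variant, the distance to the interface) varies rapidly, while the computational domain keeps its regular Cartesian mesh.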

  14. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmers' work is greatly simplified to primarily specifying the serial single grid application and obtaining the parallel and self-adaptive mesh refinement code with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), being implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.

  15. Study of the counting efficiency of a WBC setup by using a computational 3D human body library in sitting position based on polygonal mesh surfaces.

    PubMed

    Fonseca, T C Ferreira; Bogaerts, R; Lebacq, A L; Mihailescu, C L; Vanhavere, F

    2014-04-01

    A realistic computational 3D human body library, called MaMP and FeMP (Male and Female Mesh Phantoms), based on polygonal mesh surface geometry, has been created to be used for numerical calibration of the whole body counter (WBC) system of the nuclear power plant (NPP) in Doel, Belgium. The main objective was to create flexible computational models varying in gender, body height, and mass for studying the morphology-induced variation of the detector counting efficiency (CE) and reducing the measurement uncertainties. First, the counting room and an HPGe detector were modeled using MCNPX (Monte Carlo radiation transport code). The validation of the model was carried out for different sample-detector geometries with point sources and a physical phantom. Second, CE values were calculated for a total of 36 different mesh phantoms in a seated position using the validated Monte Carlo model. This paper reports on the validation process of the in vivo whole body system and the CE calculated for different body heights and weights. The results reveal that the CE is strongly dependent on the individual body shape, size, and gender and may vary by a factor of 1.5 to 3 depending on the morphology aspects of the individual to be measured.

  16. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    NASA Astrophysics Data System (ADS)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

    Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis such as an auto-stereoscopic functionality, but compression of huge input data remains a problem. Therefore, efficient 3D data compression is extremely important in the system, and problems of low temporal consistency and viewpoint correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth compensated inter prediction method to resolve the problems where object-adaptive mean-depth difference between a current block, to be coded, and a reference block are compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce side information required for signaling decoder to conduct the same process. To evaluate the coding performance, we have implemented the proposed method into MVC (multiview video coding) reference software, JMVC 8.2. Experimental results have demonstrated that our proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software) discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit-saving, and it was even increased when we evaluated it on synthesized views of virtual viewpoints.
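
    A minimal sketch of the mean-depth compensation idea described above is given below; the block handling, signalling and any rate-distortion logic are omitted, and the function is an illustration rather than the paper's or JMVC's implementation.

      import numpy as np

      def depth_compensated_prediction(cur_block, ref_block):
          # Compensate the mean-depth difference between the current block and its
          # (motion/disparity-compensated) reference block, then form the residual
          # on the offset-corrected prediction.
          offset = cur_block.mean() - ref_block.mean()
          prediction = ref_block + offset
          residual = cur_block - prediction
          return residual, offset   # the offset (or a way to infer it) must reach the decoder

      cur = np.full((8, 8), 120.0); cur[2:6, 2:6] = 40.0   # hypothetical depth block
      ref = cur - 7.0                                       # same object at a shifted depth level
      res, off = depth_compensated_prediction(cur, ref)
      print(off, np.abs(res).sum())                         # offset 7.0, zero residual left to code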

  17. An adaptive mesh magneto-hydrodynamic analysis of interstellar clouds

    NASA Astrophysics Data System (ADS)

    Kominsky, Paul J.

    Interstellar clouds play a key role in many astrophysical events. The interactions of dense interstellar clouds with shock waves and interstellar wind were investigated using an adaptive three-dimensional Cartesian mesh approach to the magneto-hydrodynamic equations. The mixing of the cloud material with the post-shock material results in complex layers of current density. In both the shock and wind interactions, a tail develops similar to the tail found with comets due to the solar wind. The orientation of this tail structure changes with the direction of the magnetic field, and may be useful for observationally determining the orientation of magnetic fields in the interstellar medium. The octree data structure was analyzed in regard to parallel work units. Larger block sizes have a higher volume-to-surface ratio and support a higher percentage of computational cells to non-computational cells, but require more cells at the finest grid resolution. Keeping the minimum resolution of the grid fixed, and averaging over all possible grids, the analysis confirms experience that block sizes larger than 8 × 8 × 8 cells do not improve storage efficiency. A novel algorithm was developed to implement rotationally periodic boundary conditions on quadtree and octree data structures. Astrophysical flows with symmetric circulation, such as accretion disks, or periodic instabilities, such as supernova remnants, may be able to take advantage of such boundary conditions while maintaining the other benefits of a Cartesian grid.
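
    One side of the block-size trade-off discussed above is easy to quantify: for a block of n x n x n computational cells padded by g layers of ghost (non-computational) cells, the useful fraction is n^3 / (n + 2g)^3. The snippet below (the ghost-layer width g = 2 is an assumption, not a figure from the thesis) shows that this fraction keeps rising with n; the competing cost, namely that larger blocks force more cells to the finest resolution, is what ultimately caps the useful block size near 8 x 8 x 8.

```python
def useful_fraction(n, ghost=2):
    """Fraction of a padded n^3 block occupied by computational (non-ghost) cells."""
    return n**3 / (n + 2 * ghost)**3

for n in (2, 4, 8, 16, 32):
    print(f"{n:>2}^3 block: {useful_fraction(n):5.1%} computational cells")
```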

  18. CONSTRAINED-TRANSPORT MAGNETOHYDRODYNAMICS WITH ADAPTIVE MESH REFINEMENT IN CHARM

    SciTech Connect

    Miniati, Francesco; Martin, Daniel F. E-mail: DFMartin@lbl.gov

    2011-07-01

    We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.

  19. Numerical study of Taylor bubbles with adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Pavlidis, Dimitrios; Percival, James; Pain, Chris; Matar, Omar; Hasan, Abbas; Azzopardi, Barry

    2014-11-01

    The Taylor bubble is a single long bubble which nearly fills the entire cross section of a liquid-filled circular tube. This type of bubble flow regime often occurs in gas-liquid slug flows in many industrial applications, including oil-and-gas production, chemical and nuclear reactors, and heat exchangers. The objective of this study is to investigate the fluid dynamics of Taylor bubbles rising in a vertical pipe filled with oils of extremely high viscosity (mimicking the ``heavy oils'' found in the oil-and-gas industry). A modelling and simulation framework is presented here which can modify and adapt anisotropic unstructured meshes to better represent the underlying physics of bubble rise and reduce the computational effort without sacrificing accuracy. The numerical framework consists of a mixed control-volume and finite-element formulation, a ``volume of fluid''-type method for the interface capturing based on a compressive control volume advection method, and a force-balanced algorithm for the surface tension implementation. Numerical examples of some benchmark tests and the dynamics of Taylor bubbles are presented to show the capability of this method. EPSRC Programme Grant, MEMPHIS, EP/K0039761/1.

  20. Output-Based Adaptive Meshing Applied to Space Launch System Booster Separation Analysis

    NASA Technical Reports Server (NTRS)

    Dalle, Derek J.; Rogers, Stuart E.

    2015-01-01

    This paper presents details of Computational Fluid Dynamic (CFD) simulations of the Space Launch System during solid-rocket booster separation using the Cart3D inviscid code with comparisons to Overflow viscous CFD results and a wind tunnel test performed at NASA Langley Research Center's Unitary Plan Wind Tunnel. The Space Launch System (SLS) launch vehicle includes two solid-rocket boosters that burn out before the primary core stage and thus must be discarded during the ascent trajectory. The main challenges for creating an aerodynamic database for this separation event are the large number of basis variables (including orientation of the core, relative position and orientation of the boosters, and rocket thrust levels) and the complex flow caused by the booster separation motors. The solid-rocket boosters are modified from their form when used with the Space Shuttle Launch Vehicle, which has a rich flight history. However, the differences between the SLS core and the Space Shuttle External Tank result in the boosters separating with much narrower clearances, and so reducing aerodynamic uncertainty is necessary to clear the integrated system for flight. This paper discusses an approach that has been developed to analyze about 6000 wind tunnel simulations and 5000 flight vehicle simulations using Cart3D in adaptive-meshing mode. In addition, a discussion is presented of Overflow viscous CFD runs used for uncertainty quantification. Finally, the article presents lessons learned and improvements that will be implemented in future separation databases.

  1. Fast animation of lightning using an adaptive mesh.

    PubMed

    Kim, Theodore; Lin, Ming C

    2007-01-01

    We present a fast method for simulating, animating, and rendering lightning using adaptive grids. The "dielectric breakdown model" is an elegant algorithm for electrical pattern formation that we extend to enable animation of lightning. The simulation can be slow, particularly in 3D, because it involves solving a large Poisson problem. Losasso et al. recently proposed an octree data structure for simulating water and smoke, and we show that this discretization can be applied to the problem of lightning simulation as well. However, implementing the incomplete Cholesky conjugate gradient (ICCG) solver for this problem can be daunting, so we provide an extensive discussion of implementation issues. ICCG solvers can usually be accelerated using "Eisenstat's trick," but the trick cannot be directly applied to the adaptive case. Fortunately, we show that an "almost incomplete Cholesky" factorization can be computed so that Eisenstat's trick can still be used. We then present a fast rendering method based on convolution that is competitive with Monte Carlo ray tracing but orders of magnitude faster, and we also show how to further improve the visual results using jittering.
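
    For readers unfamiliar with the dielectric breakdown model mentioned above, the sketch below shows its basic loop on a small uniform 2-D grid; the paper's octree discretization, ICCG solver and rendering are replaced by a naive Jacobi Laplace solve, and the exponent eta, grid size and boundary conditions are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eta, steps = 33, 2.0, 60

structure = np.zeros((N, N), dtype=bool)
structure[0, N // 2] = True                      # lightning channel seed at the top

def solve_laplace(structure, iters=400):
    """Very naive Jacobi solve: phi = 0 on the channel, phi = 1 on the ground (bottom row)."""
    phi = np.zeros((N, N))
    for _ in range(iters):
        phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                  phi[1:-1, :-2] + phi[1:-1, 2:])
        phi[structure] = 0.0
        phi[-1, :] = 1.0
    return phi

for _ in range(steps):
    phi = solve_laplace(structure)
    # candidate growth sites: empty 4-neighbours of the existing channel
    near = np.zeros_like(structure)
    near[1:, :] |= structure[:-1, :]
    near[:-1, :] |= structure[1:, :]
    near[:, 1:] |= structure[:, :-1]
    near[:, :-1] |= structure[:, 1:]
    candidates = np.argwhere(near & ~structure)
    weights = phi[candidates[:, 0], candidates[:, 1]] ** eta
    if weights.sum() == 0.0:
        break
    pick = candidates[rng.choice(len(candidates), p=weights / weights.sum())]
    structure[pick[0], pick[1]] = True           # grow the channel by one cell
```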

  2. Adaptive Mesh Refinement for Hyperbolic Partial Differential Equations

    DTIC Science & Technology

    1983-03-01

    grids. We use either the Coarse Mesh Approximation method (Ciment, [1971]) or interpolation from a coarser grid to get the boundary values. In Berger...Problems, Math. Comp. 31 (1977), 333-390. M. Ciment, Stable Difference Schemes with Uneven Mesh Spacings, Math. Comp. 25 (1971), 219-227. H. Cramér

  3. A Mass Conservation Algorithm for Adaptive Unrefinement Meshes Used by Finite Element Methods

    DTIC Science & Technology

    2012-01-01

    dimensional mesh generation. In: Proc. 4th ACM-SIAM Symp. on Disc. Algorithms. (1993) 83–92 [9] Weatherill, N., Hassan, O., Marcum, D., Marchant, M.: Grid ...Conference on Computational Science, ICCS 2012 A Mass Conservation Algorithm For Adaptive Unrefinement Meshes Used By Finite Element Methods Hung V. Nguyen...velocity fields, and chemical distribution, as well as conserve mass, especially for water quality applications. Solution accuracy depends highly on mesh

  4. Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    NASA Astrophysics Data System (ADS)

    Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin

    2016-08-01

    This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is more noticeable at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance is particularly favourable under a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our possibilities of simulation to sizes of L = 32, 64 for a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
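
    The mid-point insertion strategy can be sketched independently of the GPU machinery (the acceptance-rate threshold and the example numbers below are illustrative, not values from the paper):

```python
def insert_midpoints(temps, acc_rates, threshold=0.2):
    """Insert a temperature at the midpoint of every adjacent-pair gap whose
    measured replica-exchange acceptance rate falls below the threshold."""
    new_temps = [temps[0]]
    for t_lo, t_hi, rate in zip(temps[:-1], temps[1:], acc_rates):
        if rate < threshold:
            new_temps.append(0.5 * (t_lo + t_hi))   # relieve the exchange bottleneck
        new_temps.append(t_hi)
    return new_temps

temps = [1.0, 1.5, 2.0, 2.5]
acc_rates = [0.45, 0.08, 0.37]                      # one badly compromised gap
print(insert_midpoints(temps, acc_rates))           # [1.0, 1.5, 1.75, 2.0, 2.5]
```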

  5. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.

  6. Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid; Sohn, Andrew

    1996-01-01

    Dynamic mesh adaptation on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalances among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35 percent of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives an almost sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3 percent off the optimal solutions, but requires only 1 percent of the computational time.

  7. A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.

    1999-01-01

    The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.

  8. Electrochemical incineration of indigo. A comparative study between 2D (plate) and 3D (mesh) BDD anodes fitted into a filter-press reactor.

    PubMed

    Nava, José L; Sirés, Ignasi; Brillas, Enric

    2014-01-01

    This paper compares the performance of 2D (plate) and 3D (mesh) boron-doped diamond (BDD) electrodes, fitted into a filter-press reactor, during the electrochemical incineration of indigo textile dye as a model organic compound in chloride medium. The electrolyses were carried out in the FM01-LC reactor at mean fluid velocities in the ranges 0.9 ≤ u ≤ 10.4 cm s(-1) and 1.2 ≤ u ≤ 13.9 cm s(-1) for the 2D BDD and the 3D BDD electrodes, respectively, at current densities of 5.63 and 15 mA cm(-2). The oxidation of the organic matter was promoted, on the one hand, via the physisorbed hydroxyl radicals (BDD(·OH)) formed from water oxidation at the BDD surface and, on the other hand, via active chlorine formed from the oxidation of chloride ions on BDD. The performance of 2D BDD and 3D BDD electrodes in terms of current efficiency, energy consumption, and charge passage during the treatments is discussed.

  9. Development and Verification of Unstructured Adaptive Mesh Technique with Edge Compatibility

    NASA Astrophysics Data System (ADS)

    Ito, Kei; Kunugi, Tomoaki; Ohshima, Hiroyuki

    In the design study of the large-sized sodium-cooled fast reactor (JSFR), one key issue is suppression of gas entrainment (GE) phenomena at a gas-liquid interface. Therefore, the authors have developed a high-precision CFD algorithm to evaluate the GE phenomena accurately. The CFD algorithm has been developed on unstructured meshes to establish an accurate modeling of the JSFR system. For two-phase interfacial flow simulations, a high-precision volume-of-fluid algorithm is employed. It was confirmed that the developed CFD algorithm could reproduce the GE phenomena in a simple GE experiment. Recently, the authors have developed an important technique for the simulation of the GE phenomena in JSFR. That is an unstructured adaptive mesh technique which can apply fine cells dynamically to the region where the GE occurs in JSFR. In this paper, as a part of the development, a two-dimensional unstructured adaptive mesh technique is discussed. In the two-dimensional adaptive mesh technique, each cell is refined isotropically to reduce distortions of the mesh. In addition, connection cells are formed to eliminate the edge incompatibility between refined and non-refined cells. The two-dimensional unstructured adaptive mesh technique is verified by solving the well-known lid-driven cavity flow problem. As a result, the two-dimensional unstructured adaptive mesh technique succeeds in providing a high-precision solution, even though a poor-quality, distorted initial mesh is employed. In addition, the simulation error on the two-dimensional unstructured adaptive mesh is much less than the error on the structured mesh with a larger number of cells.

  10. 3D design and electric simulation of a silicon drift detector using a spiral biasing adapter

    NASA Astrophysics Data System (ADS)

    Li, Yu-yun; Xiong, Bo; Li, Zheng

    2016-09-01

    The detector system of combining a spiral biasing adapter (SBA) with a silicon drift detector (SBA-SDD) is largely different from the traditional silicon drift detector (SDD), including the spiral SDD. It has a spiral biasing adapter of the same design as a traditional spiral SDD and an SDD with concentric rings having the same radius. Compared with the traditional spiral SDD, the SBA-SDD separates the spiral's functions of biasing adapter and the p-n junction definition. In this paper, the SBA-SDD is simulated using a Sentaurus TCAD tool, which is a full 3D device simulation tool. The simulated electric characteristics include electric potential, electric field, electron concentration, and single event effect. Because of the special design of the SBA-SDD, the SBA can generate an optimum drift electric field in the SDD, comparable with the conventional spiral SDD, while the SDD can be designed with concentric rings to reduce surface area. Also the current and heat generated in the SBA are separated from the SDD. To study the single event response, we simulated the induced current caused by incident heavy ions (20 and 50 μm penetration length) with different linear energy transfer (LET). The SBA-SDD can be used just like a conventional SDD, such as X-ray detector for energy spectroscopy and imaging, etc.

  11. Adaptive Image Enhancement for Tracing 3D Morphologies of Neurons and Brain Vasculatures.

    PubMed

    Zhou, Zhi; Sorensen, Staci; Zeng, Hongkui; Hawrylycz, Michael; Peng, Hanchuan

    2015-04-01

    It is important to digitally reconstruct the 3D morphology of neurons and brain vasculatures. A number of previous methods have been proposed to automate the reconstruction process. However, in many cases, noise and low signal contrast with respect to the image background still hamper our ability to use automation methods directly. Here, we propose an adaptive image enhancement method specifically designed to improve the signal-to-noise ratio of several types of individual neurons and brain vasculature images. Our method is based on detecting the salient features of fibrous structures, e.g. axons and dendrites, combined with adaptive estimation of the optimal context windows in which such saliency is detected. We tested this method for a range of brain image datasets and imaging modalities, including bright-field, confocal and multiphoton fluorescent images of neurons, and magnetic resonance angiograms. Applying our adaptive enhancement to these datasets led to improved accuracy and speed in automated tracing of complicated morphology of neurons and vasculatures.

  12. Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding.

    PubMed

    Christophe, Emmanuel; Mailhes, Corinne; Duhamel, Pierre

    2008-12-01

    Hyperspectral images present some specific characteristics that should be used by an efficient compression system. In compression, wavelets have shown a good adaptability to a wide range of data, while being of reasonable complexity. Some wavelet-based compression algorithms have been successfully used for some hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance, while being more useful in terms of complexity issues. It is shown that this decomposition significantly improves the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zero tree algorithms. Various tree structures, creating a relationship between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted on this near-optimal decomposition with the best tree structure found. Performances are compared with the adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.

  13. Analysis and adaptive synchronization of eight-term 3-D polynomial chaotic systems with three quadratic nonlinearities

    NASA Astrophysics Data System (ADS)

    Vaidyanathan, S.

    2014-06-01

    This paper proposes an eight-term 3-D polynomial chaotic system with three quadratic nonlinearities and describes its properties. The maximal Lyapunov exponent (MLE) of the proposed 3-D chaotic system is obtained as L1 = 6.5294. Next, new results are derived for the global chaos synchronization of the identical eight-term 3-D chaotic systems with unknown system parameters using adaptive control. Lyapunov stability theory has been applied for establishing the adaptive synchronization results. Numerical simulations are shown using MATLAB to describe the main results derived in this paper.

  14. Model-based adaptive 3D sonar reconstruction in reverberating environments.

    PubMed

    Saucan, Augustin-Alexandru; Sintes, Christophe; Chonavel, Thierry; Caillec, Jean-Marc Le

    2015-10-01

    In this paper, we propose a novel model-based approach for 3D underwater scene reconstruction, i.e., bathymetry, for side scan sonar arrays in complex and highly reverberating environments like shallow water areas. The presence of multipath echoes and volume reverberation generates false depth estimates. To improve the resulting bathymetry, this paper proposes and develops an adaptive filter, based on several original geometrical models. This multimodel approach makes it possible to track and separate the direction of arrival trajectories of multiple echoes impinging the array. Echo tracking is perceived as a model-based processing stage, incorporating prior information on the temporal evolution of echoes in order to reject cluttered observations generated by interfering echoes. The results of the proposed filter on simulated and real sonar data showcase the clutter-free and regularized bathymetric reconstruction. Model validation is carried out with goodness of fit tests, and demonstrates the importance of model-based processing for bathymetry reconstruction.

  15. Using Adaptive Mesh Refinement to Simulate Storm Surge

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; Dawson, C.

    2012-12-01

    Coastal hazards related to strong storms such as hurricanes and typhoons are one of the most frequently recurring and widespread hazards to coastal communities. Storm surges are among the most devastating effects of these storms, and their prediction and mitigation through numerical simulations is of great interest to coastal communities that need to plan for the subsequent rise in sea level during these storms. Unfortunately these simulations require a large amount of resolution in regions of interest to capture relevant effects, resulting in a computational cost that may be intractable. This problem is exacerbated in situations where a large number of similar runs is needed, such as in the design of infrastructure or forecasting with ensembles of probable storms. One solution to address the problem of computational cost is to employ adaptive mesh refinement (AMR) algorithms. AMR functions by decomposing the computational domain into regions which may vary in resolution as time proceeds. Decomposing the domain as the flow evolves makes this class of methods effective at ensuring that computational effort is spent only where it is needed. AMR also allows for placement of computational resolution independent of user interaction and expectation of the dynamics of the flow as well as particular regions of interest such as harbors. The simulation of many different applications has only been made possible by using AMR-type algorithms, which have allowed otherwise impractical simulations to be performed for much less computational expense. Our work involves studying how storm surge simulations can be improved with AMR algorithms. We have implemented relevant storm surge physics in the GeoClaw package and tested how Hurricane Ike's surge into Galveston Bay and up the Houston Ship Channel compares to available tide gauge data. We will also discuss issues dealing with refinement criteria, optimal resolution and refinement ratios, and inundation.
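
    A toy version of a refinement-flagging criterion of the sort discussed above is shown below; it is not GeoClaw's actual logic, and the gradient threshold and region-of-interest box are invented for illustration:

```python
import numpy as np

def flag_cells(eta, lon, lat, grad_tol=0.05, roi=((-95.2, -94.5), (29.2, 29.9))):
    """Flag cells for refinement where the sea-surface height gradient is steep
    or where the cell lies inside a fixed region of interest (e.g. a harbor)."""
    d_lat, d_lon = np.gradient(eta)
    steep = np.hypot(d_lon, d_lat) > grad_tol
    (x0, x1), (y0, y1) = roi
    in_roi = (lon >= x0) & (lon <= x1) & (lat >= y0) & (lat <= y1)
    return steep | in_roi

# toy data: a smooth surge bump on a lon/lat grid
lon, lat = np.meshgrid(np.linspace(-96, -94, 80), np.linspace(28.5, 30.5, 80))
eta = np.exp(-((lon + 95.0)**2 + (lat - 29.5)**2) / 0.05)
flags = flag_cells(eta, lon, lat)
```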

  16. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
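
    For reference, the third-order TVD Runge-Kutta scheme mentioned above is, in its standard Shu-Osher form (with L the discrete spatial operator):

    \[
    \begin{aligned}
    u^{(1)} &= u^{n} + \Delta t\, L(u^{n}),\\
    u^{(2)} &= \tfrac{3}{4}\, u^{n} + \tfrac{1}{4}\, u^{(1)} + \tfrac{1}{4}\,\Delta t\, L(u^{(1)}),\\
    u^{n+1} &= \tfrac{1}{3}\, u^{n} + \tfrac{2}{3}\, u^{(2)} + \tfrac{2}{3}\,\Delta t\, L(u^{(2)}).
    \end{aligned}
    \]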

  17. A Parallel Implementation of Multilevel Recursive Spectral Bisection for Application to Adaptive Unstructured Meshes. Chapter 1

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen T.; Simon, Horst; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    The design of a parallel implementation of multilevel recursive spectral bisection is described. The goal is to implement a code that is fast enough to enable dynamic repartitioning of adaptive meshes.

  18. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two dimensional and three dimensional surface geometries and compare the resulting parallel produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's "tiling" dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  19. Adaptive-mesh-refinement simulation of partial coalescence cascade of a droplet at a liquid-liquid interface

    NASA Astrophysics Data System (ADS)

    Fakhari, Abbas; Bolster, Diogo

    2016-11-01

    A three-dimensional (3D) adaptive mesh refinement (AMR) algorithm on structured Cartesian grids is developed, and supplemented by a mesoscopic multiphase-flow solver based on state-of-the-art lattice Boltzmann methods (LBM). Using this in-house AMR-LBM routine, we present fully 3D simulations of partial coalescence of a liquid drop with an initially flat interface at small Ohnesorge and Bond numbers. Qualitatively, our numerical simulations are in excellent agreement with experimental observations. Partial coalescence cascades are successfully observed at very small Ohnesorge numbers (Oh ~ 10^-4). The fact that the partial coalescence is absent in similar 2D simulations suggests that the Rayleigh-Plateau instability may be the principal driving mechanism responsible for this phenomenon.

  20. Methods and evaluations of MRI content-adaptive finite element mesh generation for bioelectromagnetic problems.

    PubMed

    Lee, W H; Kim, T-S; Cho, M H; Ahn, Y B; Lee, S Y

    2006-12-07

    In studying bioelectromagnetic problems, finite element analysis (FEA) offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropic conductivity. For FEA, mesh generation is the first critical requirement and there exist many different approaches. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes (cMeshes), resulting in numerous nodes and elements in modelling the conducting domain, and thereby increasing computational load and demand. In this work, we present efficient content-adaptive mesh generation schemes for complex biological volumes of MR images. The presented methodology is fully automatic and generates FE meshes that are adaptive to the geometrical contents of MR images, allowing optimal representation of conducting domain for FEA. We have also evaluated the effect of cMeshes on FEA in three dimensions by comparing the forward solutions from various cMesh head models to the solutions from the reference FE head model in which fine and equidistant FEs constitute the model. The results show that there is a significant gain in computation time with minor loss in numerical accuracy. We believe that cMeshes should be useful in the FEA of bioelectromagnetic problems.

  1. Methods and evaluations of MRI content-adaptive finite element mesh generation for bioelectromagnetic problems

    NASA Astrophysics Data System (ADS)

    Lee, W. H.; Kim, T.-S.; Cho, M. H.; Ahn, Y. B.; Lee, S. Y.

    2006-12-01

    In studying bioelectromagnetic problems, finite element analysis (FEA) offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropic conductivity. For FEA, mesh generation is the first critical requirement and there exist many different approaches. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes (cMeshes), resulting in numerous nodes and elements in modelling the conducting domain, and thereby increasing computational load and demand. In this work, we present efficient content-adaptive mesh generation schemes for complex biological volumes of MR images. The presented methodology is fully automatic and generates FE meshes that are adaptive to the geometrical contents of MR images, allowing optimal representation of conducting domain for FEA. We have also evaluated the effect of cMeshes on FEA in three dimensions by comparing the forward solutions from various cMesh head models to the solutions from the reference FE head model in which fine and equidistant FEs constitute the model. The results show that there is a significant gain in computation time with minor loss in numerical accuracy. We believe that cMeshes should be useful in the FEA of bioelectromagnetic problems.

  2. Dynamic Mesh Adaptation for Front Evolution Using Discontinuous Galerkin Based Weighted Condition Number Mesh Relaxation

    SciTech Connect

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2016-06-21

    A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.

  3. Composite-Grid Techniques and Adaptive Mesh Refinement in Computational Fluid Dynamics

    DTIC Science & Technology

    1990-01-01

    the equations governing the flow. The patched adaptive mesh refinement technique, devised at Stanford by Oliger et al., copes with these sources of...patched adaptive mesh refinement technique, devised at Stanford by Oliger et al. [OL184], copes with these sources of error efficiently by refining...differential equation, as in the numerical grid generation methods proposed by Thompson et al. [THO85], or simply a list of pairs of points in

  4. Software abstractions and computational issues in parallel structure adaptive mesh methods for electronic structure calculations

    SciTech Connect

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1997-05-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where it is most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradient with FAC multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.

  5. Adaptive meshing technique applied to an orthopaedic finite element contact problem.

    PubMed

    Roarty, Colleen M; Grosland, Nicole M

    2004-01-01

    Finite element methods have been applied extensively and with much success in the analysis of orthopaedic implants. Recently a growing interest has developed, in the orthopaedic biomechanics community, in how numerical models can be constructed for the optimal solution of problems in contact mechanics. New developments in this area are of paramount importance in the design of improved implants for orthopaedic surgery. Finite element and other computational techniques are widely applied in the analysis and design of hip and knee implants, with additional joints (ankle, shoulder, wrist) attracting increased attention. The objective of this investigation was to develop a simplified adaptive meshing scheme to facilitate the finite element analysis of a dual-curvature total wrist implant. Using currently available software, the analyst has great flexibility in mesh generation, but must prescribe element sizes and refinement schemes throughout the domain of interest. Unfortunately, it is often difficult to predict in advance a mesh spacing that will give acceptable results. Adaptive finite-element mesh capabilities operate to continuously refine the mesh to improve accuracy where it is required, with minimal intervention by the analyst. Such mesh adaptation generally means that in certain areas of the analysis domain, the size of the elements is decreased (or increased) and/or the order of the elements may be increased (or decreased). In concept, mesh adaptation is very appealing. Although there have been several previous applications of adaptive meshing for in-house FE codes, we have coupled an adaptive mesh formulation with the pre-existing commercial programs PATRAN (MacNeal-Schwendler Corp., USA) and ABAQUS (Hibbit Karlson and Sorensen, Pawtucket, RI). In doing so, we have retained several attributes of the commercial software, which are very attractive for orthopaedic implant applications.

  6. Development of a scalable gas-dynamics solver with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Korkut, Burak

    There are various computational physics areas in which Direct Simulation Monte Carlo (DSMC) and Particle in Cell (PIC) methods are being employed. The accuracy of results from such simulations depends on the fidelity of the physical models being used. The computationally demanding nature of these problems makes them ideal candidates for modern supercomputers. The software developed to run such simulations also needs special attention so that maintainability and extensibility are considered alongside recent numerical methods and programming paradigms. Suited for gas-dynamics problems, a software package called SUGAR (Scalable Unstructured Gas dynamics with Adaptive mesh Refinement) has recently been developed and written in C++ and MPI. Physical and numerical models were added to this framework to simulate ion thruster plumes. SUGAR is used to model the charge-exchange (CEX) reactions occurring between the neutral and ion species as well as the induced electric field effect due to ions. Multiple adaptive mesh refinement (AMR) meshes were used in order to capture different physical length scales present in the flow. A multiple-thruster configuration was run to extend the studies to cases with no axial or radial symmetry, which could only be modeled with a three-dimensional simulation capability. The combined plume structure showed interactions between individual thrusters, which the AMR capability captured in an automated way. The back flow for ions was found to occur when CEX and momentum-exchange (MEX) collisions are present and to be strongly enhanced when the induced electric field is considered. The ion energy distributions in the back flow region were obtained and it was found that the inclusion of the electric field modeling is the most important factor in determining its shape. The plume back flow structure was also examined for a triple-thruster, 3-D geometry case and it was found that the ion velocity in the back flow region appears to be

  7. A User's Guide to AMR1D: An Instructional Adaptive Mesh Refinement Code for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda

    1996-01-01

    This report documents the code AMR1D, which is currently posted on the World Wide Web (http://sdcd.gsfc.nasa.gov/ESS/exchange/contrib/de-fainchtein/adaptive_mesh_refinement.html). AMR1D is a one-dimensional finite element fluid-dynamics solver, capable of adaptive mesh refinement (AMR). It was written as an instructional tool for AMR on unstructured mesh codes. It is meant to illustrate the minimum requirements for AMR on more than one dimension. For that purpose, it uses the same type of data structure that would be necessary on a two-dimensional AMR code (loosely following the algorithm described by Lohner).

  8. Analysis of Adaptive Mesh Refinement for IMEX Discontinuous Galerkin Solutions of the Compressible Euler Equations with Application to Atmospheric Simulations

    DTIC Science & Technology

    2013-01-01

    Analysis of Adaptive Mesh Refinement for IMEX Discontinuous Galerkin Solutions of the Compressible Euler Equations with Application to Atmospheric...order discontinuous Galerkin method on quadrilateral grids with non-conforming elements. We perform a detailed analysis of the cost of AMR by comparing...adaptive mesh refinement, discontinuous Galerkin method, non-conforming mesh, IMEX, compressible Euler equations, atmospheric simulations 1. Introduction

  9. Drag Prediction for the DLR-F6 Wing/Body and DPW Wing using CFL3D and OVERFLOW Overset Mesh

    NASA Technical Reports Server (NTRS)

    Sclanfani, Anthony J.; Vassberg, John C.; Harrison, Neal A.; DeHaan, Mark A.; Rumsey, Christopher L.; Rivers, S. Melissa; Morrison, Joseph H.

    2007-01-01

    A series of overset grids was generated in response to the 3rd AIAA CFD Drag Prediction Workshop (DPW-III) which preceded the 25th Applied Aerodynamics Conference in June 2006. DPW-III focused on accurate drag prediction for wing/body and wing-alone configurations. The grid series built for each configuration consists of a coarse, medium, fine, and extra-fine mesh. The medium mesh is first constructed using the current state of best practices for overset grid generation. The medium mesh is then coarsened and enhanced by applying a factor of 1.5 to each (I,J,K) dimension. The resulting set of parametrically equivalent grids increase in size by a factor of roughly 3.5 from one level to the next denser level. CFD simulations were performed on the overset grids using two different RANS flow solvers: CFL3D and OVERFLOW. The results were post-processed using Richardson extrapolation to approximate grid converged values of lift, drag, pitching moment, and angle-of-attack at the design condition. This technique appears to work well if the solution does not contain large regions of separated flow (similar to that seen in the DLR-F6 results) and appropriate grid densities are selected. The extra-fine grid data helped to establish asymptotic grid convergence for both the OVERFLOW FX2B wing/body results and the OVERFLOW DPW-W1/W2 wing-alone results. More CFL3D data is needed to establish grid convergence trends. The medium grid was utilized beyond the grid convergence study by running each configuration at several angles-of-attack so drag polars and lift/pitching moment curves could be evaluated. The alpha sweep results are used to compare data across configurations as well as across flow solvers. With the exception of the wing/body drag polar, the two codes compare well qualitatively showing consistent incremental trends and similar wing pressure comparisons.

  10. Drag Prediction for the DLR-F4 Wing/Body using OVERFLOW and CFL3D on an Overset Mesh

    NASA Technical Reports Server (NTRS)

    Vassberg, John C.; Buning, Pieter G.; Rumsey, Christopher L.

    2002-01-01

    This paper reviews the importance of numerical drag prediction in an aircraft design environment. A chronicle of collaborations between the authors and colleagues is discussed. This retrospective provides a road-map which illustrates some of the actions taken in the past seven years in pursuit of accurate drag prediction. The advances made possible through these collaborations have changed the manner in which business is conducted during the design of all-new aircraft. The subject of this study is the DLR-F4 wing/body transonic model. Specifically, the work conducted herein was in support of the 1st CFD Drag Prediction Workshop, which was held in conjunction with the 19th Applied Aerodynamics Conference in Anaheim, CA during June, 2001. Comprehensive sets of OVERFLOW simulations were independently performed by several users on a variety of computational platforms. CFL3D was used on a limited basis for additional comparison on the same overset mesh. Drag polars based on this database were constructed with a CFD-to-Test correction applied and compared with test data from three facilities. These comparisons show that the predicted drag polars fall inside the scatter band of the test data, at least for pre-buffet conditions. This places the corrected drag levels within 1% of the averaged experimental values. At the design point, the OVERFLOW and CFL3D drag predictions are within 1-2% of each other. In addition, drag-rise characteristics and a boundary of drag-divergence Mach number are presented.

  11. Consistent properties reconstruction on adaptive Cartesian meshes for complex fluids computations

    SciTech Connect

    Xia, Guoping . E-mail: xiag@purdue.edu; Li, Ding; Merkle, Charles L.

    2007-07-01

    An efficient reconstruction procedure for evaluating the constitutive properties of a complex fluid from general or specialized thermodynamic databases is presented. Properties and their pertinent derivatives are evaluated by means of an adaptive Cartesian mesh in the thermodynamic plane that provides user-specified accuracy over any selected domain. The Cartesian grid produces a binary tree data structure whose search efficiency is competitive with that for an equally spaced table or with simple equations of state such as a perfect gas. Reconstruction is accomplished on a triangular subdivision of the 2D Cartesian mesh that ensures function continuity across cell boundaries in equally and unequally spaced portions of the table to C^0, C^1 or C^2 levels. The C^0 and C^1 reconstructions fit the equation of state and enthalpy relations separately, while the C^2 reconstruction fits the Helmholtz or Gibbs function enabling EOS/enthalpy consistency also. All three reconstruction levels appear effective for CFD solutions obtained to date. The efficiency of the method is demonstrated through storage and data retrieval examples for air, water and carbon dioxide. The time required for property evaluations is approximately two orders of magnitude faster with the reconstruction procedure than with the complete thermodynamic equations resulting in estimated 3D CFD savings of from 30 to 60. Storage requirements are modest for today's computers, with the C^1 method requiring slightly less storage than those for the C^0 and C^2 reconstructions when the same accuracy is specified. Sample fluid dynamic calculations based upon the procedure show that the C^1 and C^2 methods are approximately a factor of two slower than the C^0 method but that the reconstruction procedure enables arbitrary fluid CFD calculations that are as efficient as those for a perfect gas or an incompressible fluid for all three accuracy
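
    The tree-based table idea is easiest to see in one dimension: bisect an interval until mid-point interpolation of the property meets a tolerance, then answer queries by descending to the leaf and reconstructing linearly (a C^0 reconstruction). The sketch below is a deliberate simplification of the paper's 2-D Cartesian/triangular scheme, and the function names and example property curve are made up.

```python
def build_table(f, a, b, tol):
    """Recursively bisect [a, b] until linear interpolation of f meets tol at the midpoint."""
    mid = 0.5 * (a + b)
    if abs(f(mid) - 0.5 * (f(a) + f(b))) <= tol:
        return {"a": a, "b": b, "fa": f(a), "fb": f(b)}      # leaf: endpoint values
    return {"mid": mid,
            "left": build_table(f, a, mid, tol),
            "right": build_table(f, mid, b, tol)}

def lookup(node, x):
    """Descend to the leaf containing x and reconstruct f(x) by linear (C^0) interpolation."""
    while "mid" in node:
        node = node["left"] if x <= node["mid"] else node["right"]
    t = (x - node["a"]) / (node["b"] - node["a"])
    return (1.0 - t) * node["fa"] + t * node["fb"]

# toy "property": a nonlinear curve standing in for an equation-of-state quantity
table = build_table(lambda v: 1.0 / v, 0.1, 5.0, tol=1e-3)
print(lookup(table, 1.2))    # close to 1/1.2 = 0.8333...
```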

  12. Capabilities of wind tunnels with two-adaptive walls to minimize boundary interference in 3-D model testing

    NASA Technical Reports Server (NTRS)

    Rebstock, Rainer; Lee, Edwin E., Jr.

    1989-01-01

    An initial wind tunnel test was made to validate a new wall adaptation method for 3-D models in test sections with two adaptive walls. First part of the adaptation strategy is an on-line assessment of wall interference at the model position. The wall induced blockage was very small at all test conditions. Lift interference occurred at higher angles of attack with the walls set aerodynamically straight. The adaptation of the top and bottom tunnel walls is aimed at achieving a correctable flow condition. The blockage was virtually zero throughout the wing planform after the wall adjustment. The lift curve measured with the walls adapted agreed very well with interference free data for Mach 0.7, regardless of the vertical position of the wing in the test section. The 2-D wall adaptation can significantly improve the correctability of 3-D model data. Nevertheless, residual spanwise variations of wall interference are inevitable.

  13. Zonal multigrid solution of compressible flow problems on unstructured and adaptive meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1989-01-01

    The simultaneous use of adaptive meshing techniques with a multigrid strategy for solving the 2-D Euler equations in the context of unstructured meshes is studied. To obtain optimal efficiency, methods capable of computing locally improved solutions without recourse to global recalculations are pursued. A method for locally refining an existing unstructured mesh, without regenerating a new global mesh is employed, and the domain is automatically partitioned into refined and unrefined regions. Two multigrid strategies are developed. In the first, time-stepping is performed on a global fine mesh covering the entire domain, and convergence acceleration is achieved through the use of zonal coarse grid accelerator meshes, which lie under the adaptively refined regions of the global fine mesh. Both schemes are shown to produce similar convergence rates to each other, and also with respect to a previously developed global multigrid algorithm, which performs time-stepping throughout the entire domain, on each mesh level. However, the present schemes exhibit higher computational efficiency due to the smaller number of operations on each level.

  14. Standard and goal-oriented adaptive mesh refinement applied to radiation transport on 2D unstructured triangular meshes

    SciTech Connect

    Wang Yaqi; Ragusa, Jean C.

    2011-02-01

    Standard and goal-oriented adaptive mesh refinement (AMR) techniques are presented for the linear Boltzmann transport equation. A posteriori error estimates are employed to drive the AMR process and are based on angular-moment information rather than on directional information, leading to direction-independent adapted meshes. An error estimate based on a two-mesh approach and a jump-based error indicator are compared for various test problems. In addition to the standard AMR approach, where the global error in the solution is diminished, a goal-oriented AMR procedure is devised and aims at reducing the error in user-specified quantities of interest. The quantities of interest are functionals of the solution and may include, for instance, point-wise flux values or average reaction rates in a subdomain. A high-order (up to order 4) Discontinuous Galerkin technique with standard upwinding is employed for the spatial discretization; the discrete ordinates method is used to treat the angular variable.
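
    A jump-based indicator of the kind compared above is very compact in one dimension: each cell is scored by the solution jumps at its two faces and the most strongly flagged cells are refined. This is a generic sketch, not the specific estimator of the paper, and the threshold is illustrative.

```python
import numpy as np

def jump_indicator(u):
    """Cell-wise indicator built from the solution jumps at the two faces of each cell."""
    jumps = np.abs(np.diff(u))      # inter-element jumps at interior faces
    eta = np.zeros_like(u)
    eta[:-1] += jumps
    eta[1:] += jumps
    return eta

def cells_to_refine(u, tol=0.1):
    return np.where(jump_indicator(u) > tol)[0]

u = np.where(np.linspace(0.0, 1.0, 50) < 0.5, 1.0, 0.0)   # discontinuous profile
print(cells_to_refine(u))                                  # flags the cells at the jump
```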

  15. Multiphase flow modelling of explosive volcanic eruptions using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Jacobs, Christian T.; Collins, Gareth S.; Piggott, Matthew D.; Kramer, Stephan C.

    2014-05-01

    Explosive volcanic eruptions generate highly energetic plumes of hot gas and ash particles that produce diagnostic deposits and pose an extreme environmental hazard. The formation, dispersion and collapse of these volcanic plumes are complex multiscale processes that are extremely challenging to simulate numerically. Accurate description of particle and droplet aggregation, movement and settling requires a model capable of capturing the dynamics on a range of scales (from cm to km) and a model that can correctly describe the important multiphase interactions that take place. However, even the most advanced models of eruption dynamics to date are restricted by the fixed mesh-based approaches that they employ. The research presented herein describes the development of a compressible multiphase flow model within Fluidity, a combined finite element / control volume computational fluid dynamics (CFD) code, for the study of explosive volcanic eruptions. Fluidity adopts a state-of-the-art adaptive unstructured mesh-based approach to discretise the domain and focus numerical resolution only in areas important to the dynamics, while decreasing resolution where it is not needed as a simulation progresses. This allows the accurate but economical representation of the flow dynamics throughout time, and potentially allows large multi-scale problems to become tractable in complex 3D domains. The multiphase flow model is verified with the method of manufactured solutions, and validated by simulating published gas-solid shock tube experiments and comparing the numerical results against pressure gauge data. The application of the model considers an idealised 7 km by 7 km domain in which the violent eruption of hot gas and volcanic ash high into the atmosphere is simulated. Although the simulations do not correspond to a particular eruption case study, the key flow features observed in a typical explosive eruption event are successfully captured. These include a shock wave resulting

  16. Adaptive moving mesh methods for simulating one-dimensional groundwater problems with sharp moving fronts

    USGS Publications Warehouse

    Huang, W.; Zheng, Lingyun; Zhan, X.

    2002-01-01

    Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
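
    One common form of the moving mesh PDE alluded to above (quoted here from the general MMPDE literature and not necessarily the exact variant used in the paper) relaxes the mesh map x(xi, t) toward equidistribution of a monitor function M:

    \[
    \tau\,\frac{\partial x}{\partial t} \;=\; \frac{\partial}{\partial \xi}\!\left( M(u)\,\frac{\partial x}{\partial \xi} \right),
    \qquad
    M(u) \;=\; \sqrt{1 + \alpha\, u_{x}^{2}},
    \]

    where tau is a relaxation time scale and the arc-length-type monitor concentrates mesh points where the solution gradient (for example, a sharp infiltration front) is large.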

  17. Adaptive, Tactical Mesh Networking: Control Base MANET Model

    DTIC Science & Technology

    2010-09-01

    pp. 316–320. Available: IEEE Xplore, http://ieeexplore.ieee.org [Accessed: June 9, 2010]. [5] N. Sidiropoulos, “Multiuser Transmit Beamforming...” [List-of-figures fragment: Mobile Mesh Segments of TNT Testbed; Infrastructure and Ad Hoc Mode of IEEE 802.11; The Power Spectral Density of OFDM; A Typical IEEE 802.16 Network.]

  18. Experiences with an adaptive mesh refinement algorithm in numerical relativity.

    NASA Astrophysics Data System (ADS)

    Choptuik, M. W.

    An implementation of the Berger/Oliger mesh refinement algorithm for a model problem in numerical relativity is described. The principles of operation of the method are reviewed and its use in conjunction with leap-frog schemes is considered. The performance of the algorithm is illustrated with results from a study of the Einstein/massless scalar field equations in spherical symmetry.

  19. Modelling of fluid-solid interactions using an adaptive mesh fluid model coupled with a combined finite-discrete element model

    NASA Astrophysics Data System (ADS)

    Viré, Axelle; Xiang, Jiansheng; Milthaler, Frank; Farrell, Patrick Emmet; Piggott, Matthew David; Latham, John-Paul; Pavlidis, Dimitrios; Pain, Christopher Charles

    2012-12-01

    Fluid-structure interactions are modelled by coupling the finite element fluid/ocean model `Fluidity-ICOM' with a combined finite-discrete element solid model `Y3D'. Because separate meshes are used for the fluids and solids, the present method is flexible in terms of discretisation schemes used for each material. Also, it can tackle multiple solids impacting on one another, without having ill-posed problems in the resolution of the fluid's equations. Importantly, the proposed approach ensures that Newton's third law is satisfied at the discrete level. This is done by first computing the action-reaction force on a supermesh, i.e. a function superspace of the fluid and solid meshes, and then projecting it to both meshes to use it as a source term in the fluid and solid equations. This paper demonstrates the properties of spatial conservation and accuracy of the method for a sphere immersed in a fluid, with prescribed fluid and solid velocities. While spatial conservation is shown to be independent of the mesh resolutions, accuracy requires fine resolutions in both fluid and solid meshes. It is further highlighted that unstructured meshes adapted to the solid concentration field reduce the numerical errors, in comparison with uniformly structured meshes with the same number of elements. The method is verified on flow past a falling sphere. Its potential for ocean applications is further shown through the simulation of vortex-induced vibrations of two cylinders and the flow past two flexible fibres.

  20. Parallel Adaptive Mesh Refinement for High-Order Finite-Volume Schemes in Computational Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Schwing, Alan Michael

    For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source for error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservatism of the underlying numerics. The effect on high-order numerical fluxes of fourth and sixth order is explored. Provided the refinement criteria are appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail. Validation of the solver developed here against a number of motivating problems shows favorable

  1. Adaptive Kalman snake for semi-autonomous 3D vessel tracking.

    PubMed

    Lee, Sang-Hoon; Lee, Sanghoon

    2015-10-01

    In this paper, we propose a robust semi-autonomous algorithm for 3D vessel segmentation and tracking based on an active contour model and a Kalman filter. For each computed tomography angiography (CTA) slice, we use the active contour model to segment the vessel boundary and the Kalman filter to track position and shape variations of the vessel boundary between slices. For successful segmentation via active contour, we select an adequate number of initial points from the contour of the first slice. The points are set manually by user input for the first slice. For the remaining slices, the initial contour position is estimated autonomously based on segmentation results of the previous slice. To obtain refined segmentation results, an adaptive control spacing algorithm is introduced into the active contour model. Moreover, a block search-based initial contour estimation procedure is proposed to ensure that the initial contour of each slice can be near the vessel boundary. Experiments were performed on synthetic and real chest CTA images. Compared with the well-known Chan-Vese (CV) model, the proposed algorithm exhibited better performance in segmentation and tracking. In particular, receiver operating characteristic analysis on the synthetic and real CTA images demonstrated the time efficiency and tracking robustness of the proposed model. In terms of computational cost, the processing time is effectively reduced by approximately 20%.
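
    The slice-to-slice tracking step described above can be pictured with a plain constant-position Kalman filter applied to a single contour point; this is only a schematic of the general predict/update idea, not the authors' state model, and the matrices below are illustrative assumptions.

    ```python
    import numpy as np

    def kalman_step(x_est, P, z, F, H, Q, R):
        """One predict/update cycle for a linear Kalman filter."""
        # predict
        x_pred = F @ x_est
        P_pred = F @ P @ F.T + Q
        # update with the measurement z (e.g. the segmented point on the new slice)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
        return x_new, P_new

    # track the (x, y) position of one contour point across slices
    F = np.eye(2)            # assume the boundary moves little between slices
    H = np.eye(2)            # we observe the position directly
    Q = 0.5 * np.eye(2)      # process noise: allowed shape variation per slice
    R = 2.0 * np.eye(2)      # measurement noise of the active-contour result
    x_est, P = np.array([10.0, 20.0]), np.eye(2)

    measurements = [np.array([10.4, 20.6]), np.array([11.1, 21.0])]
    for z in measurements:
        x_est, P = kalman_step(x_est, P, z, F, H, Q, R)
        print("filtered point:", x_est)
    ```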

  2. Multigroup radiation hydrodynamics with flux-limited diffusion and adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    González, M.; Vaytet, N.; Commerçon, B.; Masson, J.

    2015-06-01

    Context. Radiative transfer plays a crucial role in the star formation process. Because of the high computational cost, radiation-hydrodynamics simulations performed up to now have mainly been carried out in the grey approximation. In recent years, multifrequency radiation-hydrodynamics models have started to be developed in an attempt to better account for the large variations in opacities as a function of frequency. Aims: We wish to develop an efficient multigroup algorithm for the adaptive mesh refinement code RAMSES which is suited to heavy proto-stellar collapse calculations. Methods: Because of the prohibitive timestep constraints of an explicit radiative transfer method, we constructed a time-implicit solver based on a stabilized bi-conjugate gradient algorithm, and implemented it in RAMSES under the flux-limited diffusion approximation. Results: We present a series of tests that demonstrate the high performance of our scheme in dealing with frequency-dependent radiation-hydrodynamic flows. We also present a preliminary simulation of a 3D proto-stellar collapse using 20 frequency groups. Differences between grey and multigroup results are briefly discussed, and the large amount of information this new method brings us is also illustrated. Conclusions: We have implemented a multigroup flux-limited diffusion algorithm in the RAMSES code. The method performed well against standard radiation-hydrodynamics tests, and was also shown to be ripe for exploitation in the computational star formation context.
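
    Per frequency group, a time-implicit flux-limited-diffusion update of the kind described above amounts to one large sparse linear solve per step. The sketch below shows the shape of such a backward-Euler diffusion step solved with a stabilised bi-conjugate gradient method using SciPy, not the RAMSES solver; the 1-D operator, boundary treatment and coefficients are illustrative assumptions.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import bicgstab

    n, dx, dt, kappa = 200, 1.0e-2, 1.0e-4, 0.1   # grid, time step and diffusion coefficient
    E_old = np.exp(-((np.arange(n) * dx - 1.0) ** 2) / 0.01)   # initial radiative energy profile

    # backward-Euler discretisation of dE/dt = d/dx(kappa dE/dx):
    #   (I - dt * kappa * L) E_new = E_old,  with L the 1-D Laplacian
    lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr") / dx**2
    A = sp.identity(n, format="csr") - dt * kappa * lap

    E_new, info = bicgstab(A, E_old)
    assert info == 0, "BiCGSTAB did not converge"
    print("total energy before/after:", E_old.sum(), E_new.sum())
    ```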

  3. Adaptive Iterative Dose Reduction Using Three Dimensional Processing (AIDR3D) Improves Chest CT Image Quality and Reduces Radiation Exposure

    PubMed Central

    Yamashiro, Tsuneo; Miyara, Tetsuhiro; Honda, Osamu; Kamiya, Hisashi; Murata, Kiyoshi; Ohno, Yoshiharu; Tomiyama, Noriyuki; Moriya, Hiroshi; Koyama, Mitsuhiro; Noma, Satoshi; Kamiya, Ayano; Tanaka, Yuko; Murayama, Sadayuki

    2014-01-01

    Objective To assess the advantages of Adaptive Iterative Dose Reduction using Three Dimensional Processing (AIDR3D) for image quality improvement and dose reduction for chest computed tomography (CT). Methods Institutional Review Boards approved this study and informed consent was obtained. Eighty-eight subjects underwent chest CT at five institutions using identical scanners and protocols. During a single visit, each subject was scanned using different tube currents: 240, 120, and 60 mA. Scan data were converted to images using AIDR3D and a conventional reconstruction mode (without AIDR3D). Using a 5-point scale from 1 (non-diagnostic) to 5 (excellent), three blinded observers independently evaluated image quality for three lung zones, four patterns of lung disease (nodule/mass, emphysema, bronchiolitis, and diffuse lung disease), and three mediastinal measurements (small structure visibility, streak artifacts, and shoulder artifacts). Differences in these scores were assessed by Scheffe's test. Results At each tube current, scans using AIDR3D had higher scores than those without AIDR3D, which were significant for lung zones (p<0.0001) and all mediastinal measurements (p<0.01). For lung diseases, significant improvements with AIDR3D were frequently observed at 120 and 60 mA. Scans with AIDR3D at 120 mA had significantly higher scores than those without AIDR3D at 240 mA for lung zones and mediastinal streak artifacts (p<0.0001), and slightly higher or equal scores for all other measurements. Scans with AIDR3D at 60 mA were also judged superior or equivalent to those without AIDR3D at 120 mA. Conclusion For chest CT, AIDR3D provides better image quality and can reduce radiation exposure by 50%. PMID:25153797

  4. Cell type-specific adaptation of cellular and nuclear volume in micro-engineered 3D environments.

    PubMed

    Greiner, Alexandra M; Klein, Franziska; Gudzenko, Tetyana; Richter, Benjamin; Striebel, Thomas; Wundari, Bayu G; Autenrieth, Tatjana J; Wegener, Martin; Franz, Clemens M; Bastmeyer, Martin

    2015-11-01

    Bio-functionalized three-dimensional (3D) structures fabricated by direct laser writing (DLW) are structurally and mechanically well-defined and ideal for systematically investigating the influence of three-dimensionality and substrate stiffness on cell behavior. Here, we show that different fibroblast-like and epithelial cell lines maintain normal proliferation rates and form functional cell-matrix contacts in DLW-fabricated 3D scaffolds of different mechanics and geometry. Furthermore, the molecular composition of cell-matrix contacts forming in these 3D micro-environments and under conventional 2D culture conditions is identical, based on the analysis of several marker proteins (paxillin, phospho-paxillin, phospho-focal adhesion kinase, vinculin, β1-integrin). However, fibroblast-like and epithelial cells differ markedly in the way they adapt their total cell and nuclear volumes in 3D environments. While fibroblast-like cell lines display significantly increased cell and nuclear volumes in 3D substrates compared to 2D substrates, epithelial cells retain similar cell and nuclear volumes in 2D and 3D environments. Despite differential cell volume regulation between fibroblasts and epithelial cells in 3D environments, the nucleus-to-cell (N/C) volume ratios remain constant for all cell types and culture conditions. Thus, changes in cell and nuclear volume during the transition from 2D to 3D environments are strongly cell type-dependent, but independent of scaffold stiffness, while cells maintain the N/C ratio regardless of culture conditions.

  5. A conforming to interface structured adaptive mesh refinement technique for modeling fracture problems

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Xiao, Fei; Nagarajan, Anand

    2016-12-01

    A Conforming to Interface Structured Adaptive Mesh Refinement (CISAMR) technique is introduced for the automated transformation of a structured grid into a conforming mesh with appropriate element aspect ratios. The CISAMR algorithm is composed of three main phases: (i) Structured Adaptive Mesh Refinement (SAMR) of the background grid; (ii) r-adaptivity of the nodes of elements cut by the crack; (iii) sub-triangulation of the elements deformed during the r-adaptivity process and those with hanging nodes generated during the SAMR process. The required considerations for the treatment of crack tips and branching cracks are also discussed in this manuscript. Regardless of the complexity of the problem geometry and without using iterative smoothing or optimization techniques, CISAMR ensures that aspect ratios of conforming elements are lower than three. Multiple numerical examples are presented to demonstrate the application of CISAMR for modeling linear elastic fracture problems with intricate morphologies.

  6. A conforming to interface structured adaptive mesh refinement technique for modeling fracture problems

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Xiao, Fei; Nagarajan, Anand

    2017-04-01

    A Conforming to Interface Structured Adaptive Mesh Refinement (CISAMR) technique is introduced for the automated transformation of a structured grid into a conforming mesh with appropriate element aspect ratios. The CISAMR algorithm is composed of three main phases: (i) Structured Adaptive Mesh Refinement (SAMR) of the background grid; (ii) r-adaptivity of the nodes of elements cut by the crack; (iii) sub-triangulation of the elements deformed during the r-adaptivity process and those with hanging nodes generated during the SAMR process. The required considerations for the treatment of crack tips and branching cracks are also discussed in this manuscript. Regardless of the complexity of the problem geometry and without using iterative smoothing or optimization techniques, CISAMR ensures that aspect ratios of conforming elements are lower than three. Multiple numerical examples are presented to demonstrate the application of CISAMR for modeling linear elastic fracture problems with intricate morphologies.

  7. Output-based mesh adaptation for high order Navier-Stokes simulations on deformable domains

    NASA Astrophysics Data System (ADS)

    Kast, Steven M.; Fidkowski, Krzysztof J.

    2013-11-01

    We present an output-based mesh adaptation strategy for Navier-Stokes simulations on deforming domains. The equations are solved with an arbitrary Lagrangian-Eulerian (ALE) approach, using a discontinuous Galerkin finite-element discretization in both space and time. Discrete unsteady adjoint solutions, derived for both the state and the geometric conservation law, provide output error estimates and drive adaptation of the space-time mesh. Spatial adaptation consists of dynamic order increment or decrement on a fixed tessellation of the domain, while a combination of coarsening and refinement is used to provide an efficient time step distribution. Results from compressible Navier-Stokes simulations in both two and three dimensions demonstrate the accuracy and efficiency of the proposed approach. In particular, the method is shown to outperform other common adaptation strategies, which, while sometimes adequate for static problems, struggle in the presence of mesh motion.

  8. Energy dependent mesh adaptivity of discontinuous isogeometric discrete ordinate methods with dual weighted residual error estimators

    NASA Astrophysics Data System (ADS)

    Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.

    2017-04-01

    In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is approximately 100 times more accurate than uniform refinement for the same amount of computational effort for a 67 group deep penetration shielding problem.
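
    As a rough illustration of the goal-based (dual-weighted-residual) idea, not the isogeometric SN implementation above: the sketch below solves a 1-D diffusion model problem and its adjoint, then weights the residual of a perturbed solution by the adjoint; for a linear problem the weighted residual reproduces the error in the goal functional, and its local contributions can drive refinement. The model problem and goal functional are illustrative assumptions.

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def poisson_matrix(n, h):
        """Standard 1-D finite-difference Laplacian with Dirichlet ends."""
        return sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h**2

    n, h = 64, 1.0 / 65
    x = np.linspace(h, 1.0 - h, n)
    A = poisson_matrix(n, h)
    f = np.ones(n)                       # source term
    j = (np.abs(x - 0.75) < 0.05) * 1.0  # goal functional: response of u near x = 0.75

    u = spsolve(A, f)                    # primal solution
    z = spsolve(A.T, j)                  # adjoint (dual) solution for the goal

    # pretend u came from a cruder model: perturb it and measure its residual
    u_crude = u + 1e-3 * np.sin(np.pi * x)
    residual = f - A @ u_crude

    # dual-weighted residual: local contributions would drive refinement,
    # their sum estimates the error in the goal functional J(u) = j . u
    eta_local = np.abs(z * residual)
    eta_total = z @ residual
    print("estimated goal error:", eta_total, "  true goal error:", j @ (u - u_crude))
    ```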

  9. A Numerical Study of Mesh Adaptivity in Multiphase Flows with Non-Newtonian Fluids

    NASA Astrophysics Data System (ADS)

    Percival, James; Pavlidis, Dimitrios; Xie, Zhihua; Alberini, Federico; Simmons, Mark; Pain, Christopher; Matar, Omar

    2014-11-01

    We present an investigation into the computational efficiency benefits of dynamic mesh adaptivity in the numerical simulation of transient multiphase fluid flow problems involving Non-Newtonian fluids. Such fluids appear in a range of industrial applications, from printing inks to toothpastes, and introduce new challenges for mesh adaptivity due to the additional "memory" of viscoelastic fluids. Nevertheless, the multiscale nature of these flows implies huge potential benefits for a successful implementation. The study is performed using the open source package Fluidity, which couples an unstructured mesh control volume finite element solver for the multiphase Navier-Stokes equations to a dynamic anisotropic mesh adaptivity algorithm, based on estimated solution interpolation error criteria, and a conservative mesh-to-mesh interpolation routine. The code is applied to problems involving rheologies ranging from simple Newtonian to shear-thinning to viscoelastic materials and verified against experimental data for various industrial and microfluidic flows. This work was undertaken as part of the EPSRC MEMPHIS programme grant EP/K003976/1.

  10. Controllable liquid crystal gratings for an adaptive 2D/3D auto-stereoscopic display

    NASA Astrophysics Data System (ADS)

    Zhang, Y. A.; Jin, T.; He, L. C.; Chu, Z. H.; Guo, T. L.; Zhou, X. T.; Lin, Z. X.

    2017-02-01

    2D/3D switchable, viewpoint-controllable and 2D/3D localizable auto-stereoscopic displays based on controllable liquid crystal gratings are proposed in this work. Using the dual-layer staggered structures on the top and bottom substrates as driving electrodes within a liquid crystal cell, the ratio between the transmitting region and the shielding region can be selectively controlled by the corresponding driving circuit, which means that 2D/3D switching and 3D video sources with different disparity images can be realized in the same auto-stereoscopic display system. Furthermore, driven by the corresponding circuit, a controlled region of the liquid crystal gratings can present the 3D mode while the other regions remain in the 2D mode within the same auto-stereoscopic display. This work demonstrates that controllable liquid crystal gratings have potential applications in the field of auto-stereoscopic displays.

  11. CT image artifacts from brachytherapy seed implants: A postprocessing 3D adaptive median filter

    SciTech Connect

    Basran, Parminder S.; Robertson, Andrew; Wells, Derek

    2011-02-15

    Purpose: To design a postprocessing 3D adaptive median filter that minimizes streak artifacts and improves soft-tissue contrast in postoperative CT images of brachytherapy seed implantations. Methods: The filter works by identifying voxels that are likely streaks, estimating a voxel intensity more reflective of the underlying anatomy by using voxel intensities in adjacent CT slices, and applying a median filter over voxels not identified as seeds. Median values are computed over a 5x5x5 mm region of interest (ROI) within the CT volume. An acrylic phantom simulating a clinical seed implant arrangement and containing nonradioactive seeds was created. Low contrast subvolumes of tissuelike material were also embedded in the phantom. Pre- and postprocessed image quality metrics were compared using the standard deviation of ROIs between the seeds, the CT numbers of low contrast ROIs embedded within the phantom, the signal to noise ratio (SNR), and the contrast to noise ratio (CNR) of the low contrast ROIs. The method was demonstrated with a clinical postimplant CT dataset. Results: After the filter was applied, the standard deviation of CT values in streak artifact regions was significantly reduced from 76.5 to 7.2 HU. Within the observable low contrast plugs, the mean of all ROI standard deviations was significantly reduced from 60.5 to 3.9 HU, SNR significantly increased from 2.3 to 22.4, and CNR significantly increased from 0.2 to 4.1 (all P<0.01). The mean CT in the low contrast plugs remained within 5 HU of the original values. Conclusion: An efficient postprocessing filter has been developed that does not require access to projection data and can be applied irrespective of CT scan parameters, provided the slice thickness and spacing are 3 mm or less.
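
    A minimal sketch of the kind of post-processing described above, not the authors' exact filter: likely streak voxels are flagged by a simple intensity test and replaced with the median over a small 3-D neighbourhood that excludes seed voxels. The thresholds and window size are illustrative assumptions.

    ```python
    import numpy as np

    def adaptive_median_3d(volume, seed_hu=2000.0, streak_lo=-300.0, streak_hi=300.0, w=2):
        """Replace likely streak voxels with a local 3-D median that ignores seeds.

        volume    -- CT volume in Hounsfield units, shape (nz, ny, nx)
        seed_hu   -- voxels above this value are treated as seeds and left untouched
        streak_*  -- soft-tissue band outside which a voxel is considered a streak
        w         -- half-width of the (2w+1)^3 median window
        """
        out = volume.copy()
        seeds = volume >= seed_hu
        streaks = (~seeds) & ((volume < streak_lo) | (volume > streak_hi))
        nz, ny, nx = volume.shape
        for k, j, i in zip(*np.nonzero(streaks)):
            z0, z1 = max(k - w, 0), min(k + w + 1, nz)
            y0, y1 = max(j - w, 0), min(j + w + 1, ny)
            x0, x1 = max(i - w, 0), min(i + w + 1, nx)
            block = volume[z0:z1, y0:y1, x0:x1]
            mask = ~seeds[z0:z1, y0:y1, x0:x1]
            if mask.any():
                out[k, j, i] = np.median(block[mask])
        return out

    # tiny synthetic example: a bright "seed" surrounded by streak-like noise
    vol = np.full((9, 9, 9), 40.0)
    vol[4, 4, 4] = 3000.0                      # seed
    vol[4, 4, 1:4] = [-600.0, 500.0, -450.0]   # streaks next to it
    print(adaptive_median_3d(vol)[4, 4, 1:5])
    ```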

  12. Locally adaptive 2D-3D registration using vascular structure model for liver catheterization.

    PubMed

    Kim, Jihye; Lee, Jeongjin; Chung, Jin Wook; Shin, Yeong-Gil

    2016-03-01

    Two-dimensional-three-dimensional (2D-3D) registration between intra-operative 2D digital subtraction angiography (DSA) and pre-operative 3D computed tomography angiography (CTA) can be used for roadmapping purposes. However, through the projection of 3D vessels, incorrect intersections and overlaps between vessels are produced because of the complex vascular structure, which makes it difficult to obtain the correct solution of 2D-3D registration. To overcome these problems, we propose a registration method that selects a suitable part of a 3D vascular structure for a given DSA image and finds the optimized solution to the partial 3D structure. The proposed algorithm can reduce the registration errors because it restricts the range of the 3D vascular structure for the registration by using only the relevant 3D vessels with the given DSA. To search for the appropriate 3D partial structure, we first construct a tree model of the 3D vascular structure and divide it into several subtrees in accordance with the connectivity. Then, the best matched subtree with the given DSA image is selected using the results from the coarse registration between each subtree and the vessels in the DSA image. Finally, a fine registration is conducted to minimize the difference between the selected subtree and the vessels of the DSA image. In experimental results obtained using 10 clinical datasets, the average distance errors in the case of the proposed method were 2.34±1.94mm. The proposed algorithm converges faster and produces more correct results than the conventional method in evaluations on patient datasets.

  13. Novel and powerful 3D adaptive crisp active contour method applied in the segmentation of CT lung images.

    PubMed

    Rebouças Filho, Pedro Pedrosa; Cortez, Paulo César; da Silva Barros, Antônio C; C Albuquerque, Victor Hugo; R S Tavares, João Manuel

    2017-01-01

    The World Health Organization (WHO) estimates that 300 million people have asthma and 210 million people have Chronic Obstructive Pulmonary Disease (COPD), and predicts that COPD will become the third leading cause of death worldwide by 2030. Computational Vision systems are commonly used in pulmonology to address the task of image segmentation, which is essential for accurate medical diagnoses. Segmentation defines the regions of the lungs in CT images of the thorax that must be further analyzed by the system or by a specialist physician. This work proposes a novel and powerful technique named 3D Adaptive Crisp Active Contour Method (3D ACACM) for the segmentation of CT lung images. The method starts with a sphere within the lung to be segmented that is deformed by forces acting on it towards the lung borders. This process is performed iteratively in order to minimize an energy function associated with the 3D deformable model used. In the experimental assessment, the 3D ACACM is compared against three approaches commonly used in this field: the automatic 3D Region Growing, the level-set algorithm based on coherent propagation and the semi-automatic segmentation by an expert using the 3D OsiriX toolbox. When applied to 40 CT scans of the chest, the 3D ACACM achieved an average F-measure of 99.22%, demonstrating its superior ability to segment lungs in CT images.

  14. A semi-automatic 2D-to-3D video conversion with adaptive key-frame selection

    NASA Astrophysics Data System (ADS)

    Ju, Kuanyu; Xiong, Hongkai

    2014-11-01

    To compensate for the deficit of 3D content, 2D-to-3D video conversion has recently attracted increasing attention from both industrial and academic communities. Semi-automatic 2D-to-3D conversion, which estimates the depth of non-key-frames from key-frames, is desirable because it balances labor cost against 3D effect. The location of the key-frames plays an important role in the quality of depth propagation. This paper proposes a semi-automatic 2D-to-3D scheme with adaptive key-frame selection to keep temporal continuity more reliable and to reduce the depth propagation errors caused by occlusion. Potential key-frames are localized in terms of clustered color variation and motion intensity. The key-frame interval is also taken into account to keep the accumulated propagation errors under control and guarantee minimal user interaction. Once the key-frame depth maps are aligned with user interaction, the non-key-frame depth maps are automatically propagated by shifted bilateral filtering. Considering that the depth of objects may change due to object motion or camera zoom, a bi-directional depth propagation scheme is adopted in which a non-key-frame is interpolated from the two adjacent key-frames. The experimental results show that the proposed scheme outperforms existing 2D-to-3D schemes with a fixed key-frame interval.
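
    The key-frame selection idea, a new key-frame being triggered by a large appearance/motion change or by a capped interval, can be sketched as below; the scoring, weights and thresholds are illustrative assumptions, not the scheme of the paper.

    ```python
    import numpy as np

    def select_key_frames(frames, max_interval=8, score_thresh=12.0):
        """Pick key-frames where the appearance/motion change is large,
        while capping the key-frame interval.

        frames -- list of greyscale frames as 2-D float arrays
        Returns the indices of the selected key-frames (frame 0 is always one).
        """
        keys = [0]
        for t in range(1, len(frames)):
            colour_variation = np.abs(frames[t] - frames[keys[-1]]).mean()  # change vs last key-frame
            motion_intensity = np.abs(frames[t] - frames[t - 1]).mean()     # change vs previous frame
            score = colour_variation + 0.5 * motion_intensity
            if score > score_thresh or t - keys[-1] >= max_interval:
                keys.append(t)
        return keys

    # synthetic clip: mostly static, with a sudden scene change at frame 6
    rng = np.random.default_rng(0)
    frames = [np.full((32, 32), 100.0) + rng.normal(0.0, 1.0, (32, 32)) for _ in range(12)]
    for t in range(6, 12):
        frames[t][:, 16:] += 40.0
    print(select_key_frames(frames))   # expect a key-frame at the scene change
    ```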

  15. High hardness BaCb-(BxOy/BN) composites with 3D mesh-like fine grain-boundary structure by reactive spark plasma sintering.

    PubMed

    Vasylkiv, Oleg; Borodianska, Hanna; Badica, Petre; Grasso, Salvatore; Sakka, Yoshio; Tok, Alfred; Su, Liap Tat; Bosman, Michael; Ma, Jan

    2012-02-01

    Boron carbide B4C powders were subjected to reactive spark plasma sintering (also known as field-assisted sintering, pulsed current sintering or plasma-assisted sintering) under a nitrogen atmosphere. For an optimum hexagonal BN (h-BN) content, estimated from X-ray diffraction measurements at approximately 0.4 wt%, the as-prepared BaCb-(BxOy/BN) ceramic shows Berkovich and Vickers hardness values of 56.7 ± 3.1 GPa and 39.3 ± 7.6 GPa, respectively. These values are higher than those of the vacuum-SPS-processed pristine B4C sample and of the samples with mechanically added h-BN. XRD and electron microscopy data suggest that in the samples produced by reactive SPS in an N2 atmosphere, containing an estimated 0.3-1.5% h-BN, the crystallite size of the boron carbide grains decreases with an increasing amount of N2, while for the newly formed lamellar h-BN the crystallite size is almost constant (approximately 30-50 nm). BN is located at the grain boundaries between the boron carbide grains and is wrapped and intercalated by a thin layer of boron oxide. BxOy/BN forms a fine and continuous 3D mesh-like structure, which is a possible reason for the good mechanical properties.

  16. Efficient global wave propagation adapted to 3-D structural complexity: a pseudospectral/spectral-element approach

    NASA Astrophysics Data System (ADS)

    Leng, Kuangdai; Nissen-Meyer, Tarje; van Driel, Martin

    2016-12-01

    We present a new, computationally efficient numerical method to simulate global seismic wave propagation in realistic 3-D Earth models. We characterize the azimuthal dependence of 3-D wavefields in terms of Fourier series, such that the 3-D equations of motion reduce to an algebraic system of coupled 2-D meridian equations, which is then solved by a 2-D spectral element method (SEM). Computational efficiency of such a hybrid method stems from lateral smoothness of 3-D Earth models and axial singularity of seismic point sources, which jointly confine the Fourier modes of wavefields to a few lower orders. We show novel benchmarks for global wave solutions in 3-D structures between our method and an independent, fully discretized 3-D SEM with remarkable agreement. Performance comparisons are carried out on three state-of-the-art tomography models, with seismic period ranging from 34 s down to 11 s. Our method runs up to two orders of magnitude faster than the 3-D SEM, with a computational advantage that grows with seismic frequency.

  17. Automatic off-body overset adaptive Cartesian mesh method based on an octree approach

    NASA Astrophysics Data System (ADS)

    Péron, Stéphanie; Benoit, Christophe

    2013-01-01

    This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for the flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, assuming each octree leaf node defines a structured Cartesian block. This enables one to take into account the large scale discrepancies in terms of resolution between the different bodies involved in the simulation, with minimum memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first one generates Adaptive Mesh Refinement (AMR) type grid systems, and the second one generates abutting or minimally overlapping Cartesian grid set. We also introduce an algorithm to control the number of points at each adaptation, that automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing tip vortex computation assesses the capability of the method to capture accurately the flow features.
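
    The octree-to-block idea, each leaf of an adaptively refined octree defining one structured Cartesian block, can be sketched as below; the refinement indicator (refine near a body) and recursion depth are illustrative assumptions, not the method of the paper.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Node:
        origin: tuple      # (x, y, z) of the lower corner
        size: float        # edge length of the cubic node
        children: list

    def refine(node, needs_refinement, max_depth, depth=0):
        """Recursively split octree nodes where the indicator asks for it."""
        if depth >= max_depth or not needs_refinement(node):
            return
        h = node.size / 2.0
        x, y, z = node.origin
        node.children = [Node((x + i * h, y + j * h, z + k * h), h, [])
                         for i in (0, 1) for j in (0, 1) for k in (0, 1)]
        for child in node.children:
            refine(child, needs_refinement, max_depth, depth + 1)

    def leaves(node):
        """Each leaf would become one structured Cartesian block of cells."""
        if not node.children:
            yield node
        else:
            for child in node.children:
                yield from leaves(child)

    # refine towards a small "body" sitting near one corner of a unit cube
    def near_body(node):
        centre = [o + node.size / 2.0 for o in node.origin]
        return sum(c * c for c in centre) ** 0.5 < 0.3 + node.size

    root = Node((0.0, 0.0, 0.0), 1.0, [])
    refine(root, near_body, max_depth=4)
    blocks = list(leaves(root))
    print(len(blocks), "Cartesian blocks, finest edge =", min(b.size for b in blocks))
    ```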

  18. Adaptive unstructured meshing for thermal stress analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Dechaumphai, Pramote

    1992-01-01

    An adaptive unstructured meshing technique for mechanical and thermal stress analysis of built-up structures has been developed. A triangular membrane finite element and a new plate bending element are evaluated on a panel with a circular cutout and a frame stiffened panel. The adaptive unstructured meshing technique, without a priori knowledge of the solution to the problem, generates clustered elements only where needed. An improved solution accuracy is obtained at a reduced problem size and analysis computational time as compared to the results produced by the standard finite element procedure.

  19. Parallelization of Unsteady Adaptive Mesh Refinement for Unstructured Navier-Stokes Solvers

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2014-01-01

    This paper explores the implementation of the MPI parallelization in a Navier-Stokes solver using adaptive mesh refinement. Viscous and inviscid test problems are considered for the purpose of benchmarking, as are implicit and explicit time advancement methods. The main test problem for comparison includes effects from boundary layers and other viscous features and requires a large number of grid points for accurate computation. Experimental validation against double cone experiments in hypersonic flow is shown. The adaptive mesh refinement shows promise for a staple test problem in the hypersonic community. Extension to more advanced techniques for more complicated flows is described.

  20. Using Multi-threading for the Automatic Load Balancing of 2D Adaptive Finite Element Meshes

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Thulasiraman, Parimala; Gao, Guang R.; Saini, Subhash (Technical Monitor)

    1998-01-01

    In this paper, we present a multi-threaded approach for the automatic load balancing of adaptive finite element (FE) meshes. The platform of our choice is the EARTH multi-threaded system, which offers sufficient capabilities to tackle this problem. We implement the adaption phase of FE applications on triangular meshes and exploit the EARTH token mechanism to automatically balance the resulting irregular and highly nonuniform workload. We discuss the results of our experiments on EARTH-SP2, an implementation of EARTH on the IBM SP2, with different load balancing strategies that are built into the runtime system.

  1. Thickness-based adaptive mesh refinement methods for multi-phase flow simulations with thin regions

    SciTech Connect

    Chen, Xiaodong; Yang, Vigor

    2014-07-15

    In numerical simulations of multi-scale, multi-phase flows, grid refinement is required to resolve regions with small scales. A notable example is liquid-jet atomization and subsequent droplet dynamics. It is essential to characterize the detailed flow physics with variable length scales with high fidelity, in order to elucidate the underlying mechanisms. In this paper, two thickness-based mesh refinement schemes are developed, based on distance- and topology-oriented criteria, for thin regions bounded by a confining wall or plane of symmetry and for general configurations, respectively. Both techniques are implemented in a general framework with a volume-of-fluid formulation and an adaptive-mesh-refinement capability. The distance-oriented technique compares the ratio of an interfacial cell size to the distance between the cell's mass center and a reference plane against a critical value. The topology-oriented technique is developed from digital topology theories to handle more general conditions. The requirement for interfacial mesh refinement can be detected swiftly, without the need for thickness information, equation solving, variable averaging or mesh repairing. The mesh refinement level increases smoothly on demand in thin regions. The schemes have been verified and validated against several benchmark cases to demonstrate their effectiveness and robustness. These include the dynamics of colliding droplets, droplet motions in a microchannel, and atomization of liquid impinging jets. Overall, the thickness-based refinement technique provides highly adaptive meshes for problems with thin regions in an efficient and fully automatic manner.
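
    A minimal sketch of the distance-oriented criterion described above: an interfacial cell is flagged for refinement when the ratio of its size to the distance from its centre to a reference plane (wall or plane of symmetry) exceeds a critical value. The plane, cell data and threshold are illustrative assumptions.

    ```python
    import numpy as np

    def flag_thin_region(cell_centres, cell_sizes, plane_point, plane_normal, ratio_crit=0.2):
        """Flag interfacial cells whose size is large relative to their distance
        from a reference plane (e.g. a confining wall or a plane of symmetry)."""
        n = np.asarray(plane_normal, dtype=float)
        n /= np.linalg.norm(n)
        dist = np.abs((cell_centres - plane_point) @ n)
        ratio = cell_sizes / np.maximum(dist, 1e-12)
        return ratio > ratio_crit

    # interfacial cells of a liquid film approaching a wall at z = 0
    centres = np.array([[0.0, 0.0, 0.50],
                        [0.0, 0.0, 0.10],
                        [0.0, 0.0, 0.02]])
    sizes = np.array([0.05, 0.05, 0.05])
    flags = flag_thin_region(centres, sizes, np.zeros(3), np.array([0.0, 0.0, 1.0]))
    print(flags)   # only the cells close to the wall (thin film) are flagged
    ```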

  2. Three dimensional hydrodynamic calculations with adaptive mesh refinement of the evolution of Rayleigh Taylor and Richtmyer Meshkov instabilities in converging geometry: Multi-mode perturbations

    SciTech Connect

    Klein, R.I. |; Bell, J.; Pember, R.; Kelleher, T.

    1993-04-01

    The authors present results for high resolution hydrodynamic calculations of the growth and development of instabilities in shock driven imploding spherical geometries in both 2D and 3D. They solve the Eulerian equations of hydrodynamics with a high order Godunov approach using local adaptive mesh refinement to study the temporal and spatial development of the turbulent mixing layer resulting from both Richtmyer Meshkov and Rayleigh Taylor instabilities. The use of a high resolution Eulerian discretization with adaptive mesh refinement permits them to study the detailed three-dimensional growth of multi-mode perturbations far into the non-linear regime for converging geometries. They discuss convergence properties of the simulations by calculating global properties of the flow. They discuss the time evolution of the turbulent mixing layer and compare its development to a simple theory for a turbulent mix model in spherical geometry based on Plesset's equation. Their 3D calculations show that the constant found in the planar incompressible experiments of Read and Young's may not be universal for converging compressible flow. They show the 3D time trace of transitional onset to a mixing state using the temporal evolution of volume rendered imaging. Their preliminary results suggest that the turbulent mixing layer loses memory of its initial perturbations for classical Richtmyer Meshkov and Rayleigh Taylor instabilities in spherically imploding shells. They discuss the time evolution of mixed volume fraction and the role of vorticity in converging 3D flows in enhancing the growth of a turbulent mixing layer.

  3. Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature in its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, and local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the restricted solution from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures grid converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module is demonstrated using both simple model problems and complex three-dimensional examples on meshes with 10^6 to 10^7 cells.
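
    The tau-style estimate described above, evaluating the coarse-grid residual of the fine-grid solution restricted to the coarse grid, can be seen in one dimension in the sketch below: where the coarse grid resolves the solution the residual stays small, and it grows where it does not. The toy Poisson operator and injection restriction are illustrative assumptions.

    ```python
    import numpy as np

    def laplacian_residual(u, f, h):
        """Residual of the discrete Poisson problem -u'' = f (interior points only)."""
        r = f[1:-1] + (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2
        return np.concatenate(([0.0], r, [0.0]))

    # fine-grid solution of -u'' = f with a sharp feature near x = 0.7
    nf = 129
    xf = np.linspace(0.0, 1.0, nf)
    hf = xf[1] - xf[0]
    f_fine = 1.0 + 200.0 * np.exp(-((xf - 0.7) / 0.02) ** 2)
    A = (np.diag(np.full(nf - 2, 2.0)) - np.diag(np.ones(nf - 3), 1)
         - np.diag(np.ones(nf - 3), -1)) / hf**2
    u_fine = np.zeros(nf)
    u_fine[1:-1] = np.linalg.solve(A, f_fine[1:-1])

    # restrict to the coarse grid (every other point) and evaluate the coarse residual
    xc, uc, fc = xf[::2], u_fine[::2], f_fine[::2]
    hc = xc[1] - xc[0]
    tau = laplacian_residual(uc, fc, hc)
    print("largest tau near x =", xc[np.argmax(np.abs(tau))])   # flags the sharp feature
    ```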

  4. Eutectic pattern transition under different temperature gradients: A phase field study coupled with the parallel adaptive-mesh-refinement algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, A.; Guo, Z.; Xiong, S.-M.

    2017-03-01

    Eutectic pattern transition under an externally imposed temperature gradient was studied using the phase field method coupled with a novel parallel adaptive-mesh-refinement (Para-AMR) algorithm. Numerical tests revealed that the Para-AMR algorithm could improve the computational efficiency by two orders of magnitude and thus made it possible to perform large-scale simulations without compromising accuracy. Results showed that the direction of the temperature gradient played a crucial role in determining the eutectic patterns during solidification, which agreed well with experimental observations. In particular, the presence of the transverse temperature gradient could tilt the eutectic patterns, and in 3D simulations, the eutectic microstructure would alter from lamellar to rod-like and/or from rod-like to dumbbell-shaped. Furthermore, under a radial temperature gradient, the eutectic would evolve from a dumbbell-shaped or clover-shaped pattern to an isolated rod-like pattern.

  5. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.

  6. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    SciTech Connect

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.

  7. Adaptive mesh refinement and multilevel iteration for multiphase, multicomponent flow in porous media

    SciTech Connect

    Hornung, R.D.

    1996-12-31

    An adaptive local mesh refinement (AMR) algorithm originally developed for unsteady gas dynamics is extended to multi-phase flow in porous media. Within the AMR framework, we combine specialized numerical methods to treat the different aspects of the partial differential equations. Multi-level iteration and domain decomposition techniques are incorporated to accommodate elliptic/parabolic behavior. High-resolution shock capturing schemes are used in the time integration of the hyperbolic mass conservation equations. When combined with AMR, these numerical schemes provide high resolution locally in a more efficient manner than if they were applied on a uniformly fine computational mesh. We will discuss the interplay of physical, mathematical, and numerical concerns in the application of adaptive mesh refinement to flow in porous media problems of practical interest.

  8. General relativistic hydrodynamics with Adaptive-Mesh Refinement (AMR) and modeling of accretion disks

    NASA Astrophysics Data System (ADS)

    Donmez, Orhan

    We present a general procedure to solve the General Relativistic Hydrodynamical (GRH) equations with Adaptive-Mesh Refinement (AMR) and to model an accretion disk around a black hole. To do this, the GRH equations are written in a conservative form to exploit their hyperbolic character. The numerical solution of the general relativistic hydrodynamic equations is obtained with High Resolution Shock Capturing (HRSC) schemes, specifically designed to solve non-linear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. We use Marquina fluxes with MUSCL left and right states to solve the GRH equations. First, we carry out different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations to verify the second-order convergence of the code in 1D, 2D and 3D. Second, we solve the GRH equations and use the general relativistic test problems to compare the numerical solutions with analytic ones. To do so, we couple the flux part of the general relativistic hydrodynamic equations with the source part using Strang splitting. The coupling of the GRH equations is carried out in a treatment which gives second-order accurate solutions in space and time. The test problems examined include shock tubes, geodesic flows, and circular motion of a particle around the black hole. Finally, we apply this code to accretion disk problems around a black hole using the Schwarzschild metric as the background of the computational domain. We find spiral shocks on the accretion disk, which are observationally expected results. We also examine the star-disk interaction near a massive black hole. We find that when stars are ground down or a hole is punched in the accretion disk, shock waves are created that destroy the accretion disk.

  9. Failure of Anisotropic Unstructured Mesh Adaption Based on Multidimensional Residual Minimization

    NASA Technical Reports Server (NTRS)

    Wood, William A.; Kleb, William L.

    2003-01-01

    An automated anisotropic unstructured mesh adaptation strategy is proposed, implemented, and assessed for the discretization of viscous flows. The adaption criterion is based upon the minimization of the residual fluctuations of a multidimensional upwind viscous flow solver. For scalar advection, this adaption strategy has been shown to use fewer grid points than gradient-based adaption, naturally aligning mesh edges with discontinuities and characteristic lines. The adaption utilizes a compact stencil and is local in scope, with four fundamental operations: point insertion, point deletion, edge swapping, and nodal displacement. Evaluation of the solution-adaptive strategy is performed for a two-dimensional blunt body laminar wind tunnel case at Mach 10. The results demonstrate that the strategy suffers from a lack of robustness, particularly with regard to alignment of the bow shock in the vicinity of the stagnation streamline. In general, constraining the adaption to such a degree as to maintain robustness results in negligible improvement to the solution. Because the present method fails to consistently or significantly improve the flow solution, it is rejected in favor of simple uniform mesh refinement.

  10. Adapted morphing model for 3D volume reconstruction applied to abdominal CT images

    NASA Astrophysics Data System (ADS)

    Fadeev, Aleksey; Eltonsy, Nevine; Tourassi, Georgia; Martin, Robert; Elmaghraby, Adel

    2005-04-01

    The purpose of this study was to develop a 3D volume reconstruction model for volume rendering and apply this model to abdominal CT data. The model development includes two steps: (1) interpolation of given data for a complete 3D model, and (2) visualization. First, CT slices are interpolated using a special morphing algorithm. The main idea of this algorithm is to take a region from one CT slice and locate its most probable correspondence in the adjacent CT slice. The algorithm determines the transformation function of the region in between two adjacent CT slices and interpolates the data accordingly. The most probable correspondence of a region is obtained using correlation analysis between the given region and regions of the adjacent CT slice. By applying this technique recursively, taking progressively smaller subregions within a region, a high-quality, accurate interpolation is obtained. The main advantages of this morphing algorithm are 1) its applicability not only to parallel planes like CT slices but also to general configurations of planes in 3D space, and 2) its fully automated nature: unlike most morphing techniques, it does not require the user to specify control points. Subsequently, to visualize data, a specialized volume rendering card (TeraRecon VolumePro 1000) was used. To represent data in 3D space, special software was developed to convert interpolated CT slices to 3D objects compatible with the VolumePro card. Visual comparison between the proposed model and linear interpolation clearly demonstrates the superiority of the proposed model.
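
    The core matching step, locating the most probable correspondence of a region in the adjacent slice by correlation, can be sketched as below; the block size, search window and use of plain normalised cross-correlation are illustrative assumptions.

    ```python
    import numpy as np

    def best_match(region, next_slice, top_left, search=5):
        """Find the shift of `region` in `next_slice` that maximises the
        normalised cross-correlation within a small search window."""
        h, w = region.shape
        r0 = (region - region.mean()) / (region.std() + 1e-9)
        best, best_shift = -np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top_left[0] + dy, top_left[1] + dx
                if y < 0 or x < 0 or y + h > next_slice.shape[0] or x + w > next_slice.shape[1]:
                    continue
                cand = next_slice[y:y + h, x:x + w]
                c0 = (cand - cand.mean()) / (cand.std() + 1e-9)
                score = float((r0 * c0).mean())
                if score > best:
                    best, best_shift = score, (dy, dx)
        return best_shift, best

    # synthetic pair of slices: a bright blob that drifts by (2, 3) pixels
    rng = np.random.default_rng(1)
    slice_a = rng.normal(0.0, 1.0, (64, 64))
    slice_b = rng.normal(0.0, 1.0, (64, 64))
    slice_a[20:30, 20:30] += 10.0
    slice_b[22:32, 23:33] += 10.0
    shift, score = best_match(slice_a[20:30, 20:30], slice_b, (20, 20))
    print("estimated shift:", shift, "correlation:", round(score, 3))
    ```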

  11. Joint Adaptive Pre-processing Resilience and Post-processing Concealment Schemes for 3D Video Transmission

    NASA Astrophysics Data System (ADS)

    El-Shafai, Walid

    2015-03-01

    3D video transmission over erroneous networks is still a considerable issue due to restricted resources and the presence of severe channel errors. Efficiently compressing 3D video at a low transmission rate, while maintaining a high quality of the received 3D video, is very challenging. Since it is not feasible to re-transmit all corrupted macro-blocks (MBs) in real-time applications with limited resources, the lost MBs must be retrieved at the decoder side using adequate post-processing schemes, such as error concealment (EC). In this paper, we propose an adaptive multi-mode EC (AMMEC) algorithm at the decoder, based on utilizing a pre-processing flexible macro-block ordering error resilience (FMO-ER) technique at the encoder, to efficiently conceal the erroneous MBs of intra- and inter-coded frames of 3D video. Experimental simulation results show that the proposed FMO-ER/AMMEC schemes can significantly improve the objective and subjective 3D video quality.

  12. First steps toward 3D high resolution imaging using adaptive optics and full-field optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Blanco, Leonardo; Blavier, Marie; Glanc, Marie; Pouplard, Florence; Tick, Sarah; Maksimovic, Ivan; Chenegros, Guillaume; Mugnier, Laurent; Lacombe, Francois; Rousset, Gérard; Paques, Michel; Le Gargasson, Jean-François; Sahel, Jose-Alain

    2008-09-01

    We describe here two parts of our future 3D fundus camera coupling Adaptive Optics and full-field Optical Coherence Tomography. The first part is an Adaptive Optics flood imager installed at the Quinze-Vingts Hospital, regularly used on healthy and pathological eyes. A posteriori image reconstruction is performed, increasing the final image quality and field of view. The instrument lateral resolution is better than 2 microns. The second part is a full-field Optical Coherence Tomograph, which has demonstrated the capability of performing a simple "four-phase" image reconstruction of non-biological samples and ex situ retinas. The final aim is to couple both parts in order to achieve 3D high-resolution mapping of in vivo retinas.

  13. Time-accurate anisotropic mesh adaptation for three-dimensional time-dependent problems with body-fitted moving geometries

    NASA Astrophysics Data System (ADS)

    Barral, N.; Olivier, G.; Alauzet, F.

    2017-02-01

    Anisotropic metric-based mesh adaptation has proved its efficiency to reduce the CPU time of steady and unsteady simulations while improving their accuracy. However, its extension to time-dependent problems with body-fitted moving geometries is far from straightforward. This paper establishes a well-founded framework for multiscale mesh adaptation of unsteady problems with moving boundaries. This framework is based on a novel space-time analysis of the interpolation error, within the continuous mesh theory. An optimal metric field, called ALE metric field, is derived, which takes into account the movement of the mesh during the adaptation. Based on this analysis, the global fixed-point adaptation algorithm for time-dependent simulations is extended to moving boundary problems, within the range of body-fitted moving meshes and ALE simulations. Finally, three dimensional adaptive simulations with moving boundaries are presented to validate the proposed approach.

  14. Multiphase flow modelling of volcanic ash particle settling in water using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Jacobs, C. T.; Collins, G. S.; Piggott, M. D.; Kramer, S. C.; Wilson, C. R. G.

    2013-02-01

    Small-scale experiments of volcanic ash particle settling in water have demonstrated that ash particles can either settle slowly and individually, or rapidly and collectively as a gravitationally unstable ash-laden plume. This has important implications for the emplacement of tephra deposits on the seabed. Numerical modelling has the potential to extend the results of laboratory experiments to larger scales and explore the conditions under which plumes may form and persist, but many existing models are computationally restricted by the fixed mesh approaches that they employ. In contrast, this paper presents a new multiphase flow model that uses an adaptive unstructured mesh approach. As a simulation progresses, the mesh is optimized to focus numerical resolution in areas important to the dynamics and decrease it where it is not needed, thereby potentially reducing computational requirements. Model verification is performed using the method of manufactured solutions, which shows the correct solution convergence rates. Model validation and application considers 2-D simulations of plume formation in a water tank which replicate published laboratory experiments. The numerically predicted settling velocities for both individual particles and plumes, as well as instability behaviour, agree well with experimental data and observations. Plume settling is clearly hindered by the presence of a salinity gradient, and its influence must therefore be taken into account when considering particles in bodies of saline water. Furthermore, individual particles settle in the laminar flow regime while plume settling is shown (by plume Reynolds numbers greater than unity) to be in the turbulent flow regime, which has a significant impact on entrainment and settling rates. Mesh adaptivity maintains solution accuracy while providing a substantial reduction in computational requirements when compared to the same simulation performed using a fixed mesh, highlighting the benefits of an

  15. Controlling depth of focus in 3D image reconstructions by flexible and adaptive deformation of digital holograms.

    PubMed

    Ferraro, P; Paturzo, M; Memmolo, P; Finizio, A

    2009-09-15

    We show here that through an adaptive deformation of digital holograms it is possible to manage the depth of focus in 3D imaging reconstruction. Deformation is applied to the original hologram with the aim to put simultaneously in focus, and in one reconstructed image plane, different objects lying at different distances from the hologram plane (i.e., CCD sensor). In the same way, by adapting the deformation it is possible to extend the depth of field having a tilted object entirely in focus. We demonstrate the method in both lensless as well as in microscope configuration.

  16. Adaptive Meshing of Ship Air-Wake Flowfields

    DTIC Science & Technology

    2014-10-21

    resolve gradients of the adaptation function. The third method is a meshless method that uses a physics-based force model to move nodes around to resolve the geometry and flowfield. The initial phase of the research conducted... three codes all solve the unsteady Euler equations, but use different discretization strategies. The target application is an aircraft in a landing

  17. Adaptive optofluidic lens(es) for switchable 2D and 3D imaging

    NASA Astrophysics Data System (ADS)

    Huang, Hanyang; Wei, Kang; Zhao, Yi

    2016-03-01

    The stereoscopic image is often captured using dual cameras arranged side-by-side and optical path switching systems such as two separate solid lenses or biprism/mirrors. Miniaturizing current stereoscopic devices down to several millimeters comes at a sacrifice: the limited light entry worsens the final image resolution and brightness. It is known that optofluidics offer good re-configurability for imaging systems. Leveraging this technique, we report a reconfigurable optofluidic system whose optical layout can be swapped between a singlet lens 10 mm in diameter and a pair of binocular lenses, each lens 3 mm in diameter, for switchable two-dimensional (2D) and three-dimensional (3D) imaging. The singlet and the binoculars share the same optical path and the same imaging sensor. The singlet acquires a 2D image with better resolution and brightness, while the binoculars capture stereoscopic image pairs for 3D vision and depth perception. The focusing power tuning capability of the singlet and the binoculars enables image acquisition at varied object planes by adjusting the hydrostatic pressure across the lens membrane. The vari-focal singlet and binoculars thus work interchangeably and complementarily. The device is thus expected to have applications in robotic vision, stereoscopy, laparoendoscopy and miniaturized zoom lens systems.

  18. Time-dependent grid adaptation for meshes of triangles and tetrahedra

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.

    1993-01-01

    This paper presents in viewgraph form a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich in regions of relatively large errors and to locally coarsen in regions of relatively small errors. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high aspect ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown, but in general requires user supervision for a more efficient solution.
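
    As a minimal illustration of the error-equidistribution idea (our own sketch, not the viewgraph's algorithm), cells whose error estimate is well above the mesh average are flagged for enrichment and cells well below it for coarsening:

        import numpy as np

        def flag_cells(error, enrich_factor=2.0, coarsen_factor=0.5):
            # Refine where the local error is large, coarsen where it is small,
            # so the numerical error ends up distributed more evenly.
            error = np.asarray(error, dtype=float)
            mean = error.mean()
            return error > enrich_factor * mean, error < coarsen_factor * mean

        refine, coarsen = flag_cells([0.01, 0.02, 0.5, 0.03, 0.001])
        print(refine)    # only the cell with error 0.5 is enriched
        print(coarsen)   # the low-error cells are candidates for coarsening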

  19. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
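
    The per-region predictor selection described above is a standard Lagrangian rate-distortion choice; a hedged sketch follows (predictor names and numbers are hypothetical, not taken from the paper):

        def select_predictor(candidates, lagrange_multiplier):
            # candidates maps predictor name -> (distortion, rate in bits);
            # pick the predictor minimising the Lagrangian cost D + lambda * R.
            return min(candidates,
                       key=lambda p: candidates[p][0] + lagrange_multiplier * candidates[p][1])

        region = {"butterfly": (4.1, 120.0), "average": (5.0, 95.0), "unlifted": (3.2, 180.0)}
        print(select_predictor(region, lagrange_multiplier=0.01))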

  20. An Immersed Boundary - Adaptive Mesh Refinement solver (IB-AMR) for high fidelity fully resolved wind turbine simulations

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2015-11-01

    The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations past wind farms located over complex terrains. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes and the fractional step method has been employed. The overall performance and robustness of the second order accurate, parallel, unstructured solver are evaluated by comparing the numerical simulations against conforming grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field, including the geometry of the tower, the nacelle and especially the rotor blades of a wind tunnel scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.

  1. Adaptive Mesh Refinement in Curvilinear Body-Fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur; Modiano, David; Colella, Phillip

    1995-01-01

    To be truly compatible with structured grids, an AMR algorithm should employ a block structure for the refined grids to allow flow solvers to take advantage of the strengths of structured grid systems, such as efficient solution algorithms for implicit discretizations and multigrid schemes. One such algorithm, the AMR algorithm of Berger and Colella, has been applied to and adapted for use with body-fitted structured grid systems. Results are presented for a transonic flow over a NACA0012 airfoil (AGARD-03 test case) and a reflection of a shock over a double wedge.

  2. Adaptive mesh refinement strategies in isogeometric analysis— A computational comparison

    NASA Astrophysics Data System (ADS)

    Hennig, Paul; Kästner, Markus; Morgenstern, Philipp; Peterseim, Daniel

    2017-04-01

    We explain four variants of an adaptive finite element method with cubic splines and compare their performance in simple elliptic model problems. The methods in comparison are Truncated Hierarchical B-splines with two different refinement strategies, T-splines with the refinement strategy introduced by Scott et al. in 2012, and T-splines with an alternative refinement strategy introduced by some of the authors. In four examples, including singular and non-singular problems of linear elasticity and the Poisson problem, the H1-errors of the discrete solutions, the number of degrees of freedom as well as sparsity patterns and condition numbers of the discretized problem are compared.

  3. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  4. An adaptive embedded mesh procedure for leading-edge vortex flows

    NASA Technical Reports Server (NTRS)

    Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.

    1989-01-01

    A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
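
    A hedged reading of the threshold-selection step in Python (our own minimal version, not the authors' exact procedure): sweep candidate thresholds, count the cells flagged at each, and place the cut-off at the knee (maximum curvature) of that curve.

        import numpy as np

        def refinement_threshold(param, n_candidates=50):
            # param holds the per-cell refinement parameter.
            param = np.asarray(param, dtype=float)
            thresholds = np.linspace(param.min(), param.max(), n_candidates)
            counts = np.array([(param > t).sum() for t in thresholds], dtype=float)
            curvature = np.abs(np.diff(counts, 2))     # discrete second difference
            return thresholds[1 + np.argmax(curvature)]

        rng = np.random.default_rng(0)
        cells = np.concatenate([rng.normal(0.1, 0.02, 900),   # smooth background
                                rng.normal(1.0, 0.10, 100)])  # feature worth refining
        print(refinement_threshold(cells))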

  5. Adaptive mesh compression and transmission in Internet-based interactive walkthrough virtual environments

    NASA Astrophysics Data System (ADS)

    Yang, Sheng; Kuo, C.-C. Jay

    2002-07-01

    An Internet-based interactive walkthrough virtual environment is presented in this work to facilitate interactive streaming and browsing of 3D graphic models across the Internet. The models are compressed by the view-dependent progressive mesh compression algorithm to enable the decorrelation of partitions and finer granularity. Following the fundamental framework of mesh representation, an interactive protocol based on the real time streaming protocol (RTSP) is developed to enhance the interaction between the server and the client. Finally, the data of the virtual world is re-organized and transmitted according to the viewer's requests. Experimental results demonstrate that the proposed algorithm reduces the required transmission bandwidth, and provides an acceptable visual quality even at low bit rates.

  6. Using high-order methods on adaptively refined block-structured meshes - discretizations, interpolations, and filters.

    SciTech Connect

    Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.

    2006-01-01

    Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.
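
    As a small self-contained illustration of the high-order discretization ingredient of this record (a generic fourth-order stencil, not the SAMR machinery itself), the snippet below checks the expected convergence rate on a uniform mesh:

        import numpy as np

        def d1_fourth_order(u, dx):
            # Fourth-order central first derivative at interior points i = 2 .. n-2:
            # f'(x_i) ~ (u[i-2] - 8 u[i-1] + 8 u[i+1] - u[i+2]) / (12 dx)
            return (u[:-4] - 8 * u[1:-3] + 8 * u[3:-1] - u[4:]) / (12.0 * dx)

        for n in (32, 64, 128):
            x = np.linspace(0.0, 1.0, n + 1)
            u = np.sin(2 * np.pi * x)
            exact = 2 * np.pi * np.cos(2 * np.pi * x[2:-2])
            err = np.max(np.abs(d1_fourth_order(u, x[1] - x[0]) - exact))
            print(n, err)   # error drops roughly 16x per halving of dx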

  7. 3-D Numerical Simulation of Hydrostatic Tests of Porous Rocks Using Adapted Constitutive Model

    NASA Astrophysics Data System (ADS)

    Chemenda, A. I.; Daniel, M.

    2014-12-01

    The high complexity and poor knowledge of the constitutive properties of porous rocks are principal obstacles for the modeling of their deformation. Normally, the constitutive laws are to be derived from the experimental data (nominal strains and stresses). They are known, however, to be sensitive to the mechanical instabilities within the rock specimen and the boundary (notably friction) conditions at its ends. To elucidate the impact of these conditions on the measured mechanical response we use 3-D finite-difference simulations of experimental tests. Modeling of hydrostatic tests was chosen because it does not typically involve deformation instabilities. The ends of the cylindrical 'rock sample' are in contact with the 'steel' elastic platens through the frictional interfaces. The whole system is subjected to a normal stress Pc applied to the external model surface. A new constitutive model of porous rocks with the cap-type yield function is used. This function is quadratic in the mean stress σm and depends on the inelastic strain γp in a way to generate strain softening at small σm and strain-hardening at high σm. The corresponding material parameters are defined from the experimental data and have clear interpretation in terms of the geometry of the yield surface. The constitutive model with this yield function and the Drucker-Prager plastic potential has been implemented in the 3-D dynamic explicit code Flac3D. The results of an extensive set of numerical simulations at different model parameters will be presented. They show, in particular, that the shape of the 'numerical' hydrostats is very similar to that obtained from the experimental tests and that it is practically insensitive to the interface friction. On the other hand, the stress and strain fields within the specimen dramatically depend on this parameter. The inelastic deformation at the specimen's ends starts well before reaching the grain crushing pressure P* and evolves heterogeneously with Pc.

  8. Using adaptive sampling and triangular meshes for the processing and inversion of potential field data

    NASA Astrophysics Data System (ADS)

    Foks, Nathan Leon

    The interpretation of geophysical data plays an important role in the analysis of potential field data in resource exploration industries. Two categories of interpretation techniques are discussed in this thesis; boundary detection and geophysical inversion. Fault or boundary detection is a method to interpret the locations of subsurface boundaries from measured data, while inversion is a computationally intensive method that provides 3D information about subsurface structure. My research focuses on these two aspects of interpretation techniques. First, I develop a method to aid in the interpretation of faults and boundaries from magnetic data. These processes are traditionally carried out using raster grid and image processing techniques. Instead, I use unstructured meshes of triangular facets that can extract inferred boundaries using mesh edges. Next, to address the computational issues of geophysical inversion, I develop an approach to reduce the number of data in a data set. The approach selects the data points according to a user specified proxy for its signal content. The approach is performed in the data domain and requires no modification to existing inversion codes. This technique adds to the existing suite of compressive inversion algorithms. Finally, I develop an algorithm to invert gravity data for an interfacing surface using an unstructured mesh of triangular facets. A pertinent property of unstructured meshes is their flexibility at representing oblique, or arbitrarily oriented structures. This flexibility makes unstructured meshes an ideal candidate for geometry based interface inversions. The approaches I have developed provide a suite of algorithms geared towards large-scale interpretation of potential field data, by using an unstructured representation of both the data and model parameters.
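
    A hedged sketch of the data down-sampling step (the gradient-magnitude proxy and the fraction kept are our own choices; the thesis leaves the proxy user-specified):

        import numpy as np

        def downsample_by_signal(locations, values, keep_fraction=0.2):
            # Keep the observations whose local signal content, approximated here
            # by the gradient magnitude along the profile, is largest.
            values = np.asarray(values, dtype=float)
            proxy = np.abs(np.gradient(values, locations))
            n_keep = max(1, int(keep_fraction * len(values)))
            return np.sort(np.argsort(proxy)[-n_keep:])

        x = np.linspace(0.0, 10.0, 200)
        anomaly = np.exp(-((x - 5.0) ** 2))          # one compact anomaly
        print(downsample_by_signal(x, anomaly))      # kept indices cluster around the anomaly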

  9. A 3D finite-volume scheme for the Euler equations on adaptive tetrahedral grids

    SciTech Connect

    Vijayan, P.; Kallinderis, Y. )

    1994-08-01

    The paper describes the development and application of a new Euler solver for adaptive tetrahedral grids. Spatial discretization uses a finite-volume, node-based scheme that is of central-differencing type. A second-order Taylor series expansion is employed to march the solution in time according to the Lax-Wendroff approach. Special upwind-like smoothing operators for unstructured grids are developed for shock-capturing, as well as for suppression of solution oscillations. The scheme is formulated so that all operations are edge-based, which reduces the computational effort significantly. An adaptive grid algorithm is employed in order to resolve local flow features. This is achieved by dividing the tetrahedral cells locally, guided by a flow feature detection algorithm. Application cases include transonic flow around the ONERA M6 wing and transonic flow past a transport aircraft configuration. Comparisons with experimental data evaluate accuracy of the developed adaptive solver. 31 refs., 33 figs.

  10. Structured light 3D depth map enhancement and gesture recognition using image content adaptive filtering

    NASA Astrophysics Data System (ADS)

    Ramachandra, Vikas; Nash, James; Atanassov, Kalin; Goma, Sergio

    2013-03-01

    A structured-light system for depth estimation is a type of 3D active sensor that consists of a structured-light projector that projects an illumination pattern on the scene (e.g. mask with vertical stripes) and a camera which captures the illuminated scene. Based on the received patterns, depths of different regions in the scene can be inferred. In this paper, we use side information in the form of image structure to enhance the depth map. This side information is obtained from the received light pattern image reflected by the scene itself. The processing steps run real time. This post-processing stage in the form of depth map enhancement can be used for better hand gesture recognition, as is illustrated in this paper.
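
    The abstract does not specify the content-adaptive filter; a generic joint (cross) bilateral filter, sketched below with our own parameters, conveys the idea of smoothing a depth map while respecting edges taken from the received light pattern image:

        import numpy as np

        def joint_bilateral(depth, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
            # Weights combine spatial closeness with similarity in the guide image,
            # so depth values are averaged only across pixels on the same surface.
            h, w = depth.shape
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
            pad_d = np.pad(depth, radius, mode="edge")
            pad_g = np.pad(guide, radius, mode="edge")
            out = np.zeros_like(depth, dtype=float)
            for i in range(h):
                for j in range(w):
                    pd = pad_d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    pg = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                    weights = spatial * np.exp(-((pg - guide[i, j]) ** 2) / (2 * sigma_r ** 2))
                    out[i, j] = np.sum(weights * pd) / np.sum(weights)
            return out

        rng = np.random.default_rng(1)
        depth = rng.normal(1.0, 0.05, (32, 32))
        depth[:, 16:] += 0.5                              # a true depth step
        guide = np.tile((np.arange(32) >= 16).astype(float), (32, 1))
        print(joint_bilateral(depth, guide)[0, 14:18].round(2))  # step preserved, noise reduced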

  11. Amoeboid migration mode adaption in quasi-3D spatial density gradients of varying lattice geometry

    NASA Astrophysics Data System (ADS)

    Gorelashvili, Mari; Emmert, Martin; Hodeck, Kai F.; Heinrich, Doris

    2014-07-01

    Cell migration processes are controlled by sensitive interaction with external cues such as topographic structures of the cell’s environment. Here, we present systematically controlled assays to investigate the specific effects of spatial density and local geometry of topographic structure on amoeboid migration of Dictyostelium discoideum cells. This is realized by well-controlled fabrication of quasi-3D pillar fields exhibiting a systematic variation of inter-pillar distance and pillar lattice geometry. By time-resolved local mean-squared displacement analysis of amoeboid migration, we can extract motility parameters in order to elucidate the details of amoeboid migration mechanisms and consolidate them in a two-state contact-controlled motility model, distinguishing directed and random phases. Specifically, we find that directed pillar-to-pillar runs are found preferably in high pillar density regions, and cells in directed motion states sense pillars as attractive topographic stimuli. In contrast, cell motion in random probing states is inhibited by high pillar density, where pillars act as obstacles for cell motion. In a gradient spatial density, these mechanisms lead to topographic guidance of cells, with a general trend towards a regime of inter-pillar spacing close to the cell diameter. In locally anisotropic pillar environments, cell migration is often found to be damped due to competing attraction by different pillars in close proximity and due to lack of other potential stimuli in the vicinity of the cell. Further, we demonstrate topographic cell guidance reflecting the lattice geometry of the quasi-3D environment by distinct preferences in migration direction. Our findings allow to specifically control amoeboid cell migration by purely topographic effects and thus, to induce active cell guidance. These tools hold prospects for medical applications like improved wound treatment, or invasion assays for immune cells.

  12. Accessible bioprinting: adaptation of a low-cost 3D-printer for precise cell placement and stem cell differentiation.

    PubMed

    Reid, John A; Mollica, Peter A; Johnson, Garett D; Ogle, Roy C; Bruno, Robert D; Sachs, Patrick C

    2016-06-07

    The precision and repeatability offered by computer-aided design and computer-numerically controlled techniques in biofabrication processes is quickly becoming an industry standard. However, many hurdles still exist before these techniques can be used in research laboratories for cellular and molecular biology applications. Extrusion-based bioprinting systems have been characterized by high development costs, injector clogging, difficulty achieving small cell number deposits, decreased cell viability, and altered cell function post-printing. To circumvent the high-price barrier to entry of conventional bioprinters, we designed and 3D printed components for the adaptation of an inexpensive 'off-the-shelf' commercially available 3D printer. We also demonstrate via goal-based computer simulations that the needle geometries of conventional commercially standardized, 'luer-lock' syringe-needle systems cause many of the issues plaguing conventional bioprinters. To address these performance limitations we optimized flow within several microneedle geometries, which revealed a short tapered injector design with minimal cylindrical needle length was ideal to minimize cell strain and accretion. We then experimentally quantified these geometries using pulled glass microcapillary pipettes and our modified, low-cost 3D printer. This system's performance validated our models, exhibiting reduced clogging, single-cell print resolution, and maintenance of cell viability without the use of a sacrificial vehicle. Using this system we show the successful printing of human induced pluripotent stem cells (hiPSCs) into Geltrex and note their retention of a pluripotent state 7 d post printing. We also show embryoid body differentiation of hiPSC by injection into differentiation-conducive environments, wherein we observed continuous growth, emergence of various evaginations, and post-printing gene expression indicative of the presence of all three germ layers. These data demonstrate an

  13. Mesh adaption for efficient multiscale implementation of one-dimensional turbulence

    NASA Astrophysics Data System (ADS)

    Lignell, D. O.; Kerstein, A. R.; Sun, G.; Monson, E. I.

    2013-06-01

    One-Dimensional Turbulence (ODT) is a stochastic model for turbulent flow simulation. In an atmospheric context, it is analogous to single-column modeling (SCM) in that it lives on a 1D spatial domain, but different in that it time advances individual flow realizations rather than ensemble-averaged quantities. The lack of averaging enables a physically sound multiscale treatment, which is useful for resolving sporadic localized phenomena, as seen in stably stratified regimes, and sharp interfaces, as observed where a convective layer encounters a stable overlying zone. In such flows, the relevant scale range is so large that it is beneficial to enhance model performance by introducing an adaptive mesh. An adaptive-mesh algorithm that provides the desired performance characteristics is described and demonstrated, and its implications for the ODT advancement scheme are explained.

  14. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the boundary of soluble problems by traditional ALE methods by focusing computational resources where they are required through dynamic adaption. Many of the core issues involved in the development of the combined ALEAMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  15. Implementation of Implicit Adaptive Mesh Refinement in an Unstructured Finite-Volume Flow Solver

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2013-01-01

    This paper explores the implementation of adaptive mesh refinement in an unstructured, finite-volume solver. Unsteady and steady problems are considered. The effect on the recovery of high-order numerics is explored and the results are favorable. Important to this work is the ability to provide a path for efficient, implicit time advancement. A method using a simple refinement sensor based on undivided differences is discussed and applied to a practical problem: a shock-shock interaction on a hypersonic, inviscid double-wedge. Cases are compared to uniform grids without the use of adapted meshes in order to assess error and computational expense. Discussion of difficulties, advances, and future work prepare this method for additional research. The potential for this method in more complicated flows is described.
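
    A hedged one-dimensional reading of the undivided-difference sensor mentioned above (our own minimal version, not the paper's implementation):

        import numpy as np

        def undivided_difference_sensor(u, threshold):
            # Undivided second difference |u[i+1] - 2 u[i] + u[i-1]|: because it is
            # not divided by the mesh spacing, it shrinks as cells are refined
            # around a feature, which naturally limits over-refinement.
            u = np.asarray(u, dtype=float)
            sensor = np.abs(u[2:] - 2.0 * u[1:-1] + u[:-2])
            flags = np.zeros(u.shape, dtype=bool)
            flags[1:-1] = sensor > threshold
            return flags

        u = np.where(np.linspace(0.0, 1.0, 21) < 0.5, 1.0, 0.1)     # a discrete shock
        print(np.flatnonzero(undivided_difference_sensor(u, 0.2)))  # cells straddling the jump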

  16. TRIM: A finite-volume MHD algorithm for an unstructured adaptive mesh

    SciTech Connect

    Schnack, D.D.; Lottati, I.; Mikic, Z.

    1995-07-01

    The authors describe TRIM, an MHD code which uses finite volume discretization of the MHD equations on an unstructured adaptive grid of triangles in the poloidal plane. They apply it to problems related to modeling tokamak toroidal plasmas. The toroidal direction is treated by a pseudospectral method. Care was taken to center variables appropriately on the mesh and to construct a self-adjoint diffusion operator for cell-centered variables.

  17. Adaptive mesh refinement for time-domain electromagnetics using vector finite elements: a feasibility study.

    SciTech Connect

    Turner, C. David; Kotulski, Joseph Daniel; Pasik, Michael Francis

    2005-12-01

    This report investigates the feasibility of applying Adaptive Mesh Refinement (AMR) techniques to a vector finite element formulation for the wave equation in three dimensions. Possible error estimators are considered first. Next, approaches for refining tetrahedral elements are reviewed. AMR capabilities within the Nevada framework are then evaluated. We summarize our conclusions on the feasibility of AMR for time-domain vector finite elements and identify a path forward.

  18. Adaptive laser beam forming for laser shock micro-forming for 3D MEMS devices fabrication

    NASA Astrophysics Data System (ADS)

    Zou, Ran; Wang, Shuliang; Wang, Mohan; Li, Shuo; Huang, Sheng; Lin, Yankun; Chen, Kevin P.

    2016-07-01

    Laser shock micro-forming is a non-thermal laser forming method that uses laser-induced shockwaves to modify surface properties and to adjust the shapes and geometry of workpieces. In this paper, we present an adaptive optical technique to engineer spatial profiles of the laser beam to exert precision control on the laser shock forming process for free-standing MEMS structures. Using a spatial light modulator, on-target laser energy profiles are engineered to control shape, size, and deformation magnitude, which has led to significant improvement of the laser shock processing outcome at micrometer scales. The results presented in this paper show that the adaptive-optics laser beam forming is an effective method to improve both quality and throughput of the laser forming process at micrometer scales.

  19. Modeling gravitational instabilities in self-gravitating protoplanetary disks with adaptive mesh refinement techniques

    NASA Astrophysics Data System (ADS)

    Lichtenberg, Tim; Schleicher, Dominik R. G.

    2015-07-01

    The astonishing diversity in the observed planetary population requires theoretical efforts and advances in planet formation theories. The use of numerical approaches provides a method to tackle the weaknesses of current models and is an important tool to close gaps in poorly constrained areas such as the rapid formation of giant planets in highly evolved systems. So far, most numerical approaches make use of Lagrangian-based smoothed-particle hydrodynamics techniques or grid-based 2D axisymmetric simulations. We present a new global disk setup to model the first stages of giant planet formation via gravitational instabilities (GI) in 3D with the block-structured adaptive mesh refinement (AMR) hydrodynamics code enzo. With this setup, we explore the potential impact of AMR techniques on the fragmentation and clumping due to large-scale instabilities using different AMR configurations. Additionally, we seek to derive general resolution criteria for global simulations of self-gravitating disks of variable extent. We run a grid of simulations with varying AMR settings, including runs with a static grid for comparison. Additionally, we study the effects of varying the disk radius. The physical settings involve disks with Rdisk = 10, 100, and 300 AU, with a mass of Mdisk ≈ 0.05 M⊙ and a central object of subsolar mass (M⋆ = 0.646 M⊙). To validate our thermodynamical approach we include a set of simulations with a dynamically stable profile (Qinit = 3) and similar grid parameters. The development of fragmentation and the buildup of distinct clumps in the disk are strongly dependent on the chosen AMR grid settings. By combining our findings from the resolution and parameter studies we find a general lower limit criterion to be able to resolve GI-induced fragmentation features and distinct clumps, which induce turbulence in the disk and seed giant planet formation. Irrespective of the physical extension of the disk, topologically disconnected clump features are only
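
    The quoted Qinit refers to the Toomre stability parameter; a hedged sketch of that standard diagnostic for a Keplerian disk follows (illustrative numbers, not the paper's disk models or its resolution criterion):

        import numpy as np

        G = 6.674e-11                       # m^3 kg^-1 s^-2
        M_SUN, AU = 1.989e30, 1.496e11      # kg, m

        def toomre_q(c_s, r, m_star, sigma):
            # Q = c_s * Omega / (pi * G * Sigma); Q below ~1 indicates a disk
            # susceptible to gravitational instability and fragmentation.
            omega = np.sqrt(G * m_star / r ** 3)   # Keplerian angular frequency
            return c_s * omega / (np.pi * G * sigma)

        print(toomre_q(c_s=300.0, r=50 * AU, m_star=0.646 * M_SUN, sigma=100.0))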

  20. ADER-WENO finite volume schemes with space-time adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Zanotti, Olindo; Hidalgo, Arturo; Balsara, Dinshaw S.

    2013-09-01

    We present the first high order one-step ADER-WENO finite volume scheme with adaptive mesh refinement (AMR) in multiple space dimensions. High order spatial accuracy is obtained through a WENO reconstruction, while a high order one-step time discretization is achieved using a local space-time discontinuous Galerkin predictor method. Due to the one-step nature of the underlying scheme, the resulting algorithm is particularly well suited for an AMR strategy on space-time adaptive meshes, i.e. with time-accurate local time stepping. The AMR property has been implemented 'cell-by-cell', with a standard tree-type algorithm, while the scheme has been parallelized via the message passing interface (MPI) paradigm. The new scheme has been tested over a wide range of examples for nonlinear systems of hyperbolic conservation laws, including the classical Euler equations of compressible gas dynamics and the equations of magnetohydrodynamics (MHD). High order in space and time have been confirmed via a numerical convergence study and a detailed analysis of the computational speed-up with respect to highly refined uniform meshes is also presented. We also show test problems where the presented high order AMR scheme behaves clearly better than traditional second order AMR methods. The proposed scheme that combines for the first time high order ADER methods with space-time adaptive grids in two and three space dimensions is likely to become a useful tool in several fields of computational physics, applied mathematics and mechanics.

  1. 3D positional control of magnetic levitation system using adaptive control: improvement of positioning control in horizontal plane

    NASA Astrophysics Data System (ADS)

    Nishino, Toshimasa; Fujitani, Yasuhiro; Kato, Norihiko; Tsuda, Naoaki; Nomura, Yoshihiko; Matsui, Hirokazu

    2012-01-01

    The objective of this paper is to establish a technique that levitates and conveys a hand, a kind of micro-robot, by applying magnetic forces: the hand is assumed to have a function of holding and detaching objects. The equipment used in our experiments consists of four pole-pieces of electromagnets, and is expected to work as a 4DOF drive unit within some restricted range of 3D space: three DOF correspond to 3D positional control and the remaining DOF to rotational oscillation damping control. Using the same equipment, Khamesee et al. manipulated the impressed voltages on the four electromagnets with a PID controller, using the feedback signal of the hand's 3D position as the controlled variable. However, in this system some problems remained: in the horizontal direction, when translating the hand out of the restricted region, positional control performance was suddenly degraded. The authors propose a method that applies adaptive control to the horizontal directional control. It is expected that the technique presented in this paper contributes not only to the improvement of the response characteristic but also to widening the applicable range in the horizontal directional control.

  2. Repercussion of geometric and dynamic constraints on the 3D rendering quality in structurally adaptive multi-view shooting systems

    NASA Astrophysics Data System (ADS)

    Ali-Bey, Mohamed; Moughamir, Saïd; Manamanni, Noureddine

    2011-12-01

    In this paper a simulator of a multi-view shooting system with parallel optical axes and structurally variable configuration is proposed. The considered system is dedicated to the production of 3D content for auto-stereoscopic visualization. The global shooting/viewing geometrical process, which is the kernel of this shooting system, is detailed and the different viewing, transformation and capture parameters are then defined. An appropriate perspective projection model is afterward derived to work out a simulator. At first, the latter is used to validate the global geometrical process in the case of a static configuration. Next, the simulator is used to show the limitations of a static configuration of this shooting system type by considering the case of dynamic scenes, and a dynamic scheme is then developed to allow correct capture of such scenes. After that, the effect of the different geometrical capture parameters on the 3D rendering quality and the necessity or not of their adaptation is studied. Finally, some dynamic effects and their repercussions on the 3D rendering quality of dynamic scenes are analyzed using error images and some image quantization tools. Simulation and experimental results are presented throughout this paper to illustrate the different studied points. Some conclusions and perspectives end the paper.

  3. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    NASA Astrophysics Data System (ADS)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on the multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques include sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver

  4. Fluidity: a fully-unstructured adaptive mesh computational framework for geodynamics

    NASA Astrophysics Data System (ADS)

    Kramer, S. C.; Davies, D.; Wilson, C. R.

    2010-12-01

    Fluidity is a finite element, finite volume fluid dynamics model developed by the Applied Modelling and Computation Group at Imperial College London. Several features of the model make it attractive for use in geodynamics. A core finite element library enables the rapid implementation and investigation of new numerical schemes. For example, the function spaces used for each variable can be changed allowing properties of the discretisation, such as stability, conservation and balance, to be easily varied and investigated. Furthermore, unstructured, simplex meshes allow the underlying resolution to vary rapidly across the computational domain. Combined with dynamic mesh adaptivity, where the mesh is periodically optimised to the current conditions, this allows significant savings in computational cost over traditional chessboard-like structured mesh simulations [1]. In this study we extend Fluidity (using the Portable, Extensible Toolkit for Scientific Computation [PETSc, 2]) to Stokes flow problems relevant to geodynamics. However, due to the assumptions inherent in all models, it is necessary to properly verify and validate the code before applying it to any large-scale problems. In recent years this has been made easier by the publication of a series of ‘community benchmarks’ for geodynamic modelling. We discuss the use of several of these to help validate Fluidity [e.g. 3, 4]. The experimental results of Vatteville et al. [5] are then used to validate Fluidity against laboratory measurements. This test case is also used to highlight the computational advantages of using adaptive, unstructured meshes - significantly reducing the number of nodes and total CPU time required to match a fixed mesh simulation. References: 1. C. C. Pain et al. Comput. Meth. Appl. M, 190:3771-3796, 2001. doi:10.1016/S0045-7825(00)00294-2. 2. B. Satish et al. http://www.mcs.anl.gov/petsc/petsc-2/, 2001. 3. Blankenbach et al. Geophys. J. Int., 98:23-28, 1989. 4. Busse et al. Geophys

  5. Quantitative Evaluation of Tissue Surface Adaption of CAD-Designed and 3D Printed Wax Pattern of Maxillary Complete Denture

    PubMed Central

    Chen, Hu; Wang, Han; Lv, Peijun; Wang, Yong; Sun, Yuchun

    2015-01-01

    Objective. To quantitatively evaluate the tissue surface adaption of a maxillary complete denture wax pattern produced by CAD and 3DP. Methods. A standard edentulous maxilla plaster cast model was used, for which a wax pattern of complete denture was designed using CAD software developed in our previous study and printed using a 3D wax printer, while another wax pattern was manufactured by the traditional manual method. The cast model and the two wax patterns were scanned in the 3D scanner as “DataModel,” “DataWaxRP,” and “DataWaxManual.” After setting each wax pattern on the plaster cast, the whole model was scanned for registration. After registration, the deviations of tissue surface between “DataModel” and “DataWaxRP” and between “DataModel” and “DataWaxManual” were measured. The data was analyzed by paired t-test. Results. For both wax patterns produced by the CAD&RP method and the manual method, scanning data of tissue surface and cast surface showed a good fit in the majority. No statistically significant (P > 0.05) difference was observed between the CAD&RP method and the manual method. Conclusions. Wax pattern of maxillary complete denture produced by the CAD&3DP method is comparable with traditional manual method in the adaption to the edentulous cast model. PMID:26583108

  6. Patched based methods for adaptive mesh refinement solutions of partial differential equations

    SciTech Connect

    Saltzman, J.

    1997-09-02

    This manuscript contains the lecture notes for a course taught from July 7th through July 11th at the 1997 Numerical Analysis Summer School sponsored by C.E.A., I.N.R.I.A., and E.D.F. The subject area was chosen to support the general theme of that year's school, which was "Multiscale Methods and Wavelets in Numerical Simulation." The first topic covered in these notes is a description of the problem domain. This coverage is limited to classical PDEs with a heavier emphasis on hyperbolic systems and constrained hyperbolic systems. The next topic is difference schemes. These schemes are the foundation for the adaptive methods. After the background material is covered, attention is focused on a simple patch-based adaptive algorithm and its associated data structures for square grids and hyperbolic conservation laws. Embellishments include curvilinear meshes, embedded boundary and overset meshes. Next, several strategies for parallel implementations are examined. The remainder of the notes contains descriptions of elliptic solutions on the mesh hierarchy, elliptically constrained flow solution methods and elliptically constrained flow solution methods with diffusion.

  7. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators are investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers where access to neighboring elements residing on different processors may incur significant overhead. In addition such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on variational formulation of the element stiffness and are element-dependent. Their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness, which we demonstrate for two-dimensional plane stress problems. The parallelizing of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

  8. Data-adapted moving least squares method for 3-D image interpolation

    NASA Astrophysics Data System (ADS)

    Jang, Sumi; Nam, Haewon; Lee, Yeon Ju; Jeong, Byeongseon; Lee, Rena; Yoon, Jungho

    2013-12-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons.
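
    For reference, a hedged sketch of the classical (not data-adapted) moving least squares baseline that the paper modifies, in one dimension with a Gaussian weight of our own choosing:

        import numpy as np

        def mls_interpolate(x_data, f_data, x_eval, degree=2, h=0.5):
            # At each evaluation point, fit a local polynomial by weighted least
            # squares and read off its value there (the constant term).
            x_data = np.asarray(x_data, dtype=float)
            f_data = np.asarray(f_data, dtype=float)
            x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
            out = np.empty_like(x_eval)
            for k, xe in enumerate(x_eval):
                w = np.sqrt(np.exp(-((x_data - xe) / h) ** 2))   # locality weights
                A = np.vander(x_data - xe, degree + 1)           # local polynomial basis
                coeffs, *_ = np.linalg.lstsq(A * w[:, None], f_data * w, rcond=None)
                out[k] = coeffs[-1]
            return out

        x = np.linspace(0.0, np.pi, 15)
        print(mls_interpolate(x, np.sin(x), [0.5, 1.0, 2.0]))    # close to sin at those points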

  9. HIFI-C: a robust and fast method for determining NMR couplings from adaptive 3D to 2D projections.

    PubMed

    Cornilescu, Gabriel; Bahrami, Arash; Tonelli, Marco; Markley, John L; Eghbalnia, Hamid R

    2007-08-01

    We describe a novel method for the robust, rapid, and reliable determination of J couplings in multi-dimensional NMR coupling data, including small couplings from larger proteins. The method, "High-resolution Iterative Frequency Identification of Couplings" (HIFI-C) is an extension of the adaptive and intelligent data collection approach introduced earlier in HIFI-NMR. HIFI-C collects one or more optimally tilted two-dimensional (2D) planes of a 3D experiment, identifies peaks, and determines couplings with high resolution and precision. The HIFI-C approach, demonstrated here for the 3D quantitative J method, offers vital features that advance the goal of rapid and robust collection of NMR coupling data. (1) Tilted plane residual dipolar couplings (RDC) data are collected adaptively in order to offer an intelligent trade off between data collection time and accuracy. (2) Data from independent planes can provide a statistical measure of reliability for each measured coupling. (3) Fast data collection enables measurements in cases where sample stability is a limiting factor (for example in the presence of an orienting medium required for residual dipolar coupling measurements). (4) For samples that are stable, or in experiments involving relatively stronger couplings, robust data collection enables more reliable determinations of couplings in shorter time, particularly for larger biomolecules. As a proof of principle, we have applied the HIFI-C approach to the 3D quantitative J experiment to determine N-C' RDC values for three proteins ranging from 56 to 159 residues (including a homodimer with 111 residues in each subunit). A number of factors influence the robustness and speed of data collection. These factors include the size of the protein, the experimental set up, and the coupling being measured, among others. To exhibit a lower bound on robustness and the potential for time saving, the measurement of dipolar couplings for the N-C' vector represents a realistic

  10. NOTE: Adaptation of a 3D prostate cancer atlas for transrectal ultrasound guided target-specific biopsy

    NASA Astrophysics Data System (ADS)

    Narayanan, R.; Werahera, P. N.; Barqawi, A.; Crawford, E. D.; Shinohara, K.; Simoneau, A. R.; Suri, J. S.

    2008-10-01

    when TRUS guided biopsies are assisted by the 3D prostate cancer atlas compared to the current standard of care. The fast registration algorithm we have developed can easily be adapted for clinical applications for the improved diagnosis of prostate cancer.

  11. A region-appearance-based adaptive variational model for 3D liver segmentation

    SciTech Connect

    Peng, Jialin; Dong, Fangfang; Chen, Yunmei; Kong, Dexing

    2014-04-15

    Purpose: Liver segmentation from computed tomography images is a challenging task owing to pixel intensity overlapping, ambiguous edges, and complex backgrounds. The authors address this problem with a novel active surface scheme, which minimizes an energy functional combining both edge- and region-based information. Methods: In this semiautomatic method, the evolving surface is principally attracted to strong edges but is facilitated by the region-based information where edge information is missing. As avoiding oversegmentation is the primary challenge, the authors take into account multiple features and appearance context information. Discriminative cues, such as multilayer consecutiveness and local organ deformation are also implicitly incorporated. Case-specific intensity and appearance constraints are included to cope with the typically large appearance variations over multiple images. Spatially adaptive balancing weights are employed to handle the nonuniformity of image features. Results: Comparisons and validations on difficult cases showed that the authors’ model can effectively discriminate the liver from adhering background tissues. Boundaries weak in gradient or with no local evidence (e.g., small edge gaps or parts with similar intensity to the background) were delineated without additional user constraint. With an average surface distance of 0.9 mm and an average volume overlap of 93.9% on the MICCAI data set, the authors’ model outperformed most state-of-the-art methods. Validations on eight volumes with different initial conditions had segmentation score variances mostly less than unity. Conclusions: The proposed model can efficiently delineate ambiguous liver edges from complex tissue backgrounds with reproducibility. Quantitative validations and comparative results demonstrate the accuracy and efficacy of the model.

  12. Adaptive Mesh Euler Equation Computation of Vortex Breakdown in Delta Wing Flow.

    NASA Astrophysics Data System (ADS)

    Modiano, David Laurence

    A solution method for the three-dimensional Euler equations is formulated and implemented. The solver uses an unstructured mesh of tetrahedral cells and performs adaptive refinement by mesh-point embedding to increase mesh resolution in regions of interesting flow features. The fourth-difference artificial dissipation is increased to a higher order of accuracy using the method of Holmes and Connell. A new method of temporal integration is developed to accelerate the explicit computation of unsteady flows. The solver is applied to the solution of the flow around a sharp edged delta wing, with emphasis on the behavior of the leading edge vortex above the leeside of the wing at high angle of attack, under which conditions the vortex suffers from vortex breakdown. Large deviations in entropy, which indicate vortical regions of the flow, specify the region in which adaptation is performed. Adaptive flow calculations are performed at ten different angles of attack, at seven of which vortex breakdown occurs. The aerodynamic normal force coefficients show excellent agreement with wind tunnel data measured by Jarrah, which demonstrates the importance of adaptation in obtaining an accurate solution. The pitching moment coefficient and the location of vortex breakdown are compared with experimental data measured by Hummel and Srinivasan, with which fairly good agreement is seen in cases in which the location of breakdown is over the wing. A series of unsteady calculations involving a pitching delta wing were performed. The use of the acceleration technique is validated. A hysteresis in the normal force is observed, as in experiments, and a lag in the breakdown position is demonstrated.

  13. A Feature-adaptive Subdivision Method for Real-time 3D Reconstruction of Repeated Topology Surfaces

    NASA Astrophysics Data System (ADS)

    Lin, Jinhua; Wang, Yanjie; Sun, Honghai

    2017-03-01

    Rendering large numbers of triangles with GPU hardware tessellation has made great progress. However, due to the fixed nature of the GPU pipeline, many off-line methods that perform well cannot meet on-line requirements. In this paper, an optimized feature-adaptive subdivision method is proposed that is more suitable for reconstructing surfaces with repeated cusps or creases. An Octree primitive is established in irregular regions that share the same sharp vertices or creases, so the method can find neighboring geometry information quickly. Because the Octree primitive and the feature region have the same topology, the Octree feature points can match arbitrary vertices in the feature region more precisely. Meanwhile, the patches are re-encoded in the Octree primitive using a breadth-first strategy, resulting in a meta-table that allows real-time reconstruction by the GPU hardware tessellation unit. Only one feature region needs to be calculated under the Octree primitive; other regions with the same repeated feature generate their own meta-tables directly, which greatly reduces the reconstruction time for this step. For meshes with a large number of repeated topology features, the algorithm improves the subdivision time by 17.575% and increases the average frame drawing time by 0.2373 ms compared to traditional feature-adaptive subdivision (FAS), while reconstructing the model in a watertight manner.

  14. Parametric 3D Atmospheric Reconstruction in Highly Variable Terrain with Recycled Monte Carlo Paths and an Adapted Bayesian Inference Engine

    NASA Technical Reports Server (NTRS)

    Langmore, Ian; Davis, Anthony B.; Bal, Guillaume; Marzouk, Youssef M.

    2012-01-01

    We describe a method for accelerating a 3D Monte Carlo forward radiative transfer model to the point where it can be used in a new kind of Bayesian retrieval framework. The remote sensing challenge is to detect and quantify a chemical effluent of a known absorbing gas produced by an industrial facility in a deep valley. The available data is a single low resolution noisy image of the scene in the near IR at an absorbing wavelength for the gas of interest. The detected sunlight has been multiply reflected by the variable terrain and/or scattered by an aerosol that is assumed partially known and partially unknown. We thus introduce a new class of remote sensing algorithms best described as "multi-pixel" techniques that necessarily call for a 3D radiative transfer model (but demonstrated here in 2D); they can be added to conventional ones that typically exploit multi- or hyper-spectral data, sometimes with multi-angle capability, with or without information about polarization. The novel Bayesian inference methodology uses adaptively, with efficiency in mind, the fact that a Monte Carlo forward model has a known and controllable uncertainty depending on the number of sun-to-detector paths used.

  15. Fully-3D PET image reconstruction using scanner-independent, adaptive projection data and highly rotation-symmetric voxel assemblies.

    PubMed

    Scheins, J J; Herzog, H; Shah, N J

    2011-03-01

    For iterative, fully 3D positron emission tomography (PET) image reconstruction, intrinsic symmetries can be used to significantly reduce the size of the system matrix. The precalculation and beneficial memory-resident storage of all nonzero system matrix elements is possible where sufficient compression exists. Thus, reconstruction times can be minimized independently of the projector used, and more elaborate weighting schemes, e.g., volume-of-intersection (VOI), become applicable. A novel organization of scanner-independent, adaptive 3D projection data is presented which can be advantageously combined with highly rotation-symmetric voxel assemblies. In this way, significant system matrix compression is achieved. Applications taking into account all physical lines-of-response (LORs) with individual VOI projectors are presented for the Siemens ECAT HR+ whole-body scanner and the Siemens BrainPET, the PET component of a novel hybrid MR/PET imaging system. Measured and simulated data were reconstructed using the new method with ordered-subset-expectation-maximization (OSEM). Results are compared to those obtained by the sinogram-based OSEM reconstruction provided by the manufacturer. The higher computational effort due to the more accurate image space sampling provides significantly improved images in terms of resolution and noise.
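
    For orientation, the iterative update that such a precomputed, compressed system matrix feeds into is the standard ordered-subset-expectation-maximization step. The formula below is the generic OSEM update written in our own notation (a_ij a system matrix element, y_i the measured counts on LOR i, S_m the m-th subset of LORs), not an equation quoted from the record above.

      x_j^{(n,m+1)} = \frac{x_j^{(n,m)}}{\sum_{i \in S_m} a_{ij}} \sum_{i \in S_m} a_{ij}\, \frac{y_i}{\sum_k a_{ik}\, x_k^{(n,m)}}

    Precomputing and compressing the a_ij is what makes repeated evaluation of the two inner sums affordable for all physical LORs.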

  16. Fast 3-D large-scale gravity and magnetic modeling using unstructured grids and an adaptive multilevel fast multipole method

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Tang, Jingtian; Kalscheuer, Thomas; Maurer, Hansruedi

    2017-01-01

    A novel fast and accurate algorithm is developed for large-scale 3-D gravity and magnetic modeling problems. An unstructured grid discretization is used to approximate sources with arbitrary mass and magnetization distributions. A novel adaptive multilevel fast multipole (AMFM) method is developed to reduce the modeling time. An observation octree is constructed on a set of arbitrarily distributed observation sites, while a source octree is constructed on a source tetrahedral grid. A novel characteristic is the independence between the observation octree and the source octree, which simplifies the implementation of different survey configurations such as airborne and ground surveys. Two synthetic models, a cubic model and a half-space model with mountain-valley topography, are tested. Excellent agreement with analytical solutions for the gravity and magnetic signals verifies the accuracy of our AMFM algorithm. Finally, our AMFM method is used to calculate the terrain effect on an airborne gravity data set for a realistic topography model represented by a triangular surface retrieved from a digital elevation model. Using 16 threads, more than 5800 billion interactions between 1,002,001 observation points and 5,839,830 tetrahedral elements are computed in 453.6 s. A traditional first-order Gaussian quadrature approach requires 3.77 days. Hence, our new AMFM algorithm not only can quickly compute the gravity and magnetic signals for complicated problems but also can substantially accelerate the solution of 3-D inversion problems.
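
    As a reminder of the forward problem being accelerated, the gravity field of a tetrahedral source discretization is a sum of element integrals of the Newtonian kernel. The notation below (element densities rho_e over element volumes V_e) is ours and is only a schematic restatement, not a formula taken from the record:

      \mathbf{g}(\mathbf{r}) = G \int_V \rho(\mathbf{r}')\, \frac{\mathbf{r}' - \mathbf{r}}{|\mathbf{r}' - \mathbf{r}|^{3}}\, dV' \;\approx\; G \sum_{e=1}^{N} \rho_e \int_{V_e} \frac{\mathbf{r}' - \mathbf{r}}{|\mathbf{r}' - \mathbf{r}|^{3}}\, dV'

    Evaluated naively at M observation sites this costs O(MN) element-observer interactions; grouping far-field contributions into octree-based multipole expansions reduces the cost to roughly O(M + N), which is consistent with the timing contrast reported above.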

  17. Adaptive-mesh-based algorithm for fluorescence molecular tomography using an analytical solution

    NASA Astrophysics Data System (ADS)

    Wang, Daifa; Song, Xiaolei; Bai, Jing

    2007-07-01

    Fluorescence molecular tomography (FMT) has become an important method for in vivo imaging of small animals. It has been widely used in studies of tumor genesis, cancer detection, metastasis, drug discovery, and gene therapy. In this study, an algorithm for FMT is proposed to obtain accurate and fast reconstruction by combining an adaptive mesh refinement technique with an analytical solution of the diffusion equation. Numerical studies have been performed on a parallel-plate FMT system with matching fluid. The reconstructions obtained show that the algorithm is computationally efficient while maintaining image quality.

  18. Compact integration factor methods for complex domains and adaptive mesh refinement.

    PubMed

    Liu, Xinfeng; Nie, Qing

    2010-08-10

    The implicit integration factor (IIF) method, a class of efficient semi-implicit temporal schemes, was introduced recently for stiff reaction-diffusion equations. To reduce the cost of IIF, the compact implicit integration factor (cIIF) method was later developed for efficient storage and calculation of exponential matrices associated with the diffusion operators in two and three spatial dimensions for Cartesian coordinates with regular meshes. Unlike IIF, cIIF cannot be directly extended to other curvilinear coordinates, such as polar and spherical coordinates, due to the compact representation of the diffusion terms in cIIF. In this paper, we present a method to generalize cIIF to other curvilinear coordinates through examples in polar and spherical coordinates. The new cIIF method in polar and spherical coordinates has computational efficiency and stability properties similar to those of cIIF in Cartesian coordinates. In addition, we present a method for integrating cIIF with adaptive mesh refinement (AMR) to take advantage of the excellent stability condition of cIIF. Because the second-order cIIF is unconditionally stable, it allows large time steps for AMR, unlike a typical explicit temporal scheme whose time step is severely restricted by the smallest mesh size in the entire spatial domain. Finally, we apply these methods to simulating a cell signaling system described by a system of stiff reaction-diffusion equations in both two and three spatial dimensions using AMR, curvilinear and Cartesian coordinates. Excellent performance of the new methods is observed.
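
    To make the cost argument concrete, the second-order IIF and cIIF schemes for a semi-discretized reaction-diffusion system u_t = Au + f(u) can be summarized as follows; this is our paraphrase of the published schemes, with A the discrete diffusion matrix and A_x, A_y the one-dimensional diffusion matrices of a 2D Cartesian grid, included only to show why storing small one-dimensional exponentials makes large, unconditionally stable time steps affordable:

      \text{IIF2:}\qquad u_{n+1} = e^{A \Delta t}\Bigl( u_n + \tfrac{\Delta t}{2}\, f(u_n) \Bigr) + \tfrac{\Delta t}{2}\, f(u_{n+1})

      \text{cIIF2 (2D):}\qquad U_{n+1} = e^{A_x \Delta t}\Bigl( U_n + \tfrac{\Delta t}{2}\, F(U_n) \Bigr)\, e^{A_y \Delta t} + \tfrac{\Delta t}{2}\, F(U_{n+1})

    The reaction term at the new time level is evaluated pointwise, so the implicit solve decouples into small local nonlinear systems rather than one global system.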

  19. A Block-Structured Adaptive Mesh Refinement Technique with a Finite-Difference-Based Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Fakhari, Abbas; Lee, Taehun

    2013-11-01

    A novel adaptive mesh refinement (AMR) algorithm for the numerical solution of fluid flow problems is presented in this study. The proposed AMR algorithm can be used to solve partial differential equations including, but not limited to, the Navier-Stokes equations. Here, the lattice Boltzmann method (LBM) is employed as a substitute for the nearly incompressible Navier-Stokes equations. The proposed AMR algorithm is straightforward yet efficient. The idea is to remove the need for a tree-type data structure by using pointer attributes in a unique way, along with an appropriate adjustment of the child blocks' IDs, to determine the neighbors of a certain block. Thanks to this unique way of invoking pointers, there is no need to construct a quad-tree (in 2D) or oct-tree (in 3D) data structure for maintaining the connectivity data between different blocks. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with a clean and efficient algorithm that is easier to implement and use on parallel machines. Several benchmark studies are carried out to assess the accuracy and efficiency of the proposed AMR-LBM, including lid-driven cavity flow, vortex shedding past a square cylinder, and Kelvin-Helmholtz instability for single-phase and multiphase fluids.
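
    A minimal sketch of how neighbor information can live in pointer attributes rather than in a traversable tree is given below. The class layout, the 2D child-ID convention (bit 0 = x position, bit 1 = y position within the parent) and the direction-keyed dictionary are our own illustrative choices, not the data layout of the paper.

      class Block:
          """One AMR block in a 2D quadtree-like hierarchy (children 0..3 = SW, SE, NW, NE)."""

          def __init__(self, level, parent=None, child_id=None):
              self.level = level
              self.parent = parent
              self.child_id = child_id   # position inside the parent; None for the root block
              self.children = None       # list of four Blocks after refinement, else None
              self.neighbor = {}         # direct pointers keyed by direction, e.g. 'E', 'N'

          def refine(self):
              """Create four children and wire up sibling neighbors by child-ID arithmetic."""
              self.children = [Block(self.level + 1, self, cid) for cid in range(4)]
              for c in self.children:
                  # Flipping bit 0 of the ID gives the east/west sibling, bit 1 the north/south one.
                  c.neighbor['E' if (c.child_id & 1) == 0 else 'W'] = self.children[c.child_id ^ 1]
                  c.neighbor['N' if (c.child_id & 2) == 0 else 'S'] = self.children[c.child_id ^ 2]

    Neighbors that are not siblings would be assigned analogously from the parent's own neighbor pointers at refinement time, so a later neighbor query is a constant-time attribute lookup with no tree traversal.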

  20. A fully-automatic locally adaptive thresholding algorithm for blood vessel segmentation in 3D digital subtraction angiography.

    PubMed

    Boegel, Marco; Hoelter, Philip; Redel, Thomas; Maier, Andreas; Hornegger, Joachim; Doerfler, Arnd

    2015-01-01

    Subarachnoid hemorrhage due to a ruptured cerebral aneurysm is still a devastating disease. Planning of endovascular aneurysm therapy is increasingly based on hemodynamic simulations, necessitating reliable vessel segmentation and accurate assessment of vessel diameters. In this work, we propose a fully automatic, locally adaptive, gradient-based thresholding algorithm. Our approach consists of two steps. First, we estimate the parameters of a global thresholding algorithm using an iterative process. Then, a locally adaptive version of the approach is applied using the estimated parameters. We evaluated both methods on 8 clinical 3D DSA cases. Additionally, we propose a way to select a reference segmentation based on 2D DSA measurements. For large vessels such as the internal carotid artery, our results show very high sensitivity (97.4%), precision (98.7%) and Dice coefficient (98.0%) with respect to our reference segmentation. Similar results (sensitivity: 95.7%, precision: 88.9% and Dice coefficient: 90.7%) are achieved for smaller vessels of approximately 1 mm diameter.
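
    The two-step structure described above (global parameter estimation followed by a locally adaptive pass) can be illustrated with the toy sketch below. The isodata-style global estimate, the fixed cubic window and the simple blending rule are stand-ins of our own; the authors' actual criterion is gradient-based and its parameters are not reproduced here.

      import numpy as np

      def estimate_global_threshold(volume, n_iter=20, tol=1e-3):
          """Step 1: iterative (isodata-style) estimate of a single global threshold."""
          t = float(volume.mean())
          for _ in range(n_iter):
              fg, bg = volume[volume >= t], volume[volume < t]
              if fg.size == 0 or bg.size == 0:
                  break
              t_new = 0.5 * (float(fg.mean()) + float(bg.mean()))
              if abs(t_new - t) < tol:
                  break
              t = t_new
          return t

      def locally_adaptive_segmentation(volume, window=32, blend=0.5):
          """Step 2: blend the global threshold with a per-window statistic."""
          t_global = estimate_global_threshold(volume)
          seg = np.zeros(volume.shape, dtype=bool)
          nz, ny, nx = volume.shape
          for z in range(0, nz, window):
              for y in range(0, ny, window):
                  for x in range(0, nx, window):
                      patch = volume[z:z + window, y:y + window, x:x + window]
                      t_local = blend * t_global + (1.0 - blend) * float(patch.mean())
                      seg[z:z + window, y:y + window, x:x + window] = patch >= t_local
          return seg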

  1. Polycaprolactone fiber meshes provide a 3D environment suitable for cultivation and differentiation of melanocytes from the outer root sheath of hair follicle.

    PubMed

    Savkovic, Vuk; Flämig, Franziska; Schneider, Marie; Sülflow, Katharina; Loth, Tina; Lohrenz, Andrea; Hacker, Michael Christian; Schulz-Siegmund, Michaela; Simon, Jan-Christoph

    2016-01-01

    Melanocytes differentiated from the stem cells of the human hair follicle outer root sheath (ORS) have the potential for developing non-invasive treatments for skin disorders out of a minimal sample of hair roots. With a robust procedure for melanocyte cultivation from the ORS of the human hair follicle at hand, this study focused on the identification of a suitable biocompatible, biodegradable carrier as the next step toward their clinical implementation. Polycaprolactone (PCL) is a known biocompatible material used for a number of medical devices. In this study, we have populated electrospun PCL fiber meshes with normal human epidermal melanocytes (NHEM) as well as with hair-follicle-derived human melanocytes from the outer root sheath (HUMORS) and tested their functionality in vitro. PCL fiber meshes evidently provided a niche for melanocytes and supported their melanotic properties. The cells were tested for gene expression of PAX3, PMEL, TYR and MITF, as well as for proliferation, expression of the melanocyte marker proteins tyrosinase and glycoprotein 100 (gp100), L-DOPA-tautomerase enzymatic activity and melanin content. Reduced mitochondrial activity and PAX-3 gene expression indicated that the three-dimensional PCL scaffold supported differentiation rather than proliferation of melanocytes. The monitored melanotic features of both the NHEM and HUMORS cultivated on PCL scaffolds significantly exceeded those of two-dimensional adherent cultures.

  2. Adaptive Mesh Expansion Model (AMEM) for Liver Segmentation from CT Image

    PubMed Central

    Wang, Xuehu; Yang, Jian; Ai, Danni; Zheng, Yongchang; Tang, Songyuan; Wang, Yongtian

    2015-01-01

    This study proposes a novel adaptive mesh expansion model (AMEM) for liver segmentation from computed tomography images. The virtual deformable simplex model (DSM) is introduced to represent the mesh, in which the motion of each vertex can be easily manipulated. The balloon, edge, and gradient forces are combined with the binary image to construct the external force of the deformable model, which can rapidly drive the DSM to approach the target liver boundaries. Moreover, tangential and normal forces are combined with the gradient image to control the internal force, such that the DSM degree of smoothness can be precisely controlled. The triangular facet of the DSM is adaptively decomposed into smaller triangular components, which can significantly improve the segmentation accuracy of the irregularly sharp corners of the liver. The proposed method is evaluated on the basis of different criteria applied to 10 clinical data sets. Experiments demonstrate that the proposed AMEM algorithm is effective and robust and thus outperforms six other up-to-date algorithms. Moreover, AMEM can achieve a mean overlap error of 6.8% and a mean volume difference of 2.7%, whereas the average symmetric surface distance and the root mean square symmetric surface distance can reach 1.3 mm and 2.7 mm, respectively. PMID:25769030

  3. Staggered grid lagrangian method with local structured adaptive mesh refinement for modeling shock hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliot, N S

    2000-09-26

    A new method for the solution of the unsteady Euler equations has been developed. The method combines staggered grid Lagrangian techniques with structured local adaptive mesh refinement (AMR). This method is a precursor to a more general adaptive arbitrary Lagrangian Eulerian (ALE-AMR) algorithm under development, which will facilitate the solution of problems currently at and beyond the limits of what traditional ALE methods can solve, by focusing computational resources where they are required. Many of the core issues involved in the development of the ALE-AMR method hinge upon the integration of AMR with a Lagrange step, which is the focus of the work described here. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. These new algorithmic components are first developed in one dimension and are then generalized to two dimensions. Solutions of several model problems involving shock hydrodynamics are presented and discussed.

  4. AMRSim: an object-oriented performance simulator for parallel adaptive mesh refinement

    SciTech Connect

    Miller, B; Philip, B; Quinlan, D; Wissink, A

    2001-01-08

    Adaptive mesh refinement is complicated by both the algorithms and the dynamic nature of the computations. In parallel settings, the complexity of obtaining good performance depends on both the architecture and the application. Most attempts to address the complexity of AMR have led to the development of library solutions, typically object-oriented libraries or frameworks. All attempts to date have made numerous and sometimes conflicting assumptions, which makes the evaluation of AMR performance across different applications and architectures difficult or impracticable. The evaluation of different approaches can alternatively be accomplished through simulation of the different AMR processes. In this paper we outline our research work to simulate the processing of adaptive mesh refinement grids using a distributed array class library (P++). This paper presents a combined analytic and empirical approach, since details of the algorithms can be readily predicted (separated into specific phases), while the performance associated with the dynamic behavior must be studied empirically. The result, AMRSim, provides a simple way to develop bounds on the expected performance of AMR calculations subject to constraints given by the algorithms, frameworks, and architecture.

  5. Parallel grid library with adaptive mesh refinement for development of highly scalable simulations

    NASA Astrophysics Data System (ADS)

    Honkonen, I.; von Alfthan, S.; Sandroos, A.; Janhunen, P.; Palmroth, M.

    2012-04-01

    As single-core CPU performance saturates while the number of cores in the fastest supercomputers increases exponentially, the parallel performance of simulations on distributed memory machines is crucial. At the same time, utilizing the large number of available cores efficiently presents a challenge, especially in simulations with run-time adaptive mesh refinement. We have developed a generic grid library (dccrg) aimed at finite volume simulations that is easy to use and scales well up to tens of thousands of cores. The grid has several attractive features: it 1) allows an arbitrary C++ class or structure to be used as cell data; 2) provides a simple interface for adaptive mesh refinement during a simulation; 3) encapsulates the details of MPI communication when updating the data of neighboring cells between processes; and 4) provides a simple interface to run-time load balancing, e.g., domain decomposition, through the Zoltan library. Dccrg is freely available for anyone to use, study and modify under the GNU Lesser General Public License v3. We present the implementation of dccrg, simple and advanced usage examples, and scalability results on various supercomputers and problems.

  6. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.

  7. Numerical study of three-dimensional liquid jet breakup with adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Xie, Zhihua; Pavlidis, Dimitrios; Salinas, Pablo; Pain, Christopher; Matar, Omar

    2016-11-01

    Liquid jet breakup is an important fundamental multiphase flow, often found in many industrial engineering applications. The breakup process is very complex, involving jets, liquid films, ligaments, and small droplets, featuring tremendous complexity in interfacial topology and a large range of spatial scales. The objective of this study is to investigate the fluid dynamics of three-dimensional liquid jet breakup problems, such as liquid jet primary breakup and gas-sheared liquid jet breakup. An adaptive unstructured mesh modelling framework is employed here, which can modify and adapt unstructured meshes to optimally represent the underlying physics of multiphase problems and reduce computational effort without sacrificing accuracy. The numerical framework consists of a mixed control volume and finite element formulation, a 'volume of fluid' type method for the interface capturing based on a compressive control volume advection method and second-order finite element methods, and a force-balanced algorithm for the surface tension implementation. Numerical examples of some benchmark tests and the dynamics of liquid jet breakup with and without ambient gas are presented to demonstrate the capability of this method.

  8. Transmission mode adaptive beamforming for planar phased arrays and its application to 3D ultrasonic transcranial imaging

    NASA Astrophysics Data System (ADS)

    Shapoori, Kiyanoosh; Sadler, Jeffrey; Wydra, Adrian; Malyarenko, Eugene; Sinclair, Anthony; Maev, Roman G.

    2013-03-01

    A new adaptive beamforming method for accurately focusing ultrasound behind highly scattering layers of human skull, and its application to 3D transcranial imaging via small-aperture planar phased arrays, are reported. Due to its undulating, inhomogeneous, porous, and highly attenuative structure, human skull bone severely distorts ultrasonic beams produced by conventional focusing methods in both imaging and therapeutic applications. Strong acoustical mismatch between the skull and brain tissues, in addition to the skull's undulating topology across the active area of a planar ultrasonic probe, could cause multiple reflections and unpredictable refraction during beamforming and imaging processes. Such effects could significantly deflect the probe's beam from the intended focal point. Presented here are the theoretical basis and simulation results of an adaptive beamforming method that compensates for the latter effects in transmission mode, accompanied by experimental verification. The probe is a custom-designed 2 MHz, 256-element matrix array with 0.45 mm element size and 0.1 mm kerf. Owing to its small footprint, it is possible to accurately measure the profile of the skull segment in contact with the probe and feed the results into our ray tracing program. The latter calculates new time delay patterns adapted to the geometrical and acoustical properties of the skull phantom segment in contact with the probe. The time delay patterns correct for the refraction at the skull-brain boundary and bring the distorted beam back to its intended focus. The algorithms were implemented on the ultrasound open platform ULA-OP (developed at the University of Florence).
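
    Once a ray tracer has produced refraction-corrected path lengths from each array element to the intended focus, turning them into transmit delays is a short calculation, sketched below. The function name, the two-segment path assumption and the sound speeds are illustrative placeholders, not values or code from the work described above.

      import numpy as np

      C_SKULL = 2800.0   # assumed skull sound speed (m/s), for illustration only
      C_BRAIN = 1540.0   # assumed soft-tissue sound speed (m/s), for illustration only

      def transmit_delays(skull_path_m, brain_path_m):
          """Per-element transmit delays (s) so that all wavefronts reach the focus together.

          skull_path_m, brain_path_m: per-element path lengths (m) through skull and brain
          tissue, as returned by a ray tracer that accounts for refraction at the
          skull-brain boundary.
          """
          tof = np.asarray(skull_path_m) / C_SKULL + np.asarray(brain_path_m) / C_BRAIN
          return tof.max() - tof   # elements with the longest time of flight fire first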

  9. Multi-dimensional upwind fluctuation splitting scheme with mesh adaption for hypersonic viscous flow

    NASA Astrophysics Data System (ADS)

    Wood, William Alfred, III

    production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid-converged skin friction coefficients with only five points in the boundary layer for this case. A viscous Mach 17.6 (perfect gas) cylinder case demonstrates solution monotonicity and heat transfer capability with the fluctuation splitting scheme. While fluctuation splitting is recommended over DMFDSFV, the difference in performance between the schemes is not so great as to render DMFDSFV obsolete. The second half of the dissertation develops a local, compact, anisotropic unstructured mesh adaption scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. This alignment behavior stands in contrast to the curvature clustering nature of the local, anisotropic unstructured adaption strategy based upon a posteriori error estimation that is used for comparison. The characteristic alignment is most pronounced for linear advection, with reduced improvement seen for the more complex non-linear advection and advection-diffusion cases. The adaption strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization. The system test case for the adaption strategy is a sting-mounted capsule at Mach 10 wind tunnel conditions, considered in both two-dimensional and axisymmetric configurations. For this complex flowfield the adaption results are disappointing, since feature alignment does not emerge from the local operations. Aggressive adaption is shown to result in a loss of robustness for the solver, particularly in the bow shock/stagnation point interaction region. Reducing the adaption strength maintains solution robustness but fails to produce significant improvement in the surface heat transfer predictions.

  10. Adaptive Mesh Refinement and Adaptive Time Integration for Electrical Wave Propagation on the Purkinje System.

    PubMed

    Ying, Wenjun; Henriquez, Craig S

    2015-01-01

    An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, an operator splitting technique integrates the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations separately, using implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local change of membrane properties. Numerical examples demonstrating the efficiency and accuracy of the adaptive algorithm are presented.
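
    The splitting structure described above can be written compactly as follows. The monodomain-style notation (membrane capacitance C_m, conductivity sigma, ionic current I_ion with gating variables w, discrete diffusion operator L_h) is ours, and the substeps are shown only schematically; the abstract does not spell out the exact treatment of the gating variables.

      C_m \frac{\partial V}{\partial t} = \frac{\partial}{\partial x}\!\left( \sigma \frac{\partial V}{\partial x} \right) - I_{\mathrm{ion}}(V, w)

      \text{(1) implicit diffusion:}\quad \Bigl( I - \tfrac{\Delta t}{C_m} L_h \Bigr) V^{*} = V^{n}
      \qquad
      \text{(2) implicit reaction:}\quad V^{n+1} = V^{*} - \tfrac{\Delta t}{C_m}\, I_{\mathrm{ion}}\bigl(V^{n+1}, w^{n+1}\bigr)

    The diffusion substep is a sparse linear solve over the mesh, while the reaction substep is a small nonlinear solve at each mesh point, which is what makes the splitting attractive on an adaptively refined grid.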

  11. Adaptive Mesh Refinement and Adaptive Time Integration for Electrical Wave Propagation on the Purkinje System

    PubMed Central

    Ying, Wenjun; Henriquez, Craig S.

    2015-01-01

    An algorithm that is adaptive in both space and time is presented for simulating electrical wave propagation in the Purkinje system of the heart. The equations governing the distribution of electric potential over the system are solved in time with the method of lines. At each timestep, an operator splitting technique integrates the space-dependent but linear diffusion part and the nonlinear but space-independent reaction part of the partial differential equations separately, using implicit schemes, which have better stability and allow larger timesteps than explicit ones. The linear diffusion equation on each edge of the system is spatially discretized with the continuous piecewise linear finite element method. The adaptive algorithm can automatically recognize when and where the electrical wave starts to leave or enter the computational domain due to external current/voltage stimulation, self-excitation, or local change of membrane properties. Numerical examples demonstrating the efficiency and accuracy of the adaptive algorithm are presented. PMID:26581455

  12. Adaptive unstructured triangular mesh generation and flow solvers for the Navier-Stokes equations at high Reynolds number

    NASA Technical Reports Server (NTRS)

    Ashford, Gregory A.; Powell, Kenneth G.

    1995-01-01

    A method for generating high quality unstructured triangular grids for high Reynolds number Navier-Stokes calculations about complex geometries is described. Careful attention is paid in the mesh generation process to resolving efficiently the disparate length scales which arise in these flows. First the surface mesh is constructed in a way which ensures that the geometry is faithfully represented. The volume mesh generation then proceeds in two phases thus allowing the viscous and inviscid regions of the flow to be meshed optimally. A solution-adaptive remeshing procedure which allows the mesh to adapt itself to flow features is also described. The procedure for tracking wakes and refinement criteria appropriate for shock detection are described. Although at present it has only been implemented in two dimensions, the grid generation process has been designed with the extension to three dimensions in mind. An implicit, higher-order, upwind method is also presented for computing compressible turbulent flows on these meshes. Two recently developed one-equation turbulence models have been implemented to simulate the effects of the fluid turbulence. Results for flow about a RAE 2822 airfoil and a Douglas three-element airfoil are presented which clearly show the improved resolution obtainable.

  13. WHITE DWARF MERGERS ON ADAPTIVE MESHES. I. METHODOLOGY AND CODE VERIFICATION

    SciTech Connect

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-10

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  14. White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification

    NASA Astrophysics Data System (ADS)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-01

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations, and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  15. Finite-difference lattice Boltzmann method with a block-structured adaptive-mesh-refinement technique.

    PubMed

    Fakhari, Abbas; Lee, Taehun

    2014-03-01

    An adaptive-mesh-refinement (AMR) algorithm for the finite-difference lattice Boltzmann method (FDLBM) is presented in this study. The idea behind the proposed AMR is to remove the need for a tree-type data structure. Instead, pointer attributes are used to determine the neighbors of a certain block via appropriate adjustment of its children identifications. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with an efficient algorithm that is easier to implement and use on parallel machines. To allow different mesh sizes at separate parts of the computational domain, the Eulerian formulation of the streaming process is invoked. As a result, there is no need for rescaling the distribution functions or using a temporal interpolation at the fine-coarse grid boundaries. The accuracy and efficiency of the proposed FDLBM AMR are extensively assessed by investigating a variety of vorticity-dominated flow fields, including Taylor-Green vortex flow, lid-driven cavity flow, thin shear layer flow, and the flow past a square cylinder.

  16. Finite-difference lattice Boltzmann method with a block-structured adaptive-mesh-refinement technique

    NASA Astrophysics Data System (ADS)

    Fakhari, Abbas; Lee, Taehun

    2014-03-01

    An adaptive-mesh-refinement (AMR) algorithm for the finite-difference lattice Boltzmann method (FDLBM) is presented in this study. The idea behind the proposed AMR is to remove the need for a tree-type data structure. Instead, pointer attributes are used to determine the neighbors of a certain block via appropriate adjustment of its children identifications. As a result, the memory and time required for tree traversal are completely eliminated, leaving us with an efficient algorithm that is easier to implement and use on parallel machines. To allow different mesh sizes at separate parts of the computational domain, the Eulerian formulation of the streaming process is invoked. As a result, there is no need for rescaling the distribution functions or using a temporal interpolation at the fine-coarse grid boundaries. The accuracy and efficiency of the proposed FDLBM AMR are extensively assessed by investigating a variety of vorticity-dominated flow fields, including Taylor-Green vortex flow, lid-driven cavity flow, thin shear layer flow, and the flow past a square cylinder.

  17. 3D imaging of cone photoreceptors over extended time periods using optical coherence tomography with adaptive optics

    NASA Astrophysics Data System (ADS)

    Kocaoglu, Omer P.; Lee, Sangyeol; Jonnal, Ravi S.; Wang, Qiang; Herde, Ashley E.; Besecker, Jason; Gao, Weihua; Miller, Donald T.

    2011-03-01

    Optical coherence tomography with adaptive optics (AO-OCT) is a highly sensitive, noninvasive method for 3D imaging of the microscopic retina. The purpose of this study is to advance AO-OCT technology by enabling repeated imaging of cone photoreceptors over extended periods of time (days). This sort of longitudinal imaging permits monitoring of 3D cone dynamics in both normal and diseased eyes, in particular the physiological processes of disc renewal and phagocytosis, which are disrupted by retinal diseases such as age-related macular degeneration and retinitis pigmentosa. For this study, the existing AO-OCT system at Indiana underwent several major hardware and software improvements to optimize system performance for 4D cone imaging. First, ultrahigh speed imaging was realized using a Basler Sprint camera. Second, a light source with adjustable spectrum was realized by integration of an Integral laser (Femto Lasers, λc = 800 nm, Δλ = 160 nm) and spectral filters in the source arm. For cone imaging, we used a bandpass filter with λc = 809 nm and Δλ = 81 nm (2.6 μm nominal axial resolution in tissue, and 167 kHz A-line rate using 1,408 px), which reduced the impact of eye motion compared to previous AO-OCT implementations. Third, eye motion artifacts were further reduced by custom ImageJ plugins that registered (axially and laterally) the volume videos. In two subjects, cone photoreceptors were imaged and tracked over a ten-day period and their reflectance and outer segment (OS) lengths measured. High-speed imaging and image registration/dewarping were found to reduce eye motion to a fraction of a cone width (1 μm root mean square). The pattern of reflections in the cones was found to change dramatically and occurred on a spatial scale well below the resolution of clinical instruments. Normalized reflectance of connecting cilia (CC) and OS posterior tip (PT) of an exemplary cone was 54+/-4, 47+/-4, 48+/-6, 50+/-5, 56+/-1% and 46+/-4, 53+/-4, 52+/-6, 50+/-5, 44

  18. Detached Eddy Simulation of the UH-60 Rotor Wake Using Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.; Ahmad, Jasim U.

    2012-01-01

    Time-dependent Navier-Stokes flow simulations have been carried out for a UH-60 rotor with simplified hub in forward flight and hover flight conditions. Flexible rotor blades and flight trim conditions are modeled and established by loosely coupling the OVERFLOW Computational Fluid Dynamics (CFD) code with the CAMRAD II helicopter comprehensive code. High order spatial differences, Adaptive Mesh Refinement (AMR), and Detached Eddy Simulation (DES) are used to obtain highly resolved vortex wakes, where the largest turbulent structures are captured. Special attention is directed towards ensuring the dual time accuracy is within the asymptotic range, and verifying the loose coupling convergence process using AMR. The AMR/DES simulation produced vortical worms for forward flight and hover conditions, similar to previous results obtained for the TRAM rotor in hover. AMR proved to be an efficient means to capture a rotor wake without a priori knowledge of the wake shape.

  19. On the Computation of Integral Curves in Adaptive Mesh Refinement Vector Fields

    SciTech Connect

    Deines, Eduard; Weber, Gunther H.; Garth, Christoph; Van Straalen, Brian; Borovikov, Sergey; Martin, Daniel F.; Joy, Kenneth I.

    2011-06-27

    Integral curves, such as streamlines, streaklines, pathlines, and timelines, are an essential tool in the analysis of vector field structures, offering straightforward and intuitive interpretation of visualization results. While such curves have a long-standing tradition in vector field visualization, their application to Adaptive Mesh Refinement (AMR) simulation results poses unique problems. AMR is a highly effective discretization method for a variety of physical simulation problems and has recently been applied to the study of vector fields in flow and magnetohydrodynamic applications. The cell-centered nature of AMR data and discontinuities in the vector field representation arising from AMR level boundaries complicate the application of numerical integration methods to compute integral curves. In this paper, we propose a novel approach to alleviate these problems and show its application to streamline visualization in an AMR model of the magnetic field of the solar system as well as to a simulation of two incompressible viscous vortex rings merging.
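
    For context, the numerical core of all of these curve types is an ODE integration of positions through the interpolated vector field; the sketch below shows a classical fourth-order Runge-Kutta streamline trace against a velocity-lookup callback. For AMR data that callback is where the difficulty discussed above lives: it must locate the finest patch containing the query point and interpolate cell-centered values across level boundaries. The function names and step parameters are illustrative only.

      import numpy as np

      def rk4_streamline(seed, velocity_at, h=0.01, n_steps=1000):
          """Trace a streamline with classical RK4 from a seed point.

          velocity_at(p) must return the flow velocity at point p; for AMR data it
          would query the finest patch containing p and interpolate the cell-centered
          vector field there.
          """
          p = np.asarray(seed, dtype=float)
          curve = [p.copy()]
          for _ in range(n_steps):
              k1 = np.asarray(velocity_at(p), dtype=float)
              k2 = np.asarray(velocity_at(p + 0.5 * h * k1), dtype=float)
              k3 = np.asarray(velocity_at(p + 0.5 * h * k2), dtype=float)
              k4 = np.asarray(velocity_at(p + h * k3), dtype=float)
              p = p + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
              curve.append(p.copy())
          return np.array(curve)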

  20. Galaxy Mergers with Adaptive Mesh Refinement: Star Formation and Hot Gas Outflow

    SciTech Connect

    Kim, Ji-hoon; Wise, John H.; Abel, Tom

    2011-06-22

    In hierarchical structure formation, merging of galaxies is frequent and known to dramatically affect their properties. To comprehend these interactions high-resolution simulations are indispensable because of the nonlinear coupling between pc and Mpc scales. To this end, we present the first adaptive mesh refinement (AMR) simulation of two merging, low mass, initially gas-rich galaxies (1.8 x 10^10 M_sun each), including star formation and feedback. With galaxies resolved by ~2 x 10^7 total computational elements, we achieve unprecedented resolution of the multiphase interstellar medium, finding a widespread starburst in the merging galaxies via shock-induced star formation. The high dynamic range of AMR also allows us to follow the interplay between the galaxies and their embedding medium depicting how galactic outflows and a hot metal-rich halo form. These results demonstrate that AMR provides a powerful tool in understanding interacting galaxies.

  1. Damping of spurious numerical reflections off of coarse-fine adaptive mesh refinement grid boundaries

    NASA Astrophysics Data System (ADS)

    Chilton, Sven; Colella, Phillip

    2010-11-01

    Adaptive mesh refinement (AMR) is an efficient technique for solving systems of partial differential equations numerically. The underlying algorithm determines where and when a base spatial and temporal grid must be resolved further in order to achieve the desired precision and accuracy in the numerical solution. However, propagating wave solutions prove problematic for AMR. In systems with low degrees of dissipation (e.g. the Maxwell-Vlasov system) a wave traveling from a finely resolved region into a coarsely resolved region encounters a numerical impedance mismatch, resulting in spurious reflections off of the coarse-fine grid boundary. These reflected waves then become trapped inside the fine region. Here, we present a scheme for damping these spurious reflections. We demonstrate its application to the scalar wave equation and an implementation for Maxwell's Equations. We also discuss a possible extension to the Maxwell-Vlasov system.
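
    The abstract does not state the authors' damping operator explicitly; terms of this kind are, however, commonly written as a relaxation (sponge) term added to the governing equations in a thin buffer on the fine side of the coarse-fine interface. The form below is that generic template in our notation, not the scheme of the paper:

      \partial_t u = \mathcal{L}(u) - \sigma(\mathbf{x})\,\bigl(u - \bar{u}\bigr)

    Here L(u) is the original spatial operator, sigma(x) >= 0 vanishes outside the buffer region, and u-bar is a reference state (for example a coarse-grid or time-averaged solution) toward which the spuriously reflected component is relaxed.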

  2. A GPU implementation of adaptive mesh refinement to simulate tsunamis generated by landslides

    NASA Astrophysics Data System (ADS)

    de la Asunción, Marc; Castro, Manuel J.

    2016-04-01

    In this work we propose a CUDA implementation for the simulation of landslide-generated tsunamis using a two-layer Savage-Hutter type model and adaptive mesh refinement (AMR). The AMR method consists of dynamically increasing the spatial resolution of the regions of interest of the domain while keeping the rest of the domain at low resolution, thus obtaining better runtimes and similar results compared to increasing the spatial resolution of the entire domain. Our AMR implementation uses a patch-based approach, it supports up to three levels, power-of-two ratios of refinement, different refinement criteria and also several user parameters to control the refinement and clustering behaviour. A strategy based on the variation of the cell values during the simulation is used to interpolate and propagate the values of the fine cells. Several numerical experiments using artificial and realistic scenarios are presented.

  3. MPI parallelization of full PIC simulation code with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Matsui, Tatsuki; Nunami, Masanori; Usui, Hideyuki; Moritaka, Toseo

    2010-11-01

    A new parallelization technique developed for the PIC method with adaptive mesh refinement (AMR) is introduced. In the AMR technique, the complicated cell arrangements are organized and managed as interconnected pointers with multiple resolution levels, forming a fully threaded tree structure as a whole. In order to keep this tree structure distributed over multiple processes, remote memory access, an extended feature of the MPI-2 standard, is employed. Another important feature of the present simulation technique is domain decomposition according to a modified Morton ordering. This algorithm can group together equal numbers of particle calculation loops, which allows for better load balance. Using this advanced simulation code, preliminary results for basic physical problems are presented as a validity check, together with benchmarks that test performance and scalability.
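
    The plain Morton (Z-order) encoding underlying such a decomposition is sketched below; the curve keeps spatially nearby cells close in the one-dimensional ordering, so splitting the sorted cell list into equal chunks gives each MPI rank a compact region. The abstract does not describe the authors' modification of the ordering, so only the standard bit-interleaving version is shown, and the grid size and rank count are arbitrary.

      def morton_encode_3d(x, y, z, bits=10):
          """Interleave the bits of integer cell indices (x, y, z) into a Z-order key."""
          key = 0
          for i in range(bits):
              key |= ((x >> i) & 1) << (3 * i)
              key |= ((y >> i) & 1) << (3 * i + 1)
              key |= ((z >> i) & 1) << (3 * i + 2)
          return key

      # Sort an 8x8x8 grid of cell indices along the Z-order curve, then hand out
      # contiguous, equally sized chunks of the sorted list to the MPI processes.
      cells = [(i, j, k) for i in range(8) for j in range(8) for k in range(8)]
      cells.sort(key=lambda c: morton_encode_3d(*c))
      n_ranks = 4
      chunk = len(cells) // n_ranks
      partition = [cells[r * chunk:(r + 1) * chunk] for r in range(n_ranks)]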

  4. A Parallel Ocean Model With Adaptive Mesh Refinement Capability For Global Ocean Prediction

    SciTech Connect

    Herrnstein, Aaron R.

    2005-12-01

    An ocean model with adaptive mesh refinement (AMR) capability is presented for simulating ocean circulation on decade time scales. The model closely resembles the LLNL ocean general circulation model with some components incorporated from other well known ocean models when appropriate. Spatial components are discretized using finite differences on a staggered grid where tracer and pressure variables are defined at cell centers and velocities at cell vertices (B-grid). Horizontal motion is modeled explicitly with leapfrog and Euler forward-backward time integration, and vertical motion is modeled semi-implicitly. New AMR strategies are presented for horizontal refinement on a B-grid, leapfrog time integration, and time integration of coupled systems with unequal time steps. These AMR capabilities are added to the LLNL software package SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) and validated with standard benchmark tests. The ocean model is built on top of the amended SAMRAI library. The resulting model has the capability to dynamically increase resolution in localized areas of the domain. Limited basin tests are conducted using various refinement criteria and produce convergence trends in the model solution as refinement is increased. Carbon sequestration simulations are performed on decade time scales in domains the size of the North Atlantic and the global ocean. A suggestion is given for refinement criteria in such simulations. AMR predicts maximum pH changes and increases in CO2 concentration near the injection sites that are virtually unattainable with a uniform high resolution due to extremely long run times. Fine scale details near the injection sites are achieved by AMR with shorter run times than the finest uniform resolution tested despite the need for enhanced parallel performance. The North Atlantic simulations show a reduction in passive tracer errors when AMR is applied instead of a uniform coarse resolution. No

  5. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    SciTech Connect

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  6. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods which rely on the unit normal vector, the Stabilized Conservative Level Set (SCLS) method uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the normal direction to the interface, thus preserving the conservative level set properties, while away from the interface the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to the need for finer resolution in the vicinity of the interface compared with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.
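
    For reference, the reinitialization equation of the standard conservative level set method (Olsson and Kreiss) that this work modifies reads, in our notation,

      \frac{\partial \phi}{\partial \tau} + \nabla \cdot \bigl( \phi (1 - \phi)\, \hat{\mathbf{n}} \bigr) = \nabla \cdot \bigl( \varepsilon\, (\nabla \phi \cdot \hat{\mathbf{n}})\, \hat{\mathbf{n}} \bigr), \qquad \hat{\mathbf{n}} = \frac{\nabla \phi}{|\nabla \phi|}

    The ill-conditioning arises because the unit normal is undefined where the gradient of phi vanishes far from the interface. As we read the abstract, SCLS replaces the unit normal by a renormalization vector whose magnitude decays away from the interface, so that the right-hand side reduces to ordinary homogeneous diffusion there; the precise form of that vector is given in the paper, not here.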

  7. Adaptive Mesh Refinement With Spectral Accuracy for Magnetohydrodynamics in Two Space Dimensions

    NASA Astrophysics Data System (ADS)

    Rosenberg, D.; Pouquet, A.; Mininni, P.

    2006-12-01

    We examine the effect of accuracy of high-order adaptive mesh refinement (AMR) in the context of a classical configuration of magnetic reconnection in two space dimensions, the so-called Orszag-Tang vortex made up of a magnetic X-point centered on a stagnation point of the velocity. A recently developed spectral-element adaptive refinement incompressible magnetohydrodynamic (MHD) code is applied to simulate this problem. The MHD solver is explicit, and uses the Elsasser formulation on high-order elements. It automatically takes advantage of the adaptive grid mechanics that have been described elsewhere [Rosenberg, Fournier, Fischer, Pouquet, J. Comp. Phys. 215, 59-80 (2006)] in the fluid context, allowing both statically refined and dynamically refined grids. Comparisons with pseudo-spectral computations are performed. Refinement and coarsening criteria are examined, and several tests are described. We show that low-order truncation--even with a comparable number of global degrees of freedom--fails to correctly model some strong (inf-norm) quantities in this problem, even though it satisfies adequately the weak (integrated) balance diagnostics.

  8. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  9. Adaptation of pharmaceutical excipients to FDM 3D printing for the fabrication of patient-tailored immediate release tablets.

    PubMed

    Sadia, Muzna; Sośnicka, Agata; Arafat, Basel; Isreb, Abdullah; Ahmed, Waqar; Kelarakis, Antonios; Alhnan, Mohamed A

    2016-11-20

    This work aims to employ fused deposition modelling (FDM) 3D printing to fabricate immediate release pharmaceutical tablets with several model drugs. It investigates the addition of a non-melting filler to a methacrylic matrix to facilitate FDM 3D printing and explores the impact of (i) the nature of the filler, (ii) compatibility with the gears of the 3D printer and (iii) the polymer:filler ratio on the 3D printing process. Among the fillers investigated in this work, directly compressible lactose, spray-dried lactose and microcrystalline cellulose showed a level of degradation at 135°C, whilst talc and TCP allowed consistent flow of the filament and successful 3D printing of the tablet. A specially developed universal filament based on a pharmaceutically approved methacrylic polymer (Eudragit EPO) and a thermally stable filler, TCP (tribasic calcium phosphate), was optimised. Four model drugs with different physicochemical properties were incorporated into ready-to-use, mechanically stable tablets with immediate release properties. Following the two thermal processes (hot melt extrusion (HME) and fused deposition modelling (FDM) 3D printing), drug contents were 94.22%, 88.53%, 96.51% and 93.04% for 5-ASA, captopril, theophylline and prednisolone respectively. XRPD indicated that a fraction of 5-ASA, theophylline and prednisolone remained crystalline whilst captopril was in amorphous form. By combining the advantages of thermally stable, pharmaceutically approved polymers and fillers, this unique approach provides a low-cost production method for on-demand manufacturing of individualised dosage forms.

  10. A chimera grid scheme. [multiple overset body-conforming mesh system for finite difference adaptation to complex aircraft configurations

    NASA Technical Reports Server (NTRS)

    Steger, J. L.; Dougherty, F. C.; Benek, J. A.

    1983-01-01

    A mesh system composed of multiple overset body-conforming grids is described for adapting finite-difference procedures to complex aircraft configurations. In this so-called 'chimera mesh,' a major grid is generated about a main component of the configuration and overset minor grids are used to resolve all other features. Methods for connecting overset multiple grids and modifications of flow-simulation algorithms are discussed. Computational tests in two dimensions indicate that the use of multiple overset grids can simplify the task of grid generation without an adverse effect on flow-field algorithms and computer code complexity.

  11. Three-dimensional Wavelet-based Adaptive Mesh Refinement for Global Atmospheric Chemical Transport Modeling

    NASA Astrophysics Data System (ADS)

    Rastigejev, Y.; Semakin, A. N.

    2013-12-01

    Accurate numerical simulations of global scale three-dimensional atmospheric chemical transport models (CTMs) are essential for studies of many important atmospheric chemistry problems such as the adverse effects of air pollutants on human health, ecosystems and the Earth's climate. These simulations usually require large CPU time due to numerical difficulties associated with a wide range of spatial and temporal scales, nonlinearity and the large number of reacting species. In our previous work we have shown that in order to achieve adequate convergence rate and accuracy, the mesh spacing in numerical simulation of global synoptic-scale pollution plume transport must be decreased to a few kilometers. This resolution is difficult to achieve for global CTMs on uniform or quasi-uniform grids. To address the difficulty described above, we developed a three-dimensional Wavelet-based Adaptive Mesh Refinement (WAMR) algorithm. The method employs a highly non-uniform adaptive grid with fine resolution over the areas of interest without requiring small grid-spacing throughout the entire domain. The method uses a multi-grid iterative solver that naturally takes advantage of the multilevel structure of the adaptive grid. In order to represent the multilevel adaptive grid efficiently, a dynamic data structure based on indirect memory addressing has been developed. The data structure allows rapid access to individual points, fast inter-grid operations and re-gridding. The WAMR method has been implemented on parallel computer architectures. The parallel algorithm is based on a run-time partitioning and load-balancing scheme for the adaptive grid. The partitioning scheme maintains locality to reduce communications between computing nodes. The parallel scheme was found to be cost-effective. Specifically, we obtained an order of magnitude increase in computational speed for numerical simulations performed on a twelve-core single processor workstation. We have applied the WAMR method for numerical

  12. Methods for high-resolution anisotropic finite element modeling of the human head: automatic MR white matter anisotropy-adaptive mesh generation.

    PubMed

    Lee, Won Hee; Kim, Tae-Seong

    2012-01-01

    This study proposes an advanced finite element (FE) head modeling technique through which high-resolution FE meshes adaptive to the degree of tissue anisotropy can be generated. Our adaptive meshing scheme (called wMesh) uses MRI structural information and fractional anisotropy maps derived from diffusion tensors in the FE mesh generation process, optimally reflecting the electrical properties of the human brain. We examined the characteristics of the wMeshes through various qualitative and quantitative comparisons to conventional FE regular-sized meshes that are non-adaptive to the degree of white matter anisotropy. We investigated numerical differences in the FE forward solutions, which include the electrical potential and current density generated by current sources in the brain. The quantitative difference was calculated by two statistical measures, the relative difference measure (RDM) and the magnification factor (MAG). The results show that the wMeshes adapt to the density of the white matter anisotropy and better reflect the density and directionality of tissue conductivity anisotropy. Our comparison results between various anisotropic regular mesh and wMesh models show that there are substantial differences in the EEG forward solutions in the brain (up to RDM=0.48 and MAG=0.63 in the electrical potential, and RDM=0.65 and MAG=0.52 in the current density). Our analysis indicates that the wMeshes produce forward solutions that differ from those of the conventional regular meshes. We present results showing that the wMesh head modeling approach enhances the sensitivity and accuracy of the FE solutions at the interfaces or in the regions where the anisotropic conductivities change sharply or their directional changes are complex. The fully automatic wMesh generation technique should be useful for modeling an individual-specific and high-resolution anisotropic FE head model incorporating realistic anisotropic conductivity distributions
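
    For reference, the two measures quoted above are conventionally defined as follows (these are the standard definitions assumed here; the abstract itself does not state the formulas):

    ```latex
    % Commonly used definitions of the two error measures named in the abstract
    % (assumed here, not quoted from the paper).
    % \phi_{1}: reference forward solution, \phi_{2}: test forward solution.
    \mathrm{RDM}(\phi_{1},\phi_{2}) =
      \left\lVert \frac{\phi_{2}}{\lVert \phi_{2} \rVert_{2}}
                - \frac{\phi_{1}}{\lVert \phi_{1} \rVert_{2}} \right\rVert_{2},
    \qquad
    \mathrm{MAG}(\phi_{1},\phi_{2}) = \frac{\lVert \phi_{2} \rVert_{2}}{\lVert \phi_{1} \rVert_{2}}
    ```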

  13. Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement

    NASA Technical Reports Server (NTRS)

    Chaderjian, Neal M.

    2012-01-01

    Time-dependent Navier-Stokes simulations have been carried out for a rigid V22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V22. The normal force and pitching moment coefficients were in good agreement with flight-test data.

  14. MASS AND MAGNETIC DISTRIBUTIONS IN SELF-GRAVITATING SUPER-ALFVENIC TURBULENCE WITH ADAPTIVE MESH REFINEMENT

    SciTech Connect

    Collins, David C.; Norman, Michael L.; Padoan, Paolo; Xu Hao

    2011-04-10

    In this work, we present the mass and magnetic distributions found in a recent adaptive mesh refinement magnetohydrodynamic simulation of supersonic, super-Alfvenic, self-gravitating turbulence. Power-law tails are found in both the mass density and magnetic field probability density functions, with P(ρ) ∝ ρ^(-1.6) and P(B) ∝ B^(-2.7). A power-law relationship is also found between magnetic field strength and density, with B ∝ ρ^(0.5), throughout the collapsing gas. The mass distribution of gravitationally bound cores is shown to be in excellent agreement with recent observations of prestellar cores. The mass-to-flux distribution of cores is also found to be in excellent agreement with recent Zeeman splitting measurements. We also compare the relationship between velocity dispersion and density to the same cores, and find an increasing relationship between the two, with σ ∝ n^(0.25), also in agreement with the observations. We then estimate the potential effects of ambipolar diffusion in our cores and find that, due to the weakness of the magnetic field in our simulation, the inclusion of ambipolar diffusion will not cause significant alterations of the flow dynamics.

  15. Numerical simulation of current sheet formation in a quasiseparatrix layer using adaptive mesh refinement

    SciTech Connect

    Effenberger, Frederic; Thust, Kay; Grauer, Rainer; Dreher, Juergen; Arnold, Lukas

    2011-03-15

    The formation of a thin current sheet in a magnetic quasiseparatrix layer (QSL) is investigated by means of numerical simulation using a simplified ideal, low-{beta}, MHD model. The initial configuration and driving boundary conditions are relevant to phenomena observed in the solar corona and were studied earlier by Aulanier et al. [Astron. Astrophys. 444, 961 (2005)]. In extension to that work, we use the technique of adaptive mesh refinement (AMR) to significantly enhance the local spatial resolution of the current sheet during its formation, which enables us to follow the evolution into a later stage. Our simulations are in good agreement with the results of Aulanier et al. up to the calculated time in that work. In a later phase, we observe a basically unarrested collapse of the sheet to length scales that are more than one order of magnitude smaller than those reported earlier. The current density attains correspondingly larger maximum values within the sheet. During this thinning process, which is finally limited by lack of resolution even in the AMR studies, the current sheet moves upward, following a global expansion of the magnetic structure during the quasistatic evolution. The sheet is locally one-dimensional and the plasma flow in its vicinity, when transformed into a comoving frame, qualitatively resembles a stagnation point flow. In conclusion, our simulations support the idea that extremely high current densities are generated in the vicinities of QSLs as a response to external perturbations, with no sign of saturation.

  16. GPU accelerated cell-based adaptive mesh refinement on unstructured quadrilateral grid

    NASA Astrophysics Data System (ADS)

    Luo, Xisheng; Wang, Luying; Ran, Wei; Qin, Fenghua

    2016-10-01

    A GPU accelerated inviscid flow solver is developed on an unstructured quadrilateral grid in the present work. For the first time, the cell-based adaptive mesh refinement (AMR) is fully implemented on GPU for the unstructured quadrilateral grid, which greatly reduces the frequency of data exchange between GPU and CPU. Specifically, the AMR is processed with atomic operations to parallelize list operations, and null memory recycling is realized to improve the efficiency of memory utilization. It is found that results obtained by GPUs agree very well with the exact or experimental results in literature. An acceleration ratio of 4 is obtained between the parallel code running on the old GPU GT9800 and the serial code running on E3-1230 V2. With the optimization of configuring a larger L1 cache and adopting Shared Memory based atomic operations on the newer GPU C2050, an acceleration ratio of 20 is achieved. The parallelized cell-based AMR processes have achieved 2x speedup on GT9800 and 18x on Tesla C2050, which demonstrates that parallel running of the cell-based AMR method on GPU is feasible and efficient. Our results also indicate that the new development of GPU architecture benefits the fluid dynamics computing significantly.

  17. Relativistic Flows Using Spatial And Temporal Adaptive Structured Mesh Refinement. I. Hydrodynamics

    SciTech Connect

    Wang, Peng; Abel, Tom; Zhang, Weiqun; /KIPAC, Menlo Park

    2007-04-02

    Astrophysical relativistic flow problems require high resolution three-dimensional numerical simulations. In this paper, we describe a new parallel three-dimensional code for simulations of special relativistic hydrodynamics (SRHD) using both spatially and temporally structured adaptive mesh refinement (AMR). We use the method of lines to discretize the SRHD equations spatially and a total variation diminishing (TVD) Runge-Kutta scheme for time integration. For spatial reconstruction, we have implemented the piecewise linear method (PLM), the piecewise parabolic method (PPM), third-order convex essentially non-oscillatory (CENO) and third- and fifth-order weighted essentially non-oscillatory (WENO) schemes. The flux is computed using either direct flux reconstruction or approximate Riemann solvers including HLL, the modified Marquina flux, local Lax-Friedrichs flux formulas and HLLC. The AMR part of the code is built on top of the cosmological Eulerian AMR code enzo, which uses the Berger-Colella AMR algorithm and is parallelized with dynamic load balancing using the widely available Message Passing Interface library. We discuss the coupling of the AMR framework with the relativistic solvers and show its performance on eleven test problems.
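
    The time integrator named above is the classical third-order TVD (strong-stability-preserving) Runge-Kutta scheme; a minimal method-of-lines sketch is given below, with a simple upwind advection operator standing in for the SRHD reconstruction and Riemann fluxes, which are not reproduced here:

    ```python
    # Hedged sketch: the Shu-Osher third-order TVD Runge-Kutta step used in
    # method-of-lines codes of this kind. `rhs` stands for the spatial discretization;
    # the periodic upwind advection below is only a stand-in to make the sketch runnable.
    import numpy as np

    def tvd_rk3_step(u, dt, rhs):
        """Advance u by one step of the third-order TVD Runge-Kutta scheme."""
        u1 = u + dt * rhs(u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
        return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

    if __name__ == "__main__":
        n, c = 200, 1.0
        dx = 1.0 / n
        x = np.arange(n) * dx
        u = np.exp(-200 * (x - 0.5) ** 2)
        rhs = lambda v: -c * (v - np.roll(v, 1)) / dx   # first-order upwind placeholder
        for _ in range(100):
            u = tvd_rk3_step(u, 0.5 * dx / c, rhs)
        print("max(u) after 100 steps:", float(u.max()))
    ```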

  18. Adaptive Mesh Refinement for a High-Symmetry Singular Euler Flow

    NASA Astrophysics Data System (ADS)

    Germaschewski, K.; Bhattacharjee, A.; Grauer, R.

    2002-11-01

    Starting from a highly symmetric initial condition motivated by the work of Kida [J. Phys. Soc. Jpn. 54, 2132 (1995)] and Boratav and Pelz [Phys. Fluids 6, 2757 (1994)], we use the technique of block-structured adaptive mesh refinement (AMR) to numerically investigate the development of a self-similar singular solution to the incompressible Euler equations. The scheme, previously used by Grauer et al. [Phys. Rev. Lett. 84, 4850 (1998)], is particularly well suited to follow the development of singular structures as it allows for effective resolutions far beyond those accessible using fixed grid algorithms. A self-similar collapse is observed in the simulation, where the maximum vorticity blows up as 1/(t_crit - t). Ng and Bhattacharjee [Phys. Rev. E 54, 1530 (1996)] have presented a sufficient condition for a finite-time singularity in this highly symmetric flow involving the fourth-order spatial derivative of the pressure at and near the origin. We test this sufficient condition and investigate the evolution of the spatial range over which this condition holds in our numerical results. We also demonstrate numerically that this singularity is unstable because in a full simulation that does not build in the symmetries of the initial condition, small perturbations introduced by AMR lead to nonsymmetric evolution of the vortices.

  19. Parallel adaptive mesh refinement method based on WENO finite difference scheme for the simulation of multi-dimensional detonation

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Dong, XinZhuang; Shu, Chi-Wang

    2015-10-01

    For the numerical simulation of detonation, the computational cost using uniform meshes is large due to the vast separation in both time and space scales. Adaptive mesh refinement (AMR) is advantageous for problems with vastly different scales. This paper aims to propose an AMR method with high order accuracy for numerical investigation of multi-dimensional detonation. A well-designed AMR method based on a finite difference weighted essentially non-oscillatory (WENO) scheme, named AMR&WENO, is proposed. A new cell-based data structure is used to organize the adaptive meshes. The new data structure makes it possible for cells to communicate with each other quickly and easily. In order to develop an AMR method with high order accuracy, high order prolongations in both space and time are utilized in the data prolongation procedure. Based on the message passing interface (MPI) platform, we have developed a workload-balanced parallel AMR&WENO code using the Hilbert space-filling curve algorithm. Our numerical experiments with detonation simulations indicate that AMR&WENO is accurate and has high resolution. Moreover, we evaluate and compare the performance of the uniform mesh WENO scheme and the parallel AMR&WENO method. The comparison results provide further insight into the high performance of the parallel AMR&WENO method.
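
    The load-balancing idea can be illustrated with a short sketch: cells are ordered along a space-filling curve and the ordered list is cut into contiguous chunks of roughly equal work, one per MPI rank. The sketch below uses a Morton (Z-order) key as a simpler stand-in for the Hilbert curve used in the paper, and the cell data are invented:

    ```python
    # Hedged sketch of space-filling-curve load balancing (Morton key as a stand-in
    # for a Hilbert key; not the paper's implementation).
    def morton_key(i, j, bits=16):
        """Interleave the bits of integer cell coordinates (i, j)."""
        key = 0
        for b in range(bits):
            key |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
        return key

    def partition(cells, workloads, nranks):
        """Assign SFC-ordered cells to ranks in contiguous chunks of ~equal work."""
        order = sorted(range(len(cells)), key=lambda k: morton_key(*cells[k]))
        target = sum(workloads) / nranks
        ranks, rank, acc = [0] * len(cells), 0, 0.0
        for k in order:
            if acc >= target and rank < nranks - 1:
                rank, acc = rank + 1, 0.0
            ranks[k] = rank
            acc += workloads[k]
        return ranks

    if __name__ == "__main__":
        cells = [(i, j) for i in range(8) for j in range(8)]
        work = [1.0] * len(cells)           # uniform cost per cell for illustration
        print(partition(cells, work, nranks=4)[:16])
    ```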

  20. Higher-order conservative interpolation between control-volume meshes: Application to advection and multiphase flow problems with dynamic mesh adaptivity

    NASA Astrophysics Data System (ADS)

    Adam, A.; Pavlidis, D.; Percival, J. R.; Salinas, P.; Xie, Z.; Fang, F.; Pain, C. C.; Muggeridge, A. H.; Jackson, M. D.

    2016-09-01

    A general, higher-order, conservative and bounded interpolation for the dynamic and adaptive meshing of control-volume fields dual to continuous and discontinuous finite element representations is presented. Existing techniques such as node-wise interpolation are not conservative and do not readily generalise to discontinuous fields, whilst conservative methods such as Grandy interpolation are often too diffusive. The new method uses control-volume Galerkin projection to interpolate between control-volume fields. Bounded solutions are ensured by using a post-interpolation diffusive correction. Example applications of the method to interface capturing during advection and also to the modelling of multiphase porous media flow are presented to demonstrate the generality and robustness of the approach.
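
    A one-dimensional picture of such a conservative projection between control-volume meshes is sketched below (an illustrative stand-in, not the authors' implementation; the meshes and field are invented, and the bounded post-interpolation correction is omitted). Each target-cell value is the overlap-weighted average of donor-cell values, so the integral of the field is preserved:

    ```python
    # Hedged sketch: conservative remapping between two 1-D control-volume meshes.
    import numpy as np

    def conservative_remap(src_edges, src_vals, dst_edges):
        dst_vals = np.zeros(len(dst_edges) - 1)
        for i in range(len(dst_edges) - 1):
            lo, hi, total = dst_edges[i], dst_edges[i + 1], 0.0
            for j in range(len(src_edges) - 1):
                overlap = max(0.0, min(hi, src_edges[j + 1]) - max(lo, src_edges[j]))
                total += overlap * src_vals[j]
            dst_vals[i] = total / (hi - lo)        # overlap-weighted average
        return dst_vals

    if __name__ == "__main__":
        src_edges = np.linspace(0.0, 1.0, 11)                      # 10 donor cells
        src_vals = np.sin(np.pi * 0.5 * (src_edges[:-1] + src_edges[1:]))
        dst_edges = np.sort(np.r_[0.0, np.random.rand(14), 1.0])   # 15 irregular target cells
        dst_vals = conservative_remap(src_edges, src_vals, dst_edges)
        # The total integral is conserved up to round-off:
        print(np.dot(np.diff(src_edges), src_vals), np.dot(np.diff(dst_edges), dst_vals))
    ```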

  1. Adaptation of the three-dimensional wisdom scale (3D-WS) for the Korean cultural context.

    PubMed

    Kim, Seungyoun; Knight, Bob G

    2014-10-23

    Background: Previous research on wisdom has suggested that wisdom is comprised of cognitive, reflective, and affective components and has developed and validated wisdom measures based on samples from Western countries. To apply the measurement to Eastern cultures, the present study revised an existing wisdom scale, the three-dimensional wisdom scale (3D-WS, Ardelt, 2003), for the Korean cultural context. Methods: Participants included 189 Korean heritage adults (age range 19-96) living in Los Angeles. We added a culturally specific factor of wisdom to the 3D-WS: Modesty and Unobtrusiveness (Yang, 2001), which captures an Eastern aspect of wisdom. The structure and psychometrics of the scale were tested. By latent cluster analysis, we determined acculturation subgroups and examined group differences in the means of factors in the revised wisdom scale (3D-WS-K). Results: Three factors, Cognitive Flexibility, Viewpoint Relativism, and Empathic Modesty, were found using confirmatory factor analysis. Respondents with high biculturalism were higher on Viewpoint Relativism and lower on Empathic Modesty. Conclusion: This study discovered that a revised wisdom scale had a distinct factor structure and item content in a Korean heritage sample. We also found acculturation influences on the meaning of wisdom.

  2. SU-D-207-04: GPU-Based 4D Cone-Beam CT Reconstruction Using Adaptive Meshing Method

    SciTech Connect

    Zhong, Z; Gu, X; Iyengar, P; Mao, W; Wang, J; Guo, X

    2015-06-15

    Purpose: Due to the limited number of projections at each phase, the image quality of four-dimensional cone-beam CT (4D-CBCT) is often degraded, which decreases the accuracy of subsequent motion modeling. One of the promising methods is the simultaneous motion estimation and image reconstruction (SMEIR) approach. The objective of this work is to enhance the computational speed of the SMEIR algorithm using adaptive feature-based tetrahedral meshing and GPU-based parallelization. Methods: The first step is to generate the tetrahedral mesh based on the features of a reference phase of the 4D-CBCT, so that the deformation can be well captured and accurately diffused from the mesh vertices to the voxels of the image volume. After the mesh generation, the updated motion model and other phases of the 4D-CBCT can be obtained by matching the 4D-CBCT projection images at each phase with the corresponding forward projections of the deformed reference phase. The entire process of this 4D-CBCT reconstruction method is implemented on GPU, significantly increasing the computational efficiency due to its tremendous parallel computing ability. Results: A 4D XCAT digital phantom was used to test the proposed mesh-based image reconstruction algorithm. The results show that both bone structures and the inside of the lung are well preserved and that the tumor position is well captured. Compared to the previous voxel-based CPU implementation of SMEIR, the proposed method is about 157 times faster for reconstructing a 10-phase 4D-CBCT with dimensions 256×256×150. Conclusion: The GPU-based parallel 4D-CBCT reconstruction method uses the feature-based mesh for estimating the motion model and yields image quality equivalent to the previous voxel-based SMEIR approach, with significantly improved computational speed.

  3. Analysis of adaptive mesh refinement for IMEX discontinuous Galerkin solutions of the compressible Euler equations with application to atmospheric simulations

    NASA Astrophysics Data System (ADS)

    Kopera, Michal A.; Giraldo, Francis X.

    2014-10-01

    The resolutions of interest in atmospheric simulations require prohibitively large computational resources. Adaptive mesh refinement (AMR) tries to mitigate this problem by putting high resolution in crucial areas of the domain. We investigate the performance of a tree-based AMR algorithm for the high order discontinuous Galerkin method on quadrilateral grids with non-conforming elements. We perform a detailed analysis of the cost of AMR by comparing it to uniform reference simulations of two standard atmospheric test cases: density current and rising thermal bubble. The analysis shows up to 15 times speed-up of the AMR simulations, with the cost of mesh adaptation below 1% of the total runtime. We pay particular attention to the implicit-explicit (IMEX) time integration methods and show that the ARK2 method is more robust with respect to dynamically adapting meshes than BDF2. Preliminary analysis of preconditioning reveals that it can be an important factor in the AMR overhead. Compiler optimizations provide a significant runtime reduction and positively affect the effectiveness of AMR, allowing for speed-ups greater than would follow from the simple performance model.

  4. Numerical Modelling of Volcanic Ash Settling in Water Using Adaptive Unstructured Meshes

    NASA Astrophysics Data System (ADS)

    Jacobs, C. T.; Collins, G. S.; Piggott, M. D.; Kramer, S. C.; Wilson, C. R.

    2011-12-01

    At the bottom of the world's oceans lies layer after layer of ash deposited from past volcanic eruptions. Correct interpretation of these layers can provide important constraints on the duration and frequency of volcanism, but requires a full understanding of the complex multi-phase settling and deposition process. Analogue experiments of tephra settling through a tank of water demonstrate that small ash particles can either settle individually, or collectively as a gravitationally unstable ash-laden plume. These plumes are generated when the concentration of particles exceeds a certain threshold such that the density of the tephra-water mixture is sufficiently large relative to the underlying particle-free water for a gravitational Rayleigh-Taylor instability to develop. These ash-laden plumes are observed to descend as a vertical density current at a velocity much greater than that of single particles, which has important implications for the emplacement of tephra deposits on the seabed. To extend the results of laboratory experiments to large scales and explore the conditions under which vertical density currents may form and persist, we have developed a multi-phase extension to Fluidity, a combined finite element / control volume CFD code that uses adaptive unstructured meshes. As a model validation, we present two- and three-dimensional simulations of tephra plume formation in a water tank that replicate laboratory experiments (Carey, 1997, doi:10.1130/0091-7613(1997)025<0839:IOCSOT>2.3.CO;2). An inflow boundary condition at the top of the domain allows particles to flux in at a constant rate of 0.472 g m^-2 s^-1, forming a near-surface layer of tephra particles, which initially settle individually at the predicted Stokes velocity of 1.7 mm s^-1. As more tephra enters the water and the particle concentration increases, the layer eventually becomes unstable and plumes begin to form, descending with velocities more than ten times greater than those of individual
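
    As a rough check on the individual-particle regime quoted above, the Stokes settling velocity can be evaluated directly. The short sketch below uses assumed particle radius, densities and viscosity, since the abstract does not give them, and produces a value of the same order as the quoted 1.7 mm s^-1:

    ```python
    # Hedged sketch: Stokes settling velocity, v_s = (2/9) (rho_p - rho_f) g r^2 / mu,
    # evaluated for an assumed tephra particle (all parameter values are illustrative).
    def stokes_velocity(radius, rho_particle, rho_fluid, mu, g=9.81):
        return 2.0 / 9.0 * (rho_particle - rho_fluid) * g * radius ** 2 / mu

    if __name__ == "__main__":
        v = stokes_velocity(radius=20e-6,        # 20 micron particle (assumed)
                            rho_particle=2300.0, # tephra density, kg/m^3 (assumed)
                            rho_fluid=1000.0,    # water
                            mu=1.0e-3)           # water viscosity, Pa s
        print(f"Stokes settling velocity: {v * 1000:.2f} mm/s")
    ```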

  5. A learning heuristic for space mapping and searching self-organizing systems using adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Phillips, Carolyn L.

    2014-09-01

    In a complex self-organizing system, small changes in the interactions between the system's components can result in different emergent macrostructures or macrobehavior. In chemical engineering and material science, such spontaneously self-assembling systems, using polymers, nanoscale or colloidal-scale particles, DNA, or other precursors, are an attractive way to create materials that are precisely engineered at a fine scale. Changes to the interactions can often be described by a set of parameters. Different contiguous regions in this parameter space correspond to different ordered states. Since these ordered states are emergent, often experiment, not analysis, is necessary to create a diagram of ordered states over the parameter space. By issuing queries to points in the parameter space (e.g., performing a computational or physical experiment), ordered states can be discovered and mapped. Queries can be costly in terms of resources or time, however. In general, one would like to learn the most information using the fewest queries. Here we introduce a learning heuristic for issuing queries to map and search a two-dimensional parameter space. Using a method inspired by adaptive mesh refinement, the heuristic iteratively issues batches of queries to be executed in parallel based on past information. By adjusting the search criteria, different types of searches (for example, a uniform search, exploring boundaries, sampling all regions equally) can be flexibly implemented. We show that this method will densely search the space, while preferentially targeting certain features. Using numerical examples, including a study simulating the self-assembly of complex crystals, we show how this heuristic can discover new regions and map boundaries more accurately than a uniformly distributed set of queries.
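
    The core of the heuristic can be caricatured in a few lines: query the corners of a rectangle of parameter space and subdivide only where the returned labels disagree, so that queries concentrate along phase boundaries. The sketch below is a simplified stand-in with an invented oracle, not the heuristic of the paper (which batches queries and supports several search criteria):

    ```python
    # Hedged sketch: quadtree-style refinement of a 2-D parameter space, splitting a
    # cell only when its corner queries return different ordered states (labels).
    def oracle(x, y):
        return 1 if x * x + y * y < 0.5 else 0   # toy "phase diagram" stand-in

    def refine(x0, y0, x1, y1, depth, out):
        corners = [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]
        labels = {oracle(x, y) for x, y in corners}
        out.extend(corners)                       # record issued queries
        if len(labels) > 1 and depth > 0:         # boundary detected: subdivide
            xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
            for a, b, c, d in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                               (x0, ym, xm, y1), (xm, ym, x1, y1)]:
                refine(a, b, c, d, depth - 1, out)

    if __name__ == "__main__":
        queries = []
        refine(0.0, 0.0, 1.0, 1.0, depth=5, out=queries)
        print("queries issued:", len(queries))
    ```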

  6. Parallel computation of three-dimensional flows using overlapping grids with adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Henshaw, William D.; Schwendeman, Donald W.

    2008-08-01

    This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the accuracy of the numerical solutions is assessed quantitatively through an estimation of the errors from a grid convergence study. The parallel performance of the

  7. Parallel Computation of Three-Dimensional Flows using Overlapping Grids with Adaptive Mesh Refinement

    SciTech Connect

    Henshaw, W; Schwendeman, D

    2007-11-15

    This paper describes an approach for the numerical solution of time-dependent partial differential equations in complex three-dimensional domains. The domains are represented by overlapping structured grids, and block-structured adaptive mesh refinement (AMR) is employed to locally increase the grid resolution. In addition, the numerical method is implemented on parallel distributed-memory computers using a domain-decomposition approach. The implementation is flexible so that each base grid within the overlapping grid structure and its associated refinement grids can be independently partitioned over a chosen set of processors. A modified bin-packing algorithm is used to specify the partition for each grid so that the computational work is evenly distributed amongst the processors. All components of the AMR algorithm such as error estimation, regridding, and interpolation are performed in parallel. The parallel time-stepping algorithm is illustrated for initial-boundary-value problems involving a linear advection-diffusion equation and the (nonlinear) reactive Euler equations. Numerical results are presented for both equations to demonstrate the accuracy and correctness of the parallel approach. Exact solutions of the advection-diffusion equation are constructed, and these are used to check the corresponding numerical solutions for a variety of tests involving different overlapping grids, different numbers of refinement levels and refinement ratios, and different numbers of processors. The problem of planar shock diffraction by a sphere is considered as an illustration of the numerical approach for the Euler equations, and a problem involving the initiation of a detonation from a hot spot in a T-shaped pipe is considered to demonstrate the numerical approach for the reactive case. For both problems, the solutions are shown to be well resolved on the finest grid. The parallel performance of the approach is examined in detail for the shock diffraction problem.
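
    The partitioning step described above can be illustrated with a generic greedy bin-packing sketch: grids are sorted by workload and each is assigned to the currently least-loaded processor. This is a hedged stand-in for the modified bin-packing algorithm of the paper, with invented workloads:

    ```python
    # Hedged sketch: greedy largest-first assignment of grids to processors so that
    # the computational work is spread roughly evenly (not the paper's exact algorithm).
    import heapq

    def assign_grids(workloads, nprocs):
        """Return a processor index for each grid, largest grids placed first."""
        heap = [(0.0, p) for p in range(nprocs)]          # (current load, processor)
        heapq.heapify(heap)
        owner = [0] * len(workloads)
        for g in sorted(range(len(workloads)), key=lambda k: -workloads[k]):
            load, p = heapq.heappop(heap)                 # least-loaded processor
            owner[g] = p
            heapq.heappush(heap, (load + workloads[g], p))
        return owner

    if __name__ == "__main__":
        cells_per_grid = [120000, 45000, 45000, 30000, 8000, 8000, 4000]  # invented
        print(assign_grids(cells_per_grid, nprocs=3))
    ```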

  8. Projections of grounding line retreat in West Antarctica carried out with an adaptive mesh model

    NASA Astrophysics Data System (ADS)

    Cornford, Stephen; Payne, Antony; Martin, Daniel; Le Brocq, Anne

    2013-04-01

    Present and future sea level rise associated with mass loss from West Antarctica is typically attributed to marine glaciers retreating in response to a warming ocean. Warmer waters melt the floating ice shelves that restrain some, if not all, marine glaciers, and the glaciers themselves respond by speeding up. That leads to thinning and in turn to grounding line retreat. Satellite observations indicate that the Amundsen Sea Embayment and, in particular, Pine Island Glacier are undergoing this kind of dynamic change today. Numerical models, however, struggle to reproduce the observed behavior because either high resolution or some other kind of special treatment is required at the grounding line. We present 200-year projections of three major glacier systems of West Antarctica: those that drain into the Amundsen Sea, the Filchner-Ronne Ice Shelf and the Ross Ice Shelf. We do so using the newly developed BISICLES ice-sheet model, which employs adaptive mesh refinement to maintain sub-kilometer resolution close to the grounding line and coarser resolution elsewhere. Ice accumulation and ice-shelf melt rate are derived from a range of models of the Antarctic atmosphere and ocean forced by the SRES A1B and E1 scenarios. We find that a substantial proportion of the grounding line in West Antarctica retreats; however, the total sea level rise is less than 50 mm by 2100, and less than 100 mm by 2200. The lion's share of the mass loss is attributed to Pine Island Glacier, while its immediate neighbor, Thwaites Glacier, does not retreat until the end of the simulations.

  9. GAMMA-RAY BURST DYNAMICS AND AFTERGLOW RADIATION FROM ADAPTIVE MESH REFINEMENT, SPECIAL RELATIVISTIC HYDRODYNAMIC SIMULATIONS

    SciTech Connect

    De Colle, Fabio; Ramirez-Ruiz, Enrico; Granot, Jonathan; Lopez-Camara, Diego

    2012-02-20

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(-k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the

  10. Gamma-Ray Burst Dynamics and Afterglow Radiation from Adaptive Mesh Refinement, Special Relativistic Hydrodynamic Simulations

    NASA Astrophysics Data System (ADS)

    De Colle, Fabio; Granot, Jonathan; López-Cámara, Diego; Ramirez-Ruiz, Enrico

    2012-02-01

    We report on the development of Mezcal-SRHD, a new adaptive mesh refinement, special relativistic hydrodynamics (SRHD) code, developed with the aim of studying the highly relativistic flows in gamma-ray burst sources. The SRHD equations are solved using finite-volume conservative solvers, with second-order interpolation in space and time. The correct implementation of the algorithms is verified by one-dimensional (1D) and multi-dimensional tests. The code is then applied to study the propagation of 1D spherical impulsive blast waves expanding in a stratified medium with ρ ∝ r^(-k), bridging between the relativistic and Newtonian phases (which are described by the Blandford-McKee and Sedov-Taylor self-similar solutions, respectively), as well as to a two-dimensional (2D) cylindrically symmetric impulsive jet propagating in a constant density medium. It is shown that the deceleration to nonrelativistic speeds in one dimension occurs on scales significantly larger than the Sedov length. This transition is further delayed with respect to the Sedov length as the degree of stratification of the ambient medium is increased. This result, together with the scaling of position, Lorentz factor, and the shock velocity as a function of time and shock radius, is explained here using a simple analytical model based on energy conservation. The method used for calculating the afterglow radiation by post-processing the results of the simulations is described in detail. The light curves computed using the results of 1D numerical simulations during the relativistic stage correctly reproduce those calculated assuming the self-similar Blandford-McKee solution for the evolution of the flow. The jet dynamics from our 2D simulations and the resulting afterglow light curves, including the jet break, are in good agreement with those presented in previous works. Finally, we show how the details of the dynamics critically depend on properly resolving the structure of the relativistic flow.

  11. A 3D front tracking method on a CPU/GPU system

    SciTech Connect

    Bo, Wurigen; Grove, John

    2011-01-21

    We describe the method to port a sequential 3D interface tracking code to a GPU with CUDA. The interface is represented as a triangular mesh. Interface geometry properties and point propagation are performed on a GPU. Interface mesh adaptation is performed on a CPU. The convergence of the method is assessed from the test problems with given velocity fields. Performance results show overall speedups from 11 to 14 for the test problems under mesh refinement. We also briefly describe our ongoing work to couple the interface tracking method with a hydro solver.

  12. An object-oriented and quadrilateral-mesh based solution adaptive algorithm for compressible multi-fluid flows

    NASA Astrophysics Data System (ADS)

    Zheng, H. W.; Shu, C.; Chew, Y. T.

    2008-07-01

    In this paper, an object-oriented and quadrilateral-mesh based solution adaptive algorithm for the simulation of compressible multi-fluid flows is presented. The HLLC scheme (Harten, Lax and van Leer approximate Riemann solver with the contact wave restored) is extended to adaptively solve compressible multi-fluid flows under complex geometry on unstructured mesh. It is also extended to second-order accuracy by using MUSCL extrapolation. The nodes, edges and cells are arranged in such an object-oriented manner that each of them inherits from a basic object. A home-made doubly linked list is designed to manage these objects so that inserting new objects and removing existing objects (nodes, edges and cells) is independent of the number of objects, with O(1) complexity. In addition, the cells with different levels are further stored in different lists. This avoids the recursive calculation of the solution of mother (non-leaf) cells. Thus, high efficiency is obtained due to these features. Besides, compared to other cell-edge adaptive methods, the separation of nodes reduces the memory required for redundant nodes, especially in cases where the level number is large or the space dimension is three. Five two-dimensional examples are used to examine its performance. These examples include a vortex evolution problem, an interface-only problem under structured mesh and unstructured mesh, a bubble explosion under water, bubble-shock interaction, and shock-interface interaction inside a cylindrical vessel. Numerical results indicate that there is no oscillation of pressure and velocity across the interface and that it is feasible to apply the method to solve compressible multi-fluid flows with a large density ratio (1000) and a strong shock wave (pressure ratio of 10,000) interacting with the interface.

  13. Mesh generation and computational modeling techniques for bioimpedance measurements: an example using the VHP data

    NASA Astrophysics Data System (ADS)

    Danilov, A. A.; Salamatova, V. Yu; Vassilevski, Yu V.

    2012-12-01

    Here, a workflow for efficient high-resolution numerical modeling of bioimpedance measurements is suggested that includes 3D image segmentation, adaptive mesh generation, finite-element discretization, and the analysis of simulation results. Using adaptive unstructured tetrahedral meshes makes it possible to significantly decrease the number of mesh elements while keeping model accuracy. The numerical results illustrate current, potential, and sensitivity field distributions for a conventional Kubicek-like scheme of bioimpedance measurements using a segmented geometric model of the human torso based on Visible Human Project data. The whole-body VHP man computational mesh is constructed and contains 574 thousand vertices and 3.3 million tetrahedra.

  14. Computations of Unsteady Viscous Compressible Flows Using Adaptive Mesh Refinement in Curvilinear Body-fitted Grid Systems

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Modiano, David; Colella, Phillip

    1994-01-01

    A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.

  15. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    DOE PAGES

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2017-01-27

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Lastly, dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods.

  16. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    NASA Astrophysics Data System (ADS)

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2017-04-01

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods.
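
    The clustering effect of weight-driven relaxation can be illustrated with a deliberately simplified one-dimensional stand-in (weighted averaging of neighbouring nodes rather than weighted condition number optimization, and an analytic level set; none of this is the authors' algorithm):

    ```python
    # Hedged sketch: 1-D weighted relaxation. Cells acquire a large weight near an
    # "interface" at x = 0.5 (via a level-set distance), and at convergence the product
    # weight * cell_size equidistributes, so cells cluster near the interface.
    import numpy as np

    def weight(x, width=0.05):
        phi = x - 0.5                                     # signed distance to interface
        return 1.0 + 9.0 * np.exp(-(phi / width) ** 2)    # large weight near phi = 0

    def relax(x, iters=200):
        for _ in range(iters):
            w = weight(0.5 * (x[:-1] + x[1:]))            # one weight per cell
            # New interior node position: weighted average of neighbouring nodes.
            x[1:-1] = (w[:-1] * x[:-2] + w[1:] * x[2:]) / (w[:-1] + w[1:])
        return x

    if __name__ == "__main__":
        x = relax(np.linspace(0.0, 1.0, 41))
        print("smallest cell near interface:", float(np.diff(x).min()))
    ```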

  17. Simulations of recoiling black holes: adaptive mesh refinement and radiative transfer

    NASA Astrophysics Data System (ADS)

    Meliani, Zakaria; Mizuno, Yosuke; Olivares, Hector; Porth, Oliver; Rezzolla, Luciano; Younsi, Ziri

    2017-01-01

    Context. In many astrophysical phenomena, and especially in those that involve the high-energy regimes that always accompany the astronomical phenomenology of black holes and neutron stars, physical conditions that are achieved are extreme in terms of speeds, temperatures, and gravitational fields. In such relativistic regimes, numerical calculations are the only tool to accurately model the dynamics of the flows and the transport of radiation in the accreting matter. Aims: We here continue our effort of modelling the behaviour of matter when it orbits or is accreted onto a generic black hole by developing a new numerical code that employs advanced techniques geared towards solving the equations of general-relativistic hydrodynamics. Methods: More specifically, the new code employs a number of high-resolution shock-capturing Riemann solvers and reconstruction algorithms, exploiting the enhanced accuracy and the reduced computational cost of adaptive mesh-refinement (AMR) techniques. In addition, the code makes use of sophisticated ray-tracing libraries that, coupled with general-relativistic radiation-transfer calculations, allow us to accurately compute the electromagnetic emissions from such accretion flows. Results: We validate the new code by presenting an extensive series of stationary accretion flows either in spherical or axial symmetry that are performed either in two or three spatial dimensions. In addition, we consider the highly nonlinear scenario of a recoiling black hole produced in the merger of a supermassive black-hole binary interacting with the surrounding circumbinary disc. In this way, we can present for the first time ray-traced images of the shocked fluid and the light curve resulting from consistent general-relativistic radiation-transport calculations from this process. Conclusions: The work presented here lays the ground for the development of a generic computational infrastructure employing AMR techniques to accurately and self

  18. Cross-axis adaptation improves 3D vestibulo-ocular reflex alignment during chronic stimulation via a head-mounted multichannel vestibular prosthesis

    PubMed Central

    Dai, Chenkai; Fridman, Gene Y.; Chiang, Bryce; Davidovics, Natan; Melvin, Thuy-Anh; Cullen, Kathleen E.; Della Santina, Charles C.

    2012-01-01

    By sensing three-dimensional (3D) head rotation and electrically stimulating the three ampullary branches of a vestibular nerve to encode head angular velocity, a multichannel vestibular prosthesis (MVP) can restore vestibular sensation to individuals disabled by loss of vestibular hair cell function. However, current spread to afferent fibers innervating non-targeted canals and otolith endorgans can distort the vestibular nerve activation pattern, causing misalignment between the perceived and actual axis of head rotation. We hypothesized that over time, central neural mechanisms can adapt to correct this misalignment. To test this, we rendered five chinchillas vestibular-deficient via bilateral gentamicin treatment and unilaterally implanted them with a head mounted MVP. Comparison of 3D angular vestibulo-ocular reflex (aVOR) responses during 2 Hz, 50°/s peak horizontal sinusoidal head rotations in darkness on the first, third and seventh days of continual MVP use revealed that eye responses about the intended axis remained stable (at about 70% of the normal gain) while misalignment improved significantly by the end of one week of prosthetic stimulation. A comparable time course of improvement was also observed for head rotations about the other two semicircular canal axes and at every stimulus frequency examined (0.2–5 Hz). In addition, the extent of disconjugacy between the two eyes progressively improved during the same time window. These results indicate that the central nervous system rapidly adapts to multichannel prosthetic vestibular stimulation to markedly improve 3D aVOR alignment within the first week after activation. Similar adaptive improvements are likely to occur in other species, including humans. PMID:21374081

  19. Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates

    SciTech Connect

    Lipnikov, Konstantin; Agouzal, Abdellatif; Vassilevski, Yuri

    2009-01-01

    We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^(-1) and the gradient of the error is proportional to N_h^(-1/2), which are the optimal asymptotics. The methodology is verified with numerical experiments.
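
    The quoted asymptotics follow from standard interpolation theory for quasi-uniform planar triangulations (a standard argument recalled here, not reproduced from the paper):

    ```latex
    % With N_h triangles of diameter h in a planar domain, h \sim N_h^{-1/2}, hence for
    % piecewise-linear interpolation I_h of a smooth function u:
    \| u - I_h u \|_{\infty} \;\le\; C\, h^{2}\, \| D^{2} u \|_{\infty} \;\sim\; N_h^{-1},
    \qquad
    \| \nabla (u - I_h u) \|_{\infty} \;\le\; C\, h\, \| D^{2} u \|_{\infty} \;\sim\; N_h^{-1/2}.
    ```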

  20. A solution-adaptive mesh algorithm for dynamic/static refinement of two and three dimensional grids

    NASA Technical Reports Server (NTRS)

    Benson, Rusty A.; Mcrae, D. S.

    1991-01-01

    An adaptive grid algorithm has been developed in two and three dimensions that can be used dynamically with a solver or as part of a grid refinement process. The algorithm employs a transformation from the Cartesian coordinate system to a general coordinate space, which is defined as a parallelepiped in three dimensions. A weighting function, independent for each coordinate direction, is developed that will provide the desired refinement criteria in regions of high solution gradient. The adaptation is performed in the general coordinate space and the new grid locations are returned to the Cartesian space via a simple, one-step inverse mapping. The algorithm for relocation of the mesh points in the parametric space is based on the center of mass for distributed weights. Dynamic solution-adaptive results are presented for laminar flows in two and three dimensions.

  1. XML3D and Xflow: combining declarative 3D for the Web with generic data flows.

    PubMed

    Klein, Felix; Sons, Kristian; Rubinstein, Dmitri; Slusallek, Philipp

    2013-01-01

    Researchers have combined XML3D, which provides declarative, interactive 3D scene descriptions based on HTML5, with Xflow, a language for declarative, high-performance data processing. The result lets Web developers combine a 3D scene graph with data flows for dynamic meshes, animations, image processing, and postprocessing.

  2. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    NASA Astrophysics Data System (ADS)

    Greene, Patrick; Schofield, Sam; Nourgaliev, Robert

    2016-11-01

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin (DG) projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. The method retains the excellent smoothing capabilities of condition number relaxation, while providing a method for clustering mesh cells near regions of interest. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  3. A new type of color-coded light structures for an adapted and rapid determination of point correspondences for 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Caulier, Yannick; Bernhard, Luc; Spinnler, Klaus

    2011-05-01

    This paper proposes a new type of color-coded light structures for the inspection of complex moving objects. The novelty of the method lies in the generation of free-form color patterns, permitting the projection of color structures adapted to the geometry of the surfaces to be characterized. The point correspondence determination algorithm consists of a stepwise procedure involving simple and computationally fast methods. The algorithm is therefore robust against varying recording conditions typically arising in real-time quality control environments and can be integrated for industrial inspection purposes. The proposed approach is validated and compared on the basis of different experiments concerning 3D surface reconstruction by projecting adapted spatial color-coded patterns. It is demonstrated that, for certain inspection requirements, the method permits coding more reference points than similar color-coded matrix methods.

  4. Postprocessing of compressed 3D graphic data by using subdivision

    NASA Astrophysics Data System (ADS)

    Cheang, Ka Man; Li, Jiankun; Kuo, C.-C. Jay

    1998-10-01

    In this work, we present a postprocessing technique applied to a 3D graphic model of a lower resolution to obtain a visually more pleasant representation. Our method is an improved version of the Butterfly subdivision scheme developed by Zorin et al. Our main contribution is to exploit the flatness information of local areas of a 3D graphic model for adaptive refinement. Consequently, we can avoid unnecessary subdivision in regions which are relatively flat. The proposed new algorithm not only reduces the computational complexity but also saves the storage space. With the hierarchical mesh compression method developed by Li and Kuo as the baseline coding method, we show that the postprocessing technique can greatly improve the visual quality of the decoded 3D graphic model.
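
    For context, the scheme being improved positions each new edge point from eight neighbouring vertices using the classical butterfly stencil; the regular-vertex rule is recalled below (standard form, assumed here; the modified rules for extraordinary vertices and this paper's flatness-based adaptivity are not shown):

    ```latex
    % Standard butterfly stencil for a new edge point in a regular region.
    % v_1, v_2: edge endpoints; v_3, v_4: opposite vertices of the two incident
    % triangles; v_5, ..., v_8: the four outer "wing" vertices.
    p_{\text{new}} \;=\; \tfrac{1}{2}(v_1 + v_2) \;+\; \tfrac{1}{8}(v_3 + v_4)
                    \;-\; \tfrac{1}{16}(v_5 + v_6 + v_7 + v_8)
    ```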

  5. Robust hashing for 3D models

    NASA Astrophysics Data System (ADS)

    Berchtold, Waldemar; Schäfer, Marcel; Rettig, Michael; Steinebach, Martin

    2014-02-01

    3D models and applications are of utmost interest in both science and industry. As their usage increases, so do their number and thereby the challenge of correctly identifying them. Content identification is commonly done by cryptographic hashes. However, they fail as a solution in application scenarios such as computer aided design (CAD), scientific visualization or video games, because even the smallest alteration of the 3D model, e.g. a conversion or compression operation, massively changes the cryptographic hash as well. Therefore, this work presents a robust hashing algorithm for 3D mesh data. The algorithm applies several different bit extraction methods. They are built to resist desired alterations of the model as well as malicious attacks intended to prevent correct allocation. The different bit extraction methods are tested against each other and, as far as possible, the hashing algorithm is compared to the state of the art. The parameters tested are robustness, security and runtime performance as well as the False Acceptance Rate (FAR) and False Rejection Rate (FRR); the probability of hash collision is also calculated. The introduced hashing algorithm is kept adaptive, e.g. in hash length, to serve as a proper tool for a wide range of practical applications.
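
    One way such a bit-extraction step can work is sketched below: normalise the vertex cloud, histogram the vertex distances from the centroid, and derive one bit per bin by comparison with the median bin count. This is a generic, hedged illustration and not the algorithm of the paper; the bin count and thresholding rule are assumptions:

    ```python
    # Hedged sketch: one possible robust bit-extraction step for a 3-D mesh hash,
    # invariant to translation and uniform scaling and tolerant of small vertex noise.
    import numpy as np

    def robust_mesh_hash(vertices, nbits=32):
        v = np.asarray(vertices, dtype=float)
        v = v - v.mean(axis=0)                         # translation invariance
        d = np.linalg.norm(v, axis=1)
        d = d / (d.max() + 1e-12)                      # scale invariance
        hist, _ = np.histogram(d, bins=nbits, range=(0.0, 1.0))
        bits = (hist > np.median(hist)).astype(int)    # one bit per bin
        return "".join(map(str, bits))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        mesh = rng.normal(size=(5000, 3))                    # stand-in vertex cloud
        noisy = mesh + 0.01 * rng.normal(size=mesh.shape)    # mild distortion
        h1, h2 = robust_mesh_hash(mesh), robust_mesh_hash(noisy)
        print(sum(a != b for a, b in zip(h1, h2)), "differing bits out of", len(h1))
    ```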

  6. Infrastructure for 3D Imaging Test Bed

    DTIC Science & Technology

    2007-05-11

    analysis. (c) Real-time detection and analysis of human gait: using a video camera we capture the walking human silhouette for pattern modeling and gait analysis. Fig. 5 shows the scanning result that is fed into a Geomagic software tool for 3D meshing. [Fig. 5: 3D scanning result]

  7. Evaluation of a prototype 3D ultrasound system for multimodality imaging of cervical nodes for adaptive radiation therapy

    NASA Astrophysics Data System (ADS)

    Fraser, Danielle; Fava, Palma; Cury, Fabio; Vuong, Te; Falco, Tony; Verhaegen, Frank

    2007-03-01

    Sonography has good topographic accuracy for superficial lymph node assessment in patients with head and neck cancers. It is therefore an ideal non-invasive tool for precise inter-fraction volumetric analysis of enlarged cervical nodes. In addition, when registered with computed tomography (CT) images, ultrasound information may improve target volume delineation and facilitate image-guided adaptive radiation therapy. A feasibility study was developed to evaluate the use of a prototype ultrasound system capable of three dimensional visualization and multi-modality image fusion for cervical node geometry. A ceiling-mounted optical tracking camera recorded the position and orientation of a transducer in order to synchronize the transducer's position with respect to the room's coordinate system. Tracking systems were installed in both the CT-simulator and radiation therapy treatment rooms. Serial images were collected at the time of treatment planning and at subsequent treatment fractions. Volume reconstruction was performed by generating surfaces around contours. The quality of the spatial reconstruction and semi-automatic segmentation was highly dependent on the system's ability to track the transducer throughout each scan procedure. The ultrasound information provided enhanced soft tissue contrast and facilitated node delineation. Manual segmentation was the preferred method to contour structures due to their sonographic topography.

  8. Optimization of multiple turbine arrays in a channel with tidally reversing flow by numerical modelling with adaptive mesh.

    PubMed

    Divett, T; Vennell, R; Stevens, C

    2013-02-28

    At tidal energy sites, large arrays of hundreds of turbines will be required to generate economically significant amounts of energy. Owing to wake effects within the array, the placement of turbines within it will be vital to capturing the maximum energy from the resource. This study presents preliminary results using Gerris, an adaptive mesh flow solver, to investigate the flow through four different arrays of 15 turbines each. The goal is to optimize the position of turbines within an array in an idealized channel. The turbines are represented as areas of increased bottom friction in an adaptive mesh model so that the flow and power capture in tidally reversing flow through large arrays can be studied. The effect of oscillating tides is studied, with interesting dynamics generated as the tidal current reverses direction, forcing turbulent flow through the array. The energy removed from the flow by each of the four arrays is compared over a tidal cycle. A staggered array is found to extract 54 per cent more energy than a non-staggered array. Furthermore, an array positioned to one side of the channel is found to remove a similar amount of energy compared with an array in the centre of the channel.

  9. GEN3D Ver. 1.37

    SciTech Connect

    2012-01-04

    GEN3D is a three-dimensional mesh generation program. The three-dimensional mesh is generated by mapping a two-dimensional mesh into three dimensions according to one of four types of transformations: translating, rotating, mapping onto a spherical surface, and mapping onto a cylindrical surface. The generated three-dimensional mesh can then be reoriented by offsetting, reflecting about an axis, and revolving about an axis. GEN3D can be used to mesh geometries that are axisymmetric or planar, but, due to three-dimensional loading or boundary conditions, require a three-dimensional finite element mesh and analysis. More importantly, it can be used to mesh complex three-dimensional geometries composed of several sections when the sections can be defined in terms of transformations of two-dimensional geometries. The code GJOIN is then used to join the separate sections into a single body. GEN3D reads and writes two-dimensional and three-dimensional mesh databases in the GENESIS database format; therefore, it is compatible with the preprocessing, postprocessing, and analysis codes used by the Engineering Analysis Department at Sandia National Laboratories, Albuquerque, NM.
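
    As a rough illustration of one of the transformation types (revolving a two-dimensional mesh about an axis), the sketch below sweeps 2-D node coordinates into 3-D. It is not GEN3D's implementation, and all names are hypothetical; GEN3D additionally handles element connectivity, translations, and spherical/cylindrical mappings.

```python
import numpy as np

def revolve_mesh_2d(nodes_xy, n_theta=8, total_angle=2 * np.pi):
    """Sweep 2-D nodes (x = radius, y = height) about the y-axis,
    producing n_theta copies of the planar node set at equally spaced angles.
    For a full revolution the duplicate end layer is omitted."""
    angles = np.linspace(0.0, total_angle, n_theta, endpoint=total_angle < 2 * np.pi)
    r, z = nodes_xy[:, 0], nodes_xy[:, 1]
    layers = [np.column_stack((r * np.cos(t), r * np.sin(t), z)) for t in angles]
    return np.vstack(layers)              # (n_theta * n_nodes, 3)

# toy usage: a 3-node planar section revolved into 8 angular positions
section = np.array([[1.0, 0.0], [2.0, 0.0], [1.5, 1.0]])
print(revolve_mesh_2d(section).shape)     # (24, 3)
```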

  10. Development 3D model of adaptation of the Azerbaijan coastal zone at the various levels of Caspian Sea

    NASA Astrophysics Data System (ADS)

    Mammadov, Ramiz

    2013-04-01

    coastal areas at hydraulic engineering projects, the sea level should be considered as a multistage process, which we have taken into account in developing the adaptation of the coastal zone. An exact three-dimensional map of the coastal zone has been created. For different scenario sea levels, for example -30.0, -29.0, -28.0, -27.0, -26.0, -25.0 and -24.0, exact coastlines have been determined. Maps of the vegetation cover, soil, and socio-economic and ecological conditions have been developed for each level, and the respective alterations have been determined. The most vulnerable coastal zones, flooded areas and socio-economic damage were estimated.

  11. From Monotonous Hop-and-Sink Swimming to Constant Gliding via Chaotic Motions in 3D: Is There Adaptive Behavior in Planktonic Micro-Crustaceans?

    NASA Astrophysics Data System (ADS)

    Strickler, J. R.

    2007-12-01

    Planktonic micro-crustaceans, such as Daphnia, Copepoda, and Cyclops, swim in the 3D environment of water and feed on suspended material, mostly algae and bacteria. Their mechanisms for swimming differ; some use their swimming legs to produce one hop per second resulting in a speed of one body-length per second, while others scan water volumes with their mouthparts and glide through the water column at 1 to 10 body-lengths per second. However, our observations show that these speeds are modulated. The question to be discussed is whether or not these modulations show adaptive behavior, taking food quality and food abundance as criteria for the swimming performance. Additionally, we investigated the degree to which these temporal motion patterns depend on the sizes, and therefore on the Reynolds numbers, of the animals.

  12. Boundary element solutions for broad-band 3-D geo-electromagnetic problems accelerated by an adaptive multilevel fast multipole method

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Kalscheuer, Thomas; Greenhalgh, Stewart; Maurer, Hansruedi

    2013-02-01

    We have developed a generalized and stable surface integral formula for 3-D uniform inducing field and plane wave electromagnetic induction problems, which works reliably over a wide frequency range. Vector surface electric currents and magnetic currents, scalar surface electric charges and magnetic charges are treated as the variables. This surface integral formula is successfully applied to compute the electromagnetic responses of 3-D topography to low frequency magnetotelluric and high frequency radio-magnetotelluric fields. The standard boundary element method which is used to solve this surface integral formula quickly exceeds the memory capacity of modern computers for problems involving hundreds of thousands of unknowns. To make the surface integral formulation applicable and capable of dealing with large-scale 3-D geo-electromagnetic problems, we have developed a matrix-free adaptive multilevel fast multipole boundary element solver. By means of the fast multipole approach, the time-complexity of solving the final system of linear equations is reduced to O(m log m) and the memory cost is reduced to O(m), where m is the number of unknowns. The analytical solutions for a half-space model were used to verify our numerical solutions over the frequency range 0.001-300 kHz. In addition, our numerical solution shows excellent agreement with a published numerical solution for an edge-based finite-element method on a trapezoidal hill model at a frequency of 2 Hz. Then, a high frequency simulation for a similar trapezoidal hill model was used to study the effects of displacement currents in the radio-magnetotelluric frequency range. Finally, the newly developed algorithm was applied to study the effect of moderate topography and to evaluate the applicability of a 2-D RMT inversion code that assumes a flat air-Earth interface, on RMT field data collected at Smørgrav, southern Norway. This paper constitutes the first part of a hybrid boundary element-finite element

  13. 3-D transient analysis of pebble-bed HTGR by TORT-TD/ATTICA3D

    SciTech Connect

    Seubert, A.; Sureda, A.; Lapins, J.; Buck, M.; Bader, J.; Laurien, E.

    2012-07-01

    As most of the acceptance criteria are local core parameters, application of transient 3-D fine mesh neutron transport and thermal hydraulics coupled codes is mandatory for best estimate evaluations of safety margins. This also applies to high-temperature gas cooled reactors (HTGR). Application of 3-D fine-mesh transient transport codes using few energy groups coupled with 3-D thermal hydraulics codes becomes feasible in view of increasing computing power. This paper describes the discrete ordinates based coupled code system TORT-TD/ATTICA3D that has recently been extended by a fine-mesh diffusion solver. Based on transient analyses for the PBMR-400 design, the transport/diffusion capabilities are demonstrated and 3-D local flux and power redistribution effects during a partial control rod withdrawal are shown. (authors)

  14. Interactive 3d Landscapes on Line

    NASA Astrophysics Data System (ADS)

    Fanini, B.; Calori, L.; Ferdani, D.; Pescarin, S.

    2011-09-01

    The paper describes challenges identified while developing browser-embedded 3D landscape rendering applications, our current approach and workflow, and how recent developments in browser technologies could affect them. All the data, even after processing with optimization and decimation tools, result in very large databases that require paging, streaming and level-of-detail techniques to allow remote, web-based, real-time use. Our approach has been to select an open-source, scene-graph-based visual simulation library with sufficient performance and flexibility and to adapt it to the web by providing a browser plug-in. Within the current Montegrotto VR Project, content produced with new pipelines has been integrated. The whole town of Montegrotto has been generated procedurally by CityEngine. We used this procedural approach, based on algorithms and procedures, because it is particularly well suited to creating extensive and credible urban reconstructions. To create the archaeological sites we used optimized meshes acquired with laser scanning and photogrammetry techniques, whereas to realize the 3D reconstructions of the main historical buildings we adopted computer-graphics software such as Blender and 3ds Max. At the final stage, semi-automatic tools have been developed and used to prepare and cluster 3D models and scene-graph routes for web publishing. Vegetation generators have also been used to populate the virtual scene and enhance the realism perceived by the user during navigation. After the description of 3D modelling and optimization techniques, the paper focuses on and discusses its results and expectations.

  15. An adaptive 3D region growing algorithm to automatically segment and identify thoracic aorta and its centerline using computed tomography angiography scans

    NASA Astrophysics Data System (ADS)

    Ferreira, F.; Dehmeshki, J.; Amin, H.; Dehkordi, M. E.; Belli, A.; Jouannic, A.; Qanadli, S.

    2010-03-01

    Thoracic Aortic Aneurysm (TAA) is a localized swelling of the thoracic aorta. The progressive growth of an aneurysm may eventually cause a rupture if not diagnosed or treated. This necessitates an accurate measurement, which in turn calls for accurate segmentation of the aneurysm regions. Computer Aided Detection (CAD) is a tool to automatically detect and segment the TAA in computed tomography angiography (CTA) images. The fundamental step in developing such a system is a robust method for detecting the main vessel and measuring its diameters. In this paper we propose a novel adaptive method to simultaneously segment the thoracic aorta and identify its centerline. For this purpose, an adaptive parametric 3D region growing is proposed in which the seed is automatically selected through the detection of the celiac artery and the parameters of the method are re-estimated while the region grows through the aorta. At each phase of region growing the initial centerline of the aorta is also identified and refined. Thus the proposed method simultaneously detects the aorta and identifies its centerline. The method has been applied to CT images from 20 patients, with good agreement with the visual assessment by two radiologists.
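
    A minimal sketch of adaptive region growing in 3-D is given below to make the idea concrete. It does not reproduce the celiac-artery seed selection or the centerline extraction of the paper, and the intensity-statistics update rule shown is an assumption.

```python
import numpy as np
from collections import deque

def adaptive_region_grow(volume, seed, k=2.5, update_every=500):
    """Grow a region from `seed` in a 3-D intensity volume, accepting voxels
    whose intensity lies within mean +/- k*std of the region grown so far.
    The statistics are re-estimated as the region grows (the 'adaptive' part)."""
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    values = [float(volume[seed])]
    z0, y0, x0 = seed                      # initial statistics from a small neighbourhood
    nb = volume[max(z0-1, 0):z0+2, max(y0-1, 0):y0+2, max(x0-1, 0):x0+2]
    mean, std = float(nb.mean()), float(nb.std()) + 1e-6
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            p = (z + dz, y + dy, x + dx)
            if all(0 <= p[i] < volume.shape[i] for i in range(3)) and not mask[p]:
                if abs(volume[p] - mean) <= k * std:
                    mask[p] = True
                    values.append(float(volume[p]))
                    queue.append(p)
                    if len(values) % update_every == 0:   # re-estimate parameters
                        mean, std = np.mean(values), np.std(values) + 1e-6
    return mask

# toy usage: a bright tube of intensity ~1 inside a dark, noisy background
vol = np.zeros((40, 40, 40)); vol[:, 18:22, 18:22] = 1.0
vol += 0.05 * np.random.default_rng(0).standard_normal(vol.shape)
print(adaptive_region_grow(vol, seed=(20, 20, 20)).sum())
```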

  16. A novel adaptive biogeochemical model, and its 3-D application for a decadal hindcast simulation of the biogeochemistry of the southern North Sea

    NASA Astrophysics Data System (ADS)

    Kerimoglu, Onur; Hofmeister, Richard; Wirtz, Kai

    2016-04-01

    Adaptation and acclimation processes are often ignored in ecosystem-scale model implementations, despite the long-standing recognition of their importance. Here we present a novel adaptive phytoplankton growth model where acclimation of the community to the changes in external resource ratios is accounted for, using optimality principles and dynamic physiological traits. We show that the model can reproduce the internal stoichiometries obtained at marginal supply ratios in chemostat experiments. The model is applied in a decadal hindcast simulation of the southern North Sea, where it is coupled to a 2-D benthic model and a 3-D hydrodynamic model with approximately 1.5 km horizontal resolution at the German Bight coast. The model is shown to have good skill in capturing the steep, coastal gradients in the German Bight, as suggested by the match between the estimated and observed dissolved nutrient and chlorophyll concentrations. We then analyze the differential sensitivity of the coastal and off-shore zones to major drivers of the system, such as riverine nutrient loads. We demonstrate that the relevance of phytoplankton acclimation varies across coastal gradients and can become particularly significant in terms of summer nutrient depletion.

  17. Conference Proceedings of Applications of Mesh Generation to Complex 3-D Configurations Held at the Specialists’ Meeting of the Fluid Dynamics Panel in Leon, Norway on 24th-25th May 1989

    DTIC Science & Technology

    1990-03-01

    Flow data computed on type-2 meshes replaces that computed on all meshes lower down the hierarchy. Finally, type-3 meshes head the hierarchy.

  18. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    SciTech Connect

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-02-15

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al. [Phys. Rev. D 78, 123524 (2008)] and Schmidt et al. [Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k ≈ 20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  19. 3D reconstruction of SEM images by use of optical photogrammetry software.

    PubMed

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science, and many biological questions require information about the true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscopic reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close range photogrammetry. In optical close range photogrammetry a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching.

  20. TACO3D. 3-D Finite Element Heat Transfer Code

    SciTech Connect

    Mason, W.E.

    1992-03-04

    TACO3D is a three-dimensional, finite-element program for heat transfer analysis. An extension of the two-dimensional TACO program, it can perform linear and nonlinear analyses and can be used to solve either transient or steady-state problems. The program accepts time-dependent or temperature-dependent material properties, and materials may be isotropic or orthotropic. A variety of time-dependent and temperature-dependent boundary conditions and loadings are available including temperature, flux, convection, and radiation boundary conditions and internal heat generation. Additional specialized features treat enclosure radiation, bulk nodes, and master/slave internal surface conditions (e.g., contact resistance). Data input via a free-field format is provided. A user subprogram feature allows for any type of functional representation of any independent variable. A profile (bandwidth) minimization option is available. The code is limited to implicit time integration for transient solutions. TACO3D has no general mesh generation capability. Rows of evenly-spaced nodes and rows of sequential elements may be generated, but the program relies on separate mesh generators for complex zoning. TACO3D does not have the ability to calculate view factors internally. Graphical representation of data in the form of time history and spatial plots is provided through links to the POSTACO and GRAPE postprocessor codes.

  1. Adaptive-optics SLO imaging combined with widefield OCT and SLO enables precise 3D localization of fluorescent cells in the mouse retina.

    PubMed

    Zawadzki, Robert J; Zhang, Pengfei; Zam, Azhar; Miller, Eric B; Goswami, Mayank; Wang, Xinlei; Jonnal, Ravi S; Lee, Sang-Hyuck; Kim, Dae Yu; Flannery, John G; Werner, John S; Burns, Marie E; Pugh, Edward N

    2015-06-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has recently been used to achieve exquisite subcellular resolution imaging of the mouse retina. Wavefront sensing-based AO typically restricts the field of view to a few degrees of visual angle. As a consequence the relationship between AO-SLO data and larger scale retinal structures and cellular patterns can be difficult to assess. The retinal vasculature affords a large-scale 3D map on which cells and structures can be located during in vivo imaging. Phase-variance OCT (pv-OCT) can efficiently image the vasculature with near-infrared light in a label-free manner, allowing 3D vascular reconstruction with high precision. We combined widefield pv-OCT and SLO imaging with AO-SLO reflection and fluorescence imaging to localize two types of fluorescent cells within the retinal layers: GFP-expressing microglia, the resident macrophages of the retina, and GFP-expressing cone photoreceptor cells. We describe in detail a reflective afocal AO-SLO retinal imaging system designed for high resolution retinal imaging in mice. The optical performance of this instrument is compared to other state-of-the-art AO-based mouse retinal imaging systems. The spatial and temporal resolution of the new AO instrumentation was characterized with angiography of retinal capillaries, including blood-flow velocity analysis. Depth-resolved AO-SLO fluorescent images of microglia and cone photoreceptors are visualized in parallel with 469 nm and 663 nm reflectance images of the microvasculature and other structures. Additional applications of the new instrumentation are discussed.

  2. Adaptive-optics SLO imaging combined with widefield OCT and SLO enables precise 3D localization of fluorescent cells in the mouse retina

    PubMed Central

    Zawadzki, Robert J.; Zhang, Pengfei; Zam, Azhar; Miller, Eric B.; Goswami, Mayank; Wang, Xinlei; Jonnal, Ravi S.; Lee, Sang-Hyuck; Kim, Dae Yu; Flannery, John G.; Werner, John S.; Burns, Marie E.; Pugh, Edward N.

    2015-01-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has recently been used to achieve exquisite subcellular resolution imaging of the mouse retina. Wavefront sensing-based AO typically restricts the field of view to a few degrees of visual angle. As a consequence the relationship between AO-SLO data and larger scale retinal structures and cellular patterns can be difficult to assess. The retinal vasculature affords a large-scale 3D map on which cells and structures can be located during in vivo imaging. Phase-variance OCT (pv-OCT) can efficiently image the vasculature with near-infrared light in a label-free manner, allowing 3D vascular reconstruction with high precision. We combined widefield pv-OCT and SLO imaging with AO-SLO reflection and fluorescence imaging to localize two types of fluorescent cells within the retinal layers: GFP-expressing microglia, the resident macrophages of the retina, and GFP-expressing cone photoreceptor cells. We describe in detail a reflective afocal AO-SLO retinal imaging system designed for high resolution retinal imaging in mice. The optical performance of this instrument is compared to other state-of-the-art AO-based mouse retinal imaging systems. The spatial and temporal resolution of the new AO instrumentation was characterized with angiography of retinal capillaries, including blood-flow velocity analysis. Depth-resolved AO-SLO fluorescent images of microglia and cone photoreceptors are visualized in parallel with 469 nm and 663 nm reflectance images of the microvasculature and other structures. Additional applications of the new instrumentation are discussed. PMID:26114038

  3. Polyhedral shape model for terrain correction of gravity and gravity gradient data based on an adaptive mesh

    NASA Astrophysics Data System (ADS)

    Guo, Zhikui; Chen, Chao; Tao, Chunhui

    2016-04-01

    Since 2007, four China Dayang cruises (CDCs) have been carried out to investigate polymetallic sulfides on the southwest Indian ridge (SWIR) and have acquired both gravity data and bathymetry data on the corresponding survey lines (Tao et al., 2014). Sandwell et al. (2014) published a new global marine gravity model including the free-air gravity data and its first-order vertical gradient (Vzz). Gravity data and its gradient can be used to extract unknown density structure information (e.g. crust thickness) beneath the surface of the Earth, but they contain the effect of all mass below the observation point. Therefore, how to obtain the accurate gravity and gradient effect of the existing density structure (e.g. terrain) has been a key issue. Using the bathymetry data or the ETOPO1 (http://www.ngdc.noaa.gov/mgg/global/global.html) model at full resolution to calculate the terrain effect would require too much computation time. We expect to develop an effective method that takes less time but can still yield the desired accuracy. In this study, a constant-density polyhedral model is used to calculate the gravity field and its vertical gradient, based on the work of Tsoulis (2012). According to the attenuation of the gravity field with distance and the variance of the bathymetry, we present adaptive mesh refinement and coarsening strategies to merge both global topography data and multi-beam bathymetry data. The local coarsening, or size of the mesh, depends on a user-defined accuracy and the terrain variation (Davis et al., 2011). To depict the terrain better, triangular and rectangular surface elements are used in the fine and coarse meshes, respectively. This strategy can also be applied in spherical coordinates to large regions and the global scale. Finally, we applied this method to calculate the Bouguer gravity anomaly (BGA), mantle Bouguer anomaly (MBA) and their vertical gradients on the SWIR, and we compared the result with previous results in the literature. Both synthetic model
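
    To illustrate the adaptive-mesh idea (fine cells near the observation point, coarse cells far away), here is a much simplified point-mass sketch. The polyhedral gravity formula of Tsoulis (2012) and the triangular/rectangular surface elements of the paper are not implemented, and the refinement criterion shown here is an assumption.

```python
import numpy as np

G = 6.674e-11   # gravitational constant, SI units

def terrain_gz_adaptive(bathy, dx, obs, rho=2670.0, ratio=4.0):
    """Vertical gravity effect of a bathymetry grid at observation point `obs`.
    Cells are merged into 2x2 blocks (coarsened) whenever the distance to the
    observation point exceeds `ratio` times the block size; near cells stay fine.
    Each cell or block is approximated by a point mass at its centre, mid-depth."""
    ny, nx = bathy.shape
    gz = 0.0
    for j in range(0, ny, 2):
        for i in range(0, nx, 2):
            block = bathy[j:j+2, i:i+2]
            cx, cy = (i + block.shape[1] / 2) * dx, (j + block.shape[0] / 2) * dx
            dist = np.hypot(cx - obs[0], cy - obs[1])
            if dist > ratio * 2 * dx:                    # far: one coarse point mass
                cells = [(cx, cy, block.mean(), block.size)]
            else:                                        # near: keep fine cells
                cells = [((i + ii + 0.5) * dx, (j + jj + 0.5) * dx, block[jj, ii], 1)
                         for jj in range(block.shape[0]) for ii in range(block.shape[1])]
            for x, y, h, n in cells:
                mass = rho * n * dx * dx * abs(h)
                r = np.sqrt((x - obs[0])**2 + (y - obs[1])**2 + (h / 2 - obs[2])**2)
                gz += G * mass * abs(h / 2 - obs[2]) / r**3
    return gz

# toy usage: 1 km cells, observation point 100 m above the grid centre
bathy = -2000.0 + 100.0 * np.random.default_rng(0).standard_normal((64, 64))
print(terrain_gz_adaptive(bathy, dx=1000.0, obs=(32e3, 32e3, 100.0)))
```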

  4. 3D Printing: 3D Printing of Highly Stretchable and Tough Hydrogels into Complex, Cellularized Structures.

    PubMed

    Hong, Sungmin; Sycks, Dalton; Chan, Hon Fai; Lin, Shaoting; Lopez, Gabriel P; Guilak, Farshid; Leong, Kam W; Zhao, Xuanhe

    2015-07-15

    X. Zhao and co-workers develop on page 4035 a new biocompatible hydrogel system that is extremely tough and stretchable and can be 3D printed into complex structures, such as the multilayer mesh shown. Cells encapsulated in the tough and printable hydrogel maintain high viability. 3D-printed structures of the tough hydrogel can sustain high mechanical loads and deformations.

  5. Lyapunov exponents and adaptive mesh refinement for high-speed flows using a discontinuous Galerkin scheme

    NASA Astrophysics Data System (ADS)

    Moura, R. C.; Silva, A. F. C.; Bigarella, E. D. V.; Fazenda, A. L.; Ortega, M. A.

    2016-08-01

    This paper proposes two important improvements to shock-capturing strategies using a discontinuous Galerkin scheme, namely, accurate shock identification via finite-time Lyapunov exponent (FTLE) operators and efficient shock treatment through a point-implicit discretization of a PDE-based artificial viscosity technique. The advocated approach is based on the FTLE operator, originally developed in the context of dynamical systems theory to identify certain types of coherent structures in a flow. We propose the application of FTLEs in the detection of shock waves and demonstrate the operator's ability to identify strong and weak shocks equally well. The detection algorithm is coupled with a mesh refinement procedure and applied to transonic and supersonic flows. While the proposed strategy can be used potentially with any numerical method, a high-order discontinuous Galerkin solver is used in this study. In this context, two artificial viscosity approaches are employed to regularize the solution near shocks: an element-wise constant viscosity technique and a PDE-based smooth viscosity model. As the latter approach is more sophisticated and preferable for complex problems, a point-implicit discretization in time is proposed to reduce the extra stiffness introduced by the PDE-based technique, making it more competitive in terms of computational cost.
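
    A small sketch of the FTLE computation itself may help. It uses a simple analytic velocity field and finite differences of the flow map on a particle grid; it is not the discontinuous Galerkin shock-detection machinery of the paper.

```python
import numpy as np

def velocity(x, y):
    """Toy steady cellular velocity field, used only for illustration."""
    u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
    v = np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
    return u, v

def ftle_field(nx=101, ny=51, T=1.0, nsteps=100):
    """Advect a grid of particles for time T, then take finite differences of the
    flow map to form the Cauchy-Green tensor C = F^T F; the FTLE is
    log(sqrt(lambda_max(C))) / |T|."""
    x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
    px, py = x.copy(), y.copy()
    dt = T / nsteps
    for _ in range(nsteps):                      # forward Euler particle advection
        u, v = velocity(px, py)
        px, py = px + dt * u, py + dt * v
    dx, dy = x[0, 1] - x[0, 0], y[1, 0] - y[0, 0]
    dpx_dx, dpx_dy = np.gradient(px, dx, axis=1), np.gradient(px, dy, axis=0)
    dpy_dx, dpy_dy = np.gradient(py, dx, axis=1), np.gradient(py, dy, axis=0)
    # largest eigenvalue of the symmetric 2x2 tensor C built from F
    a, b, c, d = dpx_dx, dpx_dy, dpy_dx, dpy_dy
    c11, c12, c22 = a*a + c*c, a*b + c*d, b*b + d*d
    lam_max = 0.5 * (c11 + c22 + np.sqrt((c11 - c22)**2 + 4 * c12**2))
    return np.log(np.sqrt(lam_max)) / abs(T)

print(ftle_field().max())   # ridges of large FTLE mark strongly stretching regions
```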

  6. Robust, multidimensional mesh motion based on Monge-Kantorovich equidistribution

    SciTech Connect

    Chacon De La Rosa, Luis; Delzanno, Gian Luca; Finn, John M.

    2011-01-01

    Mesh-motion (r-refinement) grid adaptivity schemes are attractive due to their potential to minimize the numerical error for a prescribed number of degrees of freedom. However, a key roadblock to a widespread deployment of this class of techniques has been the formulation of robust, reliable mesh-motion governing principles, which (1) guarantee a solution in multiple dimensions (2D and 3D), (2) avoid grid tangling (or folding of the mesh, whereby edges of a grid cell cross somewhere in the domain), and (3) can be solved effectively and efficiently. In this study, we formulate such a mesh-motion governing principle, based on volume equidistribution via Monge-Kantorovich optimization (MK). In earlier publications [1] and [2], the advantages of this approach with regard to these points have been demonstrated for the time-independent case. In this study, we demonstrate that Monge-Kantorovich equidistribution can in fact be used effectively in a time-stepping context, and delivers an elegant solution to the otherwise pervasive problem of grid tangling in mesh-motion approaches, without resorting to ad hoc time-dependent terms (as in moving-mesh PDEs, or MMPDEs [3] and [4]). We explore two distinct r-refinement implementations of MK: the direct method, where the current mesh relates to an initial, unchanging mesh, and the sequential method, where the current mesh is related to the previous one in time. We demonstrate that the direct approach is superior with regard to mesh distortion and robustness. The properties of the approach are illustrated with a hyperbolic PDE, the advection of a passive scalar, in 2D and 3D. Velocity flow fields with and without flow shear are considered. Three-dimensional grid, time-step, and nonlinear tolerance convergence studies are presented which demonstrate the optimality of the approach.

  7. Beam Optics Analysis - An Advanced 3D Trajectory Code

    SciTech Connect

    Ives, R. Lawrence; Bui, Thuc; Vogler, William; Neilson, Jeff; Read, Mike; Shephard, Mark; Bauer, Andrew; Datta, Dibyendu; Beal, Mark

    2006-01-03

    Calabazas Creek Research, Inc. has completed initial development of an advanced, 3D program for modeling electron trajectories in electromagnetic fields. The code is being used to design complex guns and collectors. Beam Optics Analysis (BOA) is a fully relativistic, charged particle code using adaptive, finite element meshing. Geometrical input is imported from CAD programs generating ACIS-formatted files. Parametric data is entered using an intuitive, graphical user interface (GUI), which also provides control of convergence, accuracy, and post processing. The program includes a magnetic field solver, and magnetic information can be imported from Maxwell 2D/3D and other programs. The program supports thermionic emission and injected beams. Secondary electron emission is also supported, including multiple generations. Work on field emission is in progress as well as implementation of computer optimization of both the geometry and operating parameters. The principal features of the program and its capabilities are presented.

  8. Medical case-based retrieval: integrating query MeSH terms for query-adaptive multi-modal fusion

    NASA Astrophysics Data System (ADS)

    Seco de Herrera, Alba G.; Foncubierta-Rodríguez, Antonio; Müller, Henning

    2015-03-01

    Advances in medical knowledge give clinicians more objective information for a diagnosis. Therefore, there is an increasing need for bibliographic search engines that provide services to facilitate faster information search. The ImageCLEFmed benchmark proposes a medical case-based retrieval task. This task aims at retrieving articles from the biomedical literature that are relevant for differential diagnosis of query cases including a textual description and several images. In the context of this campaign many approaches have been investigated showing that the fusion of visual and text information can improve the precision of the retrieval. However, fusion does not always lead to better results. In this paper, a new query-adaptive fusion criterion to decide when to use multi-modal (text and visual) or only text approaches is presented. The proposed method integrates text information contained in the extracted MeSH (Medical Subject Headings) terms and visual features of the images to find synonym relations between them. Given a text query, the query-adaptive fusion criterion decides when it is suitable to also use visual information for the retrieval. Results show that this approach can decide whether a text or multi-modal approach should be used with 77.15% accuracy.
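
    A toy sketch of such a query-adaptive decision is shown below. The MeSH-to-visual synonymy scoring and the retrieval back-ends are hypothetical placeholders, not the system evaluated in the paper.

```python
def query_adaptive_retrieval(query_mesh_terms, visual_vocabulary,
                             text_search, multimodal_search, min_overlap=0.3):
    """Decide between text-only and multi-modal retrieval for one query.
    If enough of the query's MeSH terms have a visual counterpart (a stand-in
    for the synonym relations of the paper), fuse text and visual results;
    otherwise fall back to the text-only ranking."""
    matched = [t for t in query_mesh_terms if t.lower() in visual_vocabulary]
    overlap = len(matched) / max(len(query_mesh_terms), 1)
    if overlap >= min_overlap:
        return multimodal_search(query_mesh_terms), "multimodal"
    return text_search(query_mesh_terms), "text-only"

# toy usage with stub search functions
visual_vocab = {"lung", "fracture", "mri"}
text_search = lambda terms: ["doc_text_1", "doc_text_2"]
multimodal_search = lambda terms: ["doc_fused_1", "doc_fused_2"]
print(query_adaptive_retrieval(["Lung", "Neoplasms"], visual_vocab,
                               text_search, multimodal_search))
```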

  9. Total enthalpy-based lattice Boltzmann method with adaptive mesh refinement for solid-liquid phase change

    NASA Astrophysics Data System (ADS)

    Huang, Rongzong; Wu, Huiying

    2016-06-01

    A total enthalpy-based lattice Boltzmann (LB) method with adaptive mesh refinement (AMR) is developed in this paper to efficiently simulate solid-liquid phase change problems, in which variables vary significantly near the phase interface and thus a finer grid is required. For the total enthalpy-based LB method, the velocity field is solved by an incompressible LB model with a multiple-relaxation-time (MRT) collision scheme, and the temperature field is solved by a total enthalpy-based MRT LB model with the phase interface effects considered and the deviation term eliminated. With a kinetic assumption that the density distribution function for the solid phase is at equilibrium state, a volumetric LB scheme is proposed to accurately realize the nonslip velocity condition on the diffusive phase interface and in the solid phase. As compared with previous schemes, this scheme can avoid nonphysical flow in the solid phase. As for the AMR approach, it is developed based on multiblock grids. An indicator function is introduced to control the adaptive generation of multiblock grids, which can guarantee the existence of an overlap area between adjacent blocks for information exchange. Since MRT collision schemes are used, the information exchange is carried out directly in the moment space. Numerical tests are first performed to validate the strict satisfaction of the nonslip velocity condition, and then melting problems in a square cavity with different Prandtl numbers and Rayleigh numbers are simulated, which demonstrate that the present method can handle solid-liquid phase change problems with high efficiency and accuracy.

  10. A parallel second-order adaptive mesh algorithm for incompressible flow in porous media.

    PubMed

    Pau, George S H; Almgren, Ann S; Bell, John B; Lijewski, Michael J

    2009-11-28

    In this paper, we present a second-order accurate adaptive algorithm for solving multi-phase, incompressible flow in porous media. We assume a multi-phase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting, the total velocity, defined to be the sum of the phase velocities, is divergence free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single-grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behaviour of the method.
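
    The recursive time-stepping over the grid hierarchy described above can be sketched in a few lines: the coarse level is advanced first, finer levels are subcycled until they catch up, and the levels are then synchronized. The advance and synchronization operators below are hypothetical callbacks, not the paper's solver.

```python
def advance_level(hierarchy, level, dt, advance_grid, synchronize, refine_ratio=2):
    """Advance one AMR level by dt, recursively subcycling finer levels.

    `hierarchy` is a list of per-level grid data; `advance_grid(data, dt)` returns
    the data advanced by dt, and `synchronize(coarse, fine)` replaces coarse data
    by averaged fine data where the levels overlap (both are user-supplied)."""
    hierarchy[level] = advance_grid(hierarchy[level], dt)
    if level + 1 < len(hierarchy):
        for _ in range(refine_ratio):                 # fine level takes smaller steps
            advance_level(hierarchy, level + 1, dt / refine_ratio,
                          advance_grid, synchronize, refine_ratio)
        hierarchy[level] = synchronize(hierarchy[level], hierarchy[level + 1])

# toy usage: the 'grid data' is just the current time on each level
advance = lambda t, dt: t + dt
sync = lambda coarse, fine: fine          # trivially adopt the fine-level time
levels = [0.0, 0.0, 0.0]
advance_level(levels, 0, dt=1.0, advance_grid=advance, synchronize=sync)
print(levels)   # all levels have reached t = 1.0
```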

  11. A Parallel Second-Order Adaptive Mesh Algorithm for Incompressible Flow in Porous Media

    SciTech Connect

    Pau, George Shu Heng; Almgren, Ann S.; Bell, John B.; Lijewski, Michael J.

    2008-04-01

    In this paper we present a second-order accurate adaptive algorithm for solving multiphase, incompressible flows in porous media. We assume a multiphase form of Darcy's law with relative permeabilities given as a function of the phase saturation. The remaining equations express conservation of mass for the fluid constituents. In this setting the total velocity, defined to be the sum of the phase velocities, is divergence-free. The basic integration method is based on a total-velocity splitting approach in which we solve a second-order elliptic pressure equation to obtain a total velocity. This total velocity is then used to recast component conservation equations as nonlinear hyperbolic equations. Our approach to adaptive refinement uses a nested hierarchy of logically rectangular grids with simultaneous refinement of the grids in both space and time. The integration algorithm on the grid hierarchy is a recursive procedure in which coarse grids are advanced in time, fine grids are advanced multiple steps to reach the same time as the coarse grids and the data at different levels are then synchronized. The single grid algorithm is described briefly, but the emphasis here is on the time-stepping procedure for the adaptive hierarchy. Numerical examples are presented to demonstrate the algorithm's accuracy and convergence properties and to illustrate the behavior of the method.

  12. Clinical outcome of protocol based image (MRI) guided adaptive brachytherapy combined with 3D conformal radiotherapy with or without chemotherapy in patients with locally advanced cervical cancer

    PubMed Central

    Pötter, Richard; Georg, Petra; Dimopoulos, Johannes C.A.; Grimm, Magdalena; Berger, Daniel; Nesvacil, Nicole; Georg, Dietmar; Schmid, Maximilian P.; Reinthaller, Alexander; Sturdza, Alina; Kirisits, Christian

    2011-01-01

    Background To analyse the overall clinical outcome and benefits of applying protocol-based image-guided adaptive brachytherapy combined with 3D conformal external beam radiotherapy (EBRT) ± chemotherapy (ChT). Methods Treatment schedule was EBRT with 45–50.4 Gy ± concomitant cisplatin chemotherapy plus 4 × 7 Gy High Dose Rate (HDR) brachytherapy. Patients were treated in the “protocol period” (2001–2008) with the prospective application of the High Risk CTV concept (D90) and dose volume constraints for organs at risk including biological modelling. Dose volume adaptation was performed with the aim of dose escalation in large tumours (prescribed D90 > 85 Gy), often by inserting additional interstitial needles. Dose volume constraints (D2cc) were 70–75 Gy for rectum and sigmoid and 90 Gy for bladder. Late morbidity was prospectively scored, using the LENT/SOMA score. Disease outcome and treatment related late morbidity were evaluated and compared using actuarial analysis. Findings One hundred and fifty-six consecutive patients (median age 58 years) with cervix cancer FIGO stages IB–IVA were treated with definitive radiotherapy with curative intent. Histology was squamous cell cancer in 134 patients (86%), tumour size was >5 cm in 103 patients (66%), lymph node involvement in 75 patients (48%). Median follow-up was 42 months for all patients. Interstitial techniques were used in addition to intracavitary brachytherapy in 69/156 (44%) patients. Total prescribed mean dose (D90) was 93 ± 13 Gy, D2cc 86 ± 17 Gy for bladder, 65 ± 9 Gy for rectum and 64 ± 9 Gy for sigmoid. Complete remission was achieved in 151/156 patients (97%). Overall local control at 3 years was 95%; 98% for tumours 2–5 cm, and 92% for tumours >5 cm (p = 0.04), 100% for IB, 96% for IIB, 86% for IIIB. Cancer specific survival at 3 years was overall 74%, 83% for tumours 2–5 cm, 70% for tumours >5 cm, 83% for IB, 84% for IIB, 52% for IIIB. Overall

  13. Beowulf 3D: a case study

    NASA Astrophysics Data System (ADS)

    Engle, Rob

    2008-02-01

    This paper discusses the creative and technical challenges encountered during the production of "Beowulf 3D," director Robert Zemeckis' adaptation of the Old English epic poem and the first film to be simultaneously released in IMAX 3D and digital 3D formats.

  14. Adaptivity via mesh movement with three-dimensional block-structured grids

    SciTech Connect

    Catherall, D.

    1996-12-31

    The method described here is one in which grid nodes are redistributed so that they are attracted towards regions of high solution activity. The major difficulty in attempting this arises from the degree of grid smoothness and orthogonality required by the flow solver. These requirements are met by suitable choice of grid equations, to be satisfied by the adapted grid, and by the inclusion of certain source terms, for added control in regions where grid movement is limited by the local geometry. The method has been coded for multiblock grids, so that complex configurations may be treated. It is demonstrated here for inviscid supercritical flow with two test cases: an ONERA M6 wing with a rounded tip, and a forward-swept wing/fuselage configuration (M151).
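
    The basic idea (nodes attracted towards regions of high solution activity) can be illustrated with a one-dimensional equidistribution sketch. This is not the block-structured 3-D method with source terms described above, and the arc-length-type monitor function is an assumption.

```python
import numpy as np

def equidistribute(x, u, alpha=10.0):
    """Redistribute 1-D grid nodes so that the monitor function
    w = sqrt(1 + alpha * (du/dx)^2) is equidistributed; nodes cluster
    where the solution u varies rapidly."""
    w = np.sqrt(1.0 + alpha * np.gradient(u, x)**2)
    # cumulative 'mass' of the monitor function along the grid
    cell_mass = 0.5 * (w[1:] + w[:-1]) * np.diff(x)
    cum = np.concatenate(([0.0], np.cumsum(cell_mass)))
    targets = np.linspace(0.0, cum[-1], len(x))
    return np.interp(targets, cum, x)     # new node positions (same endpoints)

# toy usage: cluster nodes around a steep tanh front at x = 0.5
x = np.linspace(0.0, 1.0, 41)
u = np.tanh(50.0 * (x - 0.5))
x_new = equidistribute(x, u)
print(np.diff(x_new).min(), np.diff(x_new).max())   # smallest cells near the front
```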

  15. Documentation for MeshKit - Reactor Geometry (&mesh) Generator

    SciTech Connect

    Jain, Rajeev; Mahadevan, Vijay

    2015-09-30

    This report gives documentation for using MeshKit’s Reactor Geometry (and mesh) Generator (RGG) GUI and also briefly documents other algorithms and tools available in MeshKit. RGG is a program designed to aid in modeling and meshing of complex/large hexagonal and rectilinear reactor cores. RGG uses Argonne’s SIGMA interfaces, Qt and VTK to produce an intuitive user interface. By integrating a 3D view of the reactor with the meshing tools and combining them into one user interface, RGG streamlines the task of preparing a simulation mesh and enables real-time feedback that reduces accidental scripting mistakes that could waste hours of meshing. RGG interfaces with MeshKit tools to consolidate the meshing process, meaning that going from model to mesh is as easy as a button click. This report is designed to explain the RGG v 2.0 interface and provide users with the knowledge and skills to pilot RGG successfully. Brief documentation of the MeshKit source code, tools and other available algorithms is also presented for developers to extend and add new algorithms to MeshKit. RGG tools work in serial and parallel and have been used to build complex reactor core models consisting of conical pins, load pads, several thousand axially varying material properties of instrumentation pins, and meshes of other interstices.

  16. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  17. 3D image-based adapted high-dose-rate brachytherapy in cervical cancer with and without interstitial needles: measurement of applicator shift between imaging and dose delivery

    PubMed Central

    Thunberg, Per; With, Anders; Mordhorst, Louise Bohr; Persliden, Jan

    2017-01-01

    Purpose Using 3D image-guided adaptive brachytherapy for cervical cancer treatment often means that patients are transported and moved during the treatment procedure. The purpose of this study was to determine the intra-fractional longitudinal applicator shift in relation to the high risk clinical target volume (HR-CTV) by comparing geometries at imaging and dose delivery for patients with and without needles. Material and methods Measurements were performed in 33 patients (71 fractions), where 25 fractions were without and 46 were with interstitial needles. Gold markers were placed in the lower part of the cervix as a surrogate for HR-CTV, enabling distance measurements between HR-CTV and the ring applicator. Shifts of the applicator relative to the markers were determined using the computed tomography (CT) images used for planning and the radiographs obtained at dose delivery. Differences in the physical D90 for HR-CTV due to applicator shifts were simulated individually in the treatment planning system to provide the relative dose variation. Results The maximum distances of the applicator shifts, in relation to the markers, were 3.6 mm (caudal) and –2.5 mm (cranial). There was a significant displacement of –0.7 mm (SD = 0.9 mm) without needles, while with needles there was no significant shift. The relative dose variation showed a significant increase in D90 HR-CTV of 1.6% (SD = 2.6%) when not using needles, and no significant dose variation was found when using needles. Conclusions The results from this study showed that there was a small longitudinal displacement of the ring applicator and a significant difference in displacement between treatments with and without interstitial needles. PMID:28344604

  18. Europeana and 3D

    NASA Astrophysics Data System (ADS)

    Pletinckx, D.

    2011-09-01

    The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  19. Temperature structure of the intracluster medium from smoothed-particle hydrodynamics and adaptive-mesh refinement simulations

    SciTech Connect

    Rasia, Elena; Lau, Erwin T.; Nagai, Daisuke; Avestruz, Camille; Borgani, Stefano; Dolag, Klaus; Granato, Gian Luigi; Murante, Giuseppe; Ragone-Figueroa, Cinthia; Mazzotta, Pasquale; Nelson, Kaylea

    2014-08-20

    Analyses of cosmological hydrodynamic simulations of galaxy clusters suggest that X-ray masses can be underestimated by 10%-30%. The largest bias originates from both violation of hydrostatic equilibrium (HE) and an additional temperature bias caused by inhomogeneities in the X-ray-emitting intracluster medium (ICM). To elucidate this large dispersion among theoretical predictions, we evaluate the degree of temperature structures in cluster sets simulated either with smoothed-particle hydrodynamics (SPH) or adaptive-mesh refinement (AMR) codes. We find that the SPH simulations produce larger temperature variations connected to the persistence of both substructures and their stripped cold gas. This difference is more evident in nonradiative simulations, whereas it is reduced in the presence of radiative cooling. We also find that the temperature variation in radiative cluster simulations is generally in agreement with that observed in the central regions of clusters. Around R_500 the temperature inhomogeneities of the SPH simulations can generate twice the typical HE mass bias of the AMR sample. We emphasize that a detailed understanding of the physical processes responsible for the complex thermal structure in ICM requires improved resolution and high-sensitivity observations in order to extend the analysis to higher temperature systems and larger cluster-centric radii.

  20. Spherical mesh adaptive direct search for separating quasi-uncorrelated sources by range-based independent component analysis.

    PubMed

    Selvan, S Easter; Borckmans, Pierre B; Chattopadhyay, A; Absil, P-A

    2013-09-01

    It may seem paradoxical, given the classical definition of independent component analysis (ICA), that in reality the true sources are often not strictly uncorrelated. With this in mind, this letter concerns a framework to extract quasi-uncorrelated sources with finite supports by optimizing a range-based contrast function under unit-norm constraints (to handle the inherent scaling indeterminacy of ICA) but without orthogonality constraints. Despite the appealing contrast properties of the range-based function (e.g., the absence of mixing local optima), the function is not differentiable everywhere. Unfortunately, there is a dearth of literature on derivative-free optimizers that effectively handle such a nonsmooth yet promising contrast function. This is the compelling reason for the design of a nonsmooth optimization algorithm on a manifold of matrices having unit-norm columns with the following objectives: to ascertain convergence to a Clarke stationary point of the contrast function and adhere to the necessary unit-norm constraints more naturally. The proposed nonsmooth optimization algorithm crucially relies on the design and analysis of an extension of the mesh adaptive direct search (MADS) method to handle locally Lipschitz objective functions defined on the sphere. The applicability of the algorithm in the ICA domain is demonstrated with simulations involving natural, face, aerial, and texture images.

  1. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating near-shore tsunami waves from the 2011 Tohoku event, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the 2011 Tohoku tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, in which the finest grid spacing of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel times of the tsunami waves. © 2011 IEEE.

  2. An Improved Version of TOPAZ 3D

    SciTech Connect

    Krasnykh, Anatoly

    2003-07-29

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results.

  3. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  4. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  5. 3d-3d correspondence revisited

    DOE PAGES

    Chung, Hee -Joong; Dimofte, Tudor; Gukov, Sergei; ...

    2016-04-21

    In fivebrane compactifications on 3-manifolds, we point out the importance of all flat connections in the proper definition of the effective 3d N = 2 theory. The Lagrangians of some theories with the desired properties can be constructed with the help of homological knot invariants that categorify colored Jones polynomials. Higgsing the full 3d theories constructed this way recovers theories found previously by Dimofte-Gaiotto-Gukov. As a result, we also consider the cutting and gluing of 3-manifolds along smooth boundaries and the role played by all flat connections in this operation.

  6. Three-dimensional modeling of a thermal dendrite using the phase field method with automatic anisotropic and unstructured adaptive finite element meshing

    NASA Astrophysics Data System (ADS)

    Sarkis, C.; Silva, L.; Gandin, Ch-A.; Plapp, M.

    2016-03-01

    Dendritic growth is computed with automatic adaptation of an anisotropic and unstructured finite element mesh. The energy conservation equation is formulated for solid and liquid phases considering an interface balance that includes the Gibbs-Thomson effect. An equation for a diffuse interface is also developed by considering a phase field function with constant negative value in the liquid and constant positive value in the solid. Unknowns are the phase field function and a dimensionless temperature, as proposed by [1]. Linear finite element interpolation is used for both variables, and discretization stabilization techniques ensure convergence towards a correct non-oscillating solution. In order to perform quantitative computations of dendritic growth on a large domain, two additional numerical ingredients are necessary: automatic anisotropic unstructured adaptive meshing [2, 3] and parallel implementations [4], both made available with the numerical platform used (CimLib), based on C++ developments. Mesh adaptation is found to greatly reduce the number of degrees of freedom. Results of phase field simulations for dendritic solidification of a pure material in two and three dimensions are shown and compared with the reference work [1]. Algorithm details and CPU times are also discussed.
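
    The coupled phase-field/temperature system of the kind described above can be sketched with a minimal isotropic explicit finite-difference model. This is emphatically not the anisotropic, adaptive finite element formulation of the paper; without anisotropy the seed grows as a roughly circular front rather than a dendrite, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def laplacian(f, dx):
    """Five-point Laplacian with no-flux (Neumann-like) boundaries via edge padding."""
    fp = np.pad(f, 1, mode="edge")
    return (fp[2:, 1:-1] + fp[:-2, 1:-1] + fp[1:-1, 2:] + fp[1:-1, :-2] - 4 * f) / dx**2

def solidify_isotropic(n=200, dx=0.4, dt=0.02, steps=2000, D=1.0, lam=3.0, delta=0.55):
    """Explicit update of a minimal isotropic phase-field model:
       dphi/dt = lap(phi) + phi - phi^3 - lam*u*(1-phi^2)^2
       du/dt   = D*lap(u) + 0.5*dphi/dt
    phi = +1 in the solid, -1 in the liquid; u is a dimensionless temperature."""
    phi = -np.ones((n, n))
    u = -delta * np.ones((n, n))                          # undercooled liquid
    yy, xx = np.mgrid[0:n, 0:n]
    phi[(xx - n // 2)**2 + (yy - n // 2)**2 < 25] = 1.0   # small solid seed
    for _ in range(steps):
        dphi = laplacian(phi, dx) + phi - phi**3 - lam * u * (1 - phi**2)**2
        u += dt * (D * laplacian(u, dx) + 0.5 * dphi)     # latent-heat release
        phi += dt * dphi
    return phi, u

phi, u = solidify_isotropic()
print("solid fraction:", (phi > 0).mean())
```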

  7. View-dependent progressive mesh coding for graphic streaming

    NASA Astrophysics Data System (ADS)

    Yang, Sheng; Kim, Chang-Su; Kuo, C.-C. Jay

    2001-11-01

    A view-dependent progressive mesh (VDPM) coding algorithm is proposed in this research to facilitate interactive 3D graphics streaming and browsing. The proposed algorithm splits a 3D graphics model into several partitions, progressively compresses each partition, and reorganizes topological and geometrical data to enable the transmission of visible parts with a higher priority. With the real-time streaming protocol (RTSP), the server is informed of the viewing parameters before transmission. Then, the server can adaptively transmit visible parts in detail, while cutting off invisible parts. Experimental results demonstrate that the proposed algorithm reduces the required transmission bandwidth, and exhibits acceptable visual quality even at low bit rates.
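
    A toy sketch of the prioritization step (transmit visible partitions first, nearest first, within a bandwidth budget) follows. The visibility test and the partition data structure are hypothetical simplifications of the VDPM scheme, not its implementation.

```python
import numpy as np

def schedule_partitions(partitions, camera_pos, view_dir, fov_cos=0.5, budget_bytes=200_000):
    """Order mesh partitions for streaming: partitions inside the view cone are
    sent first, nearest first, until the byte budget is exhausted; partitions
    outside the view cone are cut off."""
    visible = []
    for p in partitions:                        # p: dict with 'name', 'centre', 'bytes'
        to_p = np.asarray(p["centre"], dtype=float) - np.asarray(camera_pos, dtype=float)
        dist = np.linalg.norm(to_p)
        if dist > 0 and np.dot(to_p / dist, view_dir) > fov_cos:
            visible.append((dist, p))
    plan, spent = [], 0
    for dist, p in sorted(visible, key=lambda t: t[0]):
        if spent + p["bytes"] > budget_bytes:
            break
        plan.append(p["name"])
        spent += p["bytes"]
    return plan

parts = [{"name": "terrain_A", "centre": (0, 0, 10), "bytes": 80_000},
         {"name": "terrain_B", "centre": (0, 0, 50), "bytes": 90_000},
         {"name": "behind",    "centre": (0, 0, -20), "bytes": 50_000}]
print(schedule_partitions(parts, camera_pos=(0, 0, 0), view_dir=np.array([0.0, 0.0, 1.0])))
```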

  8. Characterization of the non-uniqueness of used nuclear fuel burnup signatures through a Mesh-Adaptive Direct Search

    NASA Astrophysics Data System (ADS)

    Skutnik, Steven E.; Davis, David R.

    2016-05-01

    The use of passive gamma and neutron signatures from fission indicators is a common means of estimating used fuel burnup, enrichment, and cooling time. However, while characteristic fission product signatures such as 134Cs, 137Cs, 154Eu, and others are generally reliable estimators for used fuel burnup within the context where the assembly initial enrichment and the discharge time are known, in the absence of initial enrichment and/or cooling time information (such as when applying NDA measurements in a safeguards/verification context), these fission product indicators no longer yield a unique solution for assembly enrichment, burnup, and cooling time after discharge. Through the use of a new Mesh-Adaptive Direct Search (MADS) algorithm, it is possible to directly probe the shape of this "degeneracy space" characteristic of individual nuclides (and combinations thereof), both as a function of constrained parameters (such as the assembly irradiation history) and unconstrained parameters (e.g., the cooling time before measurement and the measurement precision for particular indicator nuclides). Doing so affords the identification of potential means of narrowing the uncertainty space of potential assembly enrichment, burnup, and cooling time combinations, thereby bounding estimates of assembly plutonium content. In particular, combinations of gamma-emitting nuclides with distinct half-lives (e.g., 134Cs with 137Cs and 154Eu) in conjunction with gross neutron counting (via 244Cm) are able to reasonably constrain the degeneracy space of possible solutions to a region small enough to perform useful discrimination and verification of fuel assemblies based on their irradiation history.
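
    A mesh-adaptive direct search belongs to the family of derivative-free poll-and-refine methods. The sketch below is illustrative only: the objective, poll directions, and step-control rule are simplified placeholders rather than the MADS variant used in the paper. It shows the basic loop of polling trial points on the current mesh and refining the mesh whenever the poll fails.

      import numpy as np

      def direct_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
          """Minimal poll-and-refine direct search (a simplified stand-in for MADS)."""
          x = np.asarray(x0, dtype=float)
          fx = f(x)
          directions = np.vstack([np.eye(x.size), -np.eye(x.size)])
          for _ in range(max_iter):
              if step < tol:
                  break
              for d in directions:
                  trial = x + step * d
                  f_trial = f(trial)
                  if f_trial < fx:              # successful poll point: accept and keep the mesh
                      x, fx = trial, f_trial
                      break
              else:
                  step *= 0.5                   # unsuccessful poll: refine the mesh
          return x, fx

      # toy usage: a hypothetical two-parameter misfit with a single minimum
      misfit = lambda p: (p[0] - 3.2) ** 2 + (p[1] - 45.0) ** 2
      x_best, f_best = direct_search(misfit, x0=[1.0, 10.0])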

  9. Moving Overlapping Grids with Adaptive Mesh Refinement for High-Speed Reactive and Non-reactive Flow

    SciTech Connect

    Henshaw, W D; Schwendeman, D W

    2005-08-30

    We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows in order to demonstrate the use and accuracy of the numerical approach.

  10. Adaptive mesh refinement for singular structures in incompressible MHD and compressible Hall-MHD with electron and ion inertia

    NASA Astrophysics Data System (ADS)

    Grauer, R.; Germaschewski, K.

    The goal of this presentation is threefold. First, the role of singular structures like shocks, vortex tubes and current sheets in understanding intermittency in small-scale turbulence is demonstrated. Secondly, in order to investigate the time evolution of singular structures, effective numerical techniques have to be applied, such as block-structured adaptive mesh refinement combined with recent advances in treating hyperbolic equations. And thirdly, the developed numerical techniques can be applied directly to the question of fast reconnection, demonstrated by the example of compressible Hall-MHD including electron and ion inertia. Why is it worth studying singular structures? The motivation has several sources. In turbulent fluid and plasma flows the formation of nearly singular structures like shocks, vortex tubes or current sheets provides an effective mechanism to transport energy from large to small scales. In recent years it has become clear that the nature of the singular structures is a key feature of small-scale intermittency. In a phenomenological way this is established in She-Leveque-like models (She and Leveque, 1994; Grauer, Krug and Marliani, 1994; Politano and Pouquet, 1995; Müller and Biskamp, 2000), which are able to describe some of the scaling properties of high-order structure functions. An additional source which highlights the importance of singular structures originates from studies of a toy model of turbulence, the so-called Burgers turbulence. The far left tail of the probability distribution of velocity increments can be calculated using the instanton approach (Balkovsky, Falkovich, Kolokolov and Lebedev, 1997). Here it is interesting to note that the main contribution to the relevant path integral stems from the singular structures, which are shocks in Burgers turbulence. From a mathematical point of view the question whether

  11. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream from a professional stereo camera rig, rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  12. Robust, multidimensional mesh motion based on Monge-Kantorovich equidistribution

    SciTech Connect

    Delzanno, G L; Finn, J M

    2009-01-01

    Mesh-motion (r-refinement) grid adaptivity schemes are attractive due to their potential to minimize the numerical error for a prescribed number of degrees of freedom. However, a key roadblock to a widespread deployment of the technique has been the formulation of robust, reliable mesh motion governing principles, which (1) guarantee a solution in multiple dimensions (2D and 3D), (2) avoid grid tangling (or folding of the mesh, whereby edges of a grid cell cross somewhere in the domain), and (3) can be solved effectively and efficiently. In this study, we formulate such a mesh-motion governing principle, based on volume equidistribution via Monge-Kantorovich optimization (MK). In earlier publications [1, 2], the advantages of this approach with regard to these points have been demonstrated for the time-independent case. In this study, we demonstrate that Monge-Kantorovich equidistribution can in fact be used effectively in a time-stepping context, and delivers an elegant solution to the otherwise pervasive problem of grid tangling in mesh motion approaches, without resorting to ad hoc time-dependent terms (as in moving-mesh PDEs, or MMPDEs [3, 4]). We explore two distinct r-refinement implementations of MK: direct, where the current mesh relates to an initial, unchanging mesh, and sequential, where the current mesh is related to the previous one in time. We demonstrate that the direct approach is superior with regard to mesh distortion and robustness. The properties of the approach are illustrated with a paradigmatic hyperbolic PDE, the advection of a passive scalar. Imposed velocity flow fields of varying vorticity levels and flow shears are considered.
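
    The equidistribution principle behind this approach is easiest to see in one dimension, where grid nodes are moved so that every cell carries the same integral of a monitor (error-indicator) function. The following sketch illustrates that 1D analogue only; it is not the multidimensional Monge-Kantorovich solver of the paper, and the monitor function shown is a hypothetical example.

      import numpy as np

      def equidistribute_1d(x, monitor, n_fine=2001):
          """Relocate the nodes of x so that each cell holds equal monitor 'mass'."""
          xf = np.linspace(x[0], x[-1], n_fine)
          w = monitor(xf)
          # cumulative integral of the monitor function (trapezoidal rule), normalized to [0, 1]
          cdf = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(xf))))
          cdf /= cdf[-1]
          # invert the normalized cumulative integral at equally spaced levels
          return np.interp(np.linspace(0.0, 1.0, x.size), cdf, xf)

      x0 = np.linspace(0.0, 1.0, 21)
      x_new = equidistribute_1d(x0, lambda s: 1.0 + 50.0 * np.exp(-200.0 * (s - 0.5) ** 2))
      # x_new clusters nodes near s = 0.5, where the monitor function peaks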

  13. A new Control Volume Finite Element Method with Discontinuous Pressure Representation for Multi-phase Flow with Implicit Adaptive time Integration and Dynamic Unstructured mesh Optimization

    NASA Astrophysics Data System (ADS)

    Salinas, Pablo; Pavlidis, Dimitrios; Percival, James; Adam, Alexander; Xie, Zhihua; Pain, Christopher; Jackson, Matthew

    2015-11-01

    We present a new, high-order, control-volume-finite-element (CVFE) method with discontinuous representation for pressure and velocity to simulate multiphase flow in heterogeneous porous media. Time is discretized using an adaptive, fully implicit method. Heterogeneous geologic features are represented as volumes bounded by surfaces. Our approach conserves mass and does not require the use of CVs that span domain boundaries. Computational efficiency is increased by use of dynamic mesh optimization. We demonstrate that the approach, amongst other features, accurately preserves sharp saturation changes associated with high aspect ratio geologic domains, allowing efficient simulation of flow in highly heterogeneous models. Moreover, accurate solutions are obtained at lower cost than an equivalent fine, fixed mesh and conventional CVFE methods. The use of implicit time integration allows the method to efficiently converge using highly anisotropic meshes without having to reduce the time-step. The work is significant for two key reasons. First, it resolves a long-standing problem associated with the use of classical CVFE methods. Second, it reduces computational cost/increases solution accuracy through the use of dynamic mesh optimization and time-stepping with large Courant number. Funding for Dr P. Salinas from ExxonMobil is gratefully acknowledged.

  14. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" is becoming an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States, and Germany, Odile Meulien has developed a personal method of initiation to 3D creation based on the spatial/temporal experience of the holographic visual. She will present different tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared with the 1990s, the holographic concept is spreading across the scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else must be taken into consideration to communicate in 3D? How should the non-visible relations of moving objects with subjects be handled? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, and to learn 3D grammar, 3D language, 3D thinking? What for? At what level? In which subject matter? For whom?

  15. Design, Implementation and Applications of 3d Web-Services in DB4GEO

    NASA Astrophysics Data System (ADS)

    Breunig, M.; Kuper, P. V.; Dittrich, A.; Wild, P.; Butwilowski, E.; Al-Doori, M.

    2013-09-01

    The object-oriented database architecture DB4GeO was originally designed to support sub-surface applications in the geo-sciences. This is reflected in DB4GeO's geometric data model as well as in its import and export functions. Initially, these functions were designed for communication with 3D geological modeling and visualization tools such as GOCAD or MeshLab. However, it soon became clear that DB4GeO was suitable for a much wider range of applications. Therefore it is natural to move away from a standalone solution and to open access to DB4GeO data through standardized OGC web services. Though REST and OGC services seem incompatible at first sight, the implementation in DB4GeO shows that an OGC-based implementation of web services can reuse parts of the DB4GeO REST implementation. Starting with initial solutions in the history of DB4GeO, this paper introduces the design, adaptation (i.e. model transformation), and first steps in the implementation of OGC Web Feature Services (WFS) and Web Processing Services (WPS) as new interfaces to DB4GeO data and operations. Among its capabilities, DB4GeO can provide data in different formats such as GML, GOCAD, or DB3D XML through a WFS, and can run operations such as a 3D-to-2D service or mesh simplification (Progressive Meshes) through a WPS. We then demonstrate an Android-based mobile 3D augmented reality viewer for DB4GeO that uses the Web Feature Service to visualize 3D geo-database query results. Finally, we explore future research work considering DB4GeO in the framework of the research group "Computer-Aided Collaborative Subway Track Planning in Multi-Scale 3D City and Building Models".
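
    For readers unfamiliar with the WFS interface mentioned above, a GetFeature request is a plain HTTP call carrying standard OGC parameters. The endpoint URL and feature type name below are placeholders, not the actual DB4GeO service.

      import requests  # third-party HTTP client

      params = {
          "service": "WFS",
          "version": "1.1.0",
          "request": "GetFeature",
          "typeName": "db4geo:example_layer",              # hypothetical feature type
          "outputFormat": "text/xml; subtype=gml/3.1.1",   # GML 3.1.1 output
      }
      response = requests.get("https://example.org/db4geo/wfs", params=params)
      print(response.status_code, len(response.content))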

  16. Multi-dimensional Upwind Fluctuation Splitting Scheme with Mesh Adaption for Hypersonic Viscous Flow. Degree awarded by Virginia Polytechnic Inst. and State Univ., 9 Nov. 2001

    NASA Technical Reports Server (NTRS)

    Wood, William A., III

    2002-01-01

    A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. A Blasius flat plate viscous validation case reveals a more accurate v-velocity profile for fluctuation splitting, and the reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. The second half of the report develops a local, compact, anisotropic unstructured mesh adaptation scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. The adaptation strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization.

  17. Cosmology on a Mesh

    NASA Astrophysics Data System (ADS)

    Gill, Stuart P. D.; Knebe, Alexander; Gibson, Brad K.; Flynn, Chris; Ibata, Rodrigo A.; Lewis, Geraint F.

    2003-04-01

    An adaptive multi grid approach to simulating the formation of structure from collisionless dark matter is described. MLAPM (Multi-Level Adaptive Particle Mesh) is one of the most efficient serial codes available on the cosmological "market" today. As part of Swinburne University's role in the development of the Square Kilometer Array, we are implementing hydrodynamics, feedback, and radiative transfer within the MLAPM adaptive mesh, in order to simulate baryonic processes relevant to the interstellar and intergalactic media at high redshift. We will outline our progress to date in applying the existing MLAPM to a study of the decay of satellite galaxies within massive host potentials.

  18. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3D imaging for libraries and museums. (LRW)

  19. 3D elastic control for mobile devices.

    PubMed

    Hachet, Martin; Pouderoux, Joachim; Guitton, Pascal

    2008-01-01

    To increase the input space of mobile devices, the authors developed a proof-of-concept 3D elastic controller that easily adapts to mobile devices. This embedded device improves the completion of high-level interaction tasks such as visualization of large documents and navigation in 3D environments. It also opens new directions for tomorrow's mobile applications.

  20. Pattern based 3D image Steganography

    NASA Astrophysics Data System (ADS)

    Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

    2013-03-01

    This paper proposes a new high-capacity steganographic scheme using 3D geometric models. The algorithm re-triangulates part of a triangle mesh and embeds the secret information in the newly added vertex positions. Up to nine bits of secret data can be embedded in the vertices of a triangle without causing any change in the visual quality or the geometric properties of the cover model. Experimental results show that the proposed algorithm is secure, with high capacity and a low distortion rate. The algorithm also resists uniform affine transformations such as cropping, rotation and scaling. The performance of the method is compared with other existing 3D steganography algorithms.

  1. M3D project for simulation studies of plasmas

    SciTech Connect

    Park, W.; Belova, E.V.; Fu, G.Y.; Strauss, H.R.; Sugiyama, L.E.

    1998-12-31

    The M3D (Multi-level 3D) project carries out simulation studies of plasmas of various regimes using multi-levels of physics, geometry, and mesh schemes in one code package. This paper and papers by Strauss, Sugiyama, and Belova in this workshop describe the project, and present examples of current applications. The currently available physics models of the M3D project are MHD, two-fluids, gyrokinetic hot particle/MHD hybrid, and gyrokinetic particle ion/two-fluid hybrid models. The code can be run with both structured and unstructured meshes.

  2. Adaptive observation in the South China Sea using CNOP approach based on a 3-D ocean circulation model and its adjoint model

    NASA Astrophysics Data System (ADS)

    Li, Yineng; Peng, Shiqiu; Liu, Duanling

    2014-12-01

    This study investigates the effect of adaptive (or targeted) observation on improving the midrange (30 days) forecast skill of ocean state of the South China Sea (SCS). A region associated with the South China Sea Western Boundary Current (SCSWBC) is chosen as the "target" of the adaptive observation. The Conditional Nonlinear Optimal Perturbation (CNOP) approach is applied to a three-dimensional ocean model and its adjoint model for determining the sensitive region. Results show that the initial errors in the sensitive region determined by the CNOP approach have significant impacts on the forecast of ocean state in the target region; thus, reducing these initial errors through adaptive observation can lead to a better 30 day prediction of ocean state in the target region. Our results suggest that implementing adaptive observation is an effective and cost-saving way to improve an ocean model's forecast skill over the SCS.

  3. AE3D

    SciTech Connect

    Spong, Donald A

    2016-06-20

    AE3D solves for the shear Alfven eigenmodes and eigenfrequencies in a toroidal magnetic fusion confinement device. The configuration can be either 2D (e.g. tokamak, reversed field pinch) or 3D (e.g. stellarator, helical reversed field pinch, tokamak with ripple). The equations solved are based on a reduced MHD model and sound wave coupling effects are not currently included.

  4. Robust, multidimensional mesh-motion based on Monge-Kantorovich equidistribution

    NASA Astrophysics Data System (ADS)

    Chacón, L.; Delzanno, G. L.; Finn, J. M.

    2011-01-01

    Mesh-motion (r-refinement) grid adaptivity schemes are attractive due to their potential to minimize the numerical error for a prescribed number of degrees of freedom. However, a key roadblock to a widespread deployment of this class of techniques has been the formulation of robust, reliable mesh-motion governing principles, which (1) guarantee a solution in multiple dimensions (2D and 3D), (2) avoid grid tangling (or folding of the mesh, whereby edges of a grid cell cross somewhere in the domain), and (3) can be solved effectively and efficiently. In this study, we formulate such a mesh-motion governing principle, based on volume equidistribution via Monge-Kantorovich optimization (MK). In earlier publications [1,2], the advantages of this approach with regard to these points have been demonstrated for the time-independent case. In this study, we demonstrate that Monge-Kantorovich equidistribution can in fact be used effectively in a time-stepping context, and delivers an elegant solution to the otherwise pervasive problem of grid tangling in mesh-motion approaches, without resorting to ad hoc time-dependent terms (as in moving-mesh PDEs, or MMPDEs [3,4]). We explore two distinct r-refinement implementations of MK: the direct method, where the current mesh relates to an initial, unchanging mesh, and the sequential method, where the current mesh is related to the previous one in time. We demonstrate that the direct approach is superior with regard to mesh distortion and robustness. The properties of the approach are illustrated with a hyperbolic PDE, the advection of a passive scalar, in 2D and 3D. Velocity flow fields with and without flow shear are considered. Three-dimensional grid, time-step, and nonlinear tolerance convergence studies are presented which demonstrate the optimality of the approach.

  5. Robust, multidimensional mesh-motion based on Monge-Kantorovich equidistribution

    SciTech Connect

    Chacon De La Rosa, Luis; Delzanno, Gian Luca; Finn, John M.

    2011-01-01

    Mesh-motion (r-refinement) grid adaptivity schemes are attractive due to their potential to minimize the numerical error for a prescribed number of degrees of freedom. However, a key roadblock to a widespread deployment of this class of techniques has been the formulation of robust, reliable mesh-motion governing principles, which (1) guarantee a solution in multiple dimensions (2D and 3D), (2) avoid grid tangling (or folding of the mesh, whereby edges of a grid cell cross somewhere in the domain), and (3) can be solved effectively and efficiently. In this study, we formulate such a mesh-motion governing principle, based on volume equidistribution via Monge-Kantorovich optimization (MK). In earlier publications [1,2], the advantages of this approach with regard to these points have been demonstrated for the time-independent case. In this study, we demonstrate that Monge-Kantorovich equidistribution can in fact be used effectively in a time-stepping context, and delivers an elegant solution to the otherwise pervasive problem of grid tangling in mesh-motion approaches, without resorting to ad hoc time-dependent terms (as in moving-mesh PDEs, or MMPDEs [3,4]). We explore two distinct r-refinement implementations of MK: the direct method, where the current mesh relates to an initial, unchanging mesh, and the sequential method, where the current mesh is related to the previous one in time. We demonstrate that the direct approach is superior with regard to mesh distortion and robustness. The properties of the approach are illustrated with a hyperbolic PDE, the advection of a passive scalar, in 2D and 3D. Velocity flow fields with and without flow shear are considered. Three-dimensional grid, time-step, and nonlinear tolerance convergence studies are presented which demonstrate the optimality of the approach.

  6. Unlocking the scientific potential of complex 3D point cloud dataset : new classification and 3D comparison methods

    NASA Astrophysics Data System (ADS)

    Lague, D.; Brodu, N.; Leroux, J.

    2012-12-01

    Ground-based lidar and photogrammetric techniques are increasingly used to track the evolution of natural surfaces in 3D at unprecedented resolution and precision. The range of applications encompasses many types of natural surfaces with different geometries and roughness characteristics (landslides, cliff erosion, river beds, bank erosion, ...). Unravelling surface change in these contexts requires comparing large point clouds in 2D or 3D. The method most commonly used in geomorphology is based on a 2D difference of the gridded point clouds. Yet this is hardly adapted to many 3D natural environments such as rivers (with horizontal beds and vertical banks), while gridding rough surfaces is itself a difficult task. On the other hand, tools for 3D comparison are scarce and may require meshing the point clouds, which is difficult on rough natural surfaces. Moreover, existing 3D comparison tools do not provide an explicit calculation of confidence intervals that would factor in registration errors, roughness effects and instrument-related position uncertainties. To address this problem, we developed the first algorithm combining a 3D measurement of surface change directly on point clouds with an estimate of spatially variable confidence intervals (called M3C2). The method has two steps: (1) surface normal estimation and orientation in 3D at a scale consistent with the local roughness; (2) measurement of the mean surface change along the normal direction with explicit calculation of a local confidence interval. Comparison with existing 3D methods based on a closest-point calculation demonstrates the higher precision of the M3C2 method when mm-scale changes need to be detected. The M3C2 method is also simple to use as it does not require surface meshing or gridding, and is not sensitive to missing data or changes in point density. We also present a 3D classification tool (CANUPO) for vegetation removal based on a new geometrical measure: the multi
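
    The two-step idea described above can be sketched compactly: estimate a local normal by PCA of the neighbourhood around a core point, then take the difference of the mean positions of the two clouds projected onto that normal. The sketch below omits the scale selection and confidence-interval calculation of the actual M3C2 algorithm, and the search radius is a hypothetical value.

      import numpy as np
      from scipy.spatial import cKDTree

      def local_normal(point, cloud, tree, radius):
          """Unit normal from a PCA of the neighbourhood around 'point'."""
          nbrs = cloud[tree.query_ball_point(point, radius)]
          _, _, vt = np.linalg.svd(nbrs - nbrs.mean(axis=0))
          return vt[-1]                              # direction of smallest variance

      def change_along_normal(point, cloud1, cloud2, radius=0.5):
          """Signed mean surface change at 'point', measured along the local normal."""
          tree1, tree2 = cKDTree(cloud1), cKDTree(cloud2)
          n = local_normal(point, cloud1, tree1, radius)
          d1 = (cloud1[tree1.query_ball_point(point, radius)] - point) @ n
          d2 = (cloud2[tree2.query_ball_point(point, radius)] - point) @ n
          return d2.mean() - d1.mean()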

  7. 3-D Seismic Interpretation

    NASA Astrophysics Data System (ADS)

    Moore, Gregory F.

    2009-05-01

    This volume is a brief introduction aimed at those who wish to gain a basic and relatively quick understanding of the interpretation of three-dimensional (3-D) seismic reflection data. The book is well written, clearly illustrated, and easy to follow. Enough elementary mathematics are presented for a basic understanding of seismic methods, but more complex mathematical derivations are avoided. References are listed for readers interested in more advanced explanations. After a brief introduction, the book logically begins with a succinct chapter on modern 3-D seismic data acquisition and processing. Standard 3-D acquisition methods are presented, and an appendix expands on more recent acquisition techniques, such as multiple-azimuth and wide-azimuth acquisition. Although this chapter covers the basics of standard time processing quite well, there is only a single sentence about prestack depth imaging, and anisotropic processing is not mentioned at all, even though both techniques are now becoming standard.

  8. Radiochromic 3D Detectors

    NASA Astrophysics Data System (ADS)

    Oldham, Mark

    2015-01-01

    Radiochromic materials exhibit a colour change when exposed to ionising radiation. Radiochromic film has been used for clinical dosimetry for many years and increasingly so recently, as films of higher sensitivities have become available. The two principal advantages of radiochromic dosimetry are greater tissue equivalence (radiologically) and the lack of requirement for development of the colour change. In a radiochromic material, the colour change arises directly from ionising interactions affecting dye molecules, without requiring any latent chemical, optical or thermal development, with important implications for increased accuracy and convenience. It is only relatively recently, however, that 3D radiochromic dosimetry has become possible. In this article we review recent developments and the current state-of-the-art of 3D radiochromic dosimetry, and its potential to provide a more comprehensive solution for the verification of complex radiation therapy treatments and for 3D dose measurement in general.

  9. Contribution of 3D inversion of Electrical Resistivity Tomography data applied to volcanic structures

    NASA Astrophysics Data System (ADS)

    Portal, Angélie; Fargier, Yannick; Lénat, Jean-François; Labazuy, Philippe

    2016-04-01

    The electrical resistivity tomography (ERT) method, initially developed for environmental and engineering exploration, is now commonly used for imaging geological structures. Such structures can present complex characteristics that conventional 2D inversion processes cannot fully capture. Here we present a new 3D inversion algorithm named EResI, first developed for levee investigation and now applied to the study of a complex lava dome (the Puy de Dôme volcano, France). The EResI algorithm is based on a conventional regularized Gauss-Newton inversion scheme and a 3D unstructured discretization of the model (a double-grid method based on tetrahedra). This discretization allows the topography of the investigated structure to be modelled accurately (without a mesh-deformation procedure) and also permits precise placement of the electrodes. Moreover, we demonstrate that a fully 3D unstructured discretization limits the number of inversion cells and is better adapted to the resolution capacity of the tomography than a structured discretization. This study shows that a 3D inversion with an unstructured parametrization has several advantages compared with classical 2D inversions. The first is that a 2D inversion leads to artefacts due to 3D effects (3D topography, 3D internal resistivity). The second is that the ability to align electrodes along an axis in the field (for 2D surveys) depends on the constraints of the terrain (topography, ...); in this case, the 2D assumption made by 2.5D inversion software prevents it from modelling electrodes located off this axis, leading to artefacts in the inversion result. The last limitation comes from the mesh-deformation techniques used to model the topography accurately in 2D software. This technique, used with structured discretizations (Res2dinv), is precluded for strong topography (>60%) and leads to small computational errors. A wide geophysical survey was carried out
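
    The regularized Gauss-Newton scheme referred to above reduces, at each iteration, to a damped linear least-squares update of the resistivity model. The single-step sketch below is generic; the forward operator, Jacobian, regularization matrix, and damping value are placeholders rather than the EResI implementation.

      import numpy as np

      def gauss_newton_step(m, d_obs, forward, jacobian, W, lam):
          """One update minimizing ||d_obs - F(m)||^2 + lam * ||W m||^2 about the current model m."""
          J = jacobian(m)                          # sensitivity (Jacobian) matrix at m
          r = d_obs - forward(m)                   # data residual
          A = J.T @ J + lam * (W.T @ W)            # Gauss-Newton normal equations with smoothing
          b = J.T @ r - lam * (W.T @ W) @ m
          return m + np.linalg.solve(A, b)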

  10. Bootstrapping 3D fermions

    DOE PAGES

    Iliesiu, Luca; Kos, Filip; Poland, David; ...

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge CT. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  11. Bootstrapping 3D fermions

    SciTech Connect

    Iliesiu, Luca; Kos, Filip; Poland, David; Pufu, Silviu S.; Simmons-Duffin, David; Yacoby, Ran

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions <ψψψψ> in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ × ψ OPE, and also on the central charge CT. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. Finally, we speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  12. 3D modeling to characterize lamina cribrosa surface and pore geometries using in vivo images from normal and glaucomatous eyes.

    PubMed

    Sredar, Nripun; Ivers, Kevin M; Queener, Hope M; Zouridakis, George; Porter, Jason

    2013-07-01

    En face adaptive optics scanning laser ophthalmoscope (AOSLO) images of the anterior lamina cribrosa surface (ALCS) represent a 2D projected view of a 3D laminar surface. Using spectral domain optical coherence tomography images acquired in living monkey eyes, a thin plate spline was used to model the ALCS in 3D. The 2D AOSLO images were registered and projected onto the 3D surface that was then tessellated into a triangular mesh to characterize differences in pore geometry between 2D and 3D images. Following 3D transformation of the anterior laminar surface in 11 normal eyes, mean pore area increased by 5.1 ± 2.0% with a minimal change in pore elongation (mean change = 0.0 ± 0.2%). These small changes were due to the relatively flat laminar surfaces inherent in normal eyes (mean radius of curvature = 3.0 ± 0.5 mm). The mean increase in pore area was larger following 3D transformation in 4 glaucomatous eyes (16.2 ± 6.0%) due to their more steeply curved laminar surfaces (mean radius of curvature = 1.3 ± 0.1 mm), while the change in pore elongation was comparable to that in normal eyes (-0.2 ± 2.0%). This 3D transformation and tessellation method can be used to better characterize and track 3D changes in laminar pore and surface geometries in glaucoma.

  13. Venus in 3D

    NASA Technical Reports Server (NTRS)

    Plaut, Jeffrey J.

    1993-01-01

    Stereographic images of the surface of Venus which enable geologists to reconstruct the details of the planet's evolution are discussed. The 120-meter resolution of these 3D images make it possible to construct digital topographic maps from which precise measurements can be made of the heights, depths, slopes, and volumes of geologic structures.

  14. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
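
    The singular-value analysis described above can be outlined for any discretized imaging operator. The system matrix below is a random stand-in, not the authors' 15-element photoacoustic model, and the threshold is an arbitrary illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.standard_normal((15 * 64, 16 ** 3))        # hypothetical (measurements x voxels) operator
      s = np.linalg.svd(A, compute_uv=False)
      n_measurable = int(np.sum(s > 1e-3 * s[0]))        # singular values above a noise-like cutoff
      print(f"{n_measurable} measurable singular components out of {s.size}")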

  15. 3D Building Models Segmentation Based on K-Means++ Cluster Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Mao, B.

    2016-10-01

    3D mesh model segmentation has been drawing increasing attention from the digital geometry processing field in recent years. The original 3D mesh model needs to be divided into separate meaningful parts or surface patches based on certain criteria to support reconstruction, compression, texture mapping, model retrieval and so on. Segmentation is therefore a key problem in 3D mesh model processing. In this paper, we propose a method to segment Collada (a type of mesh model) 3D building models into meaningful parts using cluster analysis. Common clustering methods segment 3D mesh models with K-means, whose performance heavily depends on the randomized initial seed points (i.e., centroids), and different randomized centroids can give quite different results. Therefore, we improved the existing method and used the K-means++ clustering algorithm to solve this problem. Our experiments show that K-means++ improves both the speed and the accuracy of K-means and achieves good and meaningful results.
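
    The K-means++ seeding the authors rely on is simple to state: after a first centroid is drawn uniformly at random, each subsequent seed is drawn with probability proportional to its squared distance from the nearest seed already chosen. A minimal sketch follows (in practice scikit-learn's KMeans applies this initialization by default); the input points here stand in for mesh face centroids or vertices.

      import numpy as np

      def kmeanspp_seeds(points, k, rng=None):
          """Choose k initial centroids with the K-means++ D^2 weighting."""
          if rng is None:
              rng = np.random.default_rng(0)
          centroids = [points[rng.integers(len(points))]]
          for _ in range(k - 1):
              diff = points[:, None, :] - np.asarray(centroids)[None, :, :]
              d2 = np.min((diff ** 2).sum(axis=-1), axis=1)   # squared distance to nearest seed
              centroids.append(points[rng.choice(len(points), p=d2 / d2.sum())])
          return np.asarray(centroids)

      seeds = kmeanspp_seeds(np.random.rand(1000, 3), k=8)    # toy usage on random 3D points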

  16. Multigrid techniques for unstructured meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1995-01-01

    An overview of current multigrid techniques for unstructured meshes is given. The basic principles of the multigrid approach are first outlined. Application of these principles to unstructured mesh problems is then described, illustrating various different approaches, and giving examples of practical applications. Advanced multigrid topics, such as the use of algebraic multigrid methods, and the combination of multigrid techniques with adaptive meshing strategies are dealt with in subsequent sections. These represent current areas of research, and the unresolved issues are discussed. The presentation is organized in an educational manner, for readers familiar with computational fluid dynamics, wishing to learn more about current unstructured mesh techniques.

  17. Adaptive-weighted cubic B-spline using lookup tables for fast and efficient axial resampling of 3D confocal microscopy images.

    PubMed

    Indhumathi, C; Cai, Y Y; Guan, Y Q; Opas, M; Zheng, J

    2012-01-01

    Confocal laser scanning microscopy has become one of the most powerful tools to visualize and analyze the dynamic behavior of cellular molecules. Photobleaching of fluorochromes is a major problem in confocal image acquisition and leads to intensity attenuation. The photobleaching effect can be reduced by optimizing the collection efficiency of the confocal image through fast z-scanning. However, such images suffer from distortions, particularly in the z dimension, which cause disparities between the x, y, and z dimensions of the voxels in the original image stacks. As a result, reliable segmentation and feature extraction of these images may be difficult or even impossible. Image interpolation is especially needed to correct the undersampling artifact in the axial direction of three-dimensional images generated by a confocal microscope, in order to obtain cubic voxels. In this work, we present an adaptive cubic B-spline-based interpolation with the aid of lookup tables, deriving adaptive weights based on local gradients for the sampling nodes in the interpolation formulae. Thus, the proposed method enhances the axial resolution of confocal images by improving the accuracy of the interpolated values while greatly reducing the computational cost. Numerical experimental results confirm the effectiveness of the proposed interpolation approach and demonstrate its superiority in terms of both accuracy and speed compared with other interpolation algorithms.
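
    As background to the lookup-table idea, the uniform cubic B-spline kernel can be tabulated once and then read back during resampling instead of being re-evaluated per voxel. The sketch below shows only that kernel and table; the adaptive, gradient-based weights of the paper are not reproduced, and the table size is an arbitrary choice. Note that applying this kernel directly smooths the data; exact interpolation of the original samples additionally requires a B-spline prefiltering step.

      import numpy as np

      def cubic_bspline(t):
          """Uniform cubic B-spline kernel B(t), supported on |t| < 2."""
          t = np.abs(np.asarray(t, dtype=float))
          out = np.zeros_like(t)
          near, far = t < 1.0, (t >= 1.0) & (t < 2.0)
          out[near] = 2.0 / 3.0 - t[near] ** 2 + 0.5 * t[near] ** 3
          out[far] = (2.0 - t[far]) ** 3 / 6.0
          return out

      N = 4096                                             # lookup-table resolution (arbitrary)
      TABLE = cubic_bspline(np.linspace(0.0, 2.0, N, endpoint=False))

      def kernel_from_table(t):
          """Approximate B(t) by nearest-entry lookup instead of re-evaluating the cubic."""
          idx = np.clip((np.abs(t) * N / 2.0).astype(int), 0, N - 1)
          return TABLE[idx]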

  18. FEMHD: An adaptive finite element method for MHD and edge modelling

    SciTech Connect

    Strauss, H.R.

    1995-07-01

    This paper describes FEMHD, an adaptive finite element MHD code, which is applied in a number of different ways to model MHD behavior and edge plasma phenomena in a diverted tokamak. The code uses an unstructured triangular mesh in 2D and wedge-shaped mesh elements in 3D. The code has been adapted to examine neutral and charged particle dynamics in the plasma scrape-off region, and has been extended into a full MHD-particle code.

  19. Sierra Toolkit computational mesh conceptual model.

    SciTech Connect

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-03-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  20. Visualization of 3D Geological Data using COLLADA and KML

    NASA Astrophysics Data System (ADS)

    Choi, Yosoon; Um, Jeong-Gi; Park, Myong-Ho

    2013-04-01

    This study presents a method to visualize 3D geological data using COLLAborative Design Activity (COLLADA, an open standard XML schema for establishing interactive 3D applications) and Keyhole Markup Language (KML, the XML-based scripting language of Google Earth). We used COLLADA files to represent different 3D geological data such as borehole, fence section, surface-based 3D volume and 3D grid by triangle meshes (a set of triangles connected by their common edges or corners). The COLLADA files were imported into the 3D render window of Google Earth using KML codes. An application to the Grosmont formation in Alberta, Canada showed that the combination of COLLADA and KML enables Google Earth to visualize 3D geological structures and properties.
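
    The KML side of such a workflow amounts to a small document whose <Model> element points at a COLLADA (.dae) file. A minimal generator is sketched below; the file names and coordinates are placeholders, not the Grosmont data set.

      kml = """<?xml version="1.0" encoding="UTF-8"?>
      <kml xmlns="http://www.opengis.net/kml/2.2">
        <Placemark>
          <name>Example geological model</name>
          <Model>
            <Location>
              <longitude>-112.5</longitude>
              <latitude>53.5</latitude>
              <altitude>0</altitude>
            </Location>
            <Link><href>example_model.dae</href></Link>
          </Model>
        </Placemark>
      </kml>"""

      with open("geology.kml", "w", encoding="utf-8") as fh:
          fh.write(kml)              # open the resulting file in Google Earth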

  1. Unstructured Polyhedral Mesh Thermal Radiation Diffusion

    SciTech Connect

    Palmer, T.S.; Zika, M.R.; Madsen, N.K.

    2000-07-27

    Unstructured mesh particle transport and diffusion methods are gaining wider acceptance as mesh generation, scientific visualization and linear solvers improve. This paper describes an algorithm that is currently being used in the KULL code at Lawrence Livermore National Laboratory to solve the radiative transfer equations. The algorithm employs a point-centered diffusion discretization on arbitrary polyhedral meshes in 3D. We present the results of a few test problems to illustrate the capabilities of the radiation diffusion module.

  2. PSH3D fast Poisson solver for petascale DNS

    NASA Astrophysics Data System (ADS)

    Adams, Darren; Dodd, Michael; Ferrante, Antonino

    2016-11-01

    Direct numerical simulation (DNS) of high Reynolds number, Re >= O(10^5), turbulent flows requires computational meshes of >= O(10^12) grid points and, thus, the use of petascale supercomputers. DNS often requires the solution of a Helmholtz (or Poisson) equation for pressure, which constitutes the bottleneck of the solver. We have developed a parallel solver of the Helmholtz equation in 3D, PSH3D. The numerical method underlying PSH3D combines a parallel 2D Fast Fourier transform in two spatial directions and a parallel linear solver in the third direction. For computational meshes up to 8192^3 grid points, our numerical results show that PSH3D scales up to at least 262k cores of Cray XT5 (Blue Waters). PSH3D has a peak performance 6× faster than 3D FFT-based methods when used with the 'partial-global' optimization, and for an 8192^3 mesh solves the Poisson equation in 1 sec using 128k cores. Also, we have verified that the use of PSH3D with the 'partial-global' optimization in our DNS solver does not reduce the accuracy of the numerical solution of the incompressible Navier-Stokes equations.
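
    For orientation, the fully periodic special case of such a Poisson solve is a pure Fourier diagonalization, sketched below on a single process. The hybrid 2D-FFT-plus-line-solve structure and the parallel decomposition that give PSH3D its scalability are not shown.

      import numpy as np

      def poisson_periodic(f, L=2.0 * np.pi):
          """Solve lap(u) = f with periodic BCs on a cubic box of side L (zero-mean solution)."""
          n = f.shape[0]
          k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
          kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
          k2 = kx ** 2 + ky ** 2 + kz ** 2
          k2[0, 0, 0] = 1.0                     # avoid division by zero for the mean mode
          u_hat = -np.fft.fftn(f) / k2
          u_hat[0, 0, 0] = 0.0                  # pin the arbitrary additive constant
          return np.real(np.fft.ifftn(u_hat))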

  3. FUN3D Manual: 12.5

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  4. FUN3D Manual: 12.4

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2014-01-01

    This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  5. FUN3D Manual: 12.6

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.; Thomas, James L.; Wood, William A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.6, in