An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform
NASA Astrophysics Data System (ADS)
Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra
2011-06-01
The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression have been done independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. The compression method used is decomposed into two stages: geometric encoding and topologic encoding. The proposed approach inserts a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The coarse mesh is marked using a robust mesh watermarking scheme. Inserting into the coarse mesh yields high robustness to several attacks. Finally, topologic encoding is applied to the marked coarse mesh to obtain the compressed mesh. Combining compression and watermarking makes it possible to detect the presence of the signature after the marked mesh has been compressed. In addition, it allows protected 3D meshes to be transferred at minimal size. Experiments and evaluations show that the proposed approach gives efficient results in terms of compression gain, invisibility and robustness of the signature against many attacks.
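The embed-between-stages idea can be sketched on a toy 1-D signal. This is purely illustrative: real 3-D meshes require a true mesh wavelet transform, and the quantization-index embedding below is a stand-in for the robust watermarking scheme used by the authors; all names and parameters are hypothetical.

```python
# Sketch of the pipeline on a toy 1-D "geometry" signal: wavelet-decompose,
# watermark the coarse component, then (conceptually) compress.

def haar_forward(x):
    """One Haar level: returns (coarse, detail) halves."""
    coarse = [(x[2*i] + x[2*i+1]) / 2.0 for i in range(len(x)//2)]
    detail = [(x[2*i] - x[2*i+1]) / 2.0 for i in range(len(x)//2)]
    return coarse, detail

def embed_bits(coarse, bits, step=0.5):
    """Quantization-index watermarking on the coarse component."""
    marked = []
    for c, b in zip(coarse, bits):
        q = round(c / step)
        if q % 2 != b:          # force quantizer parity to encode the bit
            q += 1
        marked.append(q * step)
    return marked

def extract_bits(coarse, n, step=0.5):
    return [round(c / step) % 2 for c in coarse[:n]]

signal = [3.1, 2.9, 5.0, 4.6, 1.2, 1.4, 7.7, 8.1]
coarse, detail = haar_forward(signal)
signature = [1, 0, 1, 1]
marked_coarse = embed_bits(coarse, signature)
# Topologic encoding (compression) would act on the marked coarse mesh here;
# the signature survives because it lives in the coarse component.
recovered = extract_bits(marked_coarse, len(signature))
assert recovered == signature
```

Because the signature is quantized into the coarse component, it is recoverable after any processing that preserves the coarse values to within half a quantization step.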
Shi, Weiwei; Anderson, Mark J; Tulkoff, Joshua B; Kennedy, Brook S; Boreyko, Jonathan B
2018-04-11
Fog harvesting is a useful technique for obtaining fresh water in arid climates. The wire meshes currently utilized for fog harvesting suffer from dual constraints: coarse meshes cannot efficiently capture microscopic fog droplets, whereas fine meshes suffer from clogging issues. Here, we design and fabricate fog harvesters comprising an array of vertical wires, which we call "fog harps". Under controlled laboratory conditions, the fog-harvesting rates for fog harps with three different wire diameters were compared to conventional meshes of equivalent dimensions. As expected for the mesh structures, the mid-sized wires exhibited the largest fog collection rate, with a drop-off in performance for the fine or coarse meshes. In contrast, the fog-harvesting rate continually increased with decreasing wire diameter for the fog harps due to efficient droplet shedding that prevented clogging. This resulted in a 3-fold enhancement in the fog-harvesting rate for the harp design compared to an equivalent mesh.
Hybrid discrete ordinates and characteristics method for solving the linear Boltzmann equation
NASA Astrophysics Data System (ADS)
Yi, Ce
With the capabilities of computer hardware and software increasing rapidly, deterministic methods to solve the linear Boltzmann equation (LBE) have attracted some attention for computational applications in both the nuclear engineering and medical physics fields. Among the various deterministic methods, the discrete ordinates method (SN) and the method of characteristics (MOC) are two of the most widely used. The SN method is the traditional approach to solving the LBE because of its stability and efficiency, while the MOC has advantages in treating complicated geometries. However, in 3-D problems requiring a dense discretization grid in phase space (i.e., a large number of spatial meshes, directions, or energy groups), both methods can suffer from the need for large amounts of memory and computation time. In our study, we developed a new hybrid algorithm by combining the two methods into one code, TITAN. The hybrid approach is specifically designed for application to problems containing low-scattering regions. A new serial 3-D time-independent transport code has been developed. Under the hybrid approach, the preferred method can be applied in different regions (blocks) within the same problem model. Since the characteristics method is numerically more efficient in low-scattering media, the hybrid approach uses a block-oriented characteristics solver in low-scattering regions, and a block-oriented SN solver in the remainder of the physical model. In the TITAN code, a physical problem model is divided into a number of coarse meshes (blocks) in Cartesian geometry. Either the characteristics solver or the SN solver can be chosen to solve the LBE within a coarse mesh. A coarse mesh can be filled with fine meshes or characteristic rays, depending on the solver assigned to it.
Furthermore, with its object-oriented programming paradigm and layered code structure, TITAN allows different individual spatial meshing schemes and angular quadrature sets for each coarse mesh. Two quadrature types (level-symmetric and Legendre-Chebyshev) along with ordinate splitting techniques (rectangular splitting and PN-TN splitting) are implemented. In the SN solver, we apply a memory-efficient 'front-line' style paradigm to handle the fine-mesh interface fluxes. In the characteristics solver, we have developed a novel 'backward' ray-tracing approach, in which a bi-linear interpolation procedure is used on the incoming boundaries of a coarse mesh. A CPU-efficient scattering kernel is shared by both solvers within the source iteration scheme. Angular and spatial projection techniques are developed to transfer the angular fluxes on the interfaces of coarse meshes with different discretization grids. The performance of the hybrid algorithm is tested on a number of benchmark problems in both the nuclear engineering and medical physics fields, among them the Kobayashi benchmark problems and a computational tomography (CT) device model. We also developed an extra sweep procedure with a fictitious quadrature technique to calculate angular fluxes along directions of interest. The technique is applied to a single photon emission computed tomography (SPECT) phantom model to simulate SPECT projection images. The accuracy and efficiency of the TITAN code are demonstrated in these benchmarks, along with its scalability. A modified version of the characteristics solver is integrated into the PENTRAN code and tested within the parallel engine of PENTRAN. The limitations of the hybrid algorithm are also studied.
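To make the SN side of such a hybrid concrete, here is a minimal one-group, slab-geometry discrete ordinates solver with source iteration (diamond differencing, S4 quadrature, vacuum boundaries). It is a textbook-style sketch, not TITAN's implementation; all parameter values are illustrative.

```python
def sn_slab(n_cells=50, width=10.0, sigma_t=1.0, c=0.5, q=1.0, tol=1e-8):
    """One-group SN source iteration in a slab: diamond difference, vacuum BCs."""
    dx = width / n_cells
    # S4 Gauss-Legendre angles and weights on [-1, 1] (weights sum to 2)
    mus = [-0.8611363116, -0.3399810436, 0.3399810436, 0.8611363116]
    wts = [0.3478548451, 0.6521451549, 0.6521451549, 0.3478548451]
    sigma_s = c * sigma_t
    phi = [0.0] * n_cells
    for _ in range(1000):
        src = [0.5 * (sigma_s * p + q) for p in phi]       # isotropic emission
        phi_new = [0.0] * n_cells
        for mu, w in zip(mus, wts):
            order = range(n_cells) if mu > 0 else range(n_cells - 1, -1, -1)
            psi_in, a = 0.0, abs(mu) / dx                  # vacuum boundary
            for i in order:
                psi_c = (src[i] + 2 * a * psi_in) / (2 * a + sigma_t)
                psi_in = 2 * psi_c - psi_in                # diamond relation
                phi_new[i] += w * psi_c
        done = max(abs(x - y) for x, y in zip(phi_new, phi)) < tol
        phi = phi_new
        if done:
            break
    return phi

phi = sn_slab()
# scalar flux peaks mid-slab, below the infinite-medium limit q/(sigma_t - sigma_s) = 2
```

With scattering ratio c = 0.5 the source iteration converges quickly; a characteristics solver would replace the diamond-difference cell update with ray-traced attenuation along each direction.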
Refreshing Music: Fog Harvesting with Harps
NASA Astrophysics Data System (ADS)
Shi, Weiwei; Anderson, Mark; Kennedy, Brook; Boreyko, Jonathan
2017-11-01
Fog harvesting is a useful technique for obtaining fresh water in arid climates. The wire meshes currently utilized for fog harvesting suffer from dual constraints: coarse meshes cannot efficiently capture fog, while fine meshes suffer from clogging issues. Here, we design a new type of fog harvester comprising an array of vertical wires, which we call "fog harps." To investigate the water collection efficiency, three fog harps were designed with different wire diameters (254 μm, 508 μm and 1.30 mm) but the same pitch-to-diameter ratio of 2. For comparison, three different-sized meshes were purchased with equivalent dimensions. As expected for the mesh structures, the mid-sized wires performed the best, with a drop-off in performance for the fine or coarse meshes. In contrast, the fog-harvesting rate continually increased with decreasing wire diameter for the fog harps, due to their low hysteresis, which prevented droplet clogging. This resulted in a 3-fold enhancement in the fog-harvesting rate for the harp form factor compared to the mesh. The lack of a performance ceiling for the harps suggests that even greater enhancements could be achieved by scaling down to yet smaller sizes.
Mesh size effects on assessments of planktonic hydrozoan abundance and assemblage structure
NASA Astrophysics Data System (ADS)
Nogueira Júnior, Miodeli; Pukanski, Luis Eduardo de M.; Souza-Conceição, José M.
2015-04-01
The choice of an appropriate mesh size is paramount to accurately quantifying planktonic assemblages; however, no such information is available for hydrozoans. Here, planktonic hydrozoan abundance and assemblage structure were compared using 200 and 500 μm meshes at Babitonga estuary (S Brazil) over an annual cycle. Species richness and Shannon-Wiener diversity were higher in the 200 μm mesh, while evenness was typically higher in the 500 μm mesh. Assemblage structure was significantly different between meshes (PERMANOVA, P < 0.05; n = 72 pairs of samples) regarding both taxa and size composition. These discrepancies are due to significant underestimation of small hydromedusae by the coarse mesh, such as Obelia spp., young Liriope tetraphylla, Podocoryna loyola and others. Yet, larger taxa such as Eucheilota maculata and adult L. tetraphylla were more abundant in the coarse mesh on some occasions, and others such as Blackfordia virginica and Muggiaea kochi were similarly represented in both meshes. The overall collection efficiency of the coarse mesh (CE500) was 14.4%, with monthly averages between 1.6% and 43.0%, in July (winter) and January (summer) respectively. Differences between the meshes were size-dependent; CE500 was ~0.3% for hydrozoans sizing < 0.5 mm, ~21% for those between 1 and 2 mm, ~56% for those between 2 and 4 mm, and nearly 100% for larger ones, reaching up to 312% for hydrozoans > 8 mm in October. These results suggest that both meshes have their drawbacks and that the best choice depends on the objectives of each study. Nevertheless, species richness, total abundances and most taxa were better represented by the 200 μm mesh, suggesting that it is more appropriate for quantitatively sampling planktonic hydrozoan assemblages.
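The collection-efficiency figure used above is simply the coarse-net catch expressed as a percentage of the fine-net catch; a one-line helper makes the arithmetic explicit (the sample abundances below are illustrative, not the study's data):

```python
def collection_efficiency(coarse_abundance, fine_abundance):
    """CE in percent: coarse-mesh catch as a fraction of the fine-mesh catch."""
    return 100.0 * coarse_abundance / fine_abundance

# e.g. a taxon caught at 3.2 ind/m^3 by the 500 um net vs 22.2 by the 200 um net
assert round(collection_efficiency(3.2, 22.2), 1) == 14.4
```

Values above 100% (as reported for hydrozoans > 8 mm) indicate the coarse mesh caught more of a taxon than the fine mesh, e.g. through net avoidance of the finer gear by large, mobile animals.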
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.
2000-01-01
Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrates that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
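The space-filling-curve partitioning idea can be sketched generically: map each cell's integer coordinates to a Morton (Z-order) key, sort, and cut the resulting 1-D ordering into equal chunks. This illustrates the general technique only, not the partitioner implemented in the solver.

```python
def morton3(x, y, z, bits=10):
    """Interleave the bits of integer cell coordinates into a Morton (Z-order) key."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (3*b) | ((y >> b) & 1) << (3*b + 1) \
             | ((z >> b) & 1) << (3*b + 2)
    return key

def partition(cells, n_parts):
    """Sort cells along the SFC, then cut the 1-D ordering into equal chunks."""
    ordered = sorted(cells, key=lambda c: morton3(*c))
    size = -(-len(ordered) // n_parts)          # ceiling division
    return [ordered[i:i+size] for i in range(0, len(ordered), size)]

cells = [(x, y, z) for x in range(4) for y in range(4) for z in range(4)]
parts = partition(cells, 4)
assert sum(len(p) for p in parts) == 64 and len(parts) == 4
```

Because cells adjacent along the curve tend to be adjacent in space, each chunk is a spatially compact subdomain, which is what makes the on-the-fly decomposition cheap to recompute after adaptation.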
Coarse mesh and one-cell block inversion based diffusion synthetic acceleration
NASA Astrophysics Data System (ADS)
Kim, Kang-Seog
DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the SN transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Corner Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void-cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure is as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (one-cell block inversion) transport which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry is not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh can be employed to accelerate the transport equation.
Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent as fine mesh DSA in slab geometry. For x-y geometry, our coarse mesh DSA is very effective for thin and intermediate mesh spacings independent of the scattering ratio, but is not effective for purely scattering problems and high-aspect-ratio zoning. However, if the scattering ratio is less than about 0.95, this procedure is very effective for all mesh spacings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, W.
2012-07-01
Recent assessment results indicate that the coarse-mesh finite-difference method (FDM) gives consistently smaller percent differences in channel powers than the fine-mesh FDM when compared to the reference MCNP solution for CANDU-type reactors. However, there is an impression that the fine-mesh FDM should always give more accurate results than the coarse-mesh FDM in theory. To answer the question of whether the better performance of the coarse-mesh FDM for CANDU-type reactors was just a coincidence (cancellation of errors) or was caused by the use of heavy water or the use of lattice-homogenized cross sections for the cluster fuel geometry in the diffusion calculation, three benchmark problems were set up with three different fuel lattices: CANDU, HWR and PWR. These benchmark problems were then used to analyze the root cause of the better performance of the coarse-mesh FDM for CANDU-type reactors. The analyses confirm that the better performance of the coarse-mesh FDM for CANDU-type reactors is mainly caused by the use of lattice-homogenized cross sections for the sub-meshes of the cluster fuel geometry in the diffusion calculation. Based on the analyses, it is recommended to use 2 x 2 coarse-mesh FDM to analyze CANDU-type reactors when lattice-homogenized cross sections are used in the core analysis. (authors)
Vlyssides, Apostolos G; Mai, Sofia T H; Barampouti, Elli Maria P; Loukakis, Haralampos N
2009-07-01
To estimate the influence of gravel mesh size (fine and coarse) and vegetation (Phragmites and Arundo) on the efficiency of a reed bed, a pilot plant was included after the wastewater treatment plant of a cosmetic industry treatment system according to a 2² factorial experimental design. The maximum biochemical oxygen demand (BOD5), chemical oxygen demand (COD) and total phosphorus (TP) reduction was observed in the reactor where Phragmites and fine gravel were used. In the reactor with Phragmites and coarse gravel, the maximum total Kjeldahl nitrogen (TKN) and total suspended solids (TSS) reduction was observed. The maximum total solids reduction was measured in the reed bed which was filled with Arundo and coarse gravel. In conclusion, the treatment of a cosmetic industry's wastewater by reed beds as a tertiary treatment method is quite effective.
Adaptive Skin Meshes Coarsening for Biomolecular Simulation
Shi, Xinwei; Koehl, Patrice
2011-01-01
In this paper, we present efficient algorithms for generating hierarchical molecular skin meshes with decreasing size and guaranteed quality. Our algorithms generate a sequence of coarse meshes for both the surfaces and the bounded volumes. Each coarser surface mesh is adaptive to the surface curvature and maintains the topology of the skin surface with guaranteed mesh quality. The corresponding tetrahedral mesh conforms to the interface surface mesh and contains high-quality tetrahedra that decompose both the interior of the molecule and the surrounding region (enclosed in a sphere). Our hierarchical tetrahedral meshes have a number of advantages that will facilitate fast and accurate multigrid PDE solvers. Firstly, the quality of both the surface triangulations and the tetrahedral meshes is guaranteed. Secondly, the interface in the tetrahedral mesh is an accurate approximation of the molecular boundary; in particular, all the boundary points lie on the skin surface. Thirdly, our meshes are Delaunay meshes. Finally, the meshes are adaptive to the geometry. PMID:21779137
NASA Astrophysics Data System (ADS)
Chen, Hongju; Yu, Hao; Liu, Guangxing
2016-12-01
Selection of a net with a suitable mesh size is a key concern in the quantitative assessment of zooplankton, which is crucial to understanding pelagic ecosystem processes. This study compared the copepod collecting efficiency of three commonly used plankton nets, namely, the China standard coarse net (505 μm mesh), the China standard fine net (77 μm), and the WP-2 net (200 μm). The experiment was performed at six stations in the Bohai Sea during the autumn of 2012. The coarse net substantially under-sampled small individuals (body widths < 672 μm) and led to the lowest species number in each tow, whereas the fine net collected all small copepod species but failed to collect rare species. The WP-2 net appeared to be a compromise between the two other nets, collecting both small copepods and rare species. The abundance of copepods collected by the coarse net (126.4 ± 86.5 ind m-3) was one to two orders of magnitude lower than that by the WP-2 net (5802.4 ± 2595.4 ind m-3), and the value of the fine net (11117.0 ± 4563.4 ind m-3) was nearly twice that of the WP-2 net. The abundance of large copepods (i.e., adult Calanus sinicus) in the three nets showed no significant differences, but the abundance of small copepods declined with increasing mesh size. The difference in abundance resulted from the under-sampling of small copepods with body widths < 672 μm and < 266 μm by the coarse and WP-2 nets, respectively.
A Parallel Cartesian Approach for External Aerodynamics of Vehicles with Complex Geometry
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.
2001-01-01
This workshop paper presents the current status in the development of a new approach for the solution of the Euler equations on Cartesian meshes with embedded boundaries in three dimensions on distributed and shared memory architectures. The approach uses adaptively refined Cartesian hexahedra to fill the computational domain. Where these cells intersect the geometry, they are cut by the boundary into arbitrarily shaped polyhedra which receive special treatment by the solver. The presentation documents a newly developed multilevel upwind solver based on a flexible domain-decomposition strategy. One novel aspect of the work is its use of space-filling curves (SFC) for memory-efficient on-the-fly parallelization, dynamic re-partitioning and automatic coarse mesh generation. Within each subdomain the approach employs a variety of reordering techniques so that relevant data are on the same page in memory, permitting high performance on cache-based processors. Details of the on-the-fly SFC-based partitioning are presented, as are construction rules for the automatic coarse mesh generation. After describing the approach, the paper uses model problems and 3-D configurations to both verify and validate the solver. The model problems demonstrate that second-order accuracy is maintained despite the presence of the irregular cut-cells in the mesh. In addition, it examines both parallel efficiency and convergence behavior. These investigations demonstrate a parallel speed-up in excess of 28 on 32 processors of an SGI Origin 2000 system and confirm that mesh partitioning has no effect on convergence behavior.
Array-based, parallel hierarchical mesh refinement algorithms for unstructured meshes
Ray, Navamita; Grindeanu, Iulian; Zhao, Xinglin; ...
2016-08-18
In this paper, we describe an array-based hierarchical mesh refinement capability through uniform refinement of unstructured meshes for efficient solution of PDEs using finite element methods and multigrid solvers. A multi-degree, multi-dimensional and multi-level framework is designed to generate the nested hierarchies from an initial coarse mesh that can be used for a variety of purposes, such as in multigrid solvers/preconditioners, to conduct solution convergence and verification studies, and to improve overall parallel efficiency by decreasing I/O bandwidth requirements (by loading smaller meshes and refining in memory). We also describe a high-order boundary reconstruction capability that can be used to project the new points after refinement using high-order approximations instead of linear projection, in order to minimize and provide more control over geometrical errors introduced by curved boundaries. The capability is developed under the parallel unstructured mesh framework "Mesh Oriented dAtaBase" (MOAB; Tautges et al. (2004)). We describe the underlying data structures and algorithms to generate such hierarchies in parallel and present numerical results for computational efficiency and effect on mesh quality. Furthermore, we also present results to demonstrate the applicability of the developed capability to study convergence properties of different point projection schemes for various mesh hierarchies and to a multigrid finite-element solver for elliptic problems.
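Uniform hierarchical refinement of a triangle mesh (each level splitting every triangle into four via edge midpoints) can be sketched as follows; this is a minimal 2-D illustration of the nested-hierarchy idea, not MOAB's array-based implementation.

```python
def refine(verts, tris):
    """One uniform refinement level: split each triangle into four children."""
    verts = list(verts)                 # copy so each level owns its vertices
    midpoint = {}
    def mid(a, b):
        key = (min(a, b), max(a, b))    # shared edges get one midpoint
        if key not in midpoint:
            xa, ya = verts[a]; xb, yb = verts[b]
            verts.append(((xa + xb) / 2, (ya + yb) / 2))
            midpoint[key] = len(verts) - 1
        return midpoint[key]
    new_tris = []
    for a, b, c in tris:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, new_tris

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
tris = [(0, 1, 2)]
hierarchy = [(verts, tris)]
for _ in range(3):                      # three nested levels
    hierarchy.append(refine(*hierarchy[-1]))
assert [len(t) for _, t in hierarchy] == [1, 4, 16, 64]
```

Because each level's vertices are a superset of the previous level's, the hierarchy is nested, which is exactly what multigrid transfer operators and convergence studies require.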
Three dimensional unstructured multigrid for the Euler equations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1991-01-01
The three dimensional Euler equations are solved on unstructured tetrahedral meshes using a multigrid strategy. The driving algorithm consists of an explicit vertex-based finite element scheme, which employs an edge-based data structure to assemble the residuals. The multigrid approach employs a sequence of independently generated coarse and fine meshes to accelerate the convergence to steady-state of the fine grid solution. Variables, residuals and corrections are passed back and forth between the various grids of the sequence using linear interpolation. The addresses and weights for interpolation are determined in a preprocessing stage using an efficient graph traversal algorithm. The preprocessing operation is shown to require a negligible fraction of the CPU time required by the overall solution procedure, while gains in overall solution efficiency greater than an order of magnitude are demonstrated on meshes containing up to 350,000 vertices. Solutions using globally regenerated fine meshes as well as adaptively refined meshes are given.
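The precompute-once, reuse-many-times transfer described above can be illustrated in 1-D: store, for every fine node, the coarse interval index and the two linear weights, then apply them on each transfer. The helper names are hypothetical.

```python
import bisect

def precompute_transfer(coarse_x, fine_x):
    """For each fine node, store (coarse interval index, left weight, right weight)."""
    table = []
    for xf in fine_x:
        j = min(max(bisect.bisect_right(coarse_x, xf) - 1, 0), len(coarse_x) - 2)
        t = (xf - coarse_x[j]) / (coarse_x[j+1] - coarse_x[j])
        table.append((j, 1.0 - t, t))
    return table

def prolongate(coarse_vals, table):
    """Apply the precomputed addresses and weights: pure lookups, no searching."""
    return [w0 * coarse_vals[j] + w1 * coarse_vals[j+1] for j, w0, w1 in table]

coarse_x = [0.0, 0.5, 1.0]
fine_x = [0.0, 0.25, 0.5, 0.75, 1.0]
table = precompute_transfer(coarse_x, fine_x)
vals = prolongate([0.0, 1.0, 0.0], table)
assert vals == [0.0, 0.5, 1.0, 0.5, 0.0]
```

On tetrahedral meshes the same pattern holds with four barycentric weights per node and an enclosing-element address, which is what the graph traversal locates in the preprocessing stage.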
Predicting mesh density for adaptive modelling of the global atmosphere.
Weller, Hilary
2009-11-28
The shallow water equations are solved using a mesh of polygons on the sphere, which adapts infrequently to the predicted future solution. Infrequent mesh adaptation reduces the cost of adaptation and load-balancing and will thus allow for more accurate mapping on adaptation. We simulate the growth of a barotropically unstable jet adapting the mesh every 12 h. Using an adaptation criterion based largely on the gradient of the vorticity leads to a mesh with around 20 per cent of the cells of a uniform mesh that gives equivalent results. This is a similar proportion to previous studies of the same test case with mesh adaptation every 1-20 min. The prediction of the mesh density involves solving the shallow water equations on a coarse mesh in advance of the locally refined mesh in order to estimate where features requiring higher resolution will grow, decay or move to. The adaptation criterion consists of two parts: that resolved on the coarse mesh, and that which is not resolved and so is passively advected on the coarse mesh. This combination leads to a balance between resolving features controlled by the large-scale dynamics and maintaining fine-scale features.
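A toy version of a gradient-of-vorticity refinement criterion on a coarse 2-D grid might look like the following (central differences, with a user-chosen threshold; purely illustrative, not the paper's criterion):

```python
def flag_cells(vorticity, dx, threshold):
    """Flag interior cells where |grad(vorticity)| exceeds a threshold."""
    ny, nx = len(vorticity), len(vorticity[0])
    flags = [[False] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            gx = (vorticity[j][i+1] - vorticity[j][i-1]) / (2 * dx)
            gy = (vorticity[j+1][i] - vorticity[j-1][i]) / (2 * dx)
            flags[j][i] = (gx * gx + gy * gy) ** 0.5 > threshold
    return flags

# a single vorticity spike: cells bordering the spike see a steep gradient
field = [[0, 0, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
flags = flag_cells(field, dx=1.0, threshold=0.4)
assert flags[1][1] and flags[2][2] and not flags[1][2]
```

In the paper's setting, evaluating such a criterion on the coarse predictive solve (rather than on the current refined mesh) is what lets refinement be placed where features will be 12 h later, not only where they are now.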
An engineering closure for heavily under-resolved coarse-grid CFD in large applications
NASA Astrophysics Data System (ADS)
Class, Andreas G.; Yu, Fujiang; Jordan, Thomas
2016-11-01
Even though high-performance computation allows a very detailed description of a wide range of scales in scientific computations, engineering simulations used for design studies commonly resolve only the large scales, thus speeding up simulation time. The coarse-grid CFD (CGCFD) methodology is developed for flows with repeated flow patterns, as often observed in heat exchangers or porous structures. It is proposed to use the inviscid Euler equations on a very coarse numerical mesh. This coarse mesh need not conform to the geometry in all details. To reinstate the physics on all smaller scales, cheap subgrid models are employed. Subgrid models are systematically constructed by analyzing well-resolved generic representative simulations. By varying the flow conditions in these simulations, correlations are obtained. These provide, for each individual coarse-mesh cell, a volume force vector and a volume porosity; moreover, for all vertices, surface porosities are derived. CGCFD is related to the immersed boundary method, as both exploit volume forces and non-body-conformal meshes. Yet CGCFD differs with respect to the coarser mesh and the use of the Euler equations. We will describe the methodology based on a simple test case and the application of the method to a 127-pin wire-wrap fuel bundle.
NASA Technical Reports Server (NTRS)
Patera, Anthony T.; Paraschivoiu, Marius
1998-01-01
We present a finite element technique for the efficient generation of lower and upper bounds to outputs which are linear functionals of the solutions to the incompressible Stokes equations in two space dimensions; the finite element discretization is effected by Crouzeix-Raviart elements, the discontinuous pressure approximation of which is central to our approach. The bounds are based upon the construction of an augmented Lagrangian: the objective is a quadratic "energy" reformulation of the desired output; the constraints are the finite element equilibrium equations (including the incompressibility constraint), and the intersubdomain continuity conditions on velocity. Appeal to the dual max-min problem for appropriately chosen candidate Lagrange multipliers then yields inexpensive bounds for the output associated with a fine-mesh discretization; the Lagrange multipliers are generated by exploiting an associated coarse-mesh approximation. In addition to the requisite coarse-mesh calculations, the bound technique requires solution only of local subdomain Stokes problems on the fine-mesh. The method is illustrated for the Stokes equations, in which the outputs of interest are the flowrate past, and the lift force on, a body immersed in a channel.
NONLINEAR MULTIGRID SOLVER EXPLOITING AMGe COARSE SPACES WITH APPROXIMATION PROPERTIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christensen, Max La Cour; Villa, Umberto E.; Engsig-Karup, Allan P.
The paper introduces a nonlinear multigrid solver for mixed finite element discretizations based on the Full Approximation Scheme (FAS) and element-based Algebraic Multigrid (AMGe). The main motivation to use FAS for unstructured problems is the guaranteed approximation property of the AMGe coarse spaces that were developed recently at Lawrence Livermore National Laboratory. These give the ability to derive stable and accurate coarse nonlinear discretization problems. Previous attempts (including ones with the original AMGe method, [5, 11]) were less successful due to the lack of such good approximation properties of the coarse spaces. With coarse spaces with approximation properties, our FAS approach on unstructured meshes should be as powerful/successful as FAS on geometrically refined meshes. For comparison, Newton's method and Picard iterations with an inner state-of-the-art linear solver are compared to FAS on a nonlinear saddle point problem with applications to porous media flow. It is demonstrated that FAS is faster than Newton's method and Picard iterations for the experiments considered here. Due to the guaranteed approximation properties of our AMGe, the coarse spaces are very accurate, providing a solver with the potential for mesh-independent convergence on general unstructured meshes.
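The Full Approximation Scheme itself can be illustrated with a two-grid cycle for a simple 1-D nonlinear problem, -u'' + u^3 = f (nonlinear Gauss-Seidel smoothing, injection restriction, linear prolongation). This is a generic FAS sketch, unrelated to the AMGe coarse spaces discussed above.

```python
import math

def apply_op(u, i, h):
    """Nonlinear operator A(u)_i = -u'' + u^3 at interior node i (finite differences)."""
    return (-u[i-1] + 2*u[i] - u[i+1]) / h**2 + u[i]**3

def residual_norm(u, f, h):
    return max(abs(f[i] - apply_op(u, i, h)) for i in range(1, len(u) - 1))

def smooth(u, f, h, sweeps):
    """Nonlinear Gauss-Seidel: a few Newton steps on each scalar equation."""
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            for _ in range(3):
                g = apply_op(u, i, h) - f[i]
                u[i] -= g / (2.0 / h**2 + 3 * u[i]**2)
    return u

def fas_cycle(u, f, h):
    """One two-grid Full Approximation Scheme cycle."""
    smooth(u, f, h, 2)
    u2, h2 = u[::2], 2 * h                     # restrict the state by injection
    f2 = [0.0] * len(u2)
    for i in range(1, len(u2) - 1):
        r = f[2*i] - apply_op(u, 2*i, h)       # restricted fine residual
        f2[i] = apply_op(u2, i, h2) + r        # FAS coarse right-hand side
    v2 = smooth(list(u2), f2, h2, 30)          # approximately solve coarse problem
    corr = [v - w for v, w in zip(v2, u2)]
    for i in range(1, len(u) - 1):             # linear prolongation of correction
        u[i] += corr[i//2] if i % 2 == 0 else 0.5 * (corr[i//2] + corr[i//2 + 1])
    return smooth(u, f, h, 2)

n, h = 17, 1.0 / 16
x = [i * h for i in range(n)]
# manufactured solution u(x) = sin(pi x), so f = pi^2 sin(pi x) + sin^3(pi x)
f = [math.pi**2 * math.sin(math.pi * xi) + math.sin(math.pi * xi)**3 for xi in x]
u = [0.0] * n
r0 = residual_norm(u, f, h)
for _ in range(8):
    fas_cycle(u, f, h)
```

Note the hallmark of FAS: the coarse problem is the full nonlinear problem with a modified right-hand side, so the coarse grid carries the full solution rather than just a correction.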
Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...
2018-01-30
This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields, with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction-diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on an embedded domain with a structured mesh, and then the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data between the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·10⁹ unknowns.
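The sample-by-solving-an-SPDE idea can be sketched in 1-D: draw white noise, then solve (κ² − d²/dx²)u = W on a grid, which yields a field with smooth, Matérn-like correlation. A minimal finite-difference version follows (Thomas algorithm for the tridiagonal solve; all parameters illustrative, not the paper's mixed FEM discretization):

```python
import random, math

def sample_grf_1d(n=200, kappa=10.0, seed=1):
    """Draw a correlated field by solving (kappa^2 - d^2/dx^2) u = W
    with a white-noise right-hand side (Dirichlet-like ends)."""
    rng = random.Random(seed)
    h = 1.0 / n
    w = [rng.gauss(0.0, 1.0) / math.sqrt(h) for _ in range(n)]  # scaled white noise
    # tridiagonal system: (kappa^2 + 2/h^2) on the diagonal, -1/h^2 off it
    a = [-1.0 / h**2] * n
    b = [kappa**2 + 2.0 / h**2] * n
    c = [-1.0 / h**2] * n
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, n):
        m = a[i] / b[i-1]
        b[i] -= m * c[i-1]
        w[i] -= m * w[i-1]
    u = [0.0] * n
    u[-1] = w[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (w[i] - c[i] * u[i+1]) / b[i]
    return u

field = sample_grf_1d()
# neighbouring values are strongly correlated; values ~1/kappa apart much less so
```

The correlation length is set by 1/κ, so one linear solve per sample replaces an expensive dense factorization of the covariance matrix, which is what makes the approach scale to large 3-D meshes.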
A high-order multiscale finite-element method for time-domain acoustic-wave modeling
NASA Astrophysics Data System (ADS)
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-05-01
Accurate and efficient wave-equation modeling is vital for many applications in areas such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid cells in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are determined not only by the order of the Legendre polynomials but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave-equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
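As a simplified first-order 1D analogue of this construction (the paper's basis functions are high-order and tied to Gauss-Lobatto-Legendre points; this sketch only shows how a basis function solved from a local elliptic problem bends to follow the fine-scale coefficient):

```python
import numpy as np

def multiscale_basis_1d(a_cell):
    # Solve the local elliptic problem (a(x) * phi'(x))' = 0 on one coarse
    # element with phi = 0 at the left end and phi = 1 at the right end.
    # In 1D the flux a * phi' is constant, so phi is proportional to the
    # cumulative sum of 1/a over the fine cells: the basis function adapts
    # to carry the fine-scale coefficient into the coarse space.
    inv = 1.0 / np.asarray(a_cell, dtype=float)
    phi = np.concatenate([[0.0], np.cumsum(inv)])
    return phi / phi[-1]
```

For a homogeneous coefficient this reduces to the standard linear hat function; heterogeneity in the medium is what distinguishes the multiscale basis from a purely polynomial one.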
Hybrid seine for full fish community collections
McKenna, James E.; Waldt, Emily M.; Abbett, Ross; David, Anthony; Snyder, James
2013-01-01
Seines are simple and effective fish collection gears, but the net mesh size influences how well the catch represents the fish community. We designed and tested a hybrid seine with a dual-mesh bag (1/4″ and 1/8″) and compared the fish assemblage collected by each mesh. The fine-mesh net retained three times as many fish and collected more species (as many as eight additional species), including representatives of several rare species, than did the coarser mesh. The dual-mesh bag permitted us to compare both the sizes and the species retained by each layer and to develop species-specific abundance correction factors, which allowed comparison of catches with the coarse-mesh seine used for earlier collections. The results indicate that a hybrid seine with coarse-mesh wings and a fine-mesh bag would enhance future studies of fish communities, especially when small-bodied fishes or early life stages are the research focus.
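The species-specific correction factors described above reduce to simple catch ratios. The counts below are illustrative placeholders, not the study's data:

```python
# Correction factor per species: total catch across both bag layers divided
# by the catch the coarse 1/4-inch layer alone retained. Multiplying a
# historical coarse-mesh count by this factor estimates the dual-mesh catch.
coarse_catch = {"darter": 12, "shiner": 40, "madtom": 3}   # hypothetical counts
fine_catch = {"darter": 30, "shiner": 95, "madtom": 1}     # hypothetical counts

correction = {
    species: (coarse_catch[species] + fine_catch[species]) / coarse_catch[species]
    for species in coarse_catch
}
```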
Small herbivores suppress algal accumulation on Agatti atoll, Indian Ocean
NASA Astrophysics Data System (ADS)
Cernohorsky, Nicole H.; McClanahan, Timothy R.; Babu, Idrees; Horsák, Michal
2015-12-01
Despite large herbivorous fish being generally accepted as the main group responsible for preventing algal accumulation on coral reefs, few studies have experimentally examined the relative importance of herbivore size on algal communities. This study used exclusion cages with two different mesh sizes (1 × 1 cm and 6 × 6 cm) to investigate the impact of different-sized herbivores on algal accumulation rates on the shallow (<2 m) back-reef of Agatti atoll, Lakshadweep. The fine-mesh cages excluded all visible herbivores, which had rapid and lasting effects on the benthic communities, and, after 127 d of deployment, there was a visible and significant increase in algae (mainly macroalgae) with algal volume being 13 times greater than in adjacent open areas. The coarse-mesh cages excluded larger fishes (>8 cm body depth) while allowing smaller fishes to access the plots. In contrast to the conclusions of most previous studies, the exclusion of large herbivores had no significant effect on the accumulation of benthic algae and the amount of algae present within the coarse-mesh cages was relatively consistent throughout the experimental period (around 50% coverage and 1-2 mm height). The difference in algal accumulation between the fine-mesh and coarse-mesh cages appears to be related to the actions of small individuals from 12 herbivorous fish species (0.17 ind. m⁻² and 7.7 g m⁻²) that were able to enter through the coarse mesh. Although restricted to a single habitat, these results suggest that when present in sufficient densities and diversity, small herbivorous fishes can prevent the accumulation of algal biomass on coral reefs.
An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation
Gao, Kai; Fu, Shubin; Chung, Eric T.
2018-02-13
The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet it is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of the large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational cost.
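The payoff of the approach is that the final solve involves only a small system. As an illustration of that last step only (a plain 1D finite-difference Helmholtz discretization with Dirichlet boundaries, not the multiscale basis construction itself; the grid size `n` is whatever the basis quality permits):

```python
import numpy as np

def helmholtz_1d(n, k, f):
    # Discretize u''(x) + k^2 u(x) = f(x) on (0, 1) with u(0) = u(1) = 0
    # using n interior points, then solve the resulting small dense system.
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(np.full(n, -2.0))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2 + k**2 * np.eye(n)
    return x, np.linalg.solve(A, f(x))
```

A manufactured solution such as u = sin(pi x) with k = 2, for which f = (4 - pi^2) sin(pi x), recovers the exact solution to second-order accuracy.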
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, J E; Vassilevski, P S; Woodward, C S
This paper provides extensions of an element agglomeration AMG method to nonlinear elliptic problems discretized by the finite element method on general unstructured meshes. The method constructs coarse discretization spaces and corresponding coarse nonlinear operators as well as their Jacobians. We introduce both standard (fairly quasi-uniformly coarsened) and non-standard (coarsened away) coarse meshes and respective finite element spaces. We use both kinds of spaces in FAS-type coarse subspace correction (or Schwarz) algorithms. Their performance is illustrated on a number of model problems. The coarsened-away spaces seem to perform better than the standard spaces for problems with nonlinearities in the principal part of the elliptic operator.
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For the calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gains, the EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems based on the linearized model resolution matrix was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess.
Additionally, it is demonstrated that the adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemisphere object with sufficient resolution starting from a coarse discretization and refining mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.
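The adaptive inverse-mesh refinement step described above (refine where the spatial variation of the imaged parameter is largest) can be sketched for a generic cell-neighbor structure. The indicator and the top-fraction rule are illustrative choices, not the paper's exact criterion:

```python
import numpy as np

def mark_for_refinement(values, cell_neighbors, frac=0.25):
    # Spatial-variation indicator per cell: the largest absolute difference
    # of the imaged parameter against any neighboring cell. Cells whose
    # indicator falls in the top `frac` fraction are flagged for refinement.
    indicator = np.array([
        max(abs(values[c] - values[nb]) for nb in nbs) if nbs else 0.0
        for c, nbs in enumerate(cell_neighbors)
    ])
    cutoff = np.quantile(indicator, 1.0 - frac)
    return [c for c in range(len(values)) if indicator[c] >= cutoff]
```

On a 1D chain of cells with a sharp jump in the parameter, only the cells adjacent to the jump are flagged.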
NASA Technical Reports Server (NTRS)
Fasanella, Edwin L.; Jackson, Karen E.; Lyle, Karen H.; Spellman, Regina L.
2006-01-01
A study was performed to examine the influence of varying mesh density on an LS-DYNA simulation of a rectangular-shaped foam projectile impacting the space shuttle leading edge Panel 6. The shuttle leading-edge panels are fabricated of reinforced carbon-carbon (RCC) material. During the study, nine cases were executed with all possible combinations of coarse, baseline, and fine meshes of the foam and panel. For each simulation, the same material properties and impact conditions were specified and only the mesh density was varied. In the baseline model, the shell elements representing the RCC panel are approximately 0.2-in. on edge, whereas the foam elements are about 0.5-in. on edge. The element nominal edge-length for the baseline panel was halved to create a fine panel (0.1-in. edge length) mesh and doubled to create a coarse panel (0.4-in. edge length) mesh. In addition, the element nominal edge-length of the baseline foam projectile was halved (0.25-in. edge length) to create a fine foam mesh and doubled (1.0-in. edge length) to create a coarse foam mesh. The initial impact velocity of the foam was 775 ft/s. The simulations were executed in LS-DYNA for 6 ms of simulation time. Contour plots of resultant panel displacement and effective stress in the foam were compared at four discrete time intervals. Also, time-history responses of internal and kinetic energy of the panel, kinetic and hourglass energy of the foam, and resultant contact force were plotted to determine the influence of mesh density.
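The cost implications of these mesh variations follow directly from the edge lengths: halving the nominal edge length multiplies the element count by roughly 2^d for d-dimensional elements. A quick arithmetic check:

```python
def element_count_scale(edge_ratio, dim):
    # Factor by which the element count changes when the nominal edge
    # length is multiplied by `edge_ratio`, for dim-dimensional elements
    # (dim = 2 for shell elements, dim = 3 for solid elements).
    return (1.0 / edge_ratio) ** dim

# Fine panel mesh (0.1-in. from 0.2-in. shells): roughly 4x as many elements.
# Coarse foam mesh (1.0-in. from 0.5-in. solids): roughly 1/8 as many elements.
```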
NASA Technical Reports Server (NTRS)
Lutz, R. J.; Spar, J.
1978-01-01
The Hansen atmospheric model was used to compute five monthly forecasts (October 1976 through February 1977). The comparison is based on an energetics analysis, meridional and vertical profiles, error statistics, and prognostic and observed mean maps. The monthly mean model simulations suffer from several defects. There is, in general, no skill in the simulation of the monthly mean sea-level pressure field, and only marginal skill is indicated for the 850 mb temperatures and 500 mb heights. The coarse-mesh model appears to generate a less satisfactory monthly mean simulation than the finer mesh GISS model.
NASA Astrophysics Data System (ADS)
Karimi-Fard, M.; Durlofsky, L. J.
2016-10-01
A comprehensive framework for modeling flow in porous media containing thin, discrete features, which could be high-permeability fractures or low-permeability deformation bands, is presented. The key steps of the methodology are mesh generation, fine-grid discretization, upscaling, and coarse-grid discretization. Our specialized gridding technique combines a set of intersecting triangulated surfaces by constructing approximate intersections using existing edges. This procedure creates a conforming mesh of all surfaces, which defines the internal boundaries for the volumetric mesh. The flow equations are discretized on this conforming fine mesh using an optimized two-point flux finite-volume approximation. The resulting discrete model is represented by a list of control-volumes with associated positions and pore-volumes, and a list of cell-to-cell connections with associated transmissibilities. Coarse models are then constructed by the aggregation of fine-grid cells, and the transmissibilities between adjacent coarse cells are obtained using flow-based upscaling procedures. Through appropriate computation of fracture-matrix transmissibilities, a dual-continuum representation is obtained on the coarse scale in regions with connected fracture networks. The fine and coarse discrete models generated within the framework are compatible with any connectivity-based simulator. The applicability of the methodology is illustrated for several two- and three-dimensional examples. In particular, we consider gas production from naturally fractured low-permeability formations, and transport through complex fracture networks. In all cases, highly accurate solutions are obtained with significant model reduction.
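The transmissibility lists at the heart of this connectivity-based representation follow the standard two-point flux construction. A minimal sketch, assuming isotropic permeability and an orthogonal grid for simplicity:

```python
def half_trans(perm, area, dist):
    # Half-transmissibility of one control volume toward a shared face:
    # permeability times face area divided by the center-to-face distance.
    return perm * area / dist

def connection_trans(t1, t2):
    # Two-point flux transmissibility of a cell-to-cell connection:
    # harmonic combination of the two half-transmissibilities.
    return t1 * t2 / (t1 + t2)

def upscale_series(trans_list):
    # Flow-based upscaling of a 1D chain of fine-grid connections into one
    # coarse connection: transmissibilities in series combine harmonically.
    return 1.0 / sum(1.0 / t for t in trans_list)
```

The harmonic combination is why a single low-permeability fine cell (a deformation band, say) dominates the upscaled transmissibility of its coarse connection.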
NASA Technical Reports Server (NTRS)
Jentink, Thomas Neil; Usab, William J., Jr.
1990-01-01
An explicit multigrid algorithm was written to solve the Euler and Navier-Stokes equations with special consideration given to the coarse mesh boundary conditions. These are formulated in a manner consistent with the interior solution, utilizing forcing terms to prevent coarse-mesh truncation error from affecting the fine-mesh solution. A four-stage hybrid Runge-Kutta scheme is used to advance the solution in time, and multigrid convergence is further enhanced by using local time-stepping and implicit residual smoothing. Details of the algorithm are presented along with a description of Jameson's standard multigrid method and a new approach to formulating the multigrid equations.
An adaptive embedded mesh procedure for leading-edge vortex flows
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Beer, Michael A.; Law, Glenn W.
1989-01-01
A procedure for solving the conical Euler equations on an adaptively refined mesh is presented, along with a method for determining which cells to refine. The solution procedure is a central-difference cell-vertex scheme. The adaptation procedure is made up of a parameter on which the refinement decision is based, and a method for choosing a threshold value of the parameter. The refinement parameter is a measure of mesh-convergence, constructed by comparison of locally coarse- and fine-grid solutions. The threshold for the refinement parameter is based on the curvature of the curve relating the number of cells flagged for refinement to the value of the refinement threshold. Results for three test cases are presented. The test problem is that of a delta wing at angle of attack in a supersonic free-stream. The resulting vortices and shocks are captured efficiently by the adaptive code.
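The threshold-selection rule above (pick the refinement threshold at the point of largest curvature of the flagged-cell count versus threshold curve) can be sketched with discrete second differences. The toy data in the test are illustrative:

```python
import numpy as np

def threshold_from_curvature(param, candidates):
    # Count the cells whose refinement parameter exceeds each candidate
    # threshold, then choose the threshold where the discrete curvature
    # (second difference) of the count-vs-threshold curve is largest,
    # i.e., at the knee separating background cells from genuinely
    # flagged cells.
    counts = np.array([(param > t).sum() for t in candidates], dtype=float)
    curvature = np.abs(np.diff(counts, 2))
    return candidates[int(np.argmax(curvature)) + 1]
```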
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Fasanella, Edwin L.; Lyle, Karen H.; Spellman, Regina L.
2004-01-01
A study was performed to examine the influence of varying mesh density on an LS-DYNA simulation of a rectangular-shaped foam projectile impacting the space shuttle leading edge Panel 6. The shuttle leading-edge panels are fabricated of reinforced carbon-carbon (RCC) material. During the study, nine cases were executed with all possible combinations of coarse, baseline, and fine meshes of the foam and panel. For each simulation, the same material properties and impact conditions were specified and only the mesh density was varied. In the baseline model, the shell elements representing the RCC panel are approximately 0.2-in. on edge, whereas the foam elements are about 0.5-in. on edge. The element nominal edge-length for the baseline panel was halved to create a fine panel (0.1-in. edge length) mesh and doubled to create a coarse panel (0.4-in. edge length) mesh. In addition, the element nominal edge-length of the baseline foam projectile was halved (0.25-in. edge length) to create a fine foam mesh and doubled (1.0-in. edge length) to create a coarse foam mesh. The initial impact velocity of the foam was 775 ft/s. The simulations were executed in LS-DYNA version 960 for 6 ms of simulation time. Contour plots of resultant panel displacement and effective stress in the foam were compared at five discrete time intervals. Also, time-history responses of internal and kinetic energy of the panel, kinetic and hourglass energy of the foam, and resultant contact force were plotted to determine the influence of mesh density. As a final comparison, the model with a fine panel and fine foam mesh was executed with slightly different material properties for the RCC. For this model, the average degraded properties of the RCC were replaced with the maximum degraded properties. Similar comparisons of panel and foam responses were made for the average and maximum degraded models.
NASA Astrophysics Data System (ADS)
Hurtado, Daniel E.; Rojas, Guillermo
2018-04-01
Computer simulations constitute a powerful tool for studying the electrical activity of the human heart, but the computational effort remains prohibitively high. In order to recover accurate conduction velocities and wavefront shapes, the mesh size in linear element (Q1) formulations cannot exceed 0.1 mm. Here we propose a novel non-conforming finite-element formulation for the non-linear cardiac electrophysiology problem that results in accurate wavefront shapes and lower mesh dependence in the conduction velocity, while retaining the same number of global degrees of freedom as Q1 formulations. As a result, coarser discretizations of cardiac domains can be employed in simulations without significant loss of accuracy, thus reducing the overall computational effort. We demonstrate the applicability of our formulation in biventricular simulations using a coarse mesh size of ~1 mm, and show that the activation wave pattern closely follows that obtained in fine-mesh simulations at a fraction of the computation time, thus improving the accuracy-efficiency trade-off of cardiac simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schunert, Sebastian; Wang, Yaqi; Gleicher, Frederick
This paper presents a flexible nonlinear diffusion acceleration (NDA) method that discretizes both the S_N transport equation and the diffusion equation using the discontinuous finite element method (DFEM). The method is flexible in that the diffusion equation can be discretized on a coarser mesh, with the only restriction being that it is nested within the transport mesh, and the FEM shape function orders of the two equations can differ. The consistency of the transport and diffusion solutions at convergence is defined by using a projection operator mapping the transport solution into the diffusion FEM space. The diffusion weak form is based on the modified incomplete interior penalty (MIP) diffusion DFEM discretization, extended by volumetric drift, interior face, and boundary closure terms. In contrast to commonly used coarse mesh finite difference (CMFD) methods, the presented NDA method uses a fully FEM-discretized diffusion equation for acceleration. Suitable projection and prolongation operators arise naturally from the FEM framework. Via Fourier analysis and numerical experiments for a one-group, fixed-source problem, the following properties of the NDA method are established for structured quadrilateral meshes: (1) the method is unconditionally stable and effective in the presence of mild material heterogeneities if the same mesh and identical shape functions, either of the bilinear or biquadratic type, are used; (2) the NDA method remains unconditionally stable in the presence of strong heterogeneities; (3) the NDA method with bilinear elements extends the range of effectiveness and stability by a factor of two when compared to CMFD if a coarser diffusion mesh is selected. In addition, the method is tested on the C5G7 multigroup eigenvalue problem using coarse and fine mesh acceleration. Finally, while NDA does not offer an advantage over CMFD for fine mesh acceleration, it reduces the iteration count required for convergence by almost a factor of two in the case of coarse mesh acceleration.
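The projection from the transport space onto a nested coarser diffusion space can be illustrated with the simplest possible restriction (cell averaging over nested cells; the actual method uses an FEM projection operator arising from the weak form):

```python
import numpy as np

def project_to_coarse(fine_vals, ratio):
    # Restrict a cellwise fine-mesh field onto a nested coarse mesh by
    # averaging the `ratio` fine cells contained in each coarse cell,
    # a simple stand-in for the L2 projection between nested FEM spaces.
    fine_vals = np.asarray(fine_vals, dtype=float)
    return fine_vals.reshape(-1, ratio).mean(axis=1)
```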
Multiscale modeling and simulation for nano/micro materials
NASA Astrophysics Data System (ADS)
Wang, Xianqiao
Continuum descriptions and atomic descriptions used to be two distinct approaches in the modeling and simulation community. Science and technology have become so advanced that our understanding of many physical phenomena involves the concepts of both, so our goal now is to build a bridge that lets atoms and continua communicate with each other. Micromorphic theory (MMT) envisions a material body as a continuous collection of deformable particles, each of which possesses finite size and inner structure. It is considered the most successful top-down formulation of a two-level continuum model for bridging the gap between the micro and macro levels, and MMT can therefore be expected to unveil many new classes of physical phenomena that fall beyond classical field theories. In this work, the constitutive equations for a generalized micromorphic thermoviscoelastic solid and a generalized micromorphic fluid have been formulated. To enlarge the domain of applicability of MMT from nano and micro to macro scales, we take a bottom-up approach to re-derive the generalized atomistic field theory (AFT) comprehensively and completely and establish the relationship between AFT and MMT. The finite element (FE) method is then implemented to pursue numerical solutions of the governing equations derived in AFT. When the finest mesh is used, i.e., the size of the FE mesh is equal to the lattice constant of the material, the computational model becomes identical to a molecular dynamics simulation. When a coarse mesh is used, the resulting model is a coarse-grained model in which the majority of the degrees of freedom are eliminated and the computational cost is largely reduced. When the coarse mesh and the finest mesh exist concurrently, i.e., the finest mesh is used in the critical regions and the coarser mesh is used in the far field, it leads naturally to a concurrent atomistic/continuum model.
Atomic-scale, coarse-grained, and concurrent atomistic/continuum simulations have demonstrated the potential capability of AFT to simulate many grand-challenge problems in nano/micro physics, and have shown that AFT has the advantages of both the atomic model and MMT. AFT has therefore accomplished its mission of bridging the gap between continuum mechanics and atomic physics.
Fully implicit moving mesh adaptive algorithm
NASA Astrophysics Data System (ADS)
Serazio, C.; Chacon, L.; Lapenta, G.
2006-10-01
In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former is best handled with fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter requires grid adaptivity for efficiency. Moving-mesh grid adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and of considerably difficult numerical treatment. Not surprisingly, fully coupled, implicit approaches where the grid and the physics equations are solved simultaneously are rare in the literature, and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving mesh methods that is feasible for multidimensional geometries. Crucial elements are the development of an effective multilevel treatment of the grid equation, and a robust, rigorous error estimator. For the latter, we explore the effectiveness of a coarse grid correction error estimator, which faithfully reproduces spatial truncation errors for conservative equations. We will show that the moving mesh approach is competitive vs. uniform grids both in accuracy (due to adaptivity) and efficiency. Results for a variety of models in 1D and 2D geometries will be presented. L. Chacón, G. Lapenta, J. Comput. Phys., 212 (2), 703 (2006); G. Lapenta, L. Chacón, J. Comput. Phys., accepted (2006)
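The coarse grid correction error estimator mentioned above is, in FAS terms, the relative truncation error tau = A_H(R u_h) - R(A_h u_h). A 1D sketch, assuming a second-difference Laplacian and injection restriction (a simplified setting, not the paper's moving-mesh operators):

```python
import numpy as np

def laplacian(u, h):
    # Second-difference operator at interior points, with homogeneous
    # Dirichlet values assumed just outside both ends.
    padded = np.concatenate([[0.0], u, [0.0]])
    return (padded[:-2] - 2.0 * padded[1:-1] + padded[2:]) / h**2

def restrict(u):
    # Injection onto a coarse grid keeping every other interior point.
    return u[1::2]

def tau_estimate(u_fine, h_fine):
    # tau = A_H(R u) - R(A_h u): apply the coarse operator to the restricted
    # field and subtract the restricted fine-operator result. For a smooth
    # field this approximates the relative truncation error between grids.
    return laplacian(restrict(u_fine), 2.0 * h_fine) - restrict(laplacian(u_fine, h_fine))
```

For u = sin(pi x) the discrete operators act as known multiples of u, so the estimator can be checked against the analytic eigenvalue difference of the two grids.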
Automatic generation of endocardial surface meshes with 1-to-1 correspondence from cine-MR images
NASA Astrophysics Data System (ADS)
Su, Yi; Teo, S.-K.; Lim, C. W.; Zhong, L.; Tan, R. S.
2015-03-01
In this work, we develop an automatic method to generate a set of 4D, 1-to-1 corresponding surface meshes of the left ventricle (LV) endocardial surface which are motion-registered over the whole cardiac cycle. These 4D meshes have 1-to-1 point correspondence over the entire set and are suitable for advanced computational processing, such as shape analysis, motion analysis and finite element modelling. The inputs to the method are the set of 3D LV endocardial surface meshes of the different frames/phases of the cardiac cycle. Each of these meshes is reconstructed independently from border-delineated MR images, and they have no correspondence in terms of number of vertices/points and mesh connectivity. To generate point correspondence, the first frame of the LV mesh model is used as a template to be matched to the shape of the meshes in the subsequent phases. There are two stages in the mesh correspondence process: (1) a coarse matching phase, and (2) a fine matching phase. In the coarse matching phase, an initial rough matching between the template and the target is achieved using a radial basis function (RBF) morphing process. The feature points on the template and target meshes are automatically identified using a 16-segment nomenclature of the LV. In the fine matching phase, a progressive mesh projection process is used to conform the rough estimate to fit the exact shape of the target. In addition, an optimization-based smoothing process is used to achieve superior mesh quality and continuous point motion.
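A minimal sketch of the RBF-morphing idea used in such a coarse matching phase, assuming Gaussian radial basis functions and feature sets already in 1-to-1 correspondence (the paper's 16-segment feature identification is not reproduced; the kernel and its width are illustrative choices):

```python
import numpy as np

def rbf_morph(template_pts, src_features, dst_features, eps=1.0):
    """Deform template vertices with a Gaussian-RBF displacement field that
    carries each source feature point exactly onto its target counterpart.
    eps controls the kernel width and is an illustrative assumption."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    # RBF weights: solve K w = displacement at the feature points
    w = np.linalg.solve(kernel(src_features, src_features),
                        dst_features - src_features)
    return template_pts + kernel(template_pts, src_features) @ w
```

Because the Gaussian kernel matrix is positive definite for distinct feature points, the interpolation is exact at the features and smooth elsewhere, which is what makes it suitable for a rough initial alignment.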
Dukerschein, J.T.; Gent, R.; Sauer, J.
1996-01-01
We evaluated the potential loss of target benthic macroinvertebrates from coarse-mesh field wash down of samples through a 1.18-mm mesh sieve nested on a 0.60-mm mesh sieve. Visible target organisms (midges, mayflies, and fingernail clams) in the 1.18-mm mesh sieve were removed from the sample and enumerated in the field. The entire contents of both sieves were preserved for subsequent laboratory enumeration under 4X magnification. Percent recoveries from each treatment were based on total intact organisms found in all sieves. Percent recovery for fingernail clams found in the field (31%) was lower than for mayflies (79%) and midges (88%). Laboratory enumeration of organisms retained by the 1.18-mm sieve yielded additional fingernail clams (to total 74% recovered in the field and lab), mayflies (to total 89%), and midges (to total 91%). If the 1.18-mm sieve is used alone in the field, it is adequate to monitor mayflies, midges >1 cm, and adult fingernail clams greater than or equal to 5.0 mm shell length.
NASA Astrophysics Data System (ADS)
Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad
2014-03-01
Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution. 
Any existing modelling technique can be included into our framework of mesh decoupling and adaptive sampling to accelerate large-scale 3-D EM inversions.
Convergence study of global meshing on enamel-cement-bracket finite element model
NASA Astrophysics Data System (ADS)
Samshuri, S. F.; Daud, R.; Rojan, M. A.; Basaruddin, K. S.; Abdullah, A. B.; Ariffin, A. K.
2017-09-01
This paper presents a meshing convergence analysis of a finite element (FE) model used to simulate enamel-cement-bracket fracture. Three different materials involved in the interface fracture are considered. The complex behavior of interface fracture due to stress concentration is the reason a well-constructed meshing strategy is needed. In FE analysis, meshing size is a critical factor that influences the accuracy and computational time of the analysis. The convergence study uses a meshing scheme involving a critical area (CA) and a non-critical area (NCA) to ensure that optimum meshing sizes are acquired for this FE model. For NCA meshing, the areas of interest are the back of the enamel, the bracket ligature groove and the bracket wing. For CA meshing, the areas of interest are the enamel area close to the cement layer, the cement layer and the bracket base. The constant NCA meshing sizes tested are 1 and 0.4; the constant CA meshing sizes tested are 0.4 and 0.1. Manipulated variables are randomly selected and must abide by the rule that the NCA meshing size must be larger than the CA meshing size. This study employs first principal stresses due to the brittle failure nature of the materials used. The best meshing sizes are selected according to a convergence error analysis. Results show that constant-CA meshes are more stable than constant-NCA meshes. A constant CA meshing of 0.05 was then tested to assess the accuracy of smaller meshing; however, the result was unpromising, as the errors increased. Thus, a constant CA of 0.1 with NCA meshes of 0.15 to 0.3 gives the most stable meshing, as the errors in this region are lowest. A convergence test was conducted on three selected coarse, medium and fine meshes over the NCA range of 0.15 to 0.3 with the CA mesh held constant at 0.1. The result shows that at the coarse mesh of 0.3, the error is 0.0003%, compared to the 3% acceptable error. Hence, the global meshing converges at a CA meshing size of 0.1 and an NCA size of 0.15 for this model.
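The convergence error analysis described here amounts to comparing a result of interest between successive mesh refinements against an acceptable error (3% in this study). A small illustrative sketch, with hypothetical stress values rather than the paper's data:

```python
def convergence_errors(results):
    """Percent change of a scalar result (e.g. a first principal stress)
    between successive mesh refinement levels."""
    return [abs(b - a) / abs(b) * 100.0 for a, b in zip(results, results[1:])]

def first_converged(sizes, results, tol_pct=3.0):
    """Coarsest mesh size whose change from the previous refinement level
    falls within the acceptable error."""
    for size, err in zip(sizes[1:], convergence_errors(results)):
        if err <= tol_pct:
            return size
    return sizes[-1]

# Hypothetical first principal stresses for decreasing mesh sizes
sizes = [0.4, 0.3, 0.2, 0.15, 0.1]
stresses = [120.0, 131.0, 133.0, 133.2, 133.25]
```

With these hypothetical numbers, the 0.3 mesh still changes the stress by about 8%, while the 0.2 mesh changes it by only about 1.5%, so 0.2 would be accepted under a 3% tolerance.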
Triangle Geometry Processing for Surface Modeling and Cartesian Grid Generation
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J. (Inventor); Melton, John E. (Inventor); Berger, Marsha J. (Inventor)
2002-01-01
Cartesian mesh generation is accomplished for component based geometries, by intersecting components subject to mesh generation to extract wetted surfaces with a geometry engine using adaptive precision arithmetic in a system which automatically breaks ties with respect to geometric degeneracies. During volume mesh generation, intersected surface triangulations are received to enable mesh generation with cell division of an initially coarse grid. The hexahedral cells are resolved, preserving the ability to directionally divide cells which are locally well aligned.
Triangle geometry processing for surface modeling and cartesian grid generation
Aftosmis, Michael J [San Mateo, CA; Melton, John E [Hollister, CA; Berger, Marsha J [New York, NY
2002-09-03
Cartesian mesh generation is accomplished for component based geometries, by intersecting components subject to mesh generation to extract wetted surfaces with a geometry engine using adaptive precision arithmetic in a system which automatically breaks ties with respect to geometric degeneracies. During volume mesh generation, intersected surface triangulations are received to enable mesh generation with cell division of an initially coarse grid. The hexahedral cells are resolved, preserving the ability to directionally divide cells which are locally well aligned.
A multilevel correction adaptive finite element method for Kohn-Sham equation
NASA Astrophysics Data System (ADS)
Hu, Guanghui; Xie, Hehu; Xu, Fei
2018-02-01
In this paper, an adaptive finite element method based on the multilevel correction technique is proposed for solving the Kohn-Sham equation. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, while the finite element space is successively improved by solving derived boundary value problems on a series of adaptively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is avoided effectively, and the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.
Modeling of heterogeneous elastic materials by the multiscale hp-adaptive finite element method
NASA Astrophysics Data System (ADS)
Klimczak, Marek; Cecot, Witold
2018-01-01
We present an enhancement of the multiscale finite element method (MsFEM) obtained by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with fast-oscillating elasticity coefficients. No assumption of periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases the efficiency of computation. In this paper, details of the modified MsFEM are presented, and a numerical test performed on a Fichera corner domain validates the proposed approach.
Segmented Domain Decomposition Multigrid For 3-D Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Celestina, M. L.; Adamczyk, J. J.; Rubin, S. G.
2001-01-01
A Segmented Domain Decomposition Multigrid (SDDMG) procedure was developed for three-dimensional viscous flow problems as they apply to turbomachinery flows. The procedure divides the computational domain into a coarse mesh comprised of uniformly spaced cells. To resolve smaller length scales such as the viscous layer near a surface, segments of the coarse mesh are subdivided into a finer mesh. This is repeated until adequate resolution of the smallest relevant length scale is obtained. Multigrid is used to communicate information between the different grid levels. To test the procedure, simulation results will be presented for a compressor and turbine cascade. These simulations are intended to show the ability of the present method to generate grid independent solutions. Comparisons with data will also be presented. These comparisons will further demonstrate the usefulness of the present work for they allow an estimate of the accuracy of the flow modeling equations independent of error attributed to numerical discretization.
Mesoscopic-microscopic spatial stochastic simulation with automatic system partitioning.
Hellander, Stefan; Hellander, Andreas; Petzold, Linda
2017-12-21
The reaction-diffusion master equation (RDME) is a model that allows for efficient on-lattice simulation of spatially resolved stochastic chemical kinetics. Compared to off-lattice hard-sphere simulations with Brownian dynamics or Green's function reaction dynamics, the RDME can be orders of magnitude faster if the lattice spacing can be chosen coarse enough. However, strongly diffusion-controlled reactions mandate a very fine mesh resolution for acceptable accuracy. It is common that reactions in the same model differ in their degree of diffusion control and therefore require different degrees of mesh resolution. This renders mesoscopic simulation inefficient for systems with multiscale properties. Mesoscopic-microscopic hybrid methods address this problem by resolving the most challenging reactions with a microscale, off-lattice simulation. However, all methods to date require manual partitioning of a system, effectively limiting their usefulness as "black-box" simulation codes. In this paper, we propose a hybrid simulation algorithm with automatic system partitioning based on indirect a priori error estimates. We demonstrate the accuracy and efficiency of the method on models of diffusion-controlled networks in 3D.
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Murman, S. M.; Kwak, Dochan (Technical Monitor)
2002-01-01
The proposed paper will present recent extensions in the development of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries. The paper will focus on extensions of the basic method to include solution adaptation, time-dependent flow simulation, and arbitrary rigid domain motion. The parallel multilevel method makes use of on-the-fly parallel domain decomposition to achieve extremely good scalability on large numbers of processors, and is coupled with an automatic coarse mesh generation algorithm for efficient processing by a multigrid smoother. Numerical results are presented demonstrating parallel speed-ups of up to 435 on 512 processors. Solution-based adaptation may be keyed off truncation error estimates using tau-extrapolation or a variety of feature-detection-based refinement parameters. The multigrid method is extended to time-dependent flows through the use of a dual-time approach. The extension to rigid domain motion uses an Arbitrary Lagrangian-Eulerian (ALE) formulation, and results will be presented for a variety of two- and three-dimensional example problems with both simple and complex geometry.
Reducing numerical costs for core wide nuclear reactor CFD simulations by the Coarse-Grid-CFD
NASA Astrophysics Data System (ADS)
Viellieber, Mathias; Class, Andreas G.
2013-11-01
Traditionally, complete nuclear reactor core simulations are performed with subchannel analysis codes that rely on experimental and empirical input. The Coarse-Grid-CFD (CGCFD) intends to replace the experimental or empirical input with CFD data. The reactor core consists of repetitive flow patterns, allowing the general approach of creating a parametrized model for one segment and composing many of those to obtain the entire reactor simulation. The method is based on a detailed and well-resolved CFD simulation of one representative segment. From this simulation we extract so-called parametrized volumetric forces which close an otherwise strongly under-resolved, coarsely meshed model of a complete reactor setup. While the formulation so far accounts for forces created internally in the fluid, other effects, e.g. obstruction and flow deviation through spacers and wire wraps, still need to be accounted for if the geometric details are not represented in the coarse mesh. These are modelled with an Anisotropic Porosity Formulation (APF). This work focuses on the application of the CGCFD to a complete reactor core setup and on accomplishing the parametrization of the volumetric forces.
Shrink-wrapped isosurface from cross sectional images
Choi, Y. K.; Hahn, J. K.
2010-01-01
This paper addresses a new surface reconstruction scheme for approximating the isosurface from a set of tomographic cross-sectional images. Unlike the well-known Marching Cubes (MC) algorithm, our method does not extract the iso-density surface (isosurface) directly from the voxel data but calculates the iso-density points (isopoints) first. After building a coarse initial mesh that approximates the ideal isosurface by the cell-boundary representation, it metamorphoses the mesh into the final isosurface by a relaxation scheme, called the shrink-wrapping process. Compared with the MC algorithm, our method is robust and does not create any cracks on the surface. Furthermore, since it is possible to utilize many additional isopoints during the surface reconstruction process by extending the adjacency definition, the resulting surface can theoretically be better in quality than that of the MC algorithm. According to experiments, it proves to be very robust and efficient for isosurface reconstruction from cross-sectional images. PMID:20703361
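A toy 2-D analogue of the shrink-wrapping relaxation (an editorial sketch, not the paper's 3-D algorithm): a closed polyline stands in for the coarse cell-boundary mesh, points on a circle stand in for the isopoints, and each iteration attracts vertices toward their nearest isopoint and then smooths along the polyline. The attraction/smoothing weights are illustrative assumptions.

```python
import numpy as np

def shrink_wrap(verts, isopoints, alpha=0.7, beta=0.2, iters=30):
    """Relax a closed polyline toward a cloud of iso-density points:
    attraction toward the nearest isopoint, then Laplacian smoothing
    along the polyline. Parameter values are illustrative."""
    v = verts.copy()
    for _ in range(iters):
        d2 = ((v[:, None, :] - isopoints[None, :, :]) ** 2).sum(-1)
        target = isopoints[d2.argmin(axis=1)]
        v = (1.0 - alpha) * v + alpha * target         # attraction step
        mid = 0.5 * (np.roll(v, 1, axis=0) + np.roll(v, -1, axis=0))
        v = (1.0 - beta) * v + beta * mid              # smoothing step
    return v
```

Starting from a coarse polyline well outside a circle of isopoints, the relaxation pulls the polyline onto the circle (up to a slight inward bias from the smoothing term).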
NASA Astrophysics Data System (ADS)
Skamarock, W. C.
2015-12-01
One of the major problems in atmospheric model applications is the representation of deep convection within the models; explicit simulation of deep convection on fine meshes performs much better than sub-grid parameterized deep convection on coarse meshes. Unfortunately, the high cost of explicit convective simulation has meant it has only been used to down-scale global simulations in weather prediction and regional climate applications, typically using traditional one-way interactive nesting technology. We have been performing real-time weather forecast tests using a global non-hydrostatic atmospheric model (the Model for Prediction Across Scales, MPAS) that employs a variable-resolution unstructured Voronoi horizontal mesh (nominally hexagons) to span hydrostatic to nonhydrostatic scales. The smoothly varying Voronoi mesh eliminates many downscaling problems encountered using traditional one- or two-way grid nesting. Our test weather forecasts cover two periods - the 2015 Spring Forecast Experiment conducted at the NOAA Storm Prediction Center during the month of May in which we used a 50-3 km mesh, and the PECAN field program examining nocturnal convection over the US during the months of June and July in which we used a 15-3 km mesh. An important aspect of this modeling system is that the model physics be scale-aware, particularly the deep convection parameterization. These MPAS simulations employ the Grell-Freitas scale-aware convection scheme. Our test forecasts show that the scheme produces a gradual transition in the deep convection, from the deep unstable convection being handled entirely by the convection scheme on the coarse mesh regions (dx > 15 km), to the deep convection being almost entirely explicit on the 3 km NA region of the meshes. We will present results illustrating the performance of critical aspects of the MPAS model in these tests.
Improving finite element results in modeling heart valve mechanics.
Earl, Emily; Mohammadi, Hadi
2018-06-01
Finite element analysis is a well-established computational tool which can be used for the analysis of soft tissue mechanics. Due to the structural complexity of the leaflet tissue of the heart valve, the currently available finite element models do not adequately represent the leaflet tissue. A method of addressing this issue is to implement computationally expensive finite element models, characterized by precise constitutive models including high-order and high-density mesh techniques. In this study, we introduce a novel numerical technique that enhances the results obtained from coarse mesh finite element models to provide accuracy comparable to that of fine mesh finite element models while maintaining a relatively low computational cost. Introduced in this study is a method by which the computational expense required to solve linear and nonlinear constitutive models, commonly used in heart valve mechanics simulations, is reduced while continuing to account for large and infinitesimal deformations. This continuum model is developed based on the least square algorithm procedure coupled with the finite difference method adhering to the assumption that the components of the strain tensor are available at all nodes of the finite element mesh model. The suggested numerical technique is easy to implement, practically efficient, and requires less computational time compared to currently available commercial finite element packages such as ANSYS and/or ABAQUS.
NASA Technical Reports Server (NTRS)
Chang, Chia-Bo
1994-01-01
This study is intended to examine the impact of synthetic relative humidity on the model simulation of a mesoscale convective storm environment. The synthetic relative humidity is derived from National Weather Service surface observations and from non-conventional sources including aircraft, radar, and satellite observations. The latter sources provide mesoscale data of very high spatial and temporal resolution. The synthetic humidity data are used to complement the National Weather Service rawinsonde observations. It is believed that a realistic representation of the initial moisture field in a mesoscale model is critical for the simulation of thunderstorm development and of the formation of non-convective clouds, as well as their effects on the surface energy budget. The impact will be investigated in a real-data case study using the mesoscale atmospheric simulation system developed by Mesoscale Environmental Simulations Operations, Inc. The system consists of objective analysis and initialization codes, and coarse-mesh and fine-mesh dynamic prediction models. Both models are three-dimensional, primitive-equation models containing the essential moist physics for simulating and forecasting mesoscale convective processes in the atmosphere. The modeling system is currently implemented at the Applied Meteorology Unit, Kennedy Space Center. Two procedures involving the synthetic relative humidity to define the model initial moisture fields are considered. It is proposed to perform several short-range (approximately 6 hours) comparative coarse-mesh simulation experiments with and without the synthetic data. They are aimed at revealing the model sensitivities, which should allow us both to refine the specification of the observational requirements and to develop more accurate and efficient objective analysis schemes.
The goal is to advance the MASS (Mesoscale Atmospheric Simulation System) modeling expertise so that the model output can provide reliable guidance for thunderstorm forecasting.
Garcia-Cantero, Juan J; Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis
2017-01-01
Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. 
All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, N. A.; Forget, B.
2012-07-01
The Discrete Generalized Multigroup (DGM) method uses discrete Legendre orthogonal polynomials to expand the energy dependence of the multigroup neutron transport equation. This allows a solution on a fine energy mesh to be approximated for a cost comparable to a solution on a coarse energy mesh. The DGM method is applied to an ultra-fine energy mesh (14,767 groups) to avoid using self-shielding methodologies without introducing the cost usually associated with such energy discretization. Results show DGM to converge to the reference ultra-fine solution after a small number of recondensation steps for multiple infinite medium compositions. (authors)
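The DGM expansion can be illustrated with a small sketch: an orthonormal discrete polynomial basis (built here by QR orthogonalization as a stand-in for discrete Legendre polynomials) expands a hypothetical fine-group flux within one coarse group, and a low-order truncation reconstructs it closely. The group count and flux shape are illustrative assumptions, not data from the paper.

```python
import numpy as np

def discrete_basis(n, order):
    """Orthonormal polynomial basis on n points, built by QR factorization
    of a Vandermonde matrix (a stand-in for discrete Legendre polynomials)."""
    x = np.arange(n, dtype=float)
    V = np.vander(x, order + 1, increasing=True)   # columns 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                         # orthonormal columns
    return Q

n = 8                                   # fine groups inside one coarse group
flux = np.exp(-0.3 * np.arange(n))      # hypothetical fine-group fluxes
P = discrete_basis(n, 3)
moments = P.T @ flux                    # DGM-style expansion moments
recon = P @ moments                     # low-order reconstruction
```

The zeroth moment carries the coarse-group (integral) information, while the higher moments recover the within-group shape, which is why a coarse-group-cost solve can approximate the fine-group solution.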
Electrostatic interactions in soft particle systems: mesoscale simulations of ionic liquids.
Wang, Yong-Lei; Zhu, You-Liang; Lu, Zhong-Yuan; Laaksonen, Aatto
2018-05-21
Computer simulations provide a unique insight into the microscopic details, molecular interactions and dynamic behavior responsible for many distinct physicochemical properties of ionic liquids. Due to the sluggish and heterogeneous dynamics and the long-ranged nanostructured nature of ionic liquids, coarse-grained meso-scale simulations provide an indispensable complement to detailed first-principles calculations and atomistic simulations allowing studies over extended length and time scales with a modest computational cost. Here, we present extensive coarse-grained simulations on a series of ionic liquids of the 1-alkyl-3-methylimidazolium (alkyl = butyl, heptyl-, and decyl-) family with Cl, [BF4], and [PF6] counterions. Liquid densities, microstructures, translational diffusion coefficients, and re-orientational motion of these model ionic liquid systems have been systematically studied over a wide temperature range. The addition of neutral beads in cationic models leads to a transition of liquid morphologies from dispersed apolar beads in a polar framework to that characterized by bi-continuous sponge-like interpenetrating networks in liquid matrices. Translational diffusion coefficients of both cations and anions decrease upon lengthening of the neutral chains in the cationic models and by enlarging molecular sizes of the anionic groups. Similar features are observed in re-orientational motion and time scales of different cationic models within the studied temperature range. The comparison of the liquid properties of the ionic systems with their neutral counterparts indicates that the distinctive microstructures and dynamical quantities of the model ionic liquid systems are intrinsically related to Coulombic interactions. 
Finally, we compared the computational efficiencies of three O(N log N) Ewald summation methods for calculating electrostatic interactions: the particle-particle particle-mesh method, the particle-mesh Ewald summation method, and an Ewald summation method based on a non-uniform fast Fourier transform technique. Coarse-grained simulations were performed using the GALAMOST and GROMACS packages, efficiently utilizing graphics processing unit hardware, on a set of extended [1-decyl-3-methylimidazolium][BF4] ionic liquid systems of up to 131 072 ion pairs.
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Thomas, James L.; Biedron, Robert T.; Diskin, Boris
2005-01-01
FMG3D (full multigrid 3 dimensions) is a pilot computer program that solves equations of fluid flow using a finite difference representation on a structured grid. Infrastructure exists for three dimensions but the current implementation treats only two dimensions. Written in Fortran 90, FMG3D takes advantage of the recursive subroutine feature, dynamic memory allocation, and structured-programming constructs of that language. FMG3D supports multi-block grids with three types of block-to-block interfaces: periodic, C-zero, and C-infinity. For all three types, grid points must match at interfaces. For periodic and C-infinity types, derivatives of grid metrics must be continuous at interfaces. The available equation sets are as follows: scalar elliptic equations, scalar convection equations, and the pressure-Poisson formulation of the Navier-Stokes equations for an incompressible fluid. All the equation sets are implemented with nonzero forcing functions to enable the use of user-specified solutions to assist in verification and validation. The equations are solved with a full multigrid scheme using a full approximation scheme to converge the solution on each succeeding grid level. Restriction to the next coarser mesh uses direct injection for variables and full weighting for residual quantities; prolongation of the coarse grid correction from the coarse mesh to the fine mesh uses bilinear interpolation; and prolongation of the coarse grid solution uses bicubic interpolation.
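The transfer operators described for FMG3D can be sketched in 2-D with NumPy; this is an illustrative reconstruction of the stated scheme (injection for variables, full weighting for residuals, bilinear prolongation of the correction), not the program's Fortran source, and the bicubic prolongation of the coarse-grid solution is omitted for brevity.

```python
import numpy as np

def inject(u):
    """Direct injection of a node-based variable to the next coarser mesh."""
    return u[::2, ::2]

def full_weighting(r):
    """Full weighting of a residual to the coarse mesh (9-point stencil at
    interior nodes; boundary nodes fall back to injection)."""
    rc = inject(r).copy()
    rc[1:-1, 1:-1] = (4.0 * r[2:-2:2, 2:-2:2]
                      + 2.0 * (r[1:-3:2, 2:-2:2] + r[3:-1:2, 2:-2:2]
                               + r[2:-2:2, 1:-3:2] + r[2:-2:2, 3:-1:2])
                      + (r[1:-3:2, 1:-3:2] + r[1:-3:2, 3:-1:2]
                         + r[3:-1:2, 1:-3:2] + r[3:-1:2, 3:-1:2])) / 16.0
    return rc

def prolong_bilinear(uc):
    """Bilinear interpolation of a coarse-grid correction to the fine mesh."""
    n = 2 * (uc.shape[0] - 1) + 1
    m = 2 * (uc.shape[1] - 1) + 1
    u = np.zeros((n, m))
    u[::2, ::2] = uc                                  # coincident nodes
    u[1::2, ::2] = 0.5 * (uc[:-1, :] + uc[1:, :])     # edge midpoints (rows)
    u[::2, 1::2] = 0.5 * (uc[:, :-1] + uc[:, 1:])     # edge midpoints (cols)
    u[1::2, 1::2] = 0.25 * (uc[:-1, :-1] + uc[:-1, 1:]
                            + uc[1:, :-1] + uc[1:, 1:])   # cell centers
    return u
```

Bilinear prolongation reproduces any linear field exactly, and full weighting preserves constants, which are the basic consistency checks for these transfer operators.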
CFD Simulation of a Wing-In-Ground-Effect UAV
NASA Astrophysics Data System (ADS)
Lao, C. T.; Wong, E. T. T.
2018-05-01
This paper reports a numerical analysis of a wing section used for a Wing-In-Ground-Effect (WIG) unmanned aerial vehicle (UAV). The wing geometry was created in SolidWorks, and the incompressible Reynolds-averaged Navier-Stokes (RANS) equations were solved with the Spalart-Allmaras turbulence model using the CFD software ANSYS FLUENT. In FLUENT, the Spalart-Allmaras model has been implemented to use wall functions when the mesh resolution is not sufficiently fine. This might make it the best choice for relatively crude simulations on coarse meshes where accurate turbulent flow computations are not critical. The results show that the lift coefficient and lift-to-drag ratio are substantially enhanced by ground effect. However, the moment coefficient shows inconsistency when the wing operates at very low altitude, reflecting the difficulty of stability control for WIG vehicles. A drag polar estimation based on the analysis also indicated that the Oswald (span) efficiency of the wing was improved by ground effect.
NASA Astrophysics Data System (ADS)
Abani, Neerav; Reitz, Rolf D.
2010-09-01
An advanced mixing model was applied to study engine emissions and combustion with different injection strategies, ranging from multiple injections and early injection to grouped-hole nozzle injection, in light- and heavy-duty diesel engines. The model was implemented in the KIVA-CHEMKIN engine combustion code and simulations were conducted at different mesh resolutions. The model was compared with the standard KIVA spray model that uses the Lagrangian-Drop and Eulerian-Fluid (LDEF) approach, and with a Gas Jet spray model that improves predictions of liquid sprays. A Vapor Particle Method (VPM) is introduced that accounts for sub-grid-scale mixing of fuel vapor and more accurately predicts the mixing of fuel vapor over a range of mesh resolutions. The fuel vapor is transported as particles until a certain distance from the nozzle is reached where the local jet half-width is adequately resolved by the local mesh scale. Within this distance the vapor particle is transported while releasing fuel vapor locally, as determined by a weighting factor. The VPM model more accurately predicts fuel-vapor penetrations for early-cycle injections and flame lift-off lengths for late-cycle injections. Engine combustion computations show that, compared to the standard KIVA and Gas Jet spray models, the VPM spray model improves predictions of in-cylinder pressure, heat release rate and engine emissions of NOx, CO and soot at coarse mesh resolutions. The VPM spray model is thus a good tool for efficiently investigating diesel engine combustion with practical mesh resolutions, thereby saving computer time.
NASA Astrophysics Data System (ADS)
Chung, N.; Suberkopp, K.
2005-05-01
The effect of shredder feeding on aquatic hyphomycete communities associated with submerged leaves was studied in two southern Appalachian headwater streams in North Carolina. Coarse and fine mesh litter bags containing red maple (Acer rubrum) leaves were placed in the nutrient-enriched stream and in the reference stream and were retrieved monthly. Both shredder feeding and nutrient enrichment enhanced breakdown rates. The breakdown rates of leaves in coarse mesh bags in the reference stream (k = 0.0275) and fine mesh bags in the nutrient enriched stream (k = 0.0272) were not significantly different, suggesting that the shredding effect on litter breakdown was offset by higher fungal activity as a result of nutrient enrichment. Fungal sporulation rates and biomass (based on ergosterol concentrations) were higher in the nutrient enriched than in the reference stream, but neither fungal biomass nor sporulation rate was affected by shredder feeding. Species richness was higher in the nutrient-enriched than in the reference stream. The enrichment with nutrients altered fungal community composition more than shredder feeding.
Tetrahedral-Mesh Simulation of Turbulent Flows with the Space-Time Conservative Schemes
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji; Cheng, Gary C.
2015-01-01
Direct numerical simulations of turbulent flows are predominantly carried out using structured, hexahedral meshes despite decades of development in unstructured mesh methods. Tetrahedral meshes offer ease of mesh generation around complex geometries and the potential of an orientation-free grid that would provide unbiased small-scale dissipation and more accurate intermediate-scale solutions. However, due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for triangular and tetrahedral meshes at the cell interfaces, numerical issues arise when flow discontinuities or stagnation regions are present. The space-time conservation element and solution element (CESE) method - due to its Riemann-solver-free shock-capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to simulate turbulent flows more accurately using unstructured tetrahedral meshes. To pave the way towards accurate simulation of shock/turbulent boundary-layer interaction, a series of wave and shock interaction benchmark problems of increasing complexity are computed in this paper with triangular/tetrahedral meshes. Preliminary computations for the normal shock/turbulence interactions are carried out with a relatively coarse mesh, by direct numerical simulation standards, in order to assess other effects such as boundary conditions and the necessity of a buffer domain. The results indicate that qualitative agreement with previous studies can be obtained for flows where strong shocks coexist with unsteady waves spanning a broad range of scales, using a relatively compact computational domain and less stringent requirements for grid clustering near the shock. With the space-time conservation properties, stable solutions without any spurious wave reflections can be obtained without a need for buffer domains near the outflow/far-field boundaries.
Computational results for the isotropic turbulent flow decay, at a relatively high turbulent Mach number, show a nicely behaved spectral decay rate for medium to high wave numbers. The high-order CESE schemes offer very robust solutions even with the presence of strong shocks or widespread shocklets. The explicit formulation in conjunction with a close to unity theoretical upper Courant number bound has the potential to offer an efficient numerical framework for general compressible turbulent flow simulations with unstructured meshes.
Floating shock fitting via Lagrangian adaptive meshes
NASA Technical Reports Server (NTRS)
Vanrosendale, John
1994-01-01
In recent works we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM) is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence. Shock-capturing algorithms like this, which warp the mesh to yield shock-fitted accuracy, are new and relatively untried. However, their potential is clear. In the context of sonic booms, accurate calculation of near-field sonic boom signatures is critical to the design of the High Speed Civil Transport (HSCT). SLAM should allow computation of accurate N-wave pressure signatures on comparatively coarse meshes, significantly enhancing our ability to design low-boom configurations for high-speed aircraft.
Garcia-Cantero, Juan J.; Brito, Juan P.; Mata, Susana; Bayona, Sofia; Pastor, Luis
2017-01-01
Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells’ overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma’s morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. 
All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes. PMID:28690511
Mesh refinement strategy for optimal control problems
NASA Astrophysics Data System (ADS)
Paiva, L. T.; Fontes, F. A. C. C.
2013-10-01
Direct methods are becoming the most widely used technique to solve nonlinear optimal control problems. Regular time meshes with equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement, in which the mesh nodes have non-equidistant spacing, allowing non-uniform node collocation. In the method presented in this paper, a time-mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which identifies the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error falls below a user-specified threshold. The technique is applied to solve a car-like vehicle problem aiming at minimum fuel consumption. The approach developed in this paper leads to results with greater accuracy and yet lower overall computational time compared to using a time mesh with equidistant spacing.
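The refine-evaluate-repeat loop described above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation; in particular, the quadratic local-error estimator used in the test is a hypothetical stand-in for the paper's collocation-based error estimate.

```python
def refine_mesh(nodes, local_error, tol):
    """Bisect every subinterval whose estimated local error exceeds tol."""
    new_nodes = [nodes[0]]
    for i in range(len(nodes) - 1):
        if local_error(nodes[i], nodes[i + 1]) > tol:
            new_nodes.append(0.5 * (nodes[i] + nodes[i + 1]))  # insert midpoint
        new_nodes.append(nodes[i + 1])
    return new_nodes

def adapt(nodes, local_error, tol, max_iters=10):
    """Repeat refinement until every subinterval meets the tolerance."""
    for _ in range(max_iters):
        refined = refine_mesh(nodes, local_error, tol)
        if len(refined) == len(nodes):  # nothing was refined: converged
            return refined
        nodes = refined
    return nodes
```

Only the flagged subintervals gain nodes, which is what keeps the overall node count (and hence computational time) below that of a uniformly refined mesh.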
Compact cell-centered discretization stencils at fine-coarse block structured grid interfaces
NASA Astrophysics Data System (ADS)
Pletzer, Alexander; Jamroz, Ben; Crockett, Robert; Sides, Scott
2014-03-01
Different strategies for coupling fine-coarse grid patches are explored in the context of the adaptive mesh refinement (AMR) method. We show that applying linear interpolation to fill in the fine grid ghost values can produce a finite volume stencil of comparable accuracy to quadratic interpolation provided the cell volumes are adjusted. The volume of fine cells expands whereas the volume of neighboring coarse cells contracts. The amount by which the cells contract/expand depends on whether the interface is a face, an edge, or a corner. It is shown that quadratic or better interpolation is required when the conductivity is spatially varying, anisotropic, the refinement ratio is other than two, or when the fine-coarse interface is concave.
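In its simplest one-dimensional form, the ghost-fill step discussed above is a linear interpolation between coarse cell centers. The sketch below shows only that interpolation; the paper's cell-volume adjustment factors, which depend on whether the interface is a face, edge, or corner, are deliberately omitted.

```python
def fine_ghost_linear(coarse_left, coarse_right, frac):
    """Fill a fine-grid ghost value by linear interpolation between two
    neighboring coarse cell-center values. frac is the normalized distance
    of the ghost cell center from coarse_left (0 <= frac <= 1)."""
    return (1.0 - frac) * coarse_left + frac * coarse_right
```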
Grid adaptation using Chimera composite overlapping meshes
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1993-01-01
The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
Grid adaptation using chimera composite overlapping meshes
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen
1994-01-01
The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Application to the Euler equations for shock reflections and to shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.
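The inter-grid communication step mentioned above relies on trilinear interpolation inside a donor cell. A minimal sketch on a unit cell (not the chimera scheme's actual donor-search or hole-cutting machinery) looks like this:

```python
def trilinear(c, x, y, z):
    """Trilinear interpolation inside a unit cell.
    c[i][j][k] holds the value at corner (i, j, k); x, y, z lie in [0, 1]."""
    def lerp(a, b, t):
        return (1.0 - t) * a + t * b
    # Interpolate along x on each of the four cell edges...
    c00 = lerp(c[0][0][0], c[1][0][0], x)
    c10 = lerp(c[0][1][0], c[1][1][0], x)
    c01 = lerp(c[0][0][1], c[1][0][1], x)
    c11 = lerp(c[0][1][1], c[1][1][1], x)
    # ...then along y, then along z.
    return lerp(lerp(c00, c10, y), lerp(c01, c11, y), z)
```

A linear field is reproduced exactly, which is why trilinear donors suffice for smooth regions away from the high-gradient zone covered by the fine overset grid.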
NASA Technical Reports Server (NTRS)
Turon, A.; Davila, C. G.; Camanho, P. P.; Costa, J.
2007-01-01
This paper presents a methodology to determine the parameters to be used in the constitutive equations of Cohesive Zone Models employed in the simulation of delamination in composite materials by means of decohesion finite elements. A closed-form expression is developed to define the stiffness of the cohesive layer. A novel procedure that allows the use of coarser meshes of decohesion elements in large-scale computations is also proposed. The procedure ensures that the energy dissipated by the fracture process is computed correctly. It is shown that coarse-meshed models defined using the approach proposed here yield the same results as the models with finer meshes normally used for the simulation of fracture processes.
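As a rough illustration of the kind of closed-form estimates discussed above, the cohesive-zone literature commonly writes the interface stiffness as K = alpha * E3 / t and the cohesive zone length as l_cz = M * E * Gc / tau0**2. The constants alpha and M below are illustrative assumptions, not this paper's calibrated values.

```python
def cohesive_stiffness(E3, t, alpha=50.0):
    """Interface stiffness K = alpha * E3 / t, with E3 the through-thickness
    modulus and t the sublaminate thickness. alpha is an assumed penalty factor."""
    return alpha * E3 / t

def cohesive_zone_length(E, Gc, tau0, M=1.0):
    """Estimated cohesive zone length l_cz = M * E * Gc / tau0**2, where Gc is
    the fracture toughness and tau0 the interface strength. M is model-dependent."""
    return M * E * Gc / tau0 ** 2
```

Dividing l_cz by the element length gives the number of elements spanning the cohesive zone, which is the quantity a coarse-mesh strategy must keep large enough for the dissipated energy to be computed correctly.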
Summation rules for a fully nonlocal energy-based quasicontinuum method
NASA Astrophysics Data System (ADS)
Amelang, J. S.; Venturini, G. N.; Kochmann, D. M.
2015-09-01
The quasicontinuum (QC) method coarse-grains crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. A crucial cornerstone of all QC techniques, summation or quadrature rules efficiently approximate the thermodynamic quantities of interest. Here, we investigate summation rules for a fully nonlocal, energy-based QC method to approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of all atoms in the crystal lattice. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. We review traditional summation rules and discuss their strengths and weaknesses with a focus on energy approximation errors and spurious force artifacts. Moreover, we introduce summation rules which produce no residual or spurious force artifacts in centrosymmetric crystals in the large-element limit under arbitrary affine deformations in two dimensions (and marginal force artifacts in three dimensions), while allowing us to seamlessly bridge to full atomistics. Through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions, we compare the accuracy of the new scheme to various previous ones. Our results confirm that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors. Our numerical benchmark examples include the calculation of elastic constants from completely random QC meshes and the inhomogeneous deformation of aggressively coarse-grained crystals containing nano-voids. In the elastic regime, we directly compare QC results to those of full atomistics to assess global and local errors in complex QC simulations. 
Going beyond elasticity, we illustrate the performance of the energy-based QC method with the new second-order summation rule by the help of nanoindentation examples with automatic mesh adaptation. Overall, our findings provide guidelines for the selection of summation rules for the fully nonlocal energy-based QC method.
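The core approximation described above, replacing the total Hamiltonian by a weighted sum over a small subset of sampling atoms, can be sketched generically; the site-energy function and weights here are placeholders, not any particular summation rule from the paper.

```python
def total_energy(sample_sites, weights, site_energy):
    """Approximate the total Hamiltonian of a lattice as a weighted sum of
    site energies over a small subset of sampling atoms. With unit weights
    over all atoms this reduces to the exact full-atomistic sum."""
    return sum(w * site_energy(s) for s, w in zip(sample_sites, weights))
```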
On a turbulent wall model to predict hemolysis numerically in medical devices
NASA Astrophysics Data System (ADS)
Lee, Seunghun; Chang, Minwook; Kang, Seongwon; Hur, Nahmkeon; Kim, Wonjung
2017-11-01
Analyzing the degradation of red blood cells is very important for medical devices with blood flows. Blood shear stress has been recognized as the most dominant factor for hemolysis in medical devices. Compared to laminar flows, turbulent flows have higher shear stress values in the regions near the wall, so predicting hemolysis numerically can require a very fine mesh and large computational resources. To address this issue, this study develops a turbulent wall model to predict hemolysis more efficiently. To decrease the numerical error of hemolysis prediction at a coarse grid resolution, we divided the computational domain into two regions and applied a different approach to each. In the near-wall region with a steep velocity gradient, an analytic approach using a modeled velocity profile is applied to reduce the numerical error and allow a coarse grid resolution; we adopt the Van Driest law as the model for the mean velocity profile. In the region far from the wall, a regular numerical discretization is applied. The proposed turbulent wall model is evaluated for turbulent flows inside a cannula and centrifugal pumps. The results show that the proposed turbulent wall model for hemolysis improves computational efficiency significantly for engineering applications.
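The Van Driest mean-velocity profile adopted above can be evaluated by numerically integrating the standard Van Driest ODE for u+(y+). This is a generic sketch assuming the usual constants kappa = 0.41 and A+ = 26, not the authors' implementation.

```python
import math

def van_driest_uplus(y_plus, kappa=0.41, a_plus=26.0, n=2000):
    """Integrate du+/dy+ = 2 / (1 + sqrt(1 + 4*l+^2)) from the wall to y_plus,
    with the Van Driest damped mixing length l+ = kappa*y+*(1 - exp(-y+/A+))."""
    dy = y_plus / n
    u, y = 0.0, 0.0
    for _ in range(n):
        ym = y + 0.5 * dy  # evaluate the mixing length at the step midpoint
        l = kappa * ym * (1.0 - math.exp(-ym / a_plus))
        u += dy * 2.0 / (1.0 + math.sqrt(1.0 + 4.0 * l * l))
        y += dy
    return u
```

The profile recovers u+ = y+ in the viscous sublayer and blends smoothly into the log layer, which is what lets a single analytic model span the steep near-wall gradient on a coarse grid.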
Tetrahedral and polyhedral mesh evaluation for cerebral hemodynamic simulation--a comparison.
Spiegel, Martin; Redel, Thomas; Zhang, Y; Struffert, Tobias; Hornegger, Joachim; Grossman, Robert G; Doerfler, Arnd; Karmonik, Christof
2009-01-01
Computational fluid dynamics (CFD) based on patient-specific medical imaging data has found widespread use for visualizing and quantifying hemodynamics in cerebrovascular diseases such as cerebral aneurysms or stenotic vessels. This paper focuses on optimizing mesh parameters for CFD simulation of cerebral aneurysms. Valid blood flow simulations strongly depend on mesh quality: meshes with a coarse spatial resolution may lead to an inaccurate flow pattern, while meshes with a large number of elements result in unnecessarily high computation times, which is undesirable should CFD be used for planning in the interventional setting. Most CFD simulations reported for these vascular pathologies have used tetrahedral meshes. We illustrate the use of polyhedral volume elements in comparison to tetrahedral meshing on two different geometries, a sidewall aneurysm of the internal carotid artery and a basilar bifurcation aneurysm. The spatial mesh resolution ranges between 5,119 and 228,118 volume elements. The evaluation of the different meshes was based on the wall shear stress, previously identified as one possible parameter for assessing aneurysm growth. Polyhedral meshes showed better accuracy, lower memory demand, shorter computation times and faster convergence behavior (on average 369 fewer iterations).
NASA Technical Reports Server (NTRS)
Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.
2005-01-01
This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. 
The adjoint-based adaptation parameter showed a need for increased resolution in the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge in order to achieve the requested drag tolerance. Although further adaptation was required to meet the requested tolerance, no further cycles were computed in order to avoid large discrepancies between the surface mesh spacing and the refined field spacing.
Hybrid continuum-coarse-grained modeling of erythrocytes
NASA Astrophysics Data System (ADS)
Lyu, Jinming; Chen, Paul G.; Boedec, Gwenn; Leonetti, Marc; Jaeger, Marc
2018-06-01
The red blood cell (RBC) membrane is a composite structure, consisting of a phospholipid bilayer and an underlying membrane-associated cytoskeleton. Both continuum and particle-based coarse-grained RBC models make use of a set of vertices connected by edges to represent the RBC membrane, which can be seen as a triangular surface mesh for the former and a spring network for the latter. Here, we present a modeling approach combining an existing continuum vesicle model with a coarse-grained model for the cytoskeleton. Compared to other two-component approaches, our method relies on only one mesh, representing the cytoskeleton, whose velocity in the tangential direction of the membrane may be different from that of the lipid bilayer. The finitely extensible nonlinear elastic (FENE) spring force law in combination with a repulsive force defined as a power function (POW), called FENE-POW, is used to describe the elastic properties of the RBC membrane. The mechanical interaction between the lipid bilayer and the cytoskeleton is explicitly computed and incorporated into the vesicle model. Our model includes the fundamental mechanical properties of the RBC membrane, namely fluidity and bending rigidity of the lipid bilayer, and shear elasticity of the cytoskeleton while maintaining surface-area and volume conservation constraint. We present three simulation examples to demonstrate the effectiveness of this hybrid continuum-coarse-grained model for the study of RBCs in fluid flows.
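The FENE-POW spring force law described above combines a finitely extensible (FENE) attractive term, which diverges as the spring approaches its maximum length, with a power-law (POW) repulsive term that diverges at zero length. A schematic scalar version follows; the coefficients are placeholders, not the paper's calibrated values.

```python
def fene_pow_force(r, r_max, k_att, k_rep, m=2.0):
    """Scalar spring force for a FENE-POW law: the FENE attraction diverges
    as r -> r_max, the POW repulsion diverges as r -> 0, so an equilibrium
    length exists in between (negative = attractive, positive = repulsive)."""
    fene = -k_att * r / (1.0 - (r / r_max) ** 2)  # attractive FENE term
    pow_rep = k_rep / r ** m                      # repulsive power-law term
    return fene + pow_rep
```

The balance of the two divergences is what gives each cytoskeleton edge a well-defined rest length without a hard constraint.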
A multi-block adaptive solving technique based on lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Zhang, Yang; Xie, Jiahua; Li, Xiaoyue; Ma, Zhenghai; Zou, Jianfeng; Zheng, Yao
2018-05-01
In this paper, a parallel adaptive CFD algorithm is developed by combining the multi-block lattice Boltzmann method (LBM) with adaptive mesh refinement (AMR). The mesh refinement criterion of this algorithm is based on the density, velocity and vortices of the flow field. The refined grid boundary is obtained by extending outward half a ghost cell from the coarse grid boundary, which makes the adaptive mesh more compact and the boundary treatment more convenient. Two numerical examples, backward-facing step flow separation and unsteady flow around a circular cylinder, capture the vortex structures of the cold flow field accurately.
Li, Zuoping; Kindig, Matthew W; Subit, Damien; Kent, Richard W
2010-11-01
The purpose of this paper was to investigate the sensitivity of the structural responses and bone fractures of the ribs to mesh density, cortical thickness, and material properties, so as to provide guidelines for the development of finite element (FE) thorax models used in impact biomechanics. Subject-specific FE models of the second, fourth, sixth and tenth ribs were developed to reproduce dynamic failure experiments. Sensitivity studies were then conducted to quantify the effects of variations in mesh density, cortical thickness, and material parameters on the model-predicted reaction force-displacement relationship, cortical strains, and bone fracture locations for all four ribs. Overall, it was demonstrated that rib FE models consisting of 2000-3000 trabecular hexahedral elements (weighted element length 2-3 mm) and associated quadrilateral cortical shell elements with variable thickness more closely predicted the rib structural responses and bone fracture force-failure displacement relationships observed in the experiments (except the fracture locations), compared to models with constant cortical thickness. Further increases in mesh density increased computational cost but did not markedly improve model predictions. A ±30% change in the major material parameters of cortical bone led to a -16.7 to +33.3% change in fracture displacement and a -22.5 to +19.1% change in fracture force. The results of this study suggest that human rib structural responses can be modeled in an accurate and computationally efficient way using (a) a coarse mesh of 2000-3000 solid elements, (b) cortical shell elements with a variable thickness distribution and (c) a rate-dependent elastic-plastic material model.
Edge delamination of composite laminates subject to combined tension and torsional loading
NASA Technical Reports Server (NTRS)
Hooper, Steven J.
1990-01-01
Delamination is a common failure mode of laminated composite materials. Edge delamination is important since it results in reduced stiffness and strength of the laminate. The tension/torsion load condition is of particular significance to the structural integrity of composite helicopter rotor systems. Material coupons can easily be tested under this type of loading in servo-hydraulic tension/torsion test stands using techniques very similar to those used for the Edge Delamination Tensile Test (EDT) delamination specimen. Edge delamination of specimens loaded in tension was successfully analyzed by several investigators using both classical laminate theory and quasi-three-dimensional (Q3D) finite element techniques. The former analysis technique can be used to predict the total strain energy release rate, while the latter technique enables the calculation of the mixed-mode strain energy release rates. The Q3D analysis is very efficient since it produces a three-dimensional solution on a two-dimensional domain. A computer program was developed which generates PATRAN commands to generate the finite element model. PATRAN is a pre- and post-processor which is commonly used with a variety of finite element programs such as MSC/NASTRAN. The program creates a sufficiently dense mesh at the delamination crack tips to support a mixed-mode fracture mechanics analysis. The program creates a coarse mesh in those regions where the gradients in the stress field are low (away from the delamination regions). A transition mesh is defined between these regions. This program is capable of generating a mesh for an arbitrarily oriented matrix crack. This program significantly reduces the modeling time required to generate these finite element meshes, thus providing a realistic tool with which to investigate the tension/torsion problem.
System and method for producing metallic iron nodules
Bleifuss, Rodney L [Grand Rapids, MN; Englund, David J [Bovey, MN; Iwasaki, Iwao [Grand Rapids, MN; Lindgren, Andrew J [Grand Rapids, MN; Kiesel, Richard F [Hibbing, MN
2011-09-20
A method for producing metallic iron nodules by assembling a shielding entry system to introduce coarse carbonaceous material greater than 6 mesh into the furnace atmosphere at location(s) where the temperature of the furnace atmosphere adjacent the at least partially reduced reducible iron bearing material is between about 2200 and 2650 °F (1200 and 1450 °C), the shielding entry system adapted to inhibit emission of infrared radiation from the furnace atmosphere and seal the furnace atmosphere from the exterior atmosphere while introducing the coarse carbonaceous material into the furnace to be distributed over the at least partially reduced reducible iron bearing material, and heating the covered material in a fusion atmosphere to assist in fusion and inhibit reoxidation of the reduced material in forming metallic iron nodules.
Texturing of continuous LOD meshes with the hierarchical texture atlas
NASA Astrophysics Data System (ADS)
Birkholz, Hermann
2006-02-01
For the rendering of detailed virtual environments, trade-offs have to be made between image quality and rendering time. An immersive experience of virtual reality always demands high frame rates with the best reachable image quality. Continuous Level of Detail (cLoD) triangle meshes provide a continuous spectrum of detail for a triangle mesh that can be used to create view-dependent approximations of the environment in real time. This enables rendering with a constant number of triangles and thus constant frame rates. Normally, the construction of such cLoD mesh representations leads to the loss of all texture information of the original mesh. To overcome this problem, a parameter domain can be created in order to map the surface properties (colour, texture, normal) to it. This parameter domain can then be used to map the surface properties back to arbitrary approximations of the original mesh. The parameter domain is often a simplified version of the mesh to be parameterised, which limits the reachable simplification to the domain mesh, and the domain mesh has to map the surface of the original mesh with the least possible stretch. In this paper, a hierarchical domain mesh is presented that scales between very coarse domain meshes and good property mapping.
MOC Efficiency Improvements Using a Jacobi Inscatter Approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stimpson, Shane; Collins, Benjamin; Kochunas, Brendan
2016-08-31
In recent weeks, attention has been given to resolving the convergence issues encountered with TCP0 by trying a Jacobi (J) inscatter approach when group sweeping, where the inscatter source is constructed using the previous iteration's flux. This is in contrast to a Gauss-Seidel (GS) approach, which has been the default to date, where the scattering source uses the most up-to-date flux values. The former is consistent with CASMO, which has no issues with TCP0 convergence. Testing this on a variety of problems has demonstrated that the Jacobi approach does indeed provide substantially more stability, though it can take more outer iterations to converge. While this is not surprising, there are improvements that can be made to the MOC sweeper to capitalize on the Jacobi approximation and provide substantial speedup. For example, the loop over groups, which has traditionally been the outermost loop in MPACT, can be moved to the interior, avoiding duplicate modular ray trace and coarse ray trace setup (mapping coarse mesh surface indexes), which needs to be performed repeatedly when group is outermost.
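The difference between the two inscatter treatments can be illustrated on a toy fixed-point problem phi_g = q_g + sum over g' of S[g][g'] * phi_g'. This is a schematic sketch, not MPACT's MOC sweeper: Jacobi freezes the flux used in the source at the previous iteration (making the group loop order-independent and movable), while Gauss-Seidel consumes updated fluxes immediately.

```python
def sweep_jacobi(S, q, phi, iters):
    """Jacobi: build every group's inscatter source from the previous
    iteration's flux, so groups can be swept in any order (or in parallel)."""
    for _ in range(iters):
        old = list(phi)  # freeze last iteration's flux
        for g in range(len(phi)):
            phi[g] = q[g] + sum(S[g][gp] * old[gp] for gp in range(len(phi)))
    return phi

def sweep_gauss_seidel(S, q, phi, iters):
    """Gauss-Seidel: use the most up-to-date flux values as they are produced."""
    for _ in range(iters):
        for g in range(len(phi)):
            phi[g] = q[g] + sum(S[g][gp] * phi[gp] for gp in range(len(phi)))
    return phi
```

Both converge to the same fixed point on this contractive example; Gauss-Seidel typically needs fewer outer iterations, matching the trade-off noted above.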
Lidman, Johan; Jonsson, Micael; Burrows, Ryan M; Bundschuh, Mirco; Sponseller, Ryan A
2017-02-01
Although the importance of stream condition for leaf litter decomposition has been extensively studied, little is known about how processing rates change in response to altered riparian vegetation community composition. We investigated patterns of plant litter input and decomposition across 20 boreal headwater streams that varied in proportions of riparian deciduous and coniferous trees. We measured a suite of in-stream physical and chemical characteristics, as well as the amount and type of litter inputs from riparian vegetation, and related these to decomposition rates of native (alder, birch, and spruce) and introduced (lodgepole pine) litter species incubated in coarse- and fine-mesh bags. Total litter inputs ranged more than fivefold among sites and increased with the proportion of deciduous vegetation in the riparian zone. In line with differences in initial litter quality, mean decomposition rate was highest for alder, followed by birch, spruce, and lodgepole pine (12, 55, and 68% lower rates, respectively). Further, these rates were greater in coarse-mesh bags that allow colonization by macroinvertebrates. Variance in decomposition rate among sites for different species was best explained by different sets of environmental conditions, but litter-input composition (i.e., quality) was overall highly important. On average, native litter decomposed faster in sites with higher-quality litter input and (with the exception of spruce) higher concentrations of dissolved nutrients and open canopies. By contrast, lodgepole pine decomposed more rapidly in sites receiving lower-quality litter inputs. Birch litter decomposition rate in coarse-mesh bags was best predicted by the same environmental variables as in fine-mesh bags, with additional positive influences of macroinvertebrate species richness. 
Hence, to facilitate energy turnover in boreal headwaters, forest management with focus on conifer production should aim at increasing the presence of native deciduous trees along streams, as they promote conditions that favor higher decomposition rates of terrestrial plant litter.
NASA Astrophysics Data System (ADS)
Nelson, Daniel A.; Jacobs, Gustaaf B.; Kopriva, David A.
2016-08-01
The effect of curved-boundary representation on the physics of the separated flow over a NACA 65(1)-412 airfoil is thoroughly investigated. A method is presented to approximate curved boundaries with a high-order discontinuous-Galerkin spectral element method for the solution of the Navier-Stokes equations. Multiblock quadrilateral element meshes are constructed with the grid generation software GridPro. The boundary of a NACA 65(1)-412 airfoil, defined by a cubic natural spline, is piecewise-approximated by isoparametric polynomial interpolants that represent the edges of boundary-fitted elements. Direct numerical simulation of the airfoil is performed on coarse and fine meshes with polynomial orders ranging from four to twelve. The accuracy of the curve fitting is investigated by comparing the flows computed on curved-sided meshes with those given by straight-sided meshes. Straight-sided meshes yield irregular wakes, whereas curved-sided meshes produce a regular Kármán vortex street wake. Straight-sided meshes also produce lower lift and higher viscous drag as compared with curved-sided meshes. When the mesh is refined by reducing the sizes of the elements, the lift decrease and viscous drag increase are less pronounced. The differences in aerodynamic performance between the straight-sided and curved-sided meshes are concluded to be the result of artificial surface roughness introduced by the piecewise-linear boundary approximation of the straight-sided meshes.
Numerical benchmarking of a Coarse-Mesh Transport (COMET) Method for medical physics applications
NASA Astrophysics Data System (ADS)
Blackburn, Megan Satterfield
2009-12-01
Radiation therapy has become a very important method for treating cancer patients. Thus, it is extremely important to accurately determine the location of energy deposition during these treatments, maximizing dose to the tumor region and minimizing it to healthy tissue. A Coarse-Mesh Transport Method (COMET) has been developed at the Georgia Institute of Technology in the Computational Reactor and Medical Physics Group and has been used very successfully with neutron transport to analyze whole-core criticality. COMET works by decomposing a large, heterogeneous system into a set of smaller fixed-source problems. For each unique local problem that exists, a solution is obtained that we call a response function. These response functions are pre-computed and stored in a library for future use. The overall solution to the global problem can then be found by a linear superposition of these local problems. This method has now been extended to the transport of photons and electrons for use in medical physics problems to determine energy deposition from radiation therapy treatments. The main goal of this work was to develop benchmarks for testing in order to evaluate the COMET code and determine its strengths and weaknesses for these medical physics applications. For response function calculations, Legendre polynomial expansions are necessary in space, energy, polar angle, and azimuthal angle. An initial sensitivity study was done to determine the best orders for future testing. After the expansion orders were found, three simple benchmarks were tested: a water phantom, a simplified lung phantom, and a non-clinical slab phantom. Each of these benchmarks was decomposed into 1 cm x 1 cm and 0.5 cm x 0.5 cm coarse meshes. Three more clinically relevant problems were developed from patient CT scans. These benchmarks modeled a lung patient, a prostate patient, and a beam re-entry situation. As before, the problems were divided into 1 cm x 1 cm, 0.5 cm x 0.5 cm, and 0.25 cm x 0.25 cm coarse mesh cases.
Multiple beam energies were also tested for each case. The COMET solutions for each case were compared to a reference solution obtained from pure Monte Carlo results from EGSnrc. When comparing the COMET results to the reference cases, a pattern of differences appeared in each phantom case. Better results were obtained for lower-energy incident photon beams as well as for larger mesh sizes. Possible changes may need to be made to the expansion orders used for energy and angle to better model high-energy secondary electrons. Heterogeneity did not pose a problem for the COMET methodology: heterogeneous results were found in a comparable amount of time to the homogeneous water phantom. The COMET results were typically found in minutes to hours of computational time, whereas the reference cases typically required hundreds or thousands of hours. A second sensitivity study was also performed on a more stringent problem and with smaller coarse meshes. Previously, the same expansion order was used for each incident photon beam energy so better comparisons could be made. From this second study, it was found that it is optimal to have different expansion orders based on the incident beam energy. Recommendations for future work with this method include more testing of higher expansion orders and possible code modification to better handle secondary electrons. The method also needs to handle more clinically relevant beam descriptions with energy and angular distributions associated with them.
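The response-function idea (precompute each unique local solution once, then superpose) can be sketched with a purely illustrative 1D, forward-only model; the transmit/deposit coefficients below are invented, and this is not the COMET formulation, which expands responses in space and angle:

```python
import numpy as np

# Toy "response function library": each unique cell type maps an incoming
# current to a transmitted current and a deposited dose (transmit + deposit = 1).
library = {
    "water": {"transmit": 0.6, "deposit": 0.4},
    "lung":  {"transmit": 0.8, "deposit": 0.2},
}

def solve_global(cells, j_in):
    """Superpose precomputed local responses by sweeping the beam through
    the coarse-mesh cells; no local transport solve is ever repeated."""
    dose, j = [], j_in
    for c in cells:
        r = library[c]                 # reuse the stored local solution
        dose.append(r["deposit"] * j)  # energy deposited in this coarse cell
        j = r["transmit"] * j          # current entering the next cell
    return np.array(dose), j

dose, j_out = solve_global(["water", "lung", "water"], j_in=1.0)
```

Energy is conserved by construction: the deposited dose plus the exiting current equals the incident current.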
NASA Astrophysics Data System (ADS)
Zahr, M. J.; Persson, P.-O.
2018-07-01
This work introduces a novel discontinuity-tracking framework for resolving discontinuous solutions of conservation laws with high-order numerical discretizations that support inter-element solution discontinuities, such as discontinuous Galerkin or finite volume methods. The proposed method aims to align inter-element boundaries with discontinuities in the solution by deforming the computational mesh. A discontinuity-aligned mesh ensures the discontinuity is represented through inter-element jumps, while smooth basis functions interior to elements are only used to approximate smooth regions of the solution, thereby avoiding Gibbs' phenomena that create well-known stability issues. Therefore, very coarse high-order discretizations accurately resolve the piecewise smooth solution throughout the domain, provided the discontinuity is tracked. Central to the proposed discontinuity-tracking framework is a discrete PDE-constrained optimization formulation that simultaneously aligns the computational mesh with discontinuities in the solution and solves the discretized conservation law on this mesh. The optimization objective is taken as a combination of the deviation of the finite-dimensional solution from its element-wise average and a mesh distortion metric to simultaneously penalize Gibbs' phenomena and distorted meshes. It will be shown that our objective function satisfies two critical properties that are required for this discontinuity-tracking framework to be practical: (1) it possesses a local minimum at a discontinuity-aligned mesh and (2) it decreases monotonically to this minimum in a neighborhood of radius approximately h / 2, whereas other popular discontinuity indicators fail to satisfy the latter.
Another important contribution of this work is the observation that traditional reduced space PDE-constrained optimization solvers that repeatedly solve the conservation law at various mesh configurations are not viable in this context since severe overshoot and undershoot in the solution, i.e., Gibbs' phenomena, may make it impossible to solve the discrete conservation law on non-aligned meshes. Therefore, we advocate a gradient-based, full space solver where the mesh and conservation law solution converge to their optimal values simultaneously and therefore never require the solution of the discrete conservation law on a non-aligned mesh. The merit of the proposed method is demonstrated on a number of one- and two-dimensional model problems including the L2 projection of discontinuous functions, Burgers' equation with a discontinuous source term, transonic flow through a nozzle, and supersonic flow around a bluff body. We demonstrate optimal O (h p + 1) convergence rates in the L1 norm for up to polynomial order p = 6 and show that accurate solutions can be obtained on extremely coarse meshes.
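The first critical property of the objective can be illustrated in a minimal 1D setting: for the L2 projection of a step function onto a two-element mesh with a movable interior node s, the deviation from the element-wise averages is minimized exactly when s sits on the discontinuity (assumed here at x0 = 0.4; no mesh-distortion term is included in this sketch):

```python
import numpy as np

def objective(s, x0=0.4, n=2000):
    """Mean-square deviation of a unit step at x0 from its element-wise
    averages on the two-element mesh [0, s], [s, 1]."""
    x = np.linspace(0, 1, n, endpoint=False) + 0.5 / n  # cell midpoints
    f = (x > x0).astype(float)
    avg = np.where(x < s, f[x < s].mean(), f[x >= s].mean())
    return np.mean((f - avg) ** 2)

# Scan the interior node position; the minimum lands on the discontinuity.
grid = np.linspace(0.05, 0.95, 181)
vals = [objective(s) for s in grid]
s_best = grid[int(np.argmin(vals))]
```

With s aligned at x0 the projection error vanishes, so the element-wise-average deviation is a natural alignment indicator.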
Tuminaro, Raymond S.; Perego, Mauro; Tezaur, Irina Kalashnikova; ...
2016-10-06
A multigrid method is proposed that combines ideas from matrix-dependent multigrid for structured grids and algebraic multigrid for unstructured grids. It targets problems where a three-dimensional mesh can be viewed as an extrusion of a two-dimensional, unstructured mesh in a third dimension. Our motivation comes from the modeling of thin structures via finite elements and, more specifically, the modeling of ice sheets. Extruded meshes are relatively common for thin structures and often give rise to anisotropic problems when the thin-direction mesh spacing is much smaller than the broad-direction mesh spacing. Within our approach, the first few multigrid hierarchy levels are obtained by applying matrix-dependent multigrid to semicoarsen in a structured thin-direction fashion. After sufficient structured coarsening, the resulting mesh contains only a single layer corresponding to a two-dimensional, unstructured mesh. Algebraic multigrid can then be employed in a standard manner to create further coarse levels, as the anisotropic phenomenon is no longer present in the single-layer problem. The overall approach remains fully algebraic, with the minor exception that some additional information is needed to determine the extruded direction. Furthermore, this facilitates integration of the solver with a variety of different extruded mesh applications.
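The semicoarsening step can be sketched under stated assumptions: a small structured nz × nx grid stands in for the extruded mesh, layers are aggregated pairwise in the thin direction only, and the coarse operator is formed by a Galerkin triple product (real matrix-dependent multigrid would instead derive the transfer weights from the matrix entries):

```python
import numpy as np

def laplacian_1d(n, h):
    """Standard [-1, 2, -1]/h^2 Dirichlet second-difference matrix."""
    return (np.diag(2 * np.ones(n))
            - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

# Anisotropic extruded problem: fine spacing hz in the thin (z) direction,
# coarse spacing hx in the broad (x) direction (illustrative sizes).
nz, nx, hz, hx = 8, 4, 0.01, 1.0
A = (np.kron(np.eye(nx), laplacian_1d(nz, hz))
     + np.kron(laplacian_1d(nx, hx), np.eye(nz)))

def semicoarsen(nz, nx):
    """Prolongation that aggregates pairs of layers in z only; x is untouched."""
    Pz = np.zeros((nz, nz // 2))
    for j in range(nz // 2):
        Pz[2 * j, j] = Pz[2 * j + 1, j] = 1.0
    return np.kron(np.eye(nx), Pz)

P = semicoarsen(nz, nx)
Ac = P.T @ A @ P  # Galerkin coarse operator: half as many layers, same nx
```

Repeating the step until one layer remains leaves a 2D problem on which standard AMG applies, as described above.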
Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr
2014-12-15
In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists of numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with a finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model presents moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constants) are taken into account by interpolations with respect to the velocity of the control rods. Parallelism across time is achieved by an adequate application of the parareal in time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
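The parareal predictor-corrector iteration can be sketched on a scalar stand-in for the diffusion transient, y' = -λy; the propagators below are simple explicit Euler steps, not the finite element solvers of the paper:

```python
import numpy as np

lam, T, N = 2.0, 1.0, 10  # decay rate, horizon, number of time slabs
dT = T / N

def fine(y, m=100):       # fine propagator: m small explicit Euler steps
    for _ in range(m):
        y *= 1 - lam * dT / m
    return y

def coarse(y):            # coarse propagator: one large explicit Euler step
    return y * (1 - lam * dT)

def parareal(y0, iters):
    U = np.empty(N + 1)
    U[0] = y0
    for n in range(N):    # predictor: sequential coarse sweep
        U[n + 1] = coarse(U[n])
    for _ in range(iters):
        F = [fine(U[n]) for n in range(N)]      # parallel-in-time fine solves
        G_old = [coarse(U[n]) for n in range(N)]
        for n in range(N):                      # cheap sequential correction
            U[n + 1] = coarse(U[n]) + F[n] - G_old[n]
    return U
```

After k iterations the first k slabs match the serial fine solution exactly, so N iterations reproduce it in full; the payoff is that the expensive fine solves within each iteration are mutually independent.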
Application of p-Multigrid to Discontinuous Galerkin Formulations of the Poisson Equation
NASA Technical Reports Server (NTRS)
Helenbrook, B. T.; Atkins, H. L.
2006-01-01
We investigate p-multigrid as a solution method for several different discontinuous Galerkin (DG) formulations of the Poisson equation. Different combinations of relaxation schemes and basis sets have been combined with the DG formulations to find the best-performing combination. The damping factors of the schemes have been determined using Fourier analysis for both one- and two-dimensional problems. One important finding is that when using DG formulations, the standard approach of forming the coarse-p matrices separately for each level of multigrid is often unstable. To ensure stability, the coarse-p matrices must be constructed from the fine-grid matrices using algebraic multigrid techniques. Of the relaxation schemes, we find that the combination of Jacobi relaxation with the spectral element basis is fairly effective. The results using this combination are p-sensitive in both one and two dimensions, but reasonable convergence rates can still be achieved for moderate values of p and isotropic meshes. A competitive alternative is block Gauss-Seidel relaxation. This actually outperforms a more expensive line relaxation when the mesh is isotropic. When the mesh becomes highly anisotropic, the implicit line method and the Gauss-Seidel implicit line method are the only effective schemes. Adding the Gauss-Seidel terms to the implicit line method gives a significant improvement over the line relaxation method.
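The kind of Fourier (local mode) analysis used to determine damping factors can be sketched for weighted Jacobi on the 1D second-difference stencil [-1, 2, -1]; this is the classical model problem, not one of the paper's DG discretizations:

```python
import numpy as np

# Amplification factor of weighted Jacobi on the [-1, 2, -1] stencil:
# g(theta) = 1 - w*(1 - cos(theta)). The smoothing factor is the worst
# damping over the high frequencies theta in [pi/2, pi], i.e. the modes
# a coarser level cannot represent.
def smoothing_factor(w, n=201):
    thetas = np.linspace(np.pi / 2, np.pi, n)
    return np.abs(1 - w * (1 - np.cos(thetas))).max()

mu = {w: smoothing_factor(w) for w in (1.0, 2 / 3, 0.5)}
# w = 2/3 yields the classical optimal smoothing factor of 1/3 for this stencil;
# unweighted Jacobi (w = 1) does not damp the highest mode at all.
```

The same machinery, applied to the DG operators, yields the per-scheme damping factors the study compares.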
Mehl, S.; Hill, M.C.
2002-01-01
A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. © 2002 Elsevier Science Ltd. All rights reserved.
Solving Upwind-Biased Discretizations. 2; Multigrid Solver Using Semicoarsening
NASA Technical Reports Server (NTRS)
Diskin, Boris
1999-01-01
This paper studies a novel multigrid approach to the solution of a second-order upwind-biased discretization of the convection equation in two dimensions. This approach is based on semicoarsening and well-balanced explicit correction terms added to coarse-grid operators to maintain on coarse grids the same cross-characteristic interaction as on the target (fine) grid. Colored relaxation schemes are used on all levels, allowing a very efficient parallel implementation. The results of the numerical tests can be summarized as follows: 1) The residual asymptotic convergence rate of the proposed V(0,2) multigrid cycle is about 3 per cycle. This convergence rate far surpasses the theoretical limit (4/3) predicted for standard multigrid algorithms using full coarsening. The reported efficiency does not deteriorate with increasing the cycle depth (number of levels) and/or refining the target-grid mesh spacing. 2) The full multigrid algorithm (FMG) with two V(0,2) cycles on the target grid and just one V(0,2) cycle on all the coarse grids always provides an approximate solution with the algebraic error less than the discretization error. Estimates of the total work in the FMG algorithm range between 18 and 30 minimal work units (depending on the target discretization). Thus, the overall efficiency of the FMG solver closely approaches (if it does not achieve) the goal of textbook multigrid efficiency. 3) A novel approach to deriving a discrete solution approximating the true continuous solution with a relative accuracy given in advance is developed. An adaptive multigrid algorithm (AMA) using comparison of the solutions on two successive target grids to estimate the accuracy of the current target-grid solution is defined. A desired relative accuracy is accepted as an input parameter. The final target grid on which this accuracy can be achieved is chosen automatically in the solution process.
The actual relative accuracy of the discrete solution approximation obtained by AMA is always better than the required accuracy, and the computational complexity of the AMA algorithm is (nearly) optimal (comparable with the complexity of the FMG algorithm applied to solve the problem on the optimally spaced target grid).
NASA Technical Reports Server (NTRS)
Tsiveriotis, K.; Brown, R. A.
1993-01-01
A new method is presented for the solution of free-boundary problems using Lagrangian finite element approximations defined on locally refined grids. The formulation allows for direct transition from coarse to fine grids without introducing non-conforming basis functions. The calculation of elemental stiffness matrices and residual vectors is unaffected by changes in the refinement level, which are accounted for in the loading of elemental data into the global stiffness matrix and residual vector. This technique for local mesh refinement is combined with recently developed mapping methods and Newton's method to form an efficient algorithm for the solution of free-boundary problems, as demonstrated here by sample calculations of cellular interfacial microstructure during directional solidification of a binary alloy.
The Osher scheme for non-equilibrium reacting flows
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Liou, Meng-Sing
1992-01-01
An extension of the Osher upwind scheme to nonequilibrium reacting flows is presented. Owing to the presence of source terms, the Riemann problem is no longer self-similar and therefore its approximate solution becomes tedious. With simplicity in mind, a linearized approach which avoids an iterative solution is used to define the intermediate states and sonic points. The source terms are treated explicitly. Numerical computations are presented to demonstrate the feasibility, efficiency, and accuracy of the proposed method. The test problems include a ZND (Zel'dovich-von Neumann-Döring) detonation problem for which spurious numerical solutions which propagate at mesh speed have been observed on coarse grids. With the present method, a change of limiter causes the solution to change from the physically correct CJ detonation solution to the spurious weak detonation solution.
NASA Astrophysics Data System (ADS)
Chiron, L.; Oger, G.; de Leffe, M.; Le Touzé, D.
2018-02-01
While smoothed-particle hydrodynamics (SPH) simulations are usually performed using uniform particle distributions, local particle refinement techniques have been developed to concentrate fine spatial resolutions in identified areas of interest. Although the formalism of this method is relatively easy to implement, its robustness at coarse/fine interfaces can be problematic. Analysis performed in [16] shows that the radius of refined particles should be greater than half the radius of unrefined particles to ensure robustness. In this article, the basics of an Adaptive Particle Refinement (APR) technique, inspired by AMR in mesh-based methods, are presented. This approach ensures robustness with alleviated constraints. Simulations applying the new formalism proposed achieve accuracy comparable to fully refined spatial resolutions, together with robustness, low CPU times and maintained parallel efficiency.
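The refinement step itself can be sketched as a mass-conserving particle split; the square child pattern, the spacing factor eps, and the child smoothing-length ratio below are illustrative choices, with the ratio deliberately kept above the 1/2 robustness bound cited from [16]:

```python
import numpy as np

def split_particle(pos, mass, h, eps=0.4, h_ratio=0.55):
    """Replace one 2D parent particle by four children on a square pattern.
    Total mass and the center of mass are conserved; h_ratio > 0.5 keeps the
    refined-particle radius above half the unrefined radius."""
    offsets = eps * h * np.array([[1.0, 1.0], [1.0, -1.0],
                                  [-1.0, 1.0], [-1.0, -1.0]])
    child_pos = np.asarray(pos) + offsets   # children around the parent
    child_mass = np.full(4, mass / 4.0)     # mass shared equally
    return child_pos, child_mass, h_ratio * h

children, masses, h_child = split_particle([0.0, 0.0], mass=1.0, h=0.1)
```

Real APR schemes additionally manage guard regions and de-refinement at the coarse/fine interface, which is where the robustness analysis above applies.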
Computer-Aided Design and Optimization of High-Performance Vacuum Electronic Devices
2006-08-15
approximations to the metric, and space mapping wherein low-accuracy (coarse mesh) solutions can potentially be used more effectively in an...interface and algorithm development. • Work on space-mapping or related methods for utilizing models of varying levels of approximation within an
Hybrid finite difference/finite element immersed boundary method.
Griffith, Boyce E.; Luo, Xiaoyu
2017-12-01
The immersed boundary method is an approach to fluid-structure interaction that uses a Lagrangian description of the structural deformations, stresses, and forces along with an Eulerian description of the momentum, viscosity, and incompressibility of the fluid-structure system. The original immersed boundary methods described immersed elastic structures using systems of flexible fibers, and even now, most immersed boundary methods still require Lagrangian meshes that are finer than the Eulerian grid. This work introduces a coupling scheme for the immersed boundary method to link the Lagrangian and Eulerian variables that facilitates independent spatial discretizations for the structure and background grid. This approach uses a finite element discretization of the structure while retaining a finite difference scheme for the Eulerian variables. We apply this method to benchmark problems involving elastic, rigid, and actively contracting structures, including an idealized model of the left ventricle of the heart. Our tests include cases in which, for a fixed Eulerian grid spacing, coarser Lagrangian structural meshes yield discretization errors that are as much as several orders of magnitude smaller than errors obtained using finer structural meshes. The Lagrangian-Eulerian coupling approach developed in this work enables the effective use of these coarse structural meshes with the immersed boundary method. This work also contrasts two different weak forms of the equations, one of which is demonstrated to be more effective for the coarse structural discretizations facilitated by our coupling approach. © 2017 The Authors International Journal for Numerical Methods in Biomedical Engineering Published by John Wiley & Sons Ltd.
Adaptive mesh refinement for characteristic grids
NASA Astrophysics Data System (ADS)
Thornburg, Jonathan
2011-05-01
I consider techniques for Berger-Oliger adaptive mesh refinement (AMR) when numerically solving partial differential equations with wave-like solutions, using characteristic (double-null) grids. Such AMR algorithms are naturally recursive, and the best-known past Berger-Oliger characteristic AMR algorithm, that of Pretorius and Lehner (J Comp Phys 198:10, 2004), recurses on individual "diamond" characteristic grid cells. This leads to the use of fine-grained memory management, with individual grid cells kept in two-dimensional linked lists at each refinement level. This complicates the implementation and adds overhead in both space and time. Here I describe a Berger-Oliger characteristic AMR algorithm which instead recurses on null slices. This algorithm is very similar to the usual Cauchy Berger-Oliger algorithm, and uses relatively coarse-grained memory management, allowing entire null slices to be stored in contiguous arrays in memory. The algorithm is very efficient in both space and time. I describe discretizations yielding both second and fourth order global accuracy. My code implementing the algorithm described here is included in the electronic supplementary materials accompanying this paper, and is freely available to other researchers under the terms of the GNU general public license.
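The slice-based recursion can be sketched structurally (no PDE is solved here, and a refinement ratio of 2 is assumed): each level advances a whole null slice, then recurses into two half-steps on the next finer level:

```python
# Record (level, time) for each slice advance to expose the recursion pattern;
# advancing a slice stands in for evolving one entire null slice of the grid.
calls = []

def advance_slice(level, t, dt, max_level):
    calls.append((level, round(t, 6)))
    if level < max_level:
        for k in range(2):  # two refined half-steps per coarse step
            advance_slice(level + 1, t + k * dt / 2, dt / 2, max_level)

advance_slice(0, 0.0, 1.0, max_level=2)
```

Because the unit of recursion is an entire slice rather than an individual diamond cell, each level's data can live in contiguous arrays instead of per-cell linked lists, which is the memory-management simplification described above.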
NASA Astrophysics Data System (ADS)
Sides, Scott; Jamroz, Ben; Crockett, Robert; Pletzer, Alexander
2012-02-01
Self-consistent field theory (SCFT) for dense polymer melts has been highly successful in describing complex morphologies in block copolymers. Field-theoretic simulations such as these are able to access large length and time scales that are difficult or impossible for particle-based simulations such as molecular dynamics. The modified diffusion equations that arise as a consequence of the coarse-graining procedure in the SCF theory can be efficiently solved with a pseudo-spectral (PS) method that uses fast Fourier transforms on uniform Cartesian grids. However, PS methods can be difficult to apply in many block copolymer SCFT simulations (e.g., confinement, interface adsorption) in which small spatial regions might require finer resolution than the rest of the simulation grid. Progress on using new solver algorithms to address these problems will be presented. The Tech-X Chompst project aims at marrying the best of adaptive mesh refinement with linear matrix solver algorithms. The Tech-X code PolySwift++ is an SCFT simulation platform that leverages ongoing development in coupling Chombo, a package for solving PDEs via block-structured AMR calculations and embedded boundaries, with PETSc, a toolkit that includes a large assortment of sparse linear solvers.
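The pseudo-spectral update for the modified diffusion equation dq/ds = ∇²q − w(x)q can be sketched in 1D with a split-step FFT scheme; the field w below is an arbitrary smooth potential chosen for illustration, not an actual SCFT field:

```python
import numpy as np

# Split-step pseudo-spectral propagation on a periodic 1D grid:
# half-step of the potential in real space, full diffusion step in
# Fourier space, then the second potential half-step.
n, L, ds, steps = 64, 10.0, 0.01, 200
x = np.arange(n) * L / n
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
w = 0.5 * np.cos(2 * np.pi * x / L)  # assumed smooth potential field

q = np.ones(n)                       # initial propagator q(x, 0) = 1
expw = np.exp(-0.5 * ds * w)         # potential half-step factor
expk = np.exp(-ds * k**2)            # exact diffusion factor per step
for _ in range(steps):
    q = expw * np.fft.ifft(expk * np.fft.fft(expw * q)).real
```

Each step costs two FFTs on a uniform grid, which is precisely the efficiency that becomes awkward to retain once only small regions need finer resolution.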
Biological decomposition efficiency in different woodland soils.
Herlitzius, H
1983-03-01
The decomposition (meaning disappearance) of different leaf types and artificial leaves made from cellulose hydrate foil was studied in three forests: an alluvial forest (Ulmetum), a beech forest on limestone soil (Melico-Fagetum), and a spruce forest on soil overlying limestone bedrock. Fine-, medium-, and coarse-mesh litter bags of special design were used to investigate the roles of abiotic factors, microorganisms, and meso- and macrofauna in effecting decomposition in the three habitats. Additionally, the experimental design was carefully arranged so as to provide information about the effects on decomposition processes of the duration of exposure and the date or moment of exposure. 1. Exposure of litter samples for 12 months showed: a) Litter enclosed in fine-mesh bags decomposed to some 40-44% of the initial amount placed in each of the three forests. Most of this decomposition can be attributed to abiotic factors and microorganisms. b) Litter placed in medium-mesh litter bags was reduced by ca. 60% in alluvial forest, ca. 50% in beech forest and ca. 44% in spruce forest. c) Litter enclosed in coarse-mesh litter bags was reduced by 71% of the initial weights exposed in alluvial and beech forests; in the spruce forest decomposition was no greater than observed with fine- and medium-mesh litter bags. Clearly, in spruce forest the macrofauna has little or no part to play in effecting decomposition. 2. Sequential month-by-month exposure of hazel leaves and cellulose hydrate foil in coarse-mesh litter bags in all three forests showed that one month of exposure led to only slight material losses; these losses were smallest between March and May and largest between June and October/November. 3. Coarse-mesh litter bags containing either hazel or artificial leaves of cellulose hydrate foil were exposed to natural decomposition processes in December 1977 and subsampled monthly over a period of one year; this series constituted the From-sequence of experiments.
Each of the From-sequence samples removed was immediately replaced by a fresh litter bag which was left in place until December 1978; this series constituted the To-sequence of experiments. The results arising from the designated From- and To-sequences showed: a) During the course of one year hazel leaves decomposed completely in alluvial forest, almost completely in beech forest, but to only 50% of the initial value in spruce forest. b) Duration of exposure, and not the date of exposure, is the major controlling influence on decomposition in alluvial forest, a characteristic reflected in the mirror-image courses of the From- and To-sequence curves with respect to the abscissa or time axis. Conversely, the date of exposure, and not the duration of exposure, is the major controlling influence on decomposition in the spruce forest, a characteristic reflected in the mirror-image courses of the From- and To-sequences with respect to the ordinate or axis of percentage decomposition. c) Leaf powder amendment increased the decomposition rate of the hazel and cellulose hydrate leaves in the spruce forest but had no significant effect on their decomposition rate in alluvial and beech forests. It is concluded from this, and other evidence, that litter amendment by leaf fragments or phytophage frass in sites of low biological decomposition activity (e.g. spruce) enhances decomposition processes. d) The time course of hazel leaf decomposition in both alluvial and beech forest is sigmoidal. Three s-phases are distinguished and correspond to the activity of microflora/microfauna, mesofauna/macrofauna, and then microflora/microfauna again. In general, the sigmoidal pattern of the curve can be considered valid for all decomposition processes occurring in terrestrial situations. It is contended that no decomposition (=disappearance) curve actually follows an e-type exponential function.
A logarithmic linear regression can be constructed from the sigmoid curve data, and although this facilitates inter-system comparisons it does not clearly express the dynamics of decomposition. 4. The course of the curve constructed from information about the standard deviations of means derived from the From- and To-sequence data does reflect the dynamics of litter decomposition. The three s-phases can be recognised, and by comparing the actual From-sequence deviation curve with a mirror-inversion representation of the To-sequence curve it is possible to determine whether decomposition is primarily controlled by the duration of exposure or the date of exposure. As is the case for hazel leaf decomposition in beech forest, intermediate conditions can be readily recognised.
Do Invertebrate Activity and Current Velocity Affect Fungal Assemblage Structure in Leaves?
NASA Astrophysics Data System (ADS)
Ferreira, Verónica; Graça, Manuel A. S.
2006-02-01
In this study we assessed the effect of current velocity and shredder presence, manipulated in artificial channels, on the structure of the fungal assemblage colonizing alder (Alnus glutinosa (L.) Gaertner) leaves incubated in coarse and fine mesh bags. Fungal sporulation rates, cumulative conidial production and number of species of aquatic hyphomycetes were higher in leaves exposed to high rather than to low current velocity. The opposite was observed regarding Simpson's index (D) on the fungal assemblage. Some species of aquatic hyphomycetes were consistently stimulated in high current channels. No effect of shredders or of mesh type was observed.
Generating unstructured nuclear reactor core meshes in parallel
Jain, Rajeev; Tautges, Timothy J.
2014-10-24
Recent advances in supercomputers and parallel solver techniques have enabled users to run large simulation problems using millions of processors. Techniques for multiphysics nuclear reactor core simulations are under active development in several countries. Most of these techniques require large unstructured meshes that can be hard to generate on standalone desktop computers because of high memory requirements, limited processing power, and other complexities. We have previously reported on a hierarchical lattice-based approach for generating reactor core meshes. Here, we describe efforts to exploit coarse-grained parallelism during reactor assembly and reactor core mesh generation processes. We highlight several reactor core examples including a very high temperature reactor, a full-core model of the Japanese MONJU reactor, a ¼ pressurized water reactor core, the fast reactor Experimental Breeder Reactor-II core with an XX09 assembly, and an advanced breeder test reactor core. The times required to generate large mesh models, along with speedups obtained from running these problems in parallel, are reported. A graphical user interface to the tools described here has also been developed.
The optimization of high resolution topographic data for 1D hydrodynamic models
NASA Astrophysics Data System (ADS)
Ales, Ronovsky; Michal, Podhoranyi
2016-06-01
The main focus of the research presented in this paper is to optimize and use high resolution topographic data (HRTD) for hydrological modelling. Optimization of HRTD is done by generating an adaptive mesh: the distance between a coarse mesh and the surface of the dataset is measured, and the mesh is adapted so that the geometry stays as close to the initial resolution as possible. The technique described in this paper enables the computation of very accurate 1-D hydrodynamic models. In the paper, we use the HEC-RAS software as a solver. For comparison, we have chosen the number of generated cells/grid elements (in the whole discretization domain and in selected cross sections) with respect to preservation of the accuracy of the computational domain. Generation of the mesh for hydrodynamic modelling depends strongly on domain size and domain resolution. The topographic dataset used in this paper was created using the LiDAR method and captures a 5.9 km long section of a catchment of the river Olše. We studied crucial changes in topography for the generated mesh. Assessment was done by commonly used statistical and visualization methods.
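The distance-driven adaptation described above can be sketched in one dimension: refine a coarse terrain profile wherever its linear interpolant strays too far from the high-resolution surface. This is a minimal illustrative sketch, not the authors' implementation; the function names and the test profile are assumptions.

```python
# Minimal sketch: adaptively refine a 1D terrain profile so the
# piecewise-linear mesh stays within a distance tolerance of the
# high-resolution surface (checked at segment midpoints).
import math

def refine_profile(surface, x0, x1, tol, max_depth=20):
    """Return mesh node x-positions whose linear interpolant stays
    within `tol` of `surface` at segment midpoints."""
    xm = 0.5 * (x0 + x1)
    # distance between the coarse segment and the true surface at midpoint
    linear_mid = 0.5 * (surface(x0) + surface(x1))
    if max_depth == 0 or abs(surface(xm) - linear_mid) <= tol:
        return [x0, x1]
    left = refine_profile(surface, x0, xm, tol, max_depth - 1)
    right = refine_profile(surface, xm, x1, tol, max_depth - 1)
    return left[:-1] + right  # drop the duplicated midpoint node

# Example: a bumpy cross-section; nodes cluster where curvature is high
profile = lambda x: math.sin(3 * x) + 0.2 * math.sin(20 * x)
nodes = refine_profile(profile, 0.0, 2.0, tol=0.01)
```

The recursion terminates when each segment's midpoint deviation is within tolerance, so fine cells appear only where the topography demands them.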
Caught in a net: Retention efficiency of microplankton ≥ 10 and < 50 μm collected on mesh netting
NASA Astrophysics Data System (ADS)
Molina, Vanessa; Robbins-Wamsley, Stephanie H.; Riley, Scott C.; First, Matthew R.; Drake, Lisa A.
2018-03-01
Living organisms ≥ 10 μm and < 50 μm in ballast water discharged from ships are typically collected by filtering samples through a monofilament mesh net with pore openings sized to retain organisms ≥ 10 μm. This (or any) filtering method does not result in perfect size fractionation, and it can induce stress, mortality, and loss of organisms that, in turn, may lead to underestimating the concentration of organisms within samples. To address this loss, the retention efficiency (RE) was determined for six filtration approaches using laboratory cultures of microalgae and ambient marine organisms. The approaches employed a membrane filter or mesh nettings of different compositions (nylon, stainless steel, polyester, and polycarbonate), nominal pore sizes (5, 7, and 10 μm), and filtering sequences (e.g., pre-filtering water through a coarse filter). Additionally, in trials with polycarbonate track etched (PCTE) membrane filters, water was amended with particulate material to increase turbidity. Organisms ≥ 10 μm were counted in the material retained on the filter (the filtrand), the material passing through the filter (the filtrate), and the whole water (i.e., unfiltered water). In addition, variable fluorescence fluorometry was used to gauge the relative photochemical yield of phytoplankton (a proximal measurement of the physiological status of phytoplankton) in the size fractions. Further, the mesh types and filters were examined using scanning electron microscopy, which showed irregular openings. The RE of cultured organisms (calculated as the concentration in the filtrand relative to the combined concentration in the filtrand and the filtrate) was high for all filtration approaches when laboratory cultures were assessed (> 93%), but RE ranged from 66 to 98% when mixed assemblages of ambient organisms were evaluated.
Although PCTE membrane filters had the highest RE (98%), it was not significantly higher than the efficiencies of the 7-μm polyester, Double 7-μm polyester, and Dual 35-μm and 7-μm polyester approaches, but it was significantly higher than the 5-μm nylon and 5-μm stainless steel techniques. This result suggests that PCTE membrane filters perform comparably to 7-μm polyester meshes, so any of these approaches could be used for concentrating organisms. However, the potential for handling loss is inherently lower for one rinsing step rather than two. Therefore, it is recommended that either PCTE filters or 7-μm polyester mesh be used to concentrate organisms ≥ 10 μm and < 50 μm. In trials conducted using 10-μm PCTE filters with water amended to increase the particulate concentration, no significant difference in the RE of ambient organisms was found compared to unamended water. Finally, photochemical yield did not vary significantly between organisms in the filtrand and filtrate, regardless of the filtration approach used.
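The RE metric defined in the abstract (filtrand concentration relative to the combined filtrand-plus-filtrate concentration) can be written down directly. Variable names and the example concentrations below are illustrative, not values from the study.

```python
def retention_efficiency(filtrand_conc, filtrate_conc):
    """RE (%) as defined in the text: concentration retained on the
    filter relative to the combined filtrand + filtrate concentration."""
    total = filtrand_conc + filtrate_conc
    if total == 0:
        raise ValueError("no organisms recovered in either fraction")
    return 100.0 * filtrand_conc / total

# e.g., 98 organisms/mL retained on the filter, 2 organisms/mL passing through
re_example = retention_efficiency(98.0, 2.0)
```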
Full-mesh T- and O-band wavelength router based on arrayed waveguide gratings.
Idris, Nazirul A; Yoshizawa, Katsumi; Tomomatsu, Yasunori; Sudo, Makoto; Hajikano, Tadashi; Kubo, Ryogo; Zervas, Georgios; Tsuda, Hiroyuki
2016-01-11
We propose an ultra-broadband full-mesh wavelength router supporting the T- and O-bands using 3 stages of cascaded arrayed waveguide gratings (AWGs). The router architecture is based on a combination of waveband and channel routing by coarse and fine AWGs, respectively. We fabricated several T-band-specific silica-based AWGs and quantum dot semiconductor optical amplifiers as part of the router, and demonstrated 10 Gbps data transmission for several wavelengths throughout a range of 7.4 THz. The power penalties were below 1 dB. Wavelength routing was also demonstrated, where the tuning time within a 9.4-nm-wide waveband was below 400 ms.
Recycled Coarse Aggregate Produced by Pulsed Discharge in Water
NASA Astrophysics Data System (ADS)
Namihira, Takao; Shigeishi, Mitsuhiro; Nakashima, Kazuyuki; Murakami, Akira; Kuroki, Kaori; Kiyan, Tsuyoshi; Tomoda, Yuichi; Sakugawa, Takashi; Katsuki, Sunao; Akiyama, Hidenori; Ohtsu, Masayasu
In Japan, the recycling ratio of concrete scraps has been kept over 98% since the Law for the Recycling of Construction Materials was enforced in 2000. At present, most concrete scraps are recycled as Lower Subbase Course Material. On the other hand, it is predicted to be difficult to keep this high recycling ratio in the near future, because the amount of concrete scraps is increasing rapidly and is expected to reach over 3 times the present level by 2010. In addition, the demand for concrete scraps as Lower Subbase Course Material has been decreasing. Therefore, a new way to reuse concrete scraps must be developed. Concrete scraps normally consist of 70% coarse aggregate, 19% water and 11% cement. To obtain a higher overall recycling ratio, a higher recycling ratio of coarse aggregate is desired. In this paper, a new method for recycling coarse aggregate from concrete scraps is developed and demonstrated. The system includes a Marx generator and a point-to-hemisphere mesh electrode immersed in water. In the demonstration, a test piece of concrete scrap was located between the electrodes and was treated by the pulsed discharge. After the discharge treatment, the recycled coarse aggregates were evaluated under JIS and TS and had sufficient quality for use as coarse aggregate.
NASA Astrophysics Data System (ADS)
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2016-12-01
There is a need for hydrological land surface schemes that can link to atmospheric models, provide hydrological prediction at multiple scales, and guide the development of multiple-objective water predictive systems. Distributed raster-based models suffer from an overrepresentation of topography, leading to wasted computational effort that increases uncertainty due to greater numbers of parameters and initial conditions. The Canadian Hydrological Model (CHM) is a modular, multiphysics, spatially distributed modelling framework designed for representing hydrological processes, including those that operate in cold regions. Unstructured meshes permit variable spatial resolution, allowing coarse resolutions where spatial variability is low and fine resolutions where required. Model uncertainty is reduced by decreasing the number of computational elements relative to high-resolution rasters. CHM uses a novel multi-objective approach for unstructured triangular mesh generation that fulfills hydrologically important constraints (e.g., basin boundaries, water bodies, soil classification, land cover, elevation, and slope/aspect). This provides an efficient spatial representation of parameters and initial conditions, as well as well-formed and well-graded triangles that are suitable for numerical discretization. CHM uses high-quality open-source libraries and high-performance computing paradigms to provide a framework that allows current state-of-the-art process algorithms to be integrated. The impact of changes to model structure, including individual algorithms, parameters, initial conditions, driving meteorology, and spatial/temporal discretization, can be easily tested. Initial testing of CHM compared spatial scales and model complexity for a spring melt period at a sub-arctic mountain basin. The meshing algorithm reduced the total number of computational elements and preserved the spatial heterogeneity of predictions.
Application of Interface Technology in Nonlinear Analysis of a Stitched/RFI Composite Wing Stub Box
NASA Technical Reports Server (NTRS)
Wang, John T.; Ransom, Jonathan B.
1997-01-01
A recently developed interface technology was successfully employed in the geometrically nonlinear analysis of a full-scale stitched/RFI composite wing box loaded in bending. The technology allows mismatched finite element models to be joined in a variationally consistent manner and reduces the modeling complexity by eliminating transition meshing. In the analysis, local finite element models of nonlinearly deformed wide bays of the wing box are refined without the need for transition meshing to the surrounding coarse mesh. The COMET-AR finite element code, which has the interface technology capability, was used to perform the analyses. The COMET-AR analysis is compared to both a NASTRAN analysis and to experimental data. The interface technology solution is shown to be in good agreement with both. The viability of interface technology for coupled global/local analysis of large scale aircraft structures is demonstrated.
A methodology for quadrilateral finite element mesh coarsening
Staten, Matthew L.; Benzley, Steven; Scott, Michael
2008-03-27
High fidelity finite element modeling of continuum mechanics problems often requires using all quadrilateral or all hexahedral meshes. The efficiency of such models is often dependent upon the ability to adapt a mesh to the physics of the phenomena. Adapting a mesh requires the ability to both refine and/or coarsen the mesh. The algorithms available to refine and coarsen triangular and tetrahedral meshes are very robust and efficient. However, the ability to locally and conformally refine or coarsen all quadrilateral and all hexahedral meshes presents many difficulties. Some research has been done on localized conformal refinement of quadrilateral and hexahedral meshes. However, little work has been done on localized conformal coarsening of quadrilateral and hexahedral meshes. A general method which provides both localized conformal coarsening and refinement for quadrilateral meshes is presented in this paper. This method is based on restructuring the mesh with simplex manipulations to the dual of the mesh. Finally, this method appears to be extensible to hexahedral meshes in three dimensions.
Robust and efficient overset grid assembly for partitioned unstructured meshes
NASA Astrophysics Data System (ADS)
Roget, Beatrice; Sitaraman, Jayanarayanan
2014-03-01
This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning.
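The donor-search step described above (finding which cell contains a given point) can be illustrated on a 2D triangular mesh using barycentric coordinates. This is a brute-force sketch for clarity; a production overset assembler would use spatial trees or neighbor-walk searches, and all names here are assumptions.

```python
# Sketch of donor search on a 2D triangular mesh: a point's donor cell
# is the triangle in which all barycentric weights are non-negative.

def barycentric(p, a, b, c):
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return l1, l2, 1.0 - l1 - l2

def find_donor(point, vertices, triangles, eps=1e-12):
    """Return the index of the triangle containing `point`,
    or None if the point lies outside the mesh (a hole point)."""
    for idx, (i, j, k) in enumerate(triangles):
        weights = barycentric(point, vertices[i], vertices[j], vertices[k])
        if all(w >= -eps for w in weights):
            return idx
    return None

# Unit square split into two triangles
verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
tris = [(0, 1, 2), (1, 3, 2)]
donor = find_donor((0.2, 0.2), verts, tris)
```

A query point outside every cell returns `None`, which corresponds to a hole point in the overset classification.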
Hierarchical Boltzmann simulations and model error estimation
NASA Astrophysics Data System (ADS)
Torrilhon, Manuel; Sarna, Neeraj
2017-08-01
A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, while a subsequent refinement allows the result to be successively improved toward the complete Boltzmann result. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation as a proof-of-concept of such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems, which in particular highlights the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be provided by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.
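The idea of estimating model error by comparing subsequent hierarchy levels can be shown on a stand-in hierarchy. As an assumption for illustration, Taylor partial sums of exp(x) play the role of the moment hierarchy: the difference between two consecutive levels estimates the error of the coarser level.

```python
# Hierarchical model error estimation, illustrated on a toy hierarchy
# (Taylor sums of exp), not the Hermite/moment hierarchy of the paper.
import math

def level(x, n):
    """n-th hierarchy member: Taylor sum of exp(x) up to order n."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

def model_error_estimate(x, n):
    """Estimate the level-n error by comparing against level n+1."""
    return abs(level(x, n + 1) - level(x, n))

x, n = 0.5, 4
est = model_error_estimate(x, n)
true_err = abs(math.exp(x) - level(x, n))
# for a rapidly converging hierarchy, `est` tracks `true_err`
```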
Fully automatic hp-adaptivity for acoustic and electromagnetic scattering in three dimensions
NASA Astrophysics Data System (ADS)
Kurtz, Jason Patrick
We present an algorithm for fully automatic hp-adaptivity for finite element approximations of elliptic and Maxwell boundary value problems in three dimensions. The algorithm automatically generates a sequence of coarse grids, and a corresponding sequence of fine grids, such that the energy norm of the error decreases exponentially with respect to the number of degrees of freedom in either sequence. At each step, we employ a discrete optimization algorithm to determine the refinements for the current coarse grid such that the projection-based interpolation error for the current fine grid solution decreases with an optimal rate with respect to the number of degrees of freedom added by the refinement. The refinements are restricted only by the requirement that the resulting mesh is at most 1-irregular, but they may be anisotropic in both element size h and order of approximation p. While we cannot prove that our method converges at all, we present numerical evidence of exponential convergence for a diverse suite of model problems from acoustic and electromagnetic scattering. In particular we show that our method is well suited to the automatic resolution of exterior problems truncated by the introduction of a perfectly matched layer. To enable and accelerate the solution of these problems on commodity hardware, we include a detailed account of three critical aspects of our implementation, namely an efficient implementation of sum factorization, several efficient interfaces to the direct multi-frontal solver MUMPS, and some fast direct solvers for the computation of a sequence of nested projections.
Multilevel Methods for Elliptic Problems with Highly Varying Coefficients on Nonaligned Coarse Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheichl, Robert; Vassilevski, Panayot S.; Zikatanov, Ludmil T.
2012-06-21
We generalize the analysis of classical multigrid and two-level overlapping Schwarz methods for 2nd order elliptic boundary value problems to problems with large discontinuities in the coefficients that are not resolved by the coarse grids or the subdomain partition. The theoretical results provide a recipe for designing hierarchies of standard piecewise linear coarse spaces such that the multigrid convergence rate and the condition number of the Schwarz preconditioned system do not depend on the coefficient variation or on any mesh parameters. One assumption we have to make is that the coarse grids are sufficiently fine in the vicinity of cross points or where regions with large diffusion coefficients are separated by a narrow region where the coefficient is small. We do not need to align them with possible discontinuities in the coefficients. The proofs make use of novel stable splittings based on weighted quasi-interpolants and weighted Poincaré-type inequalities. Finally, numerical experiments are included that illustrate the sharpness of the theoretical bounds and the necessity of the technical assumptions.
Fog collecting biomimetic surfaces: Influence of microstructure and wettability.
Azad, M A K; Ellerbrok, D; Barthlott, W; Koch, K
2015-01-19
We analyzed the fog collection efficiency of three different sets of samples: replicas (with and without microstructures), copper wire (smooth and microgrooved), and polyolefin mesh (hydrophilic, superhydrophilic, and hydrophobic). The collection efficiency of the samples was compared within each set separately to investigate the influence of microstructures and/or the wettability of the surfaces on fog collection. Under the controlled experimental conditions chosen here, large differences in efficiency were found. Microstructured plant replica samples collected 2-3 times more water than unstructured (smooth) samples. Copper wire samples showed similar results. Moreover, microgrooved wires shed water droplets faster than smooth wires. The superhydrophilic mesh tested here proved more efficient than the other mesh samples with different wettability. The amount of fog collected by the superhydrophilic mesh was about 5 times higher than that of the hydrophilic (untreated) mesh and about 2 times higher than that of the hydrophobic mesh.
Facile Fabrication of a Polyethylene Mesh for Oil/Water Separation in a Complex Environment.
Zhao, Tianyi; Zhang, Dongmei; Yu, Cunming; Jiang, Lei
2016-09-14
Low cost, eco-friendly, and easily scaled-up processes are needed to fabricate efficient oil/water separation materials, especially those useful in harsh environments such as highly acidic, alkaline, and salty environments, to deal with serious oil spills and industrial organic pollutants. Herein, a highly efficient oil/water separation mesh with durable chemical stability was fabricated by simply scratching and pricking a conventional polyethylene (PE) film. Multiscaled morphologies were obtained by this scratching and pricking process and provided the mesh with a special wettability performance termed superhydrophobicity, superoleophilicity, and low water adhesion, while the inert chemical properties of PE delivered chemical etching resistance to the fabricated mesh. In addition to a highly efficient oil/corrosive liquid separation, the fabricated PE mesh was also reusable and exhibited ultrafast oil/water separation solely by gravity. The easy operation, chemical durability, reusability, and efficiency of the novel PE mesh give it high potential for use in industrial and consumer applications.
NASA Technical Reports Server (NTRS)
Berger, Marsha J.; Saltzman, Jeff S.
1992-01-01
We describe the development of a structured adaptive mesh algorithm (AMR) for the Connection Machine-2 (CM-2). We develop a data layout scheme that preserves locality even for communication between fine and coarse grids. On 8K of a 32K machine we achieve performance slightly less than 1 CPU of the Cray Y-MP. We apply our algorithm to an inviscid compressible flow problem.
On Spurious Numerics in Solving Reactive Equations
NASA Technical Reports Server (NTRS)
Kotov, D. V; Yee, H. C.; Wang, W.; Shu, C.-W.
2013-01-01
The objective of this study is to gain a deeper understanding of the behavior of high order shock-capturing schemes for problems with stiff source terms and discontinuities, and of the corresponding numerical prediction strategies. The studies by Yee et al. (2012) and Wang et al. (2012) focus only on solving the reactive system by the fractional step method using the Strang splitting (Strang 1968). It is a common practice by developers in computational physics and engineering simulations to include a cut-off safeguard if densities are outside the permissible range. Here we compare the spurious behavior of the same schemes when solving the fully coupled reactive system without the Strang splitting vs. using the Strang splitting. The comparison between the two procedures and the effects of a cut-off safeguard are the focus of the present study. The comparison of the performance of these schemes is largely based on the degree to which each method captures the correct location of the reaction front for coarse grids. Here "coarse grids" means the standard mesh density required for accurate simulation of typical non-reacting flows of a similar problem setup. It is remarked that, in order to resolve the sharp reaction front, local refinement beyond the standard mesh density is still needed.
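The fractional-step structure with Strang splitting mentioned above advances one operator for a full step sandwiched between two half steps of the other. The sketch below is a toy scalar illustration (not the authors' reactive Euler solver); the operator names and the test problem are assumptions, and each sub-operator is integrated exactly, so the split is exact for this commuting scalar case.

```python
# Strang splitting for du/dt = A(u) + R(u): half step of R, full step
# of A, half step of R. Here A and R are scalar linear decay operators
# integrated exactly by their matrix-free exponentials.
import math

def strang_step(u, dt, convection, reaction):
    u = reaction(u, 0.5 * dt)
    u = convection(u, dt)
    return reaction(u, 0.5 * dt)

# du/dt = a*u ("convection" part) + b*u ("reaction" part)
a, b = -1.0, -2.0
convection = lambda u, dt: u * math.exp(a * dt)
reaction = lambda u, dt: u * math.exp(b * dt)

u, dt, steps = 1.0, 0.1, 10
for _ in range(steps):
    u = strang_step(u, dt, convection, reaction)
exact = math.exp((a + b) * dt * steps)
```

For genuinely stiff, non-commuting reactive systems the split is only second-order accurate, which is precisely where the spurious behavior studied in the paper arises.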
Evaluation of flotation for purification of pyrite for use in thermal batteries
NASA Astrophysics Data System (ADS)
Guidotti, R. A.; Reinhardt, F. W.
1992-07-01
The purification of pyrite (FeS2) used in Li-alloy/FeS2 thermal batteries by the physical process of flotation was evaluated for reduction of the quartz impurity. The process was compared to the standard process of leaching with concentrated hydrofluoric acid. Flotation was an attractive alternative because it avoided many of the safety and environmental concerns posed by the use of concentrated HF. The effects of particle size and initial purity of the pyrite feed material upon the final purity and yield of the product concentrate were examined for batch sizes from 3.5 to 921 kg. Feed materials as coarse as 8 mm and as fine as -325 mesh were treated; the coarse pyrite was ground wet in a rod mill or dry in a vibratory mill to -230 mesh prior to flotation. Both the HF-leached and the flotation-treated pyrite were leached with HCl (1:1 v/v) to remove acid-soluble impurities. The flotation-purified pyrite concentrates were formulated into catholytes; their electrochemical performance was evaluated in both single cells and 5-cell batteries for comparison to data generated under the same discharge conditions for catholytes formulated with HF/HCl-purified pyrite.
NASA Astrophysics Data System (ADS)
Kong, Fande; Cai, Xiao-Chuan
2017-07-01
Nonlinear fluid-structure interaction (FSI) problems on unstructured meshes in 3D appear in many applications in science and engineering, such as vibration analysis of aircrafts and patient-specific diagnosis of cardiovascular diseases. In this work, we develop a highly scalable, parallel algorithmic and software framework for FSI problems consisting of a nonlinear fluid system and a nonlinear solid system, which are coupled monolithically. The FSI system is discretized by a stabilized finite element method in space and a fully implicit backward difference scheme in time. To solve the large, sparse system of nonlinear algebraic equations at each time step, we propose an inexact Newton-Krylov method together with a multilevel, smoothed Schwarz preconditioner with isogeometric coarse meshes generated by a geometry preserving coarsening algorithm. Here "geometry" includes the boundary of the computational domain and the wet interface between the fluid and the solid. We show numerically that the proposed algorithm and implementation are highly scalable in terms of the number of linear and nonlinear iterations and the total compute time on a supercomputer with more than 10,000 processor cores for several problems with hundreds of millions of unknowns.
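The outer Newton iteration described above can be sketched on a tiny coupled system. This is an illustrative assumption-level sketch, not the authors' FSI code: each step solves J(x) dx = -F(x), here with a finite-difference Jacobian and a direct 2x2 solve, whereas the paper replaces that solve with an inexact Krylov iteration and a multilevel Schwarz preconditioner.

```python
# Newton's method on a 2x2 nonlinear system with a finite-difference
# Jacobian and Cramer's-rule linear solve (stand-in for the Krylov solve).

def newton_2x2(F, x, tol=1e-12, max_iter=50, h=1e-7):
    for _ in range(max_iter):
        f0, f1 = F(x)
        if max(abs(f0), abs(f1)) < tol:
            break
        # finite-difference Jacobian, one column per perturbed unknown
        a0, a1 = F((x[0] + h, x[1]))
        b0, b1 = F((x[0], x[1] + h))
        J = [[(a0 - f0) / h, (b0 - f0) / h],
             [(a1 - f1) / h, (b1 - f1) / h]]
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        # Cramer's rule for J dx = -F
        dx0 = (-f0 * J[1][1] + f1 * J[0][1]) / det
        dx1 = (-f1 * J[0][0] + f0 * J[1][0]) / det
        x = (x[0] + dx0, x[1] + dx1)
    return x

# Toy coupled nonlinear system with root (1, 1)
F = lambda x: (x[0] ** 2 + x[1] - 2.0, x[0] + x[1] ** 3 - 2.0)
root = newton_2x2(F, (2.0, 2.0))
```

In the monolithic FSI setting the two unknowns would stand in for the fluid and solid blocks, and scalability comes from how the inner linear solve is preconditioned.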
Ab initio velocity-field curves in monoclinic β-Ga2O3
NASA Astrophysics Data System (ADS)
Ghosh, Krishnendu; Singisetti, Uttam
2017-07-01
We investigate the high-field transport in monoclinic β-Ga2O3 using a combination of ab initio calculations and full band Monte Carlo (FBMC) simulation. Scattering rate calculation and the final state selection in the FBMC simulation use complete wave-vector (both electron and phonon) and crystal direction dependent electron phonon interaction (EPI) elements. We propose and implement a semi-coarse version of the Wannier-Fourier interpolation method [Giustino et al., Phys. Rev. B 76, 165108 (2007)] for short-range non-polar optical phonon (EPI) elements in order to ease the computational requirement in FBMC simulation. During the interpolation of the EPI, the inverse Fourier sum over the real-space electronic grids is done on a coarse mesh while the unitary rotations are done on a fine mesh. This paper reports the high field transport in monoclinic β-Ga2O3 with deep insight into the contribution of electron-phonon interactions and velocity-field characteristics for electric fields ranging up to 450 kV/cm in different crystal directions. A peak velocity of 2 × 107 cm/s is estimated at an electric field of 200 kV/cm.
Impact of Variable-Resolution Meshes on Regional Climate Simulations
NASA Astrophysics Data System (ADS)
Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.
2014-12-01
The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitations, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using ERA-Interim re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.
Impact of Variable-Resolution Meshes on Regional Climate Simulations
NASA Astrophysics Data System (ADS)
Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.
2013-12-01
The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitations, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using NCEP/NCAR re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.
LES on unstructured deforming meshes: Towards reciprocating IC engines
NASA Technical Reports Server (NTRS)
Haworth, D. C.; Jansen, K.
1996-01-01
A variable explicit/implicit characteristics-based advection scheme that is second-order accurate in space and time has been developed recently for unstructured deforming meshes (O'Rourke & Sahota 1996a). To explore the suitability of this methodology for Large-Eddy Simulation (LES), three subgrid-scale turbulence models have been implemented in the CHAD CFD code (O'Rourke & Sahota 1996b): a constant-coefficient Smagorinsky model, a dynamic Smagorinsky model for flows having one or more directions of statistical homogeneity, and a Lagrangian dynamic Smagorinsky model for flows having no spatial or temporal homogeneity (Meneveau et al. 1996). Computations have been made for three canonical flows, progressing towards the intended application of in-cylinder flow in a reciprocating engine. Grid sizes were selected to be comparable to the coarsest meshes used in earlier spectral LES studies. Quantitative results are reported for decaying homogeneous isotropic turbulence, and for a planar channel flow. Computations are compared to experimental measurements, to Direct-Numerical Simulation (DNS) data, and to Rapid-Distortion Theory (RDT) where appropriate. Generally satisfactory evolution of first and second moments is found on these coarse meshes; deviations are attributed to insufficient mesh resolution. Issues include mesh resolution and computational requirements for a specified level of accuracy, analytic characterization of the filtering implied by the numerical method, wall treatment, and inflow boundary conditions. To resolve these issues, finer-mesh simulations and computations of a simplified axisymmetric reciprocating piston-cylinder assembly are in progress.
Array-based Hierarchical Mesh Generation in Parallel
Ray, Navamita; Grindeanu, Iulian; Zhao, Xinglin; ...
2015-11-03
In this paper, we describe an array-based hierarchical mesh generation capability through uniform refinement of unstructured meshes for efficient solution of PDEs using finite element methods and multigrid solvers. A multi-degree, multi-dimensional and multi-level framework is designed to generate nested hierarchies from an initial mesh that can be used for a number of purposes, ranging from multi-level methods to generating large meshes. The capability is developed under the parallel mesh framework "Mesh Oriented dAtaBase", a.k.a. MOAB. We describe the underlying data structures and algorithms used to generate such hierarchies and present numerical results for computational efficiency and mesh quality. In conclusion, we also present results demonstrating the applicability of the developed capability to a multigrid finite-element solver.
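Uniform refinement of an unstructured triangle mesh, as used to build nested hierarchies of the kind described above, splits each triangle into four children through shared edge midpoints. A minimal sketch (not MOAB's array-based implementation; the names are hypothetical):

```python
def refine_uniform(vertices, triangles):
    """One level of uniform 1-to-4 triangle refinement.

    Each edge gets a midpoint vertex (shared between neighbouring
    triangles); each parent triangle is split into four children,
    producing a nested hierarchy level.
    """
    vertices = list(vertices)
    midpoint = {}  # edge (i, j) with i < j -> new vertex index

    def mid(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint:
            (xa, ya), (xb, yb) = vertices[key[0]], vertices[key[1]]
            vertices.append(((xa + xb) / 2.0, (ya + yb) / 2.0))
            midpoint[key] = len(vertices) - 1
        return midpoint[key]

    children = []
    for a, b, c in triangles:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        children += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return vertices, children
```

One triangle becomes four; edge midpoints are created once and shared, which is what keeps the refined mesh conforming.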
GRAPE- TWO-DIMENSIONAL GRIDS ABOUT AIRFOILS AND OTHER SHAPES BY THE USE OF POISSON'S EQUATION
NASA Technical Reports Server (NTRS)
Sorenson, R. L.
1994-01-01
The ability to treat arbitrary boundary shapes is one of the most desirable characteristics of a method for generating grids, including those about airfoils. In a grid used for computing aerodynamic flow over an airfoil, or any other body shape, the surface of the body is usually treated as an inner boundary and often cannot be easily represented as an analytic function. The GRAPE computer program was developed to incorporate a method for generating two-dimensional finite-difference grids about airfoils and other shapes by the use of the Poisson differential equation. GRAPE can be used with any boundary shape, even one specified by tabulated points and including a limited number of sharp corners. The GRAPE program has been developed to be numerically stable and computationally fast. GRAPE can provide the aerodynamic analyst with an efficient and consistent means of grid generation. The GRAPE procedure generates a grid between an inner and an outer boundary by utilizing an iterative procedure to solve the Poisson differential equation subject to geometrical restraints. In this method, the inhomogeneous terms of the equation are automatically chosen such that two important effects are imposed on the grid. The first effect is control of the spacing between mesh points along mesh lines intersecting the boundaries. The second effect is control of the angles with which mesh lines intersect the boundaries. Along with the iterative solution to Poisson's equation, a technique of coarse-fine sequencing is employed to accelerate numerical convergence. GRAPE program control cards and input data are entered via the NAMELIST feature. Each variable has a default value such that user supplied data is kept to a minimum. Basic input data consists of the boundary specification, mesh point spacings on the boundaries, and mesh line angles at the boundaries. Output consists of a dataset containing the grid data and, if requested, a plot of the generated mesh. 
The GRAPE program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 135K (octal) of 60-bit words. For plotted output, the commercially available DISSPLA graphics software package is required. The GRAPE program was developed in 1980.
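The iterative solution of the elliptic grid equations can be sketched in its simplest, homogeneous form, where the inhomogeneous control terms that GRAPE chooses automatically are set to zero and interior grid points are relaxed while boundary nodes stay fixed (a generic Jacobi sketch under our own assumptions, not GRAPE's FORTRAN implementation):

```python
import numpy as np

def laplace_grid(x, y, iters=500):
    """Elliptic grid smoothing: iterate the homogeneous (P = Q = 0) case
    of the Poisson grid equations with fixed boundary nodes.

    GRAPE additionally chooses inhomogeneous terms to control mesh-point
    spacing and mesh-line angles at the boundaries; that control logic is
    omitted here.
    """
    for _ in range(iters):
        # Jacobi relaxation of interior nodes toward the average of
        # their four neighbours (boundary rows/columns untouched).
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])
    return x, y
```

With uniformly spaced boundary points on a square, the iteration recovers the uniform interior grid, since the converged coordinates are harmonic functions of the grid indices.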
Bui, Huu Phuoc; Tomar, Satyendra; Courtecuisse, Hadrien; Audette, Michel; Cotin, Stéphane; Bordas, Stéphane P A
2018-05-01
An error-controlled mesh refinement procedure for needle insertion simulations is presented. As an example, the procedure is applied to simulations of electrode implantation for deep brain stimulation. We take into account the brain shift phenomena occurring when a craniotomy is performed. We observe that the error in the computation of the displacement and stress fields is localised around the needle tip and the needle shaft during needle insertion simulation. By suitably and adaptively refining the mesh in this region, our approach enables us to control, and thus reduce, the error whilst maintaining a coarser mesh in other parts of the domain. Through academic and practical examples we demonstrate that our adaptive approach, as compared with a uniform coarse mesh, increases the accuracy of the displacement and stress fields around the needle shaft and, for a given accuracy, saves computational time with respect to a uniform finer mesh. This facilitates real-time simulations. The proposed methodology has direct implications for increasing the accuracy, and controlling the computational expense, of the simulation of percutaneous procedures such as biopsy, brachytherapy, regional anaesthesia, or cryotherapy. Moreover, the proposed approach can be helpful in the development of robotic surgeries, because the simulation taking place in the control loop of a robot needs to be accurate and to occur in real time.
Laser-structured Janus wire mesh for efficient oil-water separation.
Liu, Yu-Qing; Han, Dong-Dong; Jiao, Zhi-Zhen; Liu, Yan; Jiang, Hao-Bo; Wu, Xuan-Hang; Ding, Hong; Zhang, Yong-Lai; Sun, Hong-Bo
2017-11-23
We report here the fabrication of a Janus wire mesh by a combined process of laser structuring and fluorosilane/graphene oxide (GO) modification of the two sides of the mesh, respectively, toward its applications in efficient oil/water separation. Femtosecond laser processing has been employed to make different laser-induced periodic surface structures (LIPSS) on each side of the mesh. Surface modification with fluorosilane on one side and GO on the other side endows the two sides of the Janus mesh with distinct wettability. Thus, one side is superhydrophobic and superoleophilic in air, and the other side is superhydrophilic in air and superoleophobic under water. As a proof of concept, we demonstrated the separation of light/heavy oil and water mixtures using this Janus mesh. To realize an efficient separation, the intrusion pressure that is dominated by the wire mesh framework and the wettability should be taken into account. Our strategy may open up a new way to design and fabricate Janus structures with distinct wettability; and the resultant Janus mesh may find broad applications in the separation of oil contaminants from water.
NASA Astrophysics Data System (ADS)
Hou, Kun; Zeng, Yicheng; Zhou, Cailong; Chen, Jiahui; Wen, Xiufang; Xu, Shouping; Cheng, Jiang; Lin, Yingguang; Pi, Pihui
2017-09-01
A durable underwater superoleophobic mesh was conveniently prepared by layer-by-layer (LBL) assembly of poly(diallyldimethylammonium chloride) (PDDA) and halloysite nanotubes (HNTs) on a stainless steel mesh. The hierarchical structure and roughness of the PDDA/HNTs coating surface were controlled by adjusting the number of layer deposition cycles. When the PDDA/HNTs coating with 10 deposition cycles was deposited on a mesh with a pore size of about 54 μm, an underwater superoleophobic mesh was obtained. The as-prepared underwater superoleophobic PDDA/HNTs-decorated mesh exhibits outstanding oil-water separation performance, with a separation efficiency of over 97% for various oil/water mixtures, allowing water to pass through while repelling oil completely. In addition, the decorated mesh maintained a separation efficiency above 97% after 20 repeated separation cycles for hexane/water or chloroform/water mixtures. More importantly, the decorated mesh is durable enough to resist chemical and mechanical challenges such as strongly alkaline solutions, saline solutions, and sand abrasion. Therefore, the as-prepared decorated mesh has practical utility in oil-water separation due to its stable separation performance, remarkable chemical and mechanical durability, and facile, eco-friendly preparation process.
NASA Astrophysics Data System (ADS)
Yin, Kai; Yang, Shuai; Dong, Xinran; Chu, Dongkai; Duan, Ji-An; He, Jun
2018-06-01
We report a simple, efficient method to fabricate micro/nanoscale hierarchical structures on one side of polytetrafluoroethylene mesh surfaces, using one-step femtosecond laser direct writing technology. The laser-treated surface exhibits superhydrophobicity in air and superaerophilicity in water, resulting in the mesh possessing the hydrophobic/superhydrophobic asymmetrical property. Bubbles can pass through the mesh from the untreated side to the laser-treated side but cannot pass through the mesh in the opposite direction. The asymmetrical mesh can therefore be designed for the directional transportation and continuous collection of gas bubbles in aqueous environments. Furthermore, the asymmetrical mesh shows excellent stability during corrosion and abrasion tests. These findings may provide an efficient route for fabricating a durable asymmetrical mesh for the directional and continuous transport of gas bubbles.
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Tartakovsky, A. M.
2017-12-01
We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local" problems solved independently in parallel and coordinated via a "global" problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what is possible with state-of-the-art AMR techniques, and the capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.
Development of an adaptive hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1994-01-01
In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The remaining challenge is to reduce and smooth the error while keeping the computational effort low enough to calculate time histories quickly enough for on-board applications.
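The local h-refinement step described above, subdividing only elements whose error indicator is large while the rest of the mesh stays coarse, can be sketched for 1D interval elements (a hypothetical illustration, not the NAG-939 code):

```python
def adapt_mesh(elements, error, tol):
    """Bisect only elements whose local error indicator exceeds tol,
    leaving the rest of the mesh coarse (1D illustration).

    elements: list of (a, b) intervals.
    error:    callable giving an error indicator for an interval.
    """
    out = []
    for a, b in elements:
        if error(a, b) > tol:
            m = 0.5 * (a + b)
            out += [(a, m), (m, b)]   # bisect this element only
        else:
            out.append((a, b))        # keep it coarse
    return out
```

Applying the function repeatedly until no element exceeds the tolerance yields a mesh graded to the error indicator, rather than the global bisection of the earlier methodology.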
Progressive simplification and transmission of building polygons based on triangle meshes
NASA Astrophysics Data System (ADS)
Li, Hongsheng; Wang, Yingjie; Guo, Qingsheng; Han, Jiafu
2010-11-01
Digital earth is a virtual representation of our planet and a data-integration platform which aims at harnessing multi-source, multi-resolution, multi-format spatial data. This paper introduces a research framework integrating progressive cartographic generalization and transmission of vector data. Progressive cartographic generalization provides multiple-resolution data, from coarse to fine, as key scales plus the increments between them, which are not available in the traditional generalization framework. Based on the progressive simplification algorithm, the building polygons are triangulated into meshes and encoded according to the simplification sequence of two basic operations, edge collapse and vertex split. The map data at key scales, and the encoded increments between them, are stored in a multi-resolution file. As the client submits requests to the server, the coarsest map is transmitted first and then the increments. After data decoding and mesh refinement, building polygons with more detail are visualized. Progressive generalization and transmission of building polygons is demonstrated in the paper.
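The edge-collapse operation underlying the progressive simplification can be sketched on an indexed triangle list; replaying the recorded collapses in reverse, as vertex splits, refines the mesh again. A minimal sketch (not the paper's encoder; names are hypothetical):

```python
def collapse_edge(triangles, keep, remove):
    """One edge-collapse step: merge vertex `remove` into `keep`.

    Triangles that become degenerate (two identical corners) are
    dropped.  Recording the (keep, remove) pairs in simplification order
    and replaying them in reverse as vertex splits reconstructs finer
    versions of the polygon, which is the basis of progressive
    transmission.
    """
    out = []
    for tri in triangles:
        tri = tuple(keep if v == remove else v for v in tri)
        if len(set(tri)) == 3:   # keep only non-degenerate triangles
            out.append(tri)
    return out
```

Collapsing an edge of a two-triangle strip removes the triangle incident to both endpoints, shrinking the mesh by one face per degenerate triangle.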
Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks
NASA Technical Reports Server (NTRS)
Jin, Haoqiang; VanderWijngaart, Rob F.
2003-01-01
We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.
NASA Astrophysics Data System (ADS)
McComiskey, A. C.; Telg, H.; Sheridan, P. J.; Kassianov, E.
2017-12-01
The coarse mode contribution to the aerosol radiative effect in a range of clean and turbid aerosol regimes has not been well quantified. While the coarse-mode radiative effect in turbid conditions is generally assumed to be consequential, the effect in clean conditions has likely been underestimated. We survey ground-based in situ measurements of the coarse mode fraction of aerosol optical properties measured around the globe over the past 20 years by the DOE Atmospheric Radiation Measurement Facility and the NOAA Global Monitoring Division. The aerosol forcing efficiency is presented, allowing an evaluation of where the aerosol coarse mode might be climatologically significant.
Jake Musslewhite; Mark S. Wipfli
2004-01-01
We examined the transport of invertebrates and coarse organic detritus from headwater streams draining timber harvest units in a selective timber harvesting study, alternatives to clearcutting (ATC), in southeastern Alaska. Transport in 17 small streams (mean measured discharge range: 1.2 to 14.6 L/s) was sampled with 250-µm-mesh drift nets in spring, summer, and fall...
Park, Kwang-Tae; Kim, Han-Jung; Park, Min-Joon; Jeong, Jun-Ho; Lee, Jihye; Choi, Dae-Geun; Lee, Jung-Ho; Choi, Jun-Hyuk
2015-01-01
In recent years, the inorganic/organic hybrid solar cell concept has received growing attention as an alternative energy solution because of its potential for facile, low-cost fabrication and high efficiency. Here, we report highly efficient hybrid solar cells based on silicon nanowires (SiNWs) and poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS) using transfer-imprinted metal mesh front electrodes. Such a structure increases the optical absorption and shortens the carrier transport distance, thereby greatly increasing the charge carrier collection efficiency. Compared with hybrid cells formed using indium tin oxide (ITO) electrodes, we find an increase in power conversion efficiency from 5.95% to 13.2%, which is attributed to improvements in both the electrical and optical properties of the Au mesh electrode. Our fabrication strategy for the metal mesh electrode is suitable for the large-scale fabrication of flexible transparent electrodes, paving the way towards low-cost, high-efficiency, flexible solar cells. PMID:26174964
NASA Technical Reports Server (NTRS)
Anderson, B. H.; Benson, T. J.
1983-01-01
A supersonic three-dimensional viscous forward-marching computer design code called PEPSIS is used to obtain a numerical solution of the three-dimensional problem of the interaction of a glancing sidewall oblique shock wave and a turbulent boundary layer. Very good results are obtained for a test case that was run to investigate the use of the wall-function boundary-condition approximation for a highly complex three-dimensional shock-boundary layer interaction. Two additional test cases (coarse mesh and medium mesh) are run to examine the question of near-wall resolution when no-slip boundary conditions are applied. A comparison with experimental data shows that the PEPSIS code gives excellent results in general and is practical for three-dimensional supersonic inlet calculations.
Dynamic model of open shell structures buried in poroelastic soils
NASA Astrophysics Data System (ADS)
Bordón, J. D. R.; Aznárez, J. J.; Maeso, O.
2017-08-01
This paper is concerned with a three-dimensional time-harmonic model of open shell structures buried in poroelastic soils. It combines the dual boundary element method (DBEM) for treating the soil with shell finite elements for modelling the structure, leading to a simple and efficient representation of buried open shell structures. A new fully regularised hypersingular boundary integral equation (HBIE) has been developed to this aim, which is then used to build the pair of dual BIEs necessary to formulate the DBEM for Biot poroelasticity. The new regularised HBIE is validated against a problem with an analytical solution. The model is used in a wave diffraction problem in order to show its effectiveness. It offers excellent agreement for length-to-thickness ratios greater than 10, even on relatively coarse meshes. The model is also applied to the calculation of impedances of bucket foundations. It is found that all impedances except the torsional one depend considerably on hydraulic conductivity within the typical frequency range of interest for offshore wind turbines.
A superhydrophobic copper mesh as an advanced platform for oil-water separation
NASA Astrophysics Data System (ADS)
Ren, Guina; Song, Yuanming; Li, Xiangming; Zhou, Yanli; Zhang, Zhaozhu; Zhu, Xiaotao
2018-01-01
Improving the separation efficiency and simplifying the separation process remain highly desirable yet challenging goals for oil-water separation. Herein, to address this challenge, we fabricated a superhydrophobic copper mesh by an immersion process and exploited it as an advanced platform for oil-water separation. To realize oil-water separation efficiently, the obtained mesh was folded directly to form a boat-like device, and it could also be mounted on the open end of a glass barrel to form an oil-skimmer device. These devices collect floating oils through the pores of the copper mesh while repelling water completely, with an oil collection efficiency of up to 99.5%. Oils collected in the devices can easily be sucked out into a container for storage, without requiring mechanical handling for recycling. Importantly, the miniature boat and the oil-skimmer devices retain their enhanced oil collection efficiency even after 10 cycles of oil-water separation. Moreover, exploiting its superhydrophobicity under oil, the copper mesh was also demonstrated as a novel platform for removing tiny water droplets from oil.
NASA Astrophysics Data System (ADS)
Xin, Qin; Yao, Xiaolan; Engelstad, Paal E.
2010-09-01
Wireless mesh networking is an emerging communication paradigm to enable resilient, cost-efficient and reliable services for future-generation wireless networks. We study here the minimum-latency communication primitive of gossiping (all-to-all communication) in multi-hop ad-hoc Wireless Mesh Networks (WMNs). Each mesh node in the WMN is initially given a message, and the objective is to design a minimum-latency schedule such that each mesh node distributes its message to all other mesh nodes. The minimum-latency gossiping problem is well known to be NP-hard, even for the scenario in which the topology of the WMN is known to all mesh nodes in advance. In this paper, we propose a new latency-efficient approximation scheme that can accomplish the gossiping task in a polynomial number of time units in any ad-hoc WMN under a Large Interference Range (LIR), i.e., when the interference range is much larger than the transmission range. To the best of our knowledge, this is the first time such a scenario has been investigated in ad-hoc WMNs under LIR. Our algorithm also allows the labels (e.g., identifiers) of the mesh nodes to be polynomially large in the size of the WMN, a large-label scenario that had not previously been considered in ad-hoc WMNs under LIR. Furthermore, our gossiping scheme can be considered a framework that extends easily to scenarios involving mobility-related issues, since we assume the mesh nodes have no knowledge of the network topology, even for their neighboring mesh nodes.
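A naive baseline for gossiping under a large interference range is a TDMA-style schedule in which only one node transmits per time unit, so no two transmissions can interfere. The paper's approximation scheme is far more sophisticated, but the baseline illustrates the latency metric being minimized (a simulation sketch under our own assumptions):

```python
def gossip_rounds(adjacency):
    """Naive TDMA gossip latency: one mesh node transmits per time unit
    (trivially collision-free, the safe choice under a large
    interference range), cycling until every node knows every message.

    adjacency: adjacency[i] lists the neighbours that hear node i.
    Returns the number of time units used.
    """
    n = len(adjacency)
    know = [{i} for i in range(n)]   # message sets each node has heard
    latency = 0
    while any(len(k) < n for k in know):
        for tx in range(n):          # one collision-free slot per node
            latency += 1
            for rx in adjacency[tx]:
                know[rx] |= know[tx]
            if all(len(k) == n for k in know):
                return latency
    return latency
```

On a three-node path 0-1-2, the messages from the endpoints must both traverse the middle node, so completion takes five slots under this schedule.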
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
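The macromaterial idea, letting a coarse deterministic cell carry a volume-fraction mixture of materials rather than a single one, can be illustrated by estimating the fractions from point samples of the geometry within a cell (a toy sketch, not the production algorithm; names are hypothetical):

```python
from collections import Counter

def macromaterial(cell_samples):
    """Macromaterial mixing for one coarse mesh cell.

    cell_samples: material found at each sample point inside the cell
    (e.g. from ray or point sampling of the CAD geometry).  Returns the
    estimated volume fraction of each material, so the coarse cell still
    'sees' sub-cell geometric detail without refining the mesh.
    """
    counts = Counter(cell_samples)
    total = sum(counts.values())
    return {mat: c / total for mat, c in counts.items()}
```

A cell sampled at four points, three in steel and one in void, is assigned a 75%/25% steel/void mixture instead of being labeled entirely steel.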
NASA Astrophysics Data System (ADS)
Xu, Zhe; Jiang, Deyi; Wei, Zhibo; Chen, Jie; Jing, Jianfeng
2018-01-01
Stainless steel meshes with superhydrophobic surfaces were successfully fabricated via a facile electrophoretic deposition process. The surface morphology and chemical composition were characterized by field-emission scanning electron microscopy (FE-SEM), energy-dispersive X-ray spectroscopy (EDS), X-ray diffraction (XRD) and Fourier-transform infrared spectroscopy (FTIR). After stearic acid modification, the obtained nano-aluminum films on the stainless steel meshes showed excellent superhydrophobic properties, with a water contact angle of 160° ± 1.2° and a water sliding angle of less than 5°. In addition, on the basis of the superhydrophobic meshes, a simple, continuous oil-water separation apparatus was designed, and the oil-water separation efficiency reached 95.8% ± 0.9%. Meanwhile, after 20 oil-water separation cycles, the separation efficiency showed no significant reduction, suggesting the stable performance of the superhydrophobic stainless steel meshes in oil-water separation. Moreover, the flow rate of the oil-water mixture and the effective separation length were investigated to determine their respective effects on the oil-water separation efficiency. Our work provides a cost-efficient method for preparing stable superhydrophobic nano-Al films on stainless steel meshes, with promising practical applications in oil-water separation.
Variation in capture efficiency of a beach seine for small fishes
Parsley, M.J.; Palmer, D.E.; Burkhardt, R.W.
1989-01-01
We determined the capture efficiency of a beach seine as a means of improving abundance estimates of small fishes in littoral areas. Capture efficiency for 14 taxa (individual species or species groups) was determined by seining within an enclosure at night over fine and coarse substrates in the John Day Reservoir, Oregon–Washington. Mean efficiency ranged from 12% for prickly sculpin Cottus asper captured over coarse substrates to 96% for peamouth Mylocheilus caurinus captured over fine substrates. Mean capture efficiency for a taxon (genus or species) was generally higher over fine substrates than over coarse substrates, although mean capture efficiencies over fine substrates were significantly greater for only 3 of 10 taxa. Capture efficiency generally was not influenced by fish density or by water temperature (range, 8–26°C). Conclusions about the relative abundance of taxa captured by seining can change substantially after capture efficiencies are taken into account.
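The correction implied by the study is simple: a raw seine catch is scaled by the taxon- and substrate-specific capture efficiency to estimate true abundance. A sketch (the numbers below are hypothetical, chosen only to mirror the efficiency range reported above):

```python
def corrected_abundance(catch, efficiency):
    """Adjust raw seine catches by capture efficiency: N_hat = catch / e.

    catch:      raw counts per taxon from one seine haul.
    efficiency: capture efficiency per taxon (0-1), e.g. measured by
                seining within an enclosure over a known substrate.
    """
    return {taxon: catch[taxon] / efficiency[taxon] for taxon in catch}
```

At 12% efficiency a catch of 3 sculpin implies roughly 25 fish present, which is why relative-abundance conclusions can change substantially once efficiencies are taken into account.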
NASA Astrophysics Data System (ADS)
Fernandez, D.; Torregrosa, A.; Weiss-Penzias, P. S.; Oliphant, A. J.; Dodge, C.; Bowman, M.; Wilson, S.; Mairs, A. A.; Gravelle, M.; Barkley, T.
2016-12-01
At multiple sites across central California, several passive fog water collectors have been deployed over the past 3 years. All of the sites employ standard Raschel polypropylene mesh as the fog collection medium, and five of them also integrated a novel polypropylene mesh of German manufacture with a 3-dimensional internal structure. Additionally, six metal meshes manufactured by McMaster-Carr, with various hole sizes, were coated with a POSS-PEMA substance at the Massachusetts Institute of Technology and deployed in parallel with the Raschel mesh at six distinct locations. Finally, fluorine-free versions of the POSS-PEMA substance were produced by NBD Nanotechnology and coated on a much finer mesh substrate. Three of those and one control (uncoated mesh) were deployed at one of the fog collection sites for one season, along with a standard Raschel mesh. Preliminary results from one intercomparison of just one pair of meshes over two seasons suggest a wind-speed and, possibly, a droplet-size dependence of the fog collection efficiency. This study will continue to intercompare the various meshes in conjunction with wind speed and direction data. If a collection-efficiency dependence on mesh size or coating is confirmed, it may point to interesting and relevant mechanisms of fog droplet capture and collection hitherto unobserved in field conditions.
Mines, Levi W. D.; Park, Jae Hong; Mudunkotuwa, Imali A.; Anthony, T. Renée; Grassian, Vicki H.; Peters, Thomas M.
2017-01-01
Porous polyurethane foam was evaluated to replace the eight nylon meshes used as a substrate to collect nanoparticles in the Nanoparticle Respiratory Deposition (NRD) sampler. Cylindrical (25-mm diameter by 40-mm deep) foam with 110 pores per inch was housed in a 25-mm-diameter conductive polypropylene cassette cowl compatible with the NRD sampler. Pristine foam and nylon meshes were evaluated for metals content via elemental analysis. The size-selective collection efficiency of the foam was evaluated using salt (NaCl) and metal fume aerosols in independent tests. Collection efficiencies were compared to the nanoparticulate matter (NPM) criterion and a semi-empirical model for foam. Changes in collection efficiency and pressure drop of the foam and nylon meshes were measured after loading with metal fume particles as measures of substrate performance. Substantially less titanium was found in the foam (0.173 μg sampler−1) compared to the nylon mesh (125 μg sampler−1), improving the detection capabilities of the NRD sampler for titanium dioxide particles. The foam collection efficiency was similar to that of the nylon meshes and the NPM criterion (R2 = 0.98, for NaCl), although the semi-empirical model underestimated the experimental efficiency (R2 = 0.38). The pressure drop across the foam was 8% that of the nylon meshes when pristine and changed minimally with metal fume loading (~ 19 mg). In contrast, the pores of the nylon meshes clogged after loading with ~ 1 mg metal fume. These results indicate that foam is a suitable substrate to collect metal (except for cadmium) nanoparticles in the NRD sampler. PMID:28867869
Li, Jing; Wang, Ruoqi; Su, Zhen; Zhang, Dandan; Li, Heping; Yan, Youwei
2018-10-01
Nowadays, it is extremely urgent to search for efficient and effective catalysts for water purification due to severe worldwide water-contamination crises. Here, a 3D Fe@VO2 core-shell mesh, a highly efficient catalyst for the removal of organic dyes with excellent recycling ability in the dark, is designed and developed for the first time. This novel core-shell structure is a 304 stainless steel mesh coated with VO2, fabricated by an electrophoretic deposition method. In this core-shell structure, Fe as the core allows much easier separation from the water, endowing the catalyst with a flexible property for easy recycling, while VO2 as the shell is highly efficient in the degradation of organic dyes with the addition of H2O2. More intriguingly, the 3D Fe@VO2 core-shell mesh exhibits favorable performance across a wide pH range, and can decompose organic dyes both in light-free conditions and under visible irradiation. The possible catalytic oxidation mechanism of the Fe@VO2/H2O2 system is also proposed in this work. Considering its facile fabrication, remarkable catalytic efficiency across a wide pH range, and easy recycling, the 3D Fe@VO2 core-shell mesh is a newly developed high-performance catalyst for addressing universal water crises.
Progressive Precision Surface Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchaineau, M; Joy, KJ
2002-01-11
We introduce a novel wavelet decomposition algorithm that makes a number of powerful new surface design operations practical. Wavelets, and hierarchical representations generally, have held promise to facilitate a variety of design tasks in a unified way by approximating results very precisely, thus avoiding a proliferation of undergirding mathematical representations. However, traditional wavelet decomposition is defined from fine to coarse resolution, limiting its efficiency for highly precise surface manipulation when attempting to create new non-local editing methods. Our key contribution is the progressive wavelet decomposition algorithm, a general-purpose coarse-to-fine method for hierarchical fitting, based in this paper on an underlying multiresolution representation called dyadic splines. The algorithm requests input via a generic interval query mechanism, allowing a wide variety of non-local operations to be quickly implemented. The algorithm performs work proportionate to the tiny compressed output size, rather than to some arbitrarily high resolution that would otherwise be required, thus increasing performance by several orders of magnitude. We describe several design operations that are made tractable by the progressive decomposition. Free-form pasting is a generalization of the traditional control-mesh edit, but for which the shape of the change is completely general and where the shape can be placed using a free-form deformation within the surface domain. Smoothing and roughening operations are enhanced so that an arbitrary loop in the domain specifies the area of effect. Finally, the sculpting effect of moving a tool shape along a path is simulated.
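The coarse-to-fine fitting strategy driven by a generic interval query can be sketched as adaptive dyadic subdivision: refine only where the query reports that a constant approximation is too crude, so work scales with the output size. This is an illustrative recursion, not the dyadic-spline algorithm itself, and the query contract is our own assumption:

```python
def progressive_fit(query, lo, hi, tol, depth=0, max_depth=12):
    """Coarse-to-fine hierarchical fitting sketch.

    Approximate a function by its average over an interval and recurse
    dyadically only where the spread reported by `query` exceeds tol.
    query(lo, hi) -> (mean, spread) is the generic interval-query
    mechanism; work done is proportional to the number of emitted
    leaves, not to a fixed fine resolution.
    """
    mean, spread = query(lo, hi)
    if spread <= tol or depth >= max_depth:
        return [(lo, hi, mean)]          # coarse leaf is good enough
    mid = 0.5 * (lo + hi)
    return (progressive_fit(query, lo, mid, tol, depth + 1, max_depth)
            + progressive_fit(query, mid, hi, tol, depth + 1, max_depth))
```

For f(x) = x on [0, 1] with spread equal to interval width and tol = 0.3, the recursion bottoms out at four leaves of width 0.25, rather than subdividing to a fixed fine level everywhere.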
a Non-Overlapping Discretization Method for Partial Differential Equations
NASA Astrophysics Data System (ADS)
Rosas-Medina, A.; Herrera, I.
2013-05-01
Mathematical models of many systems of interest, including very important continuous systems of Engineering and Science, lead to a great variety of partial differential equations whose solution methods are based on the computational processing of large-scale algebraic systems. Furthermore, the incredible expansion experienced by the existing computational hardware and software has made amenable to effective treatment problems of an ever increasing diversity and complexity, posed by engineering and scientific applications. The emergence of parallel computing prompted on the part of the computational-modeling community a continued and systematic effort with the purpose of harnessing it for the endeavor of solving boundary-value problems (BVPs) of partial differential equations. Very early after such an effort began, it was recognized that domain decomposition methods (DDM) were the most effective technique for applying parallel computing to the solution of partial differential equations, since such an approach drastically simplifies the coordination of the many processors that carry out the different tasks and also greatly reduces the information-transmission requirements between them. Ideally, DDMs aim to produce algorithms that fulfill the DDM-paradigm; i.e., such that "the global solution is obtained by solving local problems defined separately in each subdomain of the coarse mesh (or domain decomposition)". Stated in a simplistic manner, the basic idea is that, when the DDM-paradigm is satisfied, full parallelization can be achieved by assigning each subdomain to a different processor. When intensive DDM research began much attention was given to overlapping DDMs, but soon after attention shifted to non-overlapping DDMs. This evolution seems natural when the DDM-paradigm is taken into account: it is easier to uncouple the local problems when the subdomains are separated.
However, an important limitation of non-overlapping domain decompositions, as that concept is usually understood today, is that interface nodes are shared by two or more subdomains of the coarse-mesh and, therefore, even non-overlapping DDMs are actually overlapping when seen from the perspective of the nodes used in the discretization. In this talk we present and discuss a discretization method in which the nodes used are non-overlapping, in the sense that each one of them belongs to one and only one subdomain of the coarse-mesh.
CFD methodology and validation for turbomachinery flows
NASA Astrophysics Data System (ADS)
Hirsch, Ch.
1994-05-01
The essential problem today, in the application of 3D Navier-Stokes simulations to the design and analysis of turbomachinery components, is the validation of the numerical approximation and of the physical models, in particular the turbulence modelling. Although most of the complex 3D flow phenomena occurring in turbomachinery bladings can be captured with relatively coarse meshes, many detailed flow features are dependent on mesh size, on the turbulence and transition models. A brief review of the present state of the art of CFD methodology is given with emphasis on quality and accuracy of numerical approximations related to viscous flow computations. Considerations related to the mesh influence on solution accuracy are stressed. The basic problems of turbulence and transition modelling are discussed next, with a short summary of the main turbulence models and their applications to representative turbomachinery flows. Validations of present turbulence models indicate that none of the available turbulence models is able to predict all the detailed flow behavior in complex flow interactions. In order to identify the phenomena that can be captured on coarser meshes a detailed understanding of the complex 3D flow in compressor and turbines is necessary. Examples of global validations for different flow configurations, representative of compressor and turbine aerodynamics are presented, including secondary and tip clearance flows.
S-HARP: A parallel dynamic spectral partitioner
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sohn, A.; Simon, H.
1998-01-01
Computational science problems with adaptive meshes involve dynamic load balancing when implemented on parallel machines. This dynamic load balancing requires fast partitioning of computational meshes at run time. The authors present in this report a fast parallel dynamic partitioner, called S-HARP. The underlying principles of S-HARP are the fast feature of inertial partitioning and the quality feature of spectral partitioning. S-HARP partitions a graph from scratch, requiring no partition information from previous iterations. Two types of parallelism have been exploited in S-HARP, fine grain loop level parallelism and coarse grain recursive parallelism. The parallel partitioner has been implemented in Message Passing Interface on Cray T3E and IBM SP2 for portability. Experimental results indicate that S-HARP can partition a mesh of over 100,000 vertices into 256 partitions in 0.2 seconds on a 64 processor Cray T3E. S-HARP is much more scalable than other dynamic partitioners, giving an over 15-fold speedup on 64 processors while ParaMeTiS 1.0 gives only a few-fold speedup. Experimental results demonstrate that S-HARP is three to 10 times faster than the dynamic partitioners ParaMeTiS and Jostle on six computational meshes of size over 100,000 vertices.
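The inertial component of such partitioners can be sketched as recursive bisection along the principal inertial axis of the vertex coordinates: project onto the dominant eigenvector of the coordinate covariance and split at the median. This is a serial sketch of the inertial step only (names are illustrative); S-HARP additionally incorporates spectral information and runs in parallel.

```python
import numpy as np

def inertial_bisect(coords, ids=None):
    """Split mesh vertices into two balanced halves by projecting onto
    the principal inertial axis (a sketch of the inertial step used by
    dynamic partitioners; the real algorithm adds spectral quality)."""
    if ids is None:
        ids = np.arange(len(coords))
    centered = coords - coords.mean(axis=0)
    # Principal axis = covariance eigenvector with the largest eigenvalue.
    _, vecs = np.linalg.eigh(centered.T @ centered)
    axis = vecs[:, -1]
    order = np.argsort(centered @ axis)
    half = len(order) // 2
    return ids[order[:half]], ids[order[half:]]

def recursive_partition(coords, nparts):
    """Recursively bisect until nparts balanced partitions are produced
    (nparts assumed to be a power of two)."""
    parts = [np.arange(len(coords))]
    while len(parts) < nparts:
        p = parts.pop(0)
        a, b = inertial_bisect(coords[p], p)
        parts += [a, b]
    return parts
```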
A Robust and Scalable Software Library for Parallel Adaptive Refinement on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Lou, John Z.; Norton, Charles D.; Cwik, Thomas A.
1999-01-01
The design and implementation of Pyramid, a software library for performing parallel adaptive mesh refinement (PAMR) on unstructured meshes, is described. This software library can be easily used in a variety of unstructured parallel computational applications, including parallel finite element, parallel finite volume, and parallel visualization applications using triangular or tetrahedral meshes. The library contains a suite of well-designed and efficiently implemented modules that perform operations in a typical PAMR process. Among these are mesh quality control during successive parallel adaptive refinement (typically guided by a local-error estimator), parallel load-balancing, and parallel mesh partitioning using the ParMeTiS partitioner. The Pyramid library is implemented in Fortran 90 with an interface to the Message-Passing Interface (MPI) library, supporting code efficiency, modularity, and portability. An EM waveguide filter application, adaptively refined using the Pyramid library, is illustrated.
Unstructured mesh algorithms for aerodynamic calculations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
The use of unstructured mesh techniques for solving complex aerodynamic flows is discussed. The principal advantages of unstructured mesh strategies, as they relate to complex geometries, adaptive meshing capabilities, and parallel processing, are emphasized. The various aspects required for the efficient and accurate solution of aerodynamic flows are addressed. These include mesh generation, mesh adaptivity, solution algorithms, convergence acceleration, and turbulence modeling. Computations of viscous turbulent two-dimensional flows and inviscid three-dimensional flows about complex configurations are demonstrated. Remaining obstacles and directions for future research are also outlined.
Convergence analysis of two-node CMFD method for two-group neutron diffusion eigenvalue problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, Yongjin; Park, Jinsu; Lee, Hyun Chul
2015-12-01
In this paper, the nonlinear coarse-mesh finite difference method with two-node local problem (CMFD2N) is proven to be unconditionally stable for neutron diffusion eigenvalue problems. The explicit current correction factor (CCF) is derived based on the two-node analytic nodal method (ANM2N), and a Fourier stability analysis is applied to the linearized algorithm. It is shown that the analytic convergence rate obtained by the Fourier analysis compares very well with the numerically measured convergence rate. It is also shown that the theoretical convergence rate is only governed by the converged second harmonic buckling and the mesh size. It is also noted that the convergence rate of the CCF of the CMFD2N algorithm is dependent on the mesh size, but not on the total problem size. This is contrary to expectation for an eigenvalue problem. The novel points of this paper are the analytical derivation of the convergence rate of the CMFD2N algorithm for the eigenvalue problem, and the convergence analysis based on the analytic derivations.
The Effects of Dissipation and Coarse Grid Resolution for Multigrid in Flow Problems
NASA Technical Reports Server (NTRS)
Eliasson, Peter; Engquist, Bjoern
1996-01-01
The objective of this paper is to investigate the effects of the numerical dissipation and the resolution of the solution on coarser grids for multigrid with the Euler equation approximations. The convergence is accomplished by multi-stage explicit time-stepping to steady state accelerated by FAS multigrid. A theoretical investigation is carried out for linear hyperbolic equations in one and two dimensions. The spectra reveal that, for stability and hence robustness of spatial discretizations with a small amount of numerical dissipation, the grid transfer operators must be sufficiently accurate and the smoother of low temporal accuracy. Numerical results give grid independent convergence in one dimension. For two-dimensional problems with a small amount of numerical dissipation, however, only a few grid levels contribute to an increased speed of convergence. This is explained by the small numerical dissipation leading to dispersion. Increasing the mesh density, and hence making the problem over-resolved, increases the number of mesh levels contributing to an increased speed of convergence. If the steady state equations are elliptic, all grid levels contribute to the convergence regardless of the mesh density.
Optimal design of permeable fiber network structures for fog harvesting.
Park, Kyoo-Chul; Chhatre, Shreerang S; Srinivasan, Siddarth; Cohen, Robert E; McKinley, Gareth H
2013-10-29
Fog represents a large untapped source of potable water, especially in arid climates. Numerous plants and animals use textural and chemical features on their surfaces to harvest this precious resource. In this work, we investigate the influence of the surface wettability characteristics, length scale, and weave density on the fog-harvesting capability of woven meshes. We develop a combined hydrodynamic and surface wettability model to predict the overall fog-collection efficiency of the meshes and cast the findings in the form of a design chart. Two limiting surface wettability constraints govern the re-entrainment of collected droplets and clogging of mesh openings. Appropriate tuning of the wetting characteristics of the surfaces, reducing the wire radii, and optimizing the wire spacing all lead to more efficient fog collection. We use a family of coated meshes with a directed stream of fog droplets to simulate a natural foggy environment and demonstrate a five-fold enhancement in the fog-collecting efficiency of a conventional polyolefin mesh. The design rules developed in this work can be applied to select a mesh surface with optimal topography and wetting characteristics to harvest enhanced water fluxes over a wide range of natural convected fog environments.
NASA Astrophysics Data System (ADS)
Foo, Kam Keong
A two-dimensional dual-mode scramjet flowpath is developed and evaluated using the ANSYS Fluent density-based flow solver with various computational grids. Results are obtained for fuel-off, fuel-on non-reacting, and fuel-on reacting cases at different equivalence ratios. A one-step global chemical kinetics hydrogen-air model is used in conjunction with the eddy-dissipation model. Coarse, medium and fine computational grids are used to evaluate grid sensitivity and to investigate a lack of grid independence. Different grid adaptation strategies are performed on the coarse grid in an attempt to emulate the solutions obtained from the finer grids. The goal of this study is to investigate the feasibility of using various mesh adaptation criteria to significantly decrease computational efforts for high-speed reacting flows.
Mesh quality control for multiply-refined tetrahedral grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1994-01-01
A new algorithm for controlling the quality of multiply-refined tetrahedral meshes is presented in this paper. The basic dynamic mesh adaption procedure allows localized grid refinement and coarsening to efficiently capture aerodynamic flow features in computational fluid dynamics problems; however, repeated application of the procedure may significantly deteriorate the quality of the mesh. Results presented show the effectiveness of this mesh quality algorithm and its potential in the area of helicopter aerodynamics and acoustics.
Adaptive Meshing Techniques for Viscous Flow Calculations on Mixed Element Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1997-01-01
An adaptive refinement strategy based on hierarchical element subdivision is formulated and implemented for meshes containing arbitrary mixtures of tetrahedra, hexahedra, prisms and pyramids. Special attention is given to keeping memory overheads as low as possible. This procedure is coupled with an algebraic multigrid flow solver which operates on mixed-element meshes. Inviscid flows as well as viscous flows are computed on adaptively refined tetrahedral, hexahedral, and hybrid meshes. The efficiency of the method is demonstrated by generating an adapted hexahedral mesh containing 3 million vertices on a relatively inexpensive workstation.
Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh
NASA Astrophysics Data System (ADS)
Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru
2017-11-01
We developed a framework for a distributed-memory parallel computer that enables dynamic data management for adaptive mesh refinement and load balancing. We employed the simple data structure of the building cube method (BCM), where a computational domain is divided into multi-level cubic domains and each cube has the same number of grid points inside, realising a multi-level block-structured Cartesian mesh. Solution-adaptive mesh refinement, which works efficiently with the help of the dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing and the parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement where a non-dimensional wall distance y+ near the sphere is used as the criterion for mesh refinement. The result showed that the load imbalance due to y+ adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested a cube-wise algorithm switching where explicit and implicit time-integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.
NASA Astrophysics Data System (ADS)
Bremer, Magnus; Schmidtner, Korbinian; Rutzinger, Martin
2015-04-01
The architecture of forest canopies is a key parameter for forest ecological issues, helping to model the variability of wood biomass and foliage in space and time. In order to understand the nature of subpixel effects of optical space-borne sensors with coarse spatial resolution, hypothetical 3D canopy models are widely used for the simulation of radiative transfer in forests. Thereby, radiation is traced through the atmosphere and canopy geometries until it reaches the optical sensor. For a realistic simulation scene we decompose terrestrial laser scanning point cloud data of leaf-off larch forest plots in the Austrian Alps and reconstruct detailed model-ready input data for radiative transfer simulations. The point clouds are pre-classified into primitive classes using Principal Component Analysis (PCA) with scale-adapted radius neighbourhoods. Elongated point structures are extracted as tree trunks. The tree trunks are used as seeds for a Dijkstra-growing procedure, in order to obtain single tree segmentation in the interlinked canopies. For the optimized reconstruction of branching architectures as vector models, point cloud skeletonisation is used in combination with an iterative Dijkstra-growing and by applying distance constraints. This allows a hierarchical reconstruction that prefers the tree trunk and higher-order branches and avoids over-skeletonization effects. Based on the reconstructed branching architectures, larch needles are modelled based on the hierarchical level of branches and the geometrical openness of the canopy. For radiative transfer simulations, branch architectures are used as mesh geometries representing branches as cylindrical pipes. Needles are either used as meshes or as voxel-turbids. The presented workflow allows an automatic classification and single tree segmentation in interlinked canopies. The iterative Dijkstra-growing using distance constraints generated realistic reconstruction results.
While the mesh representation of branches proved sufficient for the simulation approach, modelling the huge number of needles is much more efficient in a voxel-turbid representation.
A moving mesh finite difference method for equilibrium radiation diffusion equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaobo, E-mail: xwindyb@126.com; Huang, Weizhang, E-mail: whuang@ku.edu; Qiu, Jianxian, E-mail: jxqiu@xmu.edu.cn
2015-10-01
An efficient moving mesh finite difference method is developed for the numerical solution of equilibrium radiation diffusion equations in two dimensions. The method is based on the moving mesh partial differential equation approach and moves the mesh continuously in time using a system of meshing partial differential equations. The mesh adaptation is controlled through a Hessian-based monitor function and the so-called equidistribution and alignment principles. Several challenging issues in the numerical solution are addressed. Particularly, the radiation diffusion coefficient depends highly nonlinearly on the energy density. This nonlinearity is treated using a predictor-corrector and lagged diffusion strategy. Moreover, the nonnegativity of the energy density is maintained using a cutoff method which is known in the literature to retain the accuracy and convergence order of finite difference approximation for parabolic equations. Numerical examples with multi-material, multiple spot concentration situations are presented. Numerical results show that the method works well for radiation diffusion equations and can produce numerical solutions of good accuracy. It is also shown that a two-level mesh movement strategy can significantly improve the efficiency of the computation.
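The equidistribution principle mentioned above can be illustrated in one dimension: nodes are placed so that each cell holds an equal share of the monitor-function "mass", by inverting the cumulative integral of the monitor. This is only the static 1D analogue (the paper's method is 2D, Hessian-based, and moves the mesh continuously in time); the function name is illustrative.

```python
import numpy as np

def equidistribute(x, monitor):
    """Redistribute 1D mesh nodes so each cell carries an equal share of
    the monitor-function mass (the equidistribution principle).
    x: current node positions (increasing); monitor: positive values at x."""
    # Cumulative integral of the monitor via the trapezoidal rule.
    cells = 0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x)
    cum = np.concatenate([[0.0], np.cumsum(cells)])
    # Equal mass targets, then invert the cumulative distribution.
    targets = np.linspace(0.0, cum[-1], len(x))
    return np.interp(targets, cum, x)
```

With a monitor that is large near a steep feature, the new nodes cluster there while the endpoints stay fixed.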
Staggered Mesh Ewald: An extension of the Smooth Particle-Mesh Ewald method adding great versatility
Cerutti, David S.; Duke, Robert E.; Darden, Thomas A.; Lybrand, Terry P.
2009-01-01
We draw on an old technique for improving the accuracy of mesh-based field calculations to extend the popular Smooth Particle Mesh Ewald (SPME) algorithm as the Staggered Mesh Ewald (StME) algorithm. StME improves the accuracy of computed forces by up to 1.2 orders of magnitude and also reduces the drift in system momentum inherent in the SPME method by averaging the results of two separate reciprocal space calculations. StME can use charge mesh spacings roughly 1.5× larger than SPME to obtain comparable levels of accuracy; the one mesh in an SPME calculation can therefore be replaced with two separate meshes, each less than one third of the original size. Coarsening the charge mesh can be balanced with reductions in the direct space cutoff to optimize performance: the efficiency of StME rivals or exceeds that of SPME calculations with similarly optimized parameters. StME may also offer advantages for parallel molecular dynamics simulations because it permits the use of coarser meshes without requiring higher orders of charge interpolation and also because the two reciprocal space calculations can be run independently if that is most suitable for the machine architecture. We are planning other improvements to the standard SPME algorithm, and anticipate that StME will work synergistically with all of them to dramatically improve the efficiency and parallel scaling of molecular simulations. PMID:20174456
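The staggering idea itself is simple to demonstrate: evaluate a mesh-based approximation twice, on grids offset by half a cell, and average. In the toy below the "mesh calculation" is just nearest-grid-point sampling of a smooth function, not the Ewald reciprocal sum, so this only illustrates why averaging two staggered meshes reduces the mesh-induced error; the function names are illustrative.

```python
import numpy as np

def ngp_eval(f, x, h, shift=0.0):
    """Evaluate f at the nearest node of a mesh with spacing h; `shift`
    offsets the node positions by shift*h (shift=0.5 gives a staggered mesh)."""
    nodes = (np.round(x / h - shift) + shift) * h
    return f(nodes)

# Compare a single-mesh evaluation against the average of two staggered meshes.
f = np.sin
h = 0.1
x = np.linspace(0.0, 3.0, 1001)
single = ngp_eval(f, x, h)
staggered = 0.5 * (ngp_eval(f, x, h) + ngp_eval(f, x, h, shift=0.5))
err_single = np.max(np.abs(single - f(x)))
err_staggered = np.max(np.abs(staggered - f(x)))
# The staggered average has a smaller maximum error than the single mesh.
```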
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pautz, Shawn D.; Bailey, Teresa S.
2016-11-29
Here, the efficiency of discrete ordinates transport sweeps depends on the scheduling algorithm, the domain decomposition, the problem to be solved, and the computational platform. Sweep scheduling algorithms may be categorized by their approach to several issues. In this paper we examine the strategy of domain overloading for mesh partitioning as one of the components of such algorithms. In particular, we extend the domain overloading strategy, previously defined and analyzed for structured meshes, to the general case of unstructured meshes. We also present computational results for both the structured and unstructured domain overloading cases. We find that an appropriate amount of domain overloading can greatly improve the efficiency of parallel sweeps for both structured and unstructured partitionings of the test problems examined on up to 10^5 processor cores.
Optical properties of aerosols at Grand Canyon National Park
NASA Astrophysics Data System (ADS)
Malm, William C.; Day, Derek E.
Visibility in the United States is expected to improve over the next few decades because of reduced emissions, especially sulfur dioxide. In the eastern United States, sulfates make up about 60-70% of aerosol extinction, while in the inner mountain west that fraction is only about 30%. In the inner mountain west, carbon aerosols make up about 35% of extinction, while coarse mass contributes between 15 and 25% depending on how absorption is estimated. Although sulfur dioxide emissions are projected to decrease, carbon emissions due to prescribed fire activity will increase by factors of 5-10, and while optical properties of sulfates have been extensively studied, similar properties of carbon and coarse particles are less well understood. The inability to conclusively apportion about 50% of the extinction budget motivated a study to examine aerosol physico-chemical-optical properties at Grand Canyon, Arizona during the months of July and August. Coarse particle mass has usually been assumed to consist primarily of wind-blown dust, with a mass-scattering efficiency between about 0.4 and 0.6 m^2 g^-1. Although there were episodes where crustal material made up most of the coarse mass, on the average, organic and crustal material mass were about equal. Furthermore, about one-half of the sampling periods had coarse-mass-scattering efficiencies greater than 0.6 m^2 g^-1, and at times coarse-mass-scattering efficiencies were near 1.0 m^2 g^-1. It was shown that coarse- and fine-particle absorption were about equal and that both fine organic and sulfate mass-scattering efficiencies were substantially less than the nominal values of 4.0 and 3.0 m^2 g^-1 that have typically been used.
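The extinction-budget arithmetic behind such apportionments is a simple product of species mass concentration and mass-scattering efficiency: a loading in µg/m³ times an efficiency in m²/g gives extinction in inverse megameters (Mm⁻¹). The loadings below are hypothetical; only the efficiencies are the nominal values quoted in the abstract.

```python
# Extinction = mass concentration x mass-scattering efficiency.
# (ug/m^3) x (m^2/g) = 1e-6 m^-1 = 1 Mm^-1, so the product is already in Mm^-1.
mass = {"sulfate": 1.0, "organic": 1.5, "coarse": 5.0}  # ug/m^3, hypothetical loadings
eff = {"sulfate": 3.0, "organic": 4.0, "coarse": 0.6}   # m^2/g, nominal efficiencies

extinction = {k: mass[k] * eff[k] for k in mass}        # Mm^-1 per species
total = sum(extinction.values())
```

Note how sensitive the budget is to the coarse-mass efficiency: raising it from 0.6 to the observed ~1.0 m²/g would add two thirds to the coarse-mass term.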
NASA Astrophysics Data System (ADS)
Poirier, Vincent
Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method, but at the cost of introducing errors into the parameterization, since the exact displacements of all surface points are no longer recovered. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining lift of a wing-body configured Boeing-747 and an Onera-M6 wing. As well, an inverse pressure design is executed on the Onera-M6 wing and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.
Upscaling of Mixed Finite Element Discretization Problems by the Spectral AMGe Method
Kalchev, Delyan Z.; Lee, C. S.; Villa, U.; ...
2016-09-22
Here, we propose two multilevel spectral techniques for constructing coarse discretization spaces for saddle-point problems corresponding to PDEs involving a divergence constraint, with a focus on mixed finite element discretizations of scalar self-adjoint second order elliptic equations on general unstructured grids. We use element agglomeration algebraic multigrid (AMGe), which employs coarse elements that can have nonstandard shape since they are agglomerates of fine-grid elements. The coarse basis associated with each agglomerated coarse element is constructed by solving local eigenvalue problems and local mixed finite element problems. This construction leads to stable upscaled coarse spaces and guarantees the inf-sup compatibility of the upscaled discretization. Also, the approximation properties of these upscaled spaces improve by adding more local eigenfunctions to the coarse spaces. The higher accuracy comes at the cost of additional computational effort, as the sparsity of the resulting upscaled coarse discretization (referred to as operator complexity) deteriorates when we introduce additional functions in the coarse space. We also provide an efficient solver for the coarse (upscaled) saddle-point system by employing hybridization, which leads to a symmetric positive definite (s.p.d.) reduced system for the Lagrange multipliers, and to solve the latter s.p.d. system, we use our previously developed spectral AMGe solver. Numerical experiments, in both two and three dimensions, are provided to illustrate the efficiency of the proposed upscaling technique.
Time-marching multi-grid seismic tomography
NASA Astrophysics Data System (ADS)
Tong, P.; Yang, D.; Liu, Q.
2016-12-01
From the classic ray-based traveltime tomography to the state-of-the-art full waveform inversion, because of the nonlinearity of seismic inverse problems, a good starting model is essential for preventing the convergence of the objective function toward local minima. With a focus on building high-accuracy starting models, we propose the so-called time-marching multi-grid seismic tomography method in this study. The new seismic tomography scheme consists of a temporal time-marching approach and a spatial multi-grid strategy. We first divide the recording period of seismic data into a series of time windows. Sequentially, the subsurface properties in each time window are iteratively updated starting from the final model of the previous time window. There are at least two advantages of the time-marching approach: (1) the information included in the seismic data of previous time windows has been explored to build the starting models of later time windows; (2) seismic data of later time windows could provide extra information to refine the subsurface images. Within each time window, we use a multi-grid method to decompose the scale of the inverse problem. Specifically, the unknowns of the inverse problem are sampled on a coarse mesh to capture the macro-scale structure of the subsurface at the beginning. Because of the low dimensionality, it is much easier to reach the global minimum on a coarse mesh. After that, finer meshes are introduced to recover the micro-scale properties. That is to say, the subsurface model is iteratively updated on multi-grid in every time window. We expect that high-accuracy starting models should be generated for the second and later time windows. We will test this time-marching multi-grid method by using our newly developed eikonal-based traveltime tomography software package tomoQuake. Real application results in the 2016 Kumamoto earthquake (Mw 7.0) region in Japan will be demonstrated.
Isotropic stochastic rotation dynamics
NASA Astrophysics Data System (ADS)
Mühlbauer, Sebastian; Strobl, Severin; Pöschel, Thorsten
2017-12-01
Stochastic rotation dynamics (SRD) is a widely used method for the mesoscopic modeling of complex fluids, such as colloidal suspensions or multiphase flows. In this method, however, the underlying Cartesian grid defining the coarse-grained interaction volumes induces anisotropy. We propose an isotropic, lattice-free variant of stochastic rotation dynamics, termed iSRD. Instead of Cartesian grid cells, we employ randomly distributed spherical interaction volumes. This eliminates the requirement of a grid shift, which is essential in standard SRD to maintain Galilean invariance. We derive analytical expressions for the viscosity and the diffusion coefficient in relation to the model parameters, which show excellent agreement with the results obtained in iSRD simulations. The proposed algorithm is particularly suitable to model systems bound by walls of complex shape, where the domain cannot be meshed uniformly. The presented approach is not limited to SRD but is applicable to any other mesoscopic method, where particles interact within certain coarse-grained volumes.
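The standard grid-based SRD collision step that iSRD modifies can be sketched as follows: particles are binned into cells (shifted randomly each step to preserve Galilean invariance), and within each cell the velocities relative to the cell mean are rotated by a random angle, conserving momentum and kinetic energy per cell. This is a minimal 2D sketch of conventional SRD, with illustrative names; the iSRD variant of the abstract replaces the Cartesian cells with randomly placed spherical interaction volumes, which removes the need for the grid shift.

```python
import numpy as np

def srd_collision_2d(pos, vel, cell_size, rng, alpha=np.pi / 2):
    """One stochastic-rotation-dynamics collision step in 2D."""
    # Random grid shift, required in standard SRD for Galilean invariance.
    shift = rng.uniform(0.0, cell_size, size=2)
    cells = np.floor((pos + shift) / cell_size).astype(int)
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    new_vel = vel.copy()
    for c in range(inverse.max() + 1):
        idx = np.where(inverse == c)[0]
        u = vel[idx].mean(axis=0)            # cell-average velocity
        theta = alpha * rng.choice([-1, 1])  # rotate by +alpha or -alpha
        ct, st = np.cos(theta), np.sin(theta)
        R = np.array([[ct, -st], [st, ct]])
        # Rotate the peculiar velocities about the cell mean; this conserves
        # both momentum and kinetic energy within the cell.
        new_vel[idx] = u + (vel[idx] - u) @ R.T
    return new_vel
```

Because only the peculiar velocities are rotated, total momentum and kinetic energy are conserved exactly, which is easy to verify numerically.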
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett
Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general three-dimensional domains. The most reliable and efficient methods apply adaptive procedures with a posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the required meshes are so large that the simulations can only be run on massively parallel systems.
2017-01-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greve, L., E-mail: lars.greve@volkswagen.de; Medricky, M., E-mail: miloslav.medricky@volkswagen.de; Andres, M.
A comprehensive strain hardening and fracture characterization of different grades of boron steel blanks has been performed, providing the foundation for the implementation into the modular material model (MMM) framework developed by Volkswagen Group Research for an explicit crash code. Due to the introduction of hardness-based interpolation rules for the characterized main grades, the hardening and fracture behavior is solely described by the underlying Vickers hardness. In other words, knowledge of the hardness distribution within a hot-formed component is enough to set up the newly developed computational model. The hardness distribution can be easily introduced via an experimentally measured hardness curve or via hardness mapping from a corresponding hot-forming simulation. For industrial application using rather coarse and computationally inexpensive shell element meshes, the user material model has been extended by a necking/post-necking model with reduced mesh-dependency as an additional failure mode. The present paper mainly addresses the necking/post-necking model.
NASA Astrophysics Data System (ADS)
Chun, Sehun
2017-07-01
Applying the method of moving frames to Maxwell's equations yields two important advancements for scientific computing. The first is the use of upwind flux for anisotropic materials in Maxwell's equations, especially in the context of discontinuous Galerkin (DG) methods. Upwind flux has previously been available only for isotropic materials, because of the difficulty of satisfying the Rankine-Hugoniot conditions in anisotropic media. The second is the numerical solution of Maxwell's equations on curved surfaces without the metric tensor and composite meshes. For numerical validation, spectral convergence is demonstrated for both two-dimensional anisotropic media and isotropic spheres. In the first application, invisible two-dimensional metamaterial cloaks are simulated on a relatively coarse mesh with both the lossless Drude model and the piecewise-parameterized layered model. In the second application, extremely low frequency propagation on various surfaces such as spheres, irregular surfaces, and non-convex surfaces is demonstrated.
Robust diamond meshes with unique wettability properties.
Yang, Yizhou; Li, Hongdong; Cheng, Shaoheng; Zou, Guangtian; Wang, Chuanxi; Lin, Quan
2014-03-18
Robust diamond meshes with excellent superhydrophobic and superoleophilic properties have been fabricated. Superhydrophobicity is observed for water with varying pH from 1 to 14 with good recyclability. Reversible superhydrophobicity and hydrophilicity can be easily controlled. The diamond meshes show highly efficient water-oil separation and water pH droplet transference.
NASA Astrophysics Data System (ADS)
Zaini, H.; Abubakar, S.; Rihayat, T.; Suryani, S.
2018-03-01
Removal of heavy metals from wastewater has largely been done by various methods. One effective and efficient method is adsorption. This study aims to reduce the manganese (II) content of wastewater using a column adsorption method with an adsorbent prepared from bagasse. The fixed variables were 50 g of adsorbent, a 10 liter adsorbate volume, and a flow rate of 7 liters/min. The independent variables were particle size (10-30 mesh) and contact time (0-240 min); the response variables were adsorbate concentration (ppm), pH, and conductivity. The results showed that the adsorption of manganese is influenced by particle size and contact time. The adsorption kinetics follow pseudo-second-order kinetics, with equilibrium adsorption capacities (qe, mg/g) of 0.8947 for 10 mesh, 0.4332 for 20 mesh, and 1.0161 for 30 mesh adsorbent particles. The highest removal efficiencies were 49.22% for 10 mesh particles at a contact time of 60 min, 35.25% for 20 mesh particles at 180 min, and 51.95% for 30 mesh particles at 150 min.
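The pseudo-second-order model referred to above, q(t) = qe² k t / (1 + qe k t), is commonly fitted through its linearized form t/q = 1/(k qe²) + t/qe. A minimal sketch of such a fit follows (illustrative only, not the authors' code; the rate constant k used in the synthetic check is invented, while qe = 1.0161 mg/g is the 30-mesh capacity reported above):

```python
import numpy as np

def pseudo_second_order_fit(t, q):
    """Fit q(t) = qe**2 * k * t / (1 + qe * k * t) via the linearized form
    t/q = 1/(k*qe**2) + t/qe; returns (qe, k)."""
    slope, intercept = np.polyfit(t, t / q, 1)
    qe = 1.0 / slope
    k = 1.0 / (intercept * qe ** 2)
    return qe, k

# Synthetic check: generate data from known parameters and recover them.
t = np.array([10.0, 30.0, 60.0, 120.0, 180.0, 240.0])   # contact time, min
qe_true, k_true = 1.0161, 0.05                           # k invented for illustration
q = qe_true ** 2 * k_true * t / (1.0 + qe_true * k_true * t)
qe_fit, k_fit = pseudo_second_order_fit(t, q)
```

Note that t = 0 must be excluded from the linearized fit, since t/q is undefined there.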
Relative entropy and optimization-driven coarse-graining methods in VOTCA
Mashayak, S. Y.; Jochum, Mara N.; Koschke, Konstantin; ...
2015-07-20
We discuss recent advances in the VOTCA package for systematic coarse-graining. Two methods have been implemented, namely the downhill simplex optimization and the relative entropy minimization. We illustrate the new methods by coarse-graining SPC/E bulk water and more complex water-methanol mixture systems. The CG potentials obtained from both methods are then evaluated by comparing the pair distributions from the coarse-grained to the reference atomistic simulations. We have also added a parallel analysis framework to improve the computational efficiency of the coarse-graining process.
Adaptive radial basis function mesh deformation using data reduction
NASA Astrophysics Data System (ADS)
Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.
2016-09-01
Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time, and 2) how to use/automate the explicit boundary correction while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method which ensures that the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting the control points when needed. Compared to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient for the cases considered. Secondly, the analysis of a single high-aspect-ratio cell is used to formulate an equation for the required correction radius, depending on the characteristics of the correction function used, the maximum aspect ratio, the minimum first cell height and the boundary error. Based on this analysis, two new radial basis correction functions are derived and proposed. The proposed automated procedure is verified while varying the correction function, the Reynolds number (and thus the first cell height and aspect ratio) and the boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement, for both the CPU and the memory formulation, with a 2D oscillating and translating airfoil with an oscillating flap, a 3D flexible locally deforming tube, and a deforming wind turbine blade.
Generally, the memory formulation requires less computational work, since it avoids repeatedly evaluating the RBFs, but its parallel efficiency is reduced by the limited bandwidth available between CPU and memory. In terms of parallel efficiency/scaling, the different methods studied perform similarly, with the greedy algorithm being the bottleneck. In terms of absolute computational work, the adaptive methods are better for the cases studied due to their more efficient selection of the control points. By automating most of the RBF mesh deformation, a robust, efficient and almost user-independent mesh deformation method is presented.
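The basic RBF interpolation underlying such mesh deformation can be sketched as follows, here with a Wendland C2 compactly supported basis as a stand-in (the paper derives its own correction functions, and the greedy control-point selection and boundary correction are omitted from this sketch):

```python
import numpy as np

def rbf_deform(ctrl_pts, ctrl_disp, nodes, radius=1.0):
    """Displace `nodes` by RBF interpolation of the control-point
    displacements, using the Wendland C2 basis (1-xi)^4 (4 xi + 1)."""
    def phi(r):
        xi = np.clip(r / radius, 0.0, 1.0)   # compact support: phi = 0 beyond radius
        return (1.0 - xi) ** 4 * (4.0 * xi + 1.0)

    # Solve for RBF weights from the control-point interpolation conditions.
    d = np.linalg.norm(ctrl_pts[:, None, :] - ctrl_pts[None, :, :], axis=2)
    weights = np.linalg.solve(phi(d), ctrl_disp)   # one weight column per coordinate
    # Evaluate the interpolant at the mesh nodes and apply the displacement.
    d_eval = np.linalg.norm(nodes[:, None, :] - ctrl_pts[None, :, :], axis=2)
    return nodes + phi(d_eval) @ weights
```

By construction, the interpolant reproduces the prescribed displacements exactly at the control points; interior nodes follow smoothly, which is what makes the choice and re-selection of control points central to accuracy.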
Evaluation of an improved finite-element thermal stress calculation technique
NASA Technical Reports Server (NTRS)
Camarda, C. J.
1982-01-01
A procedure for generating accurate thermal stresses with coarse finite-element grids (Ojalvo's method) is described. The procedure is based on the observation that, for linear thermoelastic problems, the thermal stresses may be envisioned as composed of two contributions: the first due to the strains in the structure, which depend on the integral of the temperature distribution over the finite element, and the second due to the local variation of the temperature in the element. The first contribution can be accurately predicted with a coarse finite-element mesh. The resulting strain distribution can then be combined, via the constitutive relations, with detailed temperatures from a separate thermal analysis. The result is accurate thermal stresses from coarse finite-element structural models, even where the temperature distributions have sharp variations. The range of applicability of the method for various classes of thermostructural problems, such as in-plane or bending-type problems, and the effects of the nature of the temperature distribution and of edge constraints are addressed. Ojalvo's method is used in conjunction with the SPAR finite element program. Results are obtained for rods, membranes, a box beam and a stiffened panel.
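The decomposition above can be illustrated with a one-line constitutive evaluation: the mechanical strain comes from the coarse structural model, while the temperature term uses the detailed thermal solution. A toy example for a fully constrained rod follows (the material constants and the sharp temperature profile are invented for illustration):

```python
import numpy as np

E, alpha = 70e9, 2.3e-5          # illustrative values (roughly aluminium)

def ojalvo_stress(strain_coarse, T_detailed):
    """Combine coarse-mesh mechanical strain with a detailed temperature
    field: sigma = E * (eps - alpha * T)."""
    return E * (strain_coarse - alpha * T_detailed)

# Fully constrained rod: the strain is zero everywhere, which even a single
# coarse element captures exactly, while the temperature varies sharply.
x = np.linspace(0.0, 1.0, 101)
T = 100.0 * np.exp(-50.0 * (x - 0.5) ** 2)        # sharp local hot spot, K
sigma = ojalvo_stress(np.zeros_like(x), T)        # accurate despite coarse strain
```

The sharp stress peak at the hot spot comes entirely from the detailed temperature term, which is the mechanism that lets a coarse structural mesh deliver accurate thermal stresses.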
NASA Technical Reports Server (NTRS)
Crook, Andrew J.; Delaney, Robert A.
1992-01-01
The purpose of this study is the development of a three-dimensional Euler/Navier-Stokes flow analysis for fan section/engine geometries containing multiple blade rows and multiple spanwise flow splitters. An existing procedure developed by Dr. J. J. Adamczyk and associates at the NASA Lewis Research Center was modified to accept multiple spanwise splitter geometries and simulate engine core conditions. The procedure was also modified to allow coarse parallelization of the solution algorithm. This document is a final report outlining the development and techniques used in the procedure. The numerical solution is based upon a finite volume technique with a four stage Runge-Kutta time marching procedure. Numerical dissipation is used to gain solution stability but is reduced in viscous dominated flow regions. Local time stepping and implicit residual smoothing are used to increase the rate of convergence. Multiple blade row solutions are based upon the average-passage system of equations. The numerical solutions are performed on an H-type grid system, with meshes being generated by the system (TIGG3D) developed earlier under this contract. The grid generation scheme meets the average-passage requirement of maintaining a common axisymmetric mesh for each blade row grid. The analysis was run on several geometry configurations ranging from one to five blade rows and from one to four radial flow splitters. Pure internal flow solutions were obtained as well as solutions with flow about the cowl/nacelle and various engine core flow conditions. The efficiency of the solution procedure was shown to be the same as the original analysis.
A two-level stochastic collocation method for semilinear elliptic equations with random coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Luoping; Zheng, Bin; Lin, Guang
In this work, we propose a novel two-level discretization for solving semilinear elliptic equations with random coefficients. Motivated by the two-grid method for deterministic partial differential equations (PDEs) introduced by Xu, our two-level stochastic collocation method utilizes a two-grid finite element discretization in the physical space and a two-level collocation method in the random domain. In particular, we solve semilinear equations on a coarse mesh $\mathcal{T}_H$ with a low-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_P$) and solve linearized equations on a fine mesh $\mathcal{T}_h$ using high-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_p$). We prove that the approximate solution obtained from this method achieves the same order of accuracy as that from solving the original semilinear problem directly by the stochastic collocation method with $\mathcal{T}_h$ and $\mathcal{P}_p$. The two-level method is computationally more efficient, especially for nonlinear problems with high random dimensions. Numerical experiments are also provided to verify the theoretical results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Canhai; Xu, Zhijie; Li, Tingwen
In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered models to capture the effect of the unresolved details on the coarser mesh, allowing simulations with reasonable accuracy and manageable computational effort. Two previously developed filtered models for horizontal-cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configuration (i.e., horizontal or vertical) on the adsorber's hydrodynamics and CO2 capture performance are then examined. The simulation result is subsequently compared and contrasted with another predicted by a one-dimensional three-region process model.
Multigrid Strategies for Viscous Flow Solvers on Anisotropic Unstructured Meshes
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
1998-01-01
Unstructured multigrid techniques for relieving the stiffness associated with high-Reynolds number viscous flow simulations on extremely stretched grids are investigated. One approach consists of employing a semi-coarsening or directional-coarsening technique, based on the directions of strong coupling within the mesh, in order to construct more optimal coarse grid levels. An alternate approach is developed which employs directional implicit smoothing with regular fully coarsened multigrid levels. The directional implicit smoothing is obtained by constructing implicit lines in the unstructured mesh based on the directions of strong coupling. Both approaches yield large increases in convergence rates over the traditional explicit full-coarsening multigrid algorithm. However, maximum benefits are achieved by combining the two approaches in a coupled manner into a single algorithm. An order of magnitude increase in convergence rate over the traditional explicit full-coarsening algorithm is demonstrated, and convergence rates for high-Reynolds number viscous flows which are independent of the grid aspect ratio are obtained. Further acceleration is provided by incorporating low-Mach-number preconditioning techniques, and a Newton-GMRES strategy which employs the multigrid scheme as a preconditioner. The compounding effects of these various techniques on speed of convergence are documented through several example test cases.
A fast solver for the Helmholtz equation based on the generalized multiscale finite-element method
NASA Astrophysics Data System (ADS)
Fu, Shubin; Gao, Kai
2017-11-01
Conventional finite-element methods for solving the acoustic-wave Helmholtz equation in highly heterogeneous media usually require a finely discretized mesh to represent the medium property variations with sufficient accuracy. Computational costs for solving the Helmholtz equation can therefore be considerable for complicated and large geological models. Based on the generalized multiscale finite-element theory, we develop a novel continuous Galerkin method to solve the Helmholtz equation in acoustic media with spatially variable velocity and mass density. Instead of using conventional polynomial basis functions, we use multiscale basis functions to form the approximation space on the coarse mesh. The multiscale basis functions are obtained by multiplying the eigenfunctions of a carefully designed local spectral problem with an appropriate multiscale partition of unity. These multiscale basis functions can effectively incorporate the characteristics of the heterogeneous medium's fine-scale variations, thus enabling us to obtain an accurate solution of the Helmholtz equation without directly solving the large discrete system formed on the fine mesh. Numerical results show that our new solver can significantly reduce the dimension of the discrete Helmholtz system and markedly reduce the computational time.
A comparative study of an ABC and an artificial absorber for truncating finite element meshes
NASA Technical Reports Server (NTRS)
Oezdemir, T.; Volakis, John L.
1993-01-01
The type of mesh termination used in the context of finite element formulations plays a major role in the efficiency and accuracy of the field solution. The performance of an absorbing boundary condition (ABC) and an artificial absorber (a new concept) for terminating the finite element mesh was evaluated. This analysis is done in connection with the problem of scattering by a finite slot array in a thick ground plane. The two approximate mesh truncation schemes are compared with the exact finite element-boundary integral (FEM-BI) method in terms of accuracy and efficiency. It is demonstrated that both approximate truncation schemes yield reasonably accurate results even when the mesh is extended only 0.3 wavelengths away from the array aperture. However, the artificial absorber termination method leads to a substantially more efficient solution. Moreover, it is shown that the FEM-BI method remains quite competitive with the FEM-artificial absorber method when the FFT is used for computing the matrix-vector products in the iterative solution algorithm. These conclusions are indeed surprising and of major importance in electromagnetic simulations based on the finite element method.
Carpet: Adaptive Mesh Refinement for the Cactus Framework
NASA Astrophysics Data System (ADS)
Schnetter, Erik; Hawley, Scott; Hawke, Ian
2016-11-01
Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as a driver layer providing adaptive mesh refinement, multi-patch capability, as well as parallelization and efficient I/O.
Laboratory hydraulic calibration of the Helley-Smith bedload sediment sampler
Druffel, Leroy; Emmett, W.W.; Schneider, V.R.; Skinner, J.V.
1976-01-01
Filling the sample bag to 40 percent capacity with a sediment larger in diameter than the mesh size of the bag had no effect on the hydraulic efficiency. Particles close to the 0.2 mm mesh size of the sample bag plugged the openings and caused the efficiency to decrease in an undetermined manner.
Method of and apparatus for modeling interactions
Budge, Kent G.
2004-01-13
A method and apparatus for modeling interactions can accurately model tribological and other properties and accommodate topological disruptions. Two portions of a problem space are represented, a first with a Lagrangian mesh and a second with an ALE mesh. The ALE and Lagrangian meshes are constructed so that each node on the surface of the Lagrangian mesh is in a known correspondence with adjacent nodes in the ALE mesh. The interaction can be predicted for a time interval. Material flow within the ALE mesh can accurately model complex interactions such as bifurcation. After prediction, nodes in the ALE mesh in correspondence with nodes on the surface of the Lagrangian mesh can be mapped so that they are once again adjacent to their corresponding Lagrangian mesh nodes. The ALE mesh can then be smoothed to reduce mesh distortion that might reduce the accuracy or efficiency of subsequent prediction steps. The process, from prediction through mapping and smoothing, can be repeated until a terminal condition is reached.
Gondal, Mohammed A; Sadullah, Muhammad S; Dastageer, Mohamed A; McKinley, Gareth H; Panchanathan, Divya; Varanasi, Kripa K
2014-08-27
Surfaces which possess extraordinary water attraction or repellency depend on surface energy, surface chemistry, and nano- and microscale surface roughness. Synergistic superhydrophilic-underwater superoleophobic surfaces were fabricated by spray deposition of nanostructured TiO2 on stainless steel mesh substrates. The coated meshes were then used to study gravity-driven oil-water separation, where only the water from the oil-water mixture is allowed to permeate through the mesh. Oil-water separation efficiencies of up to 99% could be achieved with coated meshes of pore sizes 50 and 100 μm, compared to no separation at all for uncoated meshes of the same material and pore sizes. Adsorbed water on the TiO2-coated surface, the formation of a water film between the wires that form the mesh, and the underwater superoleophobicity of the structured surface are the key factors that contribute to the enhanced oil-water separation efficiency. The nature of the separation process using this coated mesh (in which the mesh allows water to pass through the porous structure but resists wetting by the oil phase) minimizes fouling of the mesh, reducing the need for frequent replacement of the separating medium. The fabrication approach presented here can be applied to coating large surface areas and to developing a large-scale oil-water separation facility for oil-field applications and petroleum industries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rahnema, Farzad; Garimeela, Srinivas; Ougouag, Abderrafi
2013-11-29
This project will develop a 3D, advanced coarse mesh transport method (COMET-Hex) for steady-state and transient analyses in advanced very high-temperature reactors (VHTRs). The project will lead to a coupled neutronics and thermal hydraulic (T/H) core simulation tool with fuel depletion capability. The computational tool will be developed in hexagonal geometry, based solely on transport theory without (spatial) homogenization in complicated 3D geometries. In addition to the hexagonal geometry extension, collaborators will concurrently develop three additional capabilities to increase the code's versatility as an advanced and robust core simulator for VHTRs. First, the project team will develop and implement a depletion method within the core simulator. Second, the team will develop an elementary (proof-of-concept) 1D time-dependent transport method for efficient transient analyses. The third capability will be a thermal hydraulic method coupled to the neutronics transport module for VHTRs. Current advancements in reactor core design are pushing VHTRs toward greater core and fuel heterogeneity to pursue higher burn-ups, efficiently transmute used fuel, maximize energy production, and improve plant economics and safety. As a result, an accurate and efficient neutron transport method, with capabilities to treat heterogeneous burnable poison effects, is highly desirable for predicting VHTR neutronics performance. This research project's primary objective is to advance the state of the art for reactor analysis.
Efficiently Sorting Zoo-Mesh Data Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, R; Max, N; Silva, C
The authors describe the SXMPVO algorithm for computing a visibility ordering of zoo-meshed polyhedra. The algorithm runs in linear time in practice, and the visibility ordering it produces is exact.
An object-oriented approach for parallel self adaptive mesh refinement on block structured grids
NASA Technical Reports Server (NTRS)
Lemke, Max; Witsch, Kristian; Quinlan, Daniel
1993-01-01
Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed memory architectures. The development is based on our previous research in this area. The C++ class libraries provide abstractions to separate the issues of developing parallel adaptive mesh refinement applications into those of parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library to permit efficient development of architecture independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmer's work is reduced primarily to specifying the serial single-grid application; the parallel, self-adaptive mesh refinement code is then obtained with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), implemented on the basis of prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in the area of computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers in order to be resolved efficiently.
Zhang, Xiaoyan; Kim, Daeseung; Shen, Shunyao; Yuan, Peng; Liu, Siting; Tang, Zhen; Zhang, Guangming; Zhou, Xiaobo; Gateno, Jaime; Liebschner, Michael A K; Xia, James J
2018-04-01
Accurate surgical planning and prediction of craniomaxillofacial surgery outcome requires simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state of the art in model generation is not suited to clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. Conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for patients depends on that of the template model and cannot be adjusted to conduct mesh density sensitivity analysis. In this study, we propose a new framework for patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving the accuracy of anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parameterization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by a thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed by using volumetric parameterization. The user can control the resolution of the hexahedral mesh to best reflect clinicians' needs. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE meshes showed high surface matching accuracy, element quality, and internal structure matching accuracy.
They can be directly and effectively used for clinical simulation of facial soft tissue change.
Application of a multi-level grid method to transonic flow calculations
NASA Technical Reports Server (NTRS)
South, J. C., Jr.; Brandt, A.
1976-01-01
A multi-level grid method was studied as a possible means of accelerating convergence in relaxation calculations for transonic flows. The method employs a hierarchy of grids, ranging from very coarse to fine. The coarser grids are used to diminish the magnitude of the smooth part of the residuals. The method was applied to the solution of the transonic small disturbance equation for the velocity potential in conservation form. Nonlifting transonic flow past a parabolic arc airfoil is studied with meshes of both constant and variable step size.
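The coarse-grid correction idea — using coarser grids to diminish the smooth part of the residual — can be sketched for a 1D model problem (-u'' = f) with weighted-Jacobi smoothing, full-weighting restriction, a direct coarse solve, and linear prolongation. This is a generic two-grid sketch, not the authors' transonic relaxation scheme:

```python
import numpy as np

def smooth(u, f, h, iters=3):
    """Weighted-Jacobi smoothing for the 1D model problem -u'' = f."""
    for _ in range(iters):
        u[1:-1] += 0.67 * (0.5 * (u[:-2] + u[2:] + h * h * f[1:-1]) - u[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, restrict the residual to the coarse
    grid, solve the coarse correction exactly, prolong it back, post-smooth.
    Assumes an odd number of grid points and zero boundary values."""
    u = smooth(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)   # residual
    nc = (u.size + 1) // 2                                           # coarse size
    rc = np.zeros(nc)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]   # full weighting
    H = 2.0 * h
    Ac = (np.diag(2.0 * np.ones(nc - 2))
          - np.diag(np.ones(nc - 3), 1)
          - np.diag(np.ones(nc - 3), -1)) / (H * H)
    e = np.zeros(nc)
    e[1:-1] = np.linalg.solve(Ac, rc[1:-1])          # exact coarse-grid correction
    u += np.interp(np.arange(u.size), np.arange(0, u.size, 2), e)   # prolongation
    return smooth(u, f, h)
```

The smoother removes the oscillatory error components cheaply, while the coarse solve removes the smooth ones; in a full multi-level hierarchy the coarse solve is itself replaced by a recursive application of the same cycle.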
Center for Efficient Exascale Discretizations Software Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolev, Tzanio; Dobrev, Veselin; Tomov, Vladimir
The CEED Software suite is a collection of generally applicable software tools focusing on the following computational motifs: PDE discretizations on unstructured meshes, high-order finite element and spectral element methods, and unstructured adaptive mesh refinement. All of this software is being developed as part of CEED, the co-design Center for Efficient Exascale Discretizations, within DOE's Exascale Computing Project (ECP) program.
Laser additive manufacturing of 3D meshes for optical applications.
Essa, Khamis; Sabouri, Aydin; Butt, Haider; Basuny, Fawzia Hamed; Ghazy, Mootaz; El-Sayed, Mahmoud Ahmed
2018-01-01
Selective laser melting (SLM) is a widely used additive manufacturing process for printing intricate three-dimensional (3D) metallic structures. Here we demonstrate the fabrication of Ti-6Al-4V titanium alloy 3D meshes with nodally connected diamond-like unit cells and lattice spacings varying from 400 to 1000 microns. A Concept Laser M2 system equipped with a laser with a wavelength of 1075 nm, a constant beam spot size of 50 μm, and a maximum power of 400 W was used to manufacture the 3D meshes. These meshes act as optical shutters and directional transmitters and display interesting optical properties. A detailed optical characterisation was carried out, and it was found that these structures can be optimised to act as scalable rotational shutters with high efficiencies and as angle-selective transmission screens for protection against unwanted and dangerous radiation. The efficiency of the fabricated lattice structures can be increased by enlarging the mesh size.
Shelter effect efficacy of sand fences: A comparison of systems in a wind tunnel
NASA Astrophysics Data System (ADS)
Wang, Tao; Qu, Jianjun; Ling, Yuquan; Liu, Benli; Xiao, Jianhua
2018-02-01
The Lanzhou-Xinjiang High-speed Railway runs through an expansive wind area in the Gobi Desert, and blown-sand disasters are a critical issue affecting its operation. To strengthen the blown-sand disaster shelter systems along the railway, the shelter effects of punching plate and wire mesh fences with approximately equal porosity (48%) were simulated in a wind tunnel. The experimental results showed that wind velocity was reduced to a greater extent by the punching plate fence than by the wire mesh fence. When a single row of sand fencing was used, the wind velocity reduction coefficient (Rcz) values downwind of the punching plate fence and wire mesh fence reached 71.77% and 39.37%, respectively. When double rows of sand fencing were used, the Rcz values downwind of the punching plate and wire mesh fences were approximately 87.48% and 60.81%, respectively. Regarding the flow field structure on the leeward side of the fencing, the deceleration zone behind the punching plate fence was more pronounced than that behind the wire mesh fence. The vortex zone was not obvious and the reverse flow disappeared for both types of fences, which indicates that the turbulent intensity was small. The sand-trapping efficiency of the wire mesh fence was close to that of the punching plate fence. When a single row of sand fencing was set up, the total mass flux density decreased, on average, by 65.85% downwind of the wire mesh fence and 75.06% downwind of the punching plate fence; when double rows of sand fencing were present, the total mass flux density decreased, on average, by 84.53% downwind of the wire mesh fence and 84.51% downwind of the punching plate fence. In addition, the wind-proof and sand-proof efficiencies of the punching plate fence and the wire mesh fence decreased with increasing wind velocity. Consequently, punching plate and wire mesh fences may effectively control the sand hazard in the expansive wind area of the Gobi Desert.
Gruber-Blum, S; Brand, J; Keibl, C; Redl, H; Fortelny, R H; May, C; Petter-Puchner, A H
2015-08-01
Fibrin sealant (FS) is a safe and efficient fixation method in open intraperitoneal hernia repair. While favourable results have been achieved with hydrophilic meshes, hydrophobic (such as omega fatty acid-coated) meshes (OFM) have not been specifically assessed so far. Atrium C-Qur Lite® mesh was tested in rats in models of open onlay and intraperitoneal hernia repair. 44 meshes (2 × 2 cm) were implanted in 30 male Sprague-Dawley rats in open (n = 2 meshes per animal) and intraperitoneal (IPOM; n = 1 mesh per animal) technique. Animals were randomised to four groups: onlay and IPOM, sutured vs. sealed. Follow-up was 6 weeks, with the sutured groups serving as controls. Evaluation criteria were mesh dislocation, adhesions, and foreign body reaction. FS provided reliable fixation in the onlay technique, whereas OFM meshes dislocated in the IPOM position when sealed only. FS mesh fixation was safe with OFM meshes in open onlay repair. Intraperitoneal placement of hydrophobic meshes requires additional fixation, which cannot be achieved with FS alone.
Efficient Load Balancing and Data Remapping for Adaptive Grid Calculations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak
1997-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method to dynamically balance the processor workloads with a global view. This paper presents, for the first time, the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. Previous results indicated that mesh repartitioning and data remapping are potential bottlenecks for performing large-scale scientific calculations. We resolve these issues and demonstrate that our framework remains viable on a large number of processors.
An Agent Based Collaborative Simplification of 3D Mesh Model
NASA Astrophysics Data System (ADS)
Wang, Li-Rong; Yu, Bo; Hagiwara, Ichiro
Large-volume mesh models pose a challenge for fast rendering and transmission over the Internet, and mesh models obtained using three-dimensional (3D) scanning technology are usually very large in data volume. This paper develops a mobile agent based collaborative environment on the Mobile-C development platform. Communication among distributed agents includes capturing images of the visualized mesh model, annotating captured images, and instant messaging. Remote and collaborative simplification can thus be conducted efficiently over the Internet.
Drag Prediction for the NASA CRM Wing-Body-Tail Using CFL3D and OVERFLOW on an Overset Mesh
NASA Technical Reports Server (NTRS)
Sclafani, Anthony J.; DeHaan, Mark A.; Vassberg, John C.; Rumsey, Christopher L.; Pulliam, Thomas H.
2010-01-01
In response to the fourth AIAA CFD Drag Prediction Workshop (DPW-IV), the NASA Common Research Model (CRM) wing-body and wing-body-tail configurations are analyzed using the Reynolds-averaged Navier-Stokes (RANS) flow solvers CFL3D and OVERFLOW. Two families of structured, overset grids are built for DPW-IV. Grid Family 1 (GF1) consists of a coarse (7.2 million), medium (16.9 million), fine (56.5 million), and extra-fine (189.4 million) mesh. Grid Family 2 (GF2) is an extension of the first and includes a superfine (714.2 million) and an ultra-fine (2.4 billion) mesh. The medium grid anchors both families with an established build process for accurate cruise drag prediction studies. This base mesh is coarsened and enhanced to form a set of parametrically equivalent grids that increase in size by a factor of roughly 3.4 from one level to the next denser level. Both CFL3D and OVERFLOW are run on GF1 using a consistent numerical approach. Additional OVERFLOW runs are made to study effects of differencing scheme and turbulence model on GF1 and to obtain results for GF2. All CFD results are post-processed using Richardson extrapolation, and approximate grid-converged values of drag are compared. The medium grid is also used to compute a trimmed drag polar for both codes.
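The Richardson-extrapolation post-processing step can be sketched as follows. The drag values below are hypothetical, not DPW-IV results; given three grid levels whose cell counts grow by a constant factor (about 3.4 here, i.e. a ratio of 3.4^(1/3) in linear mesh dimension), one recovers an observed order of accuracy and an approximate grid-converged value.

```python
import math

def richardson(f_coarse, f_medium, f_fine, r):
    # observed order p and extrapolated grid-converged value from three
    # solutions on grids with constant refinement ratio r in mesh spacing
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    return p, f_exact

# hypothetical drag coefficients on coarse/medium/fine grids; a 3.4x
# growth in cell count corresponds to r = 3.4**(1/3) in linear dimension
p, cd_inf = richardson(0.02790, 0.02760, 0.02750, 3.4 ** (1.0 / 3.0))
```

With these inputs the successive differences shrink by a factor of 3, giving an observed order near 2.7 and an extrapolated value slightly below the fine-grid result.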
Streaming simplification of tetrahedral meshes.
Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T
2007-01-01
Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.
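The quadric-based step rests on Garland-Heckbert error quadrics. A minimal in-core sketch (a toy two-triangle surface mesh with midpoint placement, rather than the paper's out-of-core streaming pipeline or the optimal-position solve) is:

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    # fundamental quadric K = q q^T for the supporting plane of a triangle,
    # with q = (a, b, c, d), ax + by + cz + d = 0 and (a, b, c) a unit normal
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    q = np.append(n, -np.dot(n, p0))
    return np.outer(q, q)

def vertex_error(Q, v):
    # sum of squared distances to the planes accumulated in Q
    vh = np.append(v, 1.0)
    return float(vh @ Q @ vh)

# toy mesh: two triangles sharing the edge (v0, v1), all in the z = 0 plane
v = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., -1., 0.]])
tris = [(0, 1, 2), (0, 3, 1)]

# accumulate a quadric per vertex from its incident triangles
Q = [np.zeros((4, 4)) for _ in v]
for (a, b, c) in tris:
    K = plane_quadric(v[a], v[b], v[c])
    for i in (a, b, c):
        Q[i] += K

# cost of collapsing edge (v0, v1) to its midpoint: both incident planes
# are z = 0, so any target point with z = 0 has zero quadric error ...
mid = 0.5 * (v[0] + v[1])
cost = vertex_error(Q[0] + Q[1], mid)
# ... while a point lifted off the plane pays its squared distance once
# per accumulated plane (4 here: 2 planes per vertex quadric)
lifted = vertex_error(Q[0] + Q[1], np.array([0.5, 0., 1.]))
```

In a full simplifier, candidate collapses are kept in a priority queue keyed on this cost and the cheapest is contracted first; the streaming variant in the paper applies the same scoring to the window of the mesh currently in core.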
eBits: Compact stream of mesh refinements for remote visualization
Sati, Mukul; Lindstrom, Peter; Rossignac, Jarek
2016-05-12
Here, we focus on applications where a remote client needs to visualize or process a complex, manifold triangle mesh, M, but only in a relatively small, user-controlled Region of Interest (RoI) at a time. The client first downloads a coarse base mesh, pre-computed on the server via a series of simplification passes on M, one per Level of Detail (LoD), each pass identifying an independent set of triangles, collapsing them, and, for each collapse, storing, in a Vertex Expansion Record (VER), the information needed to reverse the collapse. On each client-initiated RoI modification request, the server pushes to the client a selected subset of these VERs, which, when decoded and applied to refine the mesh locally, ensure that the portion in the RoI is always at full resolution. The eBits approach proposed here offers state-of-the-art compression ratios (using less than 2.5 bits per new full-resolution RoI triangle when the RoI has more than 2000 vertices to transmit the connectivity for the selective refinements) and fine-grain control (allowing the user to adjust the RoI by small increments). The effectiveness of eBits results from several novel ideas and novel variations of previous solutions. We represent the VERs using persistent labels so that they can be applied in different orders within a given LoD. The server maintains a shadow copy of the client's mesh. To avoid sending IDs identifying which vertices should be expanded, we either transmit, for each new vertex, a compact encoding of its death tag (the LoD at which it will be expanded if it lies in the RoI) or transmit vertex masks for the RoI and its neighboring vertices. We also propose a three-step simplification that reduces the overall transmission cost by increasing both the simplification effectiveness and the regularity of the valences in the resulting meshes.
Simulation of the Francis-99 Hydro Turbine During Steady and Transient Operation
NASA Astrophysics Data System (ADS)
Dewan, Yuvraj; Custer, Chad; Ivashchenko, Artem
2017-01-01
Numerical simulations of the Francis-99 hydroturbine with correlation to experimental measurements are presented. Steady operation of the hydroturbine is analyzed at three operating conditions: the best efficiency point (BEP), high load (HL), and part load (PL). It is shown that global quantities such as net head, discharge, and efficiency are well predicted. Additionally, time-averaged velocity predictions compare well with PIV measurements obtained in the draft tube immediately downstream of the runner. Differences in vortex rope structure between operating points are discussed. Unsteady operation of the hydroturbine from BEP to HL and from BEP to PL is modeled. It is shown that the simulation methods used to model steady operation produce predictions that correlate well with experiment for transient operation. Time-domain unsteady simulation is used for both steady and unsteady operation. The full-fidelity geometry including all components is meshed using an unstructured polyhedral mesh with body-fitted prism layers. Guide vane rotation for transient operation is imposed using fully conservative, computationally efficient mesh morphing. The commercial solver STAR-CCM+ is used for all portions of the analysis including meshing, solving, and post-processing.
New Software Developments for Quality Mesh Generation and Optimization from Biomedical Imaging Data
Yu, Zeyun; Wang, Jun; Gao, Zhanheng; Xu, Ming; Hoshijima, Masahiko
2013-01-01
In this paper we present a new software toolkit for generating and optimizing surface and volumetric meshes from three-dimensional (3D) biomedical imaging data, targeted at image-based finite element analysis of some biomedical activities in a single material domain. Our toolkit includes a series of geometric processing algorithms including surface re-meshing and quality-guaranteed tetrahedral mesh generation and optimization. All methods described have been encapsulated into a user-friendly graphical interface for easy manipulation and informative visualization of biomedical images and mesh models. Numerous examples are presented to demonstrate the effectiveness and efficiency of the described methods and toolkit. PMID:24252469
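A typical element-quality measure used to drive such tetrahedral mesh optimization is the radius ratio. The sketch below shows one common formulation (normalized inradius over circumradius); it is an illustration of the kind of metric involved, not necessarily the toolkit's exact choice.

```python
import numpy as np

def tet_radius_ratio(p):
    # Radius-ratio quality 3*r_in/r_circ for a tetrahedron with vertex
    # array p of shape (4, 3): 1 for a regular tet, -> 0 when degenerate.
    p0, p1, p2, p3 = p
    vol = abs(np.linalg.det(np.stack([p1 - p0, p2 - p0, p3 - p0]))) / 6.0
    faces = [(p0, p1, p2), (p0, p1, p3), (p0, p2, p3), (p1, p2, p3)]
    area = sum(0.5 * np.linalg.norm(np.cross(b - a, c - a))
               for a, b, c in faces)
    r_in = 3.0 * vol / area                      # inradius = 3V / total area
    # circumcenter x solves |x - p_i|^2 = |x - p_0|^2 (a 3x3 linear system)
    A = 2.0 * np.stack([p1 - p0, p2 - p0, p3 - p0])
    b = np.array([p1 @ p1 - p0 @ p0, p2 @ p2 - p0 @ p0, p3 @ p3 - p0 @ p0])
    center = np.linalg.solve(A, b)
    r_circ = np.linalg.norm(center - p0)
    return 3.0 * r_in / r_circ

regular = np.array([[1., 1., 1.], [1., -1., -1.], [-1., 1., -1.], [-1., -1., 1.]])
sliver = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0.5, 0.5, 0.01]])
q_reg, q_sliver = tet_radius_ratio(regular), tet_radius_ratio(sliver)
```

Optimization passes typically relocate vertices or flip faces to raise the minimum of such a quality value over the mesh, which flags nearly flat "sliver" elements like the second example.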
Modeling of Turbulent Natural Convection in Enclosed Tall Cavities
NASA Astrophysics Data System (ADS)
Goloviznin, V. M.; Korotkin, I. A.; Finogenov, S. A.
2017-12-01
It was shown in our previous work (J. Appl. Mech. Tech. Phys. 57 (7), 1159-1171 (2016)) that the eddy-resolving, parameter-free CABARET scheme as applied to two- and three-dimensional de Vahl Davis benchmark tests (thermal convection in a square cavity) yields numerical results on coarse (20 × 20 and 20 × 20 × 20) grids that agree surprisingly well with experimental data and highly accurate computations for Rayleigh numbers of up to 10¹⁴. In the present paper, the sensitivity of this phenomenon to the cavity shape (varying from cubical to highly elongated) is analyzed. Box-shaped computational domains with aspect ratios of 1:4, 1:10, and 1:28.6 are considered. The results produced by the CABARET scheme are compared with experimental data (aspect ratio of 1:28.6), DNS results (aspect ratio of 1:4), and an empirical formula (aspect ratio of 1:10). In all cases, the CABARET-based integral parameters of the cavity flow agree well with the other authors' results. Notably coarse grids with mesh refinement toward the walls are used in the CABARET calculations. It is shown that acceptable numerical accuracy on extremely coarse grids is achieved for aspect ratios of up to 1:10. For higher aspect ratios, the number of grid cells required to achieve the prescribed accuracy grows significantly.
Dossa, Gbadamassi G. O.; Paudel, Ekananda; Cao, Kunfang; Schaefer, Douglas; Harrison, Rhett D.
2016-01-01
Organic matter decomposition represents a vital ecosystem process by which nutrients are made available for plant uptake and is a major flux in the global carbon cycle. Previous studies have investigated decomposition of different plant parts, but few considered bark decomposition or its role in decomposition of wood. However, bark can comprise a large fraction of tree biomass. We used a common litter-bed approach to investigate factors affecting bark decomposition and its role in wood decomposition for five tree species in a secondary seasonal tropical rain forest in SW China. For bark, we implemented a litter bag experiment over 12 mo, using different mesh sizes to investigate effects of litter meso- and macro-fauna. For wood, we compared the decomposition of branches with and without bark over 24 mo. Bark in coarse mesh bags decomposed 1.11–1.76 times faster than bark in fine mesh bags. For wood decomposition, responses to bark removal were species dependent. Three species with slow wood decomposition rates showed significant negative effects of bark-removal, but there was no significant effect in the other two species. Future research should also separately examine bark and wood decomposition, and consider bark-removal experiments to better understand roles of bark in wood decomposition. PMID:27698461
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, Nathan V.; Demkowicz, Leszek; Moser, Robert
2015-11-15
The discontinuous Petrov-Galerkin methodology with optimal test functions (DPG) of Demkowicz and Gopalakrishnan [18, 20] guarantees the optimality of the solution in an energy norm and provides several features facilitating adaptive schemes. Whereas Bubnov-Galerkin methods use identical trial and test spaces, Petrov-Galerkin methods allow these function spaces to differ. In DPG, test functions are computed on the fly and are chosen to realize the supremum in the inf-sup condition; the method is equivalent to a minimum residual method. For well-posed problems with sufficiently regular solutions, DPG can be shown to converge at optimal rates: the inf-sup constants governing the convergence are mesh-independent and of the same order as those governing the continuous problem [48]. DPG also provides an accurate mechanism for measuring the error, and this can be used to drive adaptive mesh refinements. We employ DPG to solve the steady incompressible Navier-Stokes equations in two dimensions, building on previous work on the Stokes equations and focusing particularly on the usefulness of the approach for automatic adaptivity starting from a coarse mesh. We apply our approach to a manufactured solution due to Kovasznay as well as to the lid-driven cavity flow, backward-facing step, and flow past a cylinder problems.
Svyatsky, Daniil; Lipnikov, Konstantin
2017-03-18
Richards' equation describes steady-state or transient flow in a variably saturated medium. For a medium having multiple layers of soil that are not aligned with the coordinate axes, a mesh fitted to these layers is no longer orthogonal, and the classical two-point flux approximation finite volume scheme is no longer accurate. Here, we propose new second-order accurate nonlinear finite volume (NFV) schemes for the head and pressure formulations of Richards' equation. We prove that discrete maximum principles hold for both formulations at steady state, which mimics similar properties of the continuum solution. The second-order accuracy is achieved using high-order upwind algorithms for the relative permeability. Numerical simulations of water infiltration into a dry soil show a significant advantage of the second-order NFV schemes over the first-order NFV schemes, even on coarse meshes. Since explicit calculation of the Jacobian matrix becomes prohibitively expensive for high-order schemes due to built-in reconstruction and slope-limiting algorithms, we study numerically the preconditioning strategy introduced recently in Lipnikov et al. (2016) that uses a stable approximation of the continuum Jacobian. Lastly, numerical simulations show that the new preconditioner reduces computational cost by up to 2-3 times in comparison with conventional preconditioners.
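For contrast with the proposed NFV schemes, the classical two-point flux approximation (TPFA) can be sketched in 1D, where cell connections are trivially orthogonal and the scheme is exact for layered media; it is precisely this exactness that is lost on distorted, layer-fitted meshes in higher dimensions. The sketch solves steady single-phase Darcy flow, not the full nonlinear Richards' equation.

```python
import numpy as np

# steady 1D Darcy flow -d/dx(k dp/dx) = 0, p(0) = 1, p(1) = 0,
# cell-centered TPFA on a two-layer medium
n = 10
dx = 1.0 / n
k = np.where(np.arange(n) < n // 2, 1.0, 0.1)      # layered permeability

# face transmissibilities: harmonic average of neighboring cell perms
T = np.zeros(n + 1)
T[1:-1] = 2.0 * k[:-1] * k[1:] / (k[:-1] + k[1:]) / dx
T[0] = 2.0 * k[0] / dx                              # half-cell to boundary
T[-1] = 2.0 * k[-1] / dx

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    A[i, i] = T[i] + T[i + 1]
    if i > 0:
        A[i, i - 1] = -T[i]
    if i < n - 1:
        A[i, i + 1] = -T[i + 1]
b[0] = T[0] * 1.0                                   # p = 1 at x = 0
                                                    # p = 0 at x = 1 adds nothing to b
p = np.linalg.solve(A, b)

# at steady state the flux is the same through every interior face,
# matching the series (harmonic) effective permeability 1/5.5
flux = T[1:-1] * (p[:-1] - p[1:])
```

The NFV schemes of the paper recover this kind of accuracy on non-orthogonal meshes by using nonlinear, monotonicity-preserving flux stencils instead of the fixed two-point stencil above.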
NASA Technical Reports Server (NTRS)
Thompson, D.; Mogili, P.; Chalasani, S.; Addy, H.; Choo, Y.
2004-01-01
Steady-state solutions of the Reynolds-averaged Navier-Stokes (RANS) equations were computed using the Cobalt flow solver for a constant-section, rectangular wing based on an extruded two-dimensional glaze ice shape. The one-equation Spalart-Allmaras turbulence model was used. The results were compared with data obtained from a recent wind tunnel test. Computed results indicate that the steady RANS solutions do not accurately capture the recirculating region downstream of the ice accretion, even after mesh refinement. The resulting predicted reattachment is farther downstream than indicated by the experimental data. Additionally, the solutions computed on a relatively coarse baseline mesh had detailed flow characteristics that differed from those computed on the refined mesh and from the experimental data. Steady RANS solutions were also computed to investigate the effects of spanwise variation in the ice shape. The spanwise variation was obtained via a blending function that merged the ice shape with the clean wing using a sinusoidal spanwise variation. For these configurations, the results predicted for the extruded shape provided conservative estimates of the performance degradation of the wing. Additionally, the spanwise variation in the ice shape and the resulting differences in the flow fields did not significantly change the location of the primary reattachment.
Flury, Sabine; Gessner, Mark O
2011-02-01
Atmospheric warming and increased nitrogen deposition can lead to changes of microbial communities with possible consequences for biogeochemical processes. We used an enclosure facility in a freshwater marsh to assess the effects on microbes associated with decomposing plant litter under conditions of simulated climate warming and pulsed nitrogen supply. Standard batches of litter were placed in coarse-mesh and fine-mesh bags and submerged in a series of heated, nitrogen-enriched, and control enclosures. They were retrieved later and analyzed for a range of microbial parameters. Fingerprinting profiles obtained by denaturing gradient gel electrophoresis (DGGE) indicated that simulated global warming induced a shift in bacterial community structure. In addition, warming reduced fungal biomass, whereas bacterial biomass was unaffected. The mesh size of the litter bags and sampling date also had an influence on bacterial community structure, with the apparent number of dominant genotypes increasing from spring to summer. Microbial respiration was unaffected by any treatment, and nitrogen enrichment had no clear effect on any of the microbial parameters considered. Overall, these results suggest that microbes associated with decomposing plant litter in nutrient-rich freshwater marshes are resistant to extra nitrogen supplies but are likely to respond to temperature increases projected for this century.
Recent developments in multidimensional transport methods for the APOLLO 2 lattice code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zmijarevic, I.; Sanchez, R.
1995-12-31
A usual method of preparing homogenized cross sections for reactor coarse-mesh calculations is based on two-dimensional multigroup transport treatment of an assembly together with an appropriate leakage model and a reaction-rate-preserving homogenization technique. The current generation of assembly spectrum codes based on collision probability methods is capable of treating complex geometries (i.e., irregular meshes of arbitrary shape), thus avoiding the modeling error that was introduced in codes with traditional tracking routines. The power and architecture of current computers allow the treatment of spatial domains comprising several mutually interacting assemblies using a fine multigroup structure and retaining all geometric details of interest. Increasing safety requirements demand detailed two- and three-dimensional calculations for very heterogeneous problems such as control rod positioning, broken Pyrex rods, irregular compacting of mixed-oxide (MOX) pellets at an MOX-UO₂ interface, and many others. An effort has been made to include accurate multidimensional transport methods in the APOLLO 2 lattice code. These include the extension to three-dimensional axially symmetric geometries of the general-geometry collision probability module TDT and the development of new two- and three-dimensional characteristics methods for regular Cartesian meshes. In this paper we discuss the main features of the recently developed multidimensional methods that are currently being tested.
Accelerated life test of sputtering and anode deposit spalling in a small mercury ion thruster
NASA Technical Reports Server (NTRS)
Power, J. L.
1975-01-01
Tantalum and molybdenum sputtered from discharge chamber components during operation of a 5-centimeter-diameter mercury ion thruster adhered much more strongly to coarsely grit-blasted anode surfaces than to standard surfaces. Spalling of the sputtered coating did occur from a coarse screen anode surface, but only in flakes less than a mesh unit long. The results were obtained in a 200-hour accelerated life test conducted at an elevated discharge potential of 64.6 volts. The test approximately reproduced the major sputter erosion and deposition effects that occur under normal operation, but at approximately 75 times the normal rate. No discharge chamber component suffered sufficient erosion in the test to threaten its structural integrity or further serviceability. The test indicated that the use of tantalum-surfaced discharge chamber components in conjunction with a fine wire screen anode surface should cure the problems of sputter erosion and sputtered-deposit spalling in long-term operation of small mercury ion thrusters.
NASA Technical Reports Server (NTRS)
Delgado, Irebert R.; Hurrell, Michael
2017-01-01
Rotorcraft gearbox efficiencies are reduced at increased surface speeds due to viscous and impingement drag on the gear teeth. This windage power loss can affect overall mission range, payload, and frequency of transmission maintenance. Experimental and analytical studies on shrouding for single gears have shown it to be potentially effective in mitigating windage power loss. Efficiency studies on unshrouded meshed gears have shown the effect of speed, oil viscosity, temperature, load, lubrication scheme, etc. on gear windage power loss. The open literature does not contain experimental test data on shrouded meshed spur gears. Gear windage power loss test results are presented on shrouded meshed spur gears at elevated oil inlet temperatures and constant oil pressure both with and without shrouding. Shroud effectiveness is compared at four oil inlet temperatures. The results are compared to the available literature and follow-up work is outlined.
Laws, causation and dynamics at different levels.
Butterfield, Jeremy
2012-02-06
I have two main aims. The first is general, and more philosophical (§2). The second is specific, and more closely related to physics (§§3 and 4). The first aim is to state my general views about laws and causation at different 'levels'. The main task is to understand how the higher levels sustain notions of law and causation that 'ride free' of reductions to the lower level or levels. I endeavour to relate my views to those of other symposiasts. The second aim is to give a framework for describing dynamics at different levels, emphasizing how the various levels' dynamics can mesh or fail to mesh. This framework is essentially that of elementary dynamical systems theory. The main idea will be, for simplicity, to work with just two levels, dubbed 'micro' and 'macro', which are related by coarse-graining. I use this framework to describe, in part, the first four of Ellis' five types of top-down causation.
Simplified Models for the Study of Postbuckled Hat-Stiffened Composite Panels
NASA Technical Reports Server (NTRS)
Vescovini, Riccardo; Davila, Carlos G.; Bisagni, Chiara
2012-01-01
The postbuckling response and failure of multistringer stiffened panels is analyzed using models with three levels of approximation. The first model uses a relatively coarse mesh to capture the global postbuckling response of a five-stringer panel. The second model can predict the nonlinear response as well as the debonding and crippling failure mechanisms in a single stringer compression specimen (SSCS). The third model consists of a simplified version of the SSCS that is designed to minimize the computational effort. The simplified model is well-suited to perform sensitivity analyses for studying the phenomena that lead to structural collapse. In particular, the simplified model is used to obtain a deeper understanding of the role played by geometric and material modeling parameters such as mesh size, inter-laminar strength, fracture toughness, and fracture mode mixity. Finally, a global/local damage analysis method is proposed in which a detailed local model is used to scan the global model to identify the locations that are most critical for damage tolerance.
Approach to identifying pollutant source and matching flow field
NASA Astrophysics Data System (ADS)
Liping, Pang; Yu, Zhang; Hongquan, Qu; Tao, Hu; Wei, Wang
2013-07-01
Accidental pollution events often threaten people's health and lives, and it is necessary to identify a pollutant source rapidly so that prompt actions can be taken to prevent the spread of pollution. This identification is a well-known difficulty in inverse problems, and this paper carries out some studies on the issue. An approach using information from a single noisy sensor was developed to identify a sudden, continuously emitting trace pollutant source in a steady velocity field. The approach first compares the characteristic distance of the measured concentration sequence to multiple hypothetical concentration sequences at the sensor position, which are obtained from multiple hypotheses over the three source parameters. Source identification is then realized by globally searching for the optimal values, with the maximum location probability as the objective function. Because this global search carries a large computational load, a local fine-mesh source search method based on a priori coarse-mesh location probabilities is further used to improve the efficiency of identification. Studies have shown that the flow field has a very important influence on source identification, so we also discuss the impact of non-matching flow fields with estimation deviation. Based on this analysis, a method for matching the accurate flow field is presented to improve the accuracy of identification. To verify the practical application of the above method, an experimental system simulating a sudden pollution process in a steady flow field was set up, and experiments were conducted with a known diffusion coefficient. The studies showed that the three source parameters (position, emission strength and initial emission time) in the experiment can be estimated using the flow-field matching and source identification method.
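The coarse-then-fine search strategy described in this abstract can be sketched as follows. This is a minimal illustration, not the paper's method: `forward` is a hypothetical 1D advection-diffusion plume model, a least-squares mismatch stands in for the location-probability objective, and the search covers only position and strength rather than all three source parameters.

```python
import numpy as np

def forward(x_src, q, t, x_sensor=10.0, u=1.0, D=0.5):
    # Hypothetical 1D advection-diffusion plume from a continuous point
    # source; illustrative stand-in for the paper's forward model.
    dx = x_sensor - x_src
    with np.errstate(divide="ignore", invalid="ignore"):
        c = q / np.sqrt(4 * np.pi * D * t) * np.exp(-(dx - u * t) ** 2 / (4 * D * t))
    return np.nan_to_num(c)

def search(measured, times, coarse_grid, refine=5):
    """Coarse pass over (position, strength) cells, then a local
    fine-mesh pass around the best coarse cell."""
    err = lambda p: np.sum((forward(p[0], p[1], times) - measured) ** 2)
    x0, q0 = min(coarse_grid, key=err)           # coarse-mesh winner
    fine = [(x0 + dx, q0 + dq)                    # local fine mesh
            for dx in np.linspace(-0.5, 0.5, refine)
            for dq in np.linspace(-0.5, 0.5, refine)]
    return min(fine, key=err)
```

With a synthetic "measured" sequence generated at a known source, the two-stage search recovers the source cell at a fraction of the cost of fine-meshing the whole domain.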
Computing an upper bound on contact stress with surrogate duality
NASA Astrophysics Data System (ADS)
Xuan, Zhaocheng; Papadopoulos, Panayiotis
2016-07-01
We present a method for computing an upper bound on the contact stress of elastic bodies. The continuum model of elastic bodies with contact is first modeled as a constrained optimization problem by using finite elements. An explicit formulation of the total contact force, a fraction function with a linear numerator and a quadratic convex denominator, is derived with only the normalized nodal contact forces as the constrained variables in a standard simplex. Then two bounds are obtained for the sum of the nodal contact forces. The first is an explicit formulation in the matrices of the finite element model, derived by maximizing the fraction function under the constraint that the sum of the normalized nodal contact forces is one. The second bound is obtained by first maximizing the fraction function subject to the standard simplex and then using Dinkelbach's algorithm for fractional programming to find the maximum, since the fraction function is pseudo-concave in a neighborhood of the solution. These two bounds are solved with the problem dimensions being only the number of contact nodes or node pairs, which is much smaller than the dimension of the original problem, namely, the number of degrees of freedom. Next, a scheme for constructing an upper bound on the contact stress is proposed that uses the bounds on the sum of the nodal contact forces obtained on a fine finite element mesh together with the nodal contact forces obtained on a coarse finite element mesh, problems that can be solved at a lower computational cost. Finally, the proposed method is verified through examples of both frictionless and frictional contact to demonstrate the method's feasibility, efficiency, and robustness.
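Dinkelbach's algorithm, invoked above, has a simple generic form: repeatedly solve the parametric problem max N(x) − λD(x) and update λ = N(x)/D(x) until the parametric optimum reaches zero. A minimal sketch, using a finite candidate set as a stand-in for the paper's simplex-constrained inner problem:

```python
def dinkelbach(num, den, candidates, tol=1e-9, max_iter=100):
    """Maximize num(x)/den(x) (den > 0) over a finite candidate set by
    Dinkelbach's parametric iteration: each step maximizes the
    parametric function num - lam*den and updates lam."""
    x = candidates[0]
    for _ in range(max_iter):
        lam = num(x) / den(x)
        # inner maximization of the parametric problem
        x_new = max(candidates, key=lambda c: num(c) - lam * den(c))
        if num(x_new) - lam * den(x_new) < tol:
            return x_new, num(x_new) / den(x_new)
        x = x_new
    return x, num(x) / den(x)
```

For a linear-over-quadratic-convex ratio like the one in the abstract, e.g. (x+1)/(x²+1), the iteration converges to the maximizer x = √2 − 1 in a handful of steps.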
Application of wall-models to discontinuous Galerkin LES
NASA Astrophysics Data System (ADS)
Frère, Ariane; Carton de Wiart, Corentin; Hillewaert, Koen; Chatelain, Philippe; Winckelmans, Grégoire
2017-08-01
Wall-resolved Large-Eddy Simulations (LES) are still limited to moderate Reynolds number flows due to the high computational cost required to capture the inner part of the boundary layer. Wall-modeled LES (WMLES) provide more affordable LES by modeling the near-wall layer. Wall function-based WMLES solve the LES equations up to the wall, where the coarse mesh resolution essentially renders the calculation under-resolved. This makes the accuracy of WMLES very sensitive to the behavior of the numerical method. Therefore, best practice rules regarding the use and implementation of WMLES cannot be directly transferred from one methodology to another regardless of the type of discretization approach. Whilst numerous studies present guidelines on the use of WMLES, there is a lack of knowledge for discontinuous finite-element-like high-order methods. Incidentally, these methods are increasingly used on account of their high accuracy on unstructured meshes and their strong computational efficiency. The present paper proposes best practice guidelines for the use of WMLES in these methods. The study is based on sensitivity analyses of turbulent channel flow simulations by means of a Discontinuous Galerkin approach. It appears that good results can be obtained without the use of spatial or temporal averaging. The study confirms the importance of the wall function input data location and suggests taking it at the bottom of the second off-wall element. As these data are available through the ghost element, the suggested method prevents the loss of computational scalability experienced in unstructured WMLES. The study also highlights the influence of the polynomial degree used in the wall-adjacent element. It should preferably be of even degree, as using polynomials of degree two in the first off-wall element provides, surprisingly, better results than using polynomials of degree three.
NASA Astrophysics Data System (ADS)
Raeder, K.; Hoar, T. J.; Anderson, J. L.; Collins, N.; Hendricks, J.; Kershaw, H.; Ha, S.; Snyder, C.; Skamarock, W. C.; Mizzi, A. P.; Liu, H.; Liu, J.; Pedatella, N. M.; Karspeck, A. R.; Karol, S. I.; Bitz, C. M.; Zhang, Y.
2017-12-01
The capabilities of the Data Assimilation Research Testbed (DART) at NCAR have been significantly expanded with the recent "Manhattan" release. DART is an ensemble Kalman filter based suite of tools, which enables researchers to use data assimilation (DA) without first becoming DA experts. Highlights: significant improvement in efficient ensemble DA for very large models on thousands of processors, direct read and write of model state files in parallel, more control of the DA output for finer-grained analysis, new model interfaces which are useful to a variety of geophysical researchers, new observation forward operators and the ability to use precomputed forward operators from the forecast model. The new model interfaces and example applications include the following: MPAS-A; Model for Prediction Across Scales - Atmosphere is a global, nonhydrostatic, variable-resolution mesh atmospheric model, which facilitates multi-scale analysis and forecasting. The absence of distinct subdomains eliminates problems associated with subdomain boundaries. It demonstrates the ability to consistently produce higher-quality analyses than coarse, uniform meshes do. WRF-Chem; Weather Research and Forecasting + (MOZART) Chemistry model assimilates observations from FRAPPÉ (Front Range Air Pollution and Photochemistry Experiment). WACCM-X; Whole Atmosphere Community Climate Model with thermosphere and ionosphere eXtension assimilates observations of electron density to investigate sudden stratospheric warming. CESM (weakly) coupled assimilation; NCAR's Community Earth System Model is used for assimilation of atmospheric and oceanic observations into their respective components using coupled atmosphere+land+ocean+sea-ice forecasts. CESM2.0; Assimilation in the atmospheric component (CAM, WACCM) of the newly released version is supported. This version contains new and extensively updated components and software environment.
CICE; Los Alamos sea ice model (in CESM) is used to assimilate multivariate sea ice concentration observations to constrain the model's ice thickness, concentration, and parameters.
Transport of phase space densities through tetrahedral meshes using discrete flow mapping
NASA Astrophysics Data System (ADS)
Bajars, Janis; Chappell, David J.; Søndergaard, Niels; Tanner, Gregor
2017-01-01
Discrete flow mapping was recently introduced as an efficient ray-based method for determining wave energy distributions in complex built-up structures. Wave energy densities are transported along ray trajectories through polygonal mesh elements using a finite dimensional approximation of a ray transfer operator. In this way the method can be viewed as a smoothed ray tracing method defined over meshed surfaces. Many applications require the resolution of wave energy distributions in three-dimensional domains, such as in room acoustics, underwater acoustics and for electromagnetic cavity problems. In this work we extend discrete flow mapping to three-dimensional domains by propagating wave energy densities through tetrahedral meshes. The geometric simplicity of the tetrahedral mesh elements is utilised to efficiently compute the ray transfer operator using a mixture of analytic and spectrally accurate numerical integration. The important issue of how to choose a suitable basis approximation in phase space whilst maintaining a reasonable computational cost is addressed via low order local approximations on tetrahedral faces in the position coordinate and high order orthogonal polynomial expansions in momentum space.
NASA Astrophysics Data System (ADS)
Xiang, Meisu; Jiang, Meihuizi; Zhang, Yanzong; Liu, Yan; Shen, Fei; Yang, Gang; He, Yan; Wang, Lilin; Zhang, Xiaohong; Deng, Shihuai
2018-03-01
A novel superhydrophobic and superoleophilic surface was fabricated by one-step electrodeposition on stainless steel meshes, and its durability and oil/water separation properties were assessed. Field emission scanning electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDS), Fourier transform infrared spectroscopy (FT-IR) and optical contact angle measurements were used to characterize surface morphologies, chemical compositions, and wettabilities, respectively. The results indicated that the as-prepared mesh exhibited excellent superhydrophobicity and superoleophilicity with a high water contact angle (WCA) of 162 ± 1° and an oil contact angle (OCA) of 0°. Meanwhile, the as-prepared mesh also exhibited continuous separation capacity for many kinds of oil/water mixtures, and the separation efficiency for a lubrication oil/water mixture was about 98.6%. In addition, after 10 separation cycles, the as-prepared mesh retained WCAs of 155 ± 2°, OCAs of 0° and a separation efficiency of 97.8% for lubrication oil/water mixtures. The as-prepared mesh also retained its superhydrophobic and superoleophilic properties after abrasion and after immersion in salt solutions and solutions of different pH.
Huang, W.; Zheng, Lingyun; Zhan, X.
2002-01-01
Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost, when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
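The monitor-function idea behind such moving-mesh methods can be illustrated with a static 1D equidistribution step: place nodes so that each cell holds an equal share of the monitor integral, which concentrates nodes at a sharp front. This is a hedged sketch only, with an assumed arc-length monitor and a hypothetical tanh front profile; the paper's actual moving mesh PDE and monitor construction are not reproduced.

```python
import numpy as np

def equidistribute(n_nodes, monitor, a=0.0, b=1.0, n_fine=2000):
    """Place n_nodes mesh points so each cell carries an equal share of
    the monitor-function integral (a static analogue of one moving-mesh
    relocation step)."""
    x = np.linspace(a, b, n_fine)
    m = monitor(x)
    # cumulative "mesh density" integral (trapezoid rule), normalized to [0, 1]
    c = np.concatenate([[0.0], np.cumsum(0.5 * (m[1:] + m[:-1]) * np.diff(x))])
    c /= c[-1]
    # invert: equal increments of c give the new node locations
    return np.interp(np.linspace(0.0, 1.0, n_nodes), c, x)

# arc-length monitor sqrt(1 + u'^2) for a hypothetical tanh front at x = 0.5
front = lambda x: np.sqrt(1.0 + (50.0 / np.cosh(50.0 * (x - 0.5)) ** 2) ** 2)
```

Most of the monitor mass sits at the front, so roughly two-thirds of the nodes land in a narrow band around x = 0.5 while the rest of the interval stays coarsely resolved.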
Efficient generation of discontinuity-preserving adaptive triangulations from range images.
Garcia, Miguel Angel; Sappa, Angel Domingo
2004-10-01
This paper presents an efficient technique for generating adaptive triangular meshes from range images. The algorithm consists of two stages. First, a user-defined number of points is adaptively sampled from the given range image. Those points are chosen by taking into account the surface shapes represented in the range image in such a way that points tend to group in areas of high curvature and to disperse in low-variation regions. This selection process is done through a noniterative, inherently parallel algorithm in order to gain efficiency. Once the image has been subsampled, the second stage applies a two-and-one-half-dimensional (2.5D) Delaunay triangulation to obtain an initial triangular mesh. To favor the preservation of surface and orientation discontinuities (jump and crease edges) present in the original range image, the aforementioned triangular mesh is iteratively modified by applying an efficient edge flipping technique. Results with real range images show accurate triangular approximations of the given range images with low processing times.
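The curvature-driven point selection in the first stage can be sketched as follows. This is a toy stand-in for the paper's noniterative algorithm: it uses the |Laplacian| of the depth map as a curvature proxy and draws sample sites with probability proportional to it (np.roll wraps at the borders, which is acceptable for this illustration).

```python
import numpy as np

def adaptive_sample(depth, n_points, rng=None):
    """Pick n_points pixel sites with probability proportional to a simple
    curvature estimate (|Laplacian| of the range image), so samples
    concentrate in high-curvature areas and thin out on flat regions."""
    rng = np.random.default_rng(rng)
    lap = np.abs(
        -4 * depth
        + np.roll(depth, 1, 0) + np.roll(depth, -1, 0)
        + np.roll(depth, 1, 1) + np.roll(depth, -1, 1)
    )
    w = lap.ravel() + 1e-6          # small floor so flat areas keep some chance
    idx = rng.choice(depth.size, size=n_points, replace=False, p=w / w.sum())
    return np.column_stack(np.unravel_index(idx, depth.shape))
```

On a synthetic depth map with a single step edge, nearly all sampled points cluster on the columns adjacent to the discontinuity, mimicking the grouping behavior the abstract describes.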
Li, Jian; Kang, Ruimei; Tang, Xiaohua; She, Houde; Yang, Yaoxia; Zha, Fei
2016-04-14
Oil-polluted water has become a worldwide problem due to increasing industrial oily wastewater as well as frequent oil-spill pollution. Compared with underwater superoleophobic (water-removing) filtration membranes, superhydrophobic/superoleophilic (oil-removing) materials have advantages as they can be used for the filtration of heavy oil or the absorption of floating oil from water/oil mixtures. However, most of the superhydrophobic materials used for oil/water separation lose their superhydrophobicity when exposed to hot (e.g. >50 °C) water and strong corrosive liquids. Herein, we demonstrate superhydrophobic overlapped candle soot (CS) and silica coated meshes that can repel hot water (about 92 °C) and strong corrosive liquids, and were used for the gravity driven separation of oil-water mixtures in hot water and strong acidic, alkaline, and salty environments. To the best of our knowledge, no previously reported studies describe the use of superhydrophobic materials for the separation of oil from hot water and corrosive aqueous media. In addition, the as-prepared robust superhydrophobic CS and silica coated meshes can separate a series of oils and organic solvents like kerosene, toluene, petroleum ether, heptane and chloroform from water with a separation efficiency larger than 99.0%. Moreover, the as-prepared coated mesh still maintained a separation efficiency above 98.5% and stable recyclability after 55 cycles of separation. The robust superhydrophobic meshes developed in this work can therefore be practically used as a highly efficient filtration membrane for the separation of oil from harsh water conditions, benefiting the environment and human health.
NASA Astrophysics Data System (ADS)
Kardan, Farshid; Cheng, Wai-Chi; Baverel, Olivier; Porté-Agel, Fernando
2016-04-01
Understanding, analyzing and predicting meteorological phenomena related to urban planning and built environment are becoming more essential than ever to architectural and urban projects. Recently, various versions of RANS models have been established, but more validation cases are required to confirm their capability for wind flows. In the present study, the performance of recently developed RANS models, including the RNG k-ɛ, SST BSL k-ω and SST γ-Reθ, has been evaluated for the flow past a single block (which represents an idealized architectural scale). For validation purposes, the velocity streamlines and the vertical profiles of the mean velocities and variances were compared with published LES and wind tunnel experiment results. Furthermore, additional CFD simulations were performed to analyze the impact of regular/irregular mesh structures and grid resolutions for the selected turbulence models in order to analyze grid independency. Three different grid resolutions (coarse, medium and fine) of Nx × Ny × Nz = 320 × 80 × 320, 160 × 40 × 160 and 80 × 20 × 80 for the computational domain and nx × nz = 26 × 32, 13 × 16 and 6 × 8, which correspond to the number of grid points on the block edges, were chosen and tested. It can be concluded that among all simulated RANS models, the SST γ-Reθ model performed best and agreed fairly well with the LES simulation and experimental results. It can also be concluded that the SST γ-Reθ model provides very satisfactory results in terms of grid dependency at the fine and medium grid resolutions in both regular and irregular structured meshes. On the other hand, despite a very good performance of the RNG k-ɛ model at the fine resolution and on the regular structured grids, a disappointing performance of this model at the coarse and medium grid resolutions indicates that the RNG k-ɛ model is highly dependent on grid structure and grid resolution.
These quantitative validations are essential to assess the accuracy of RANS models for the simulation of flow in urban environments.
COARSE PM EMISSIONS MODEL DEVELOPMENT AND INVENTORY VALIDATION
The proposed research will contribute to our understanding of the sources and controlling variables of coarse PM. This greater understanding, along with an increase in our ability to predict these emissions, will enable more efficient pollution control strategy development. Ad...
Unstructured mesh methods for CFD
NASA Technical Reports Server (NTRS)
Peraire, J.; Morgan, K.; Peiro, J.
1990-01-01
Mesh generation methods for Computational Fluid Dynamics (CFD) are outlined. Geometric modeling is discussed. An advancing front method is described. Flow past a two engine Falcon aeroplane is studied. An algorithm and associated data structure called the alternating digital tree, which efficiently solves the geometric searching problem, is described. The computation of an initial approximation to the steady state solution of a given problem is described. Mesh generation for transient flows is described.
Linear and nonlinear pattern selection in Rayleigh-Benard stability problems
NASA Technical Reports Server (NTRS)
Davis, Sanford S.
1993-01-01
A new algorithm is introduced to compute finite-amplitude states using primitive variables for Rayleigh-Benard convection on relatively coarse meshes. The algorithm is based on a finite-difference matrix-splitting approach that separates all physical and dimensional effects into one-dimensional subsets. The nonlinear pattern selection process for steady convection in an air-filled square cavity with insulated side walls is investigated for Rayleigh numbers up to 20,000. The internalization of disturbances that evolve into coherent patterns is investigated, and transient solutions from linear perturbation theory are compared and contrasted with the full numerical simulations.
Gulzari, Usman Ali; Sajid, Muhammad; Anjum, Sheraz; Agha, Shahrukh; Torres, Frank Sill
2016-01-01
A Mesh topology is one of the most promising architectures for on-chip communication due to its regular and simple structure. However, the performance of the Mesh topology degrades greatly with increasing network size because of its small bisection width and large network diameter. To overcome this limitation, many researchers have presented modified Mesh designs that add extra links to improve performance in terms of network latency and power consumption. The Cross-By-Pass-Mesh was previously presented by us as an improved version of the Mesh topology through the intelligent addition of extra links. This paper presents an efficient topology, named Cross-By-Pass-Torus, that further increases the performance of the Cross-By-Pass-Mesh. The proposed design merges the best features of the Cross-By-Pass-Mesh and the Torus to reduce the network diameter, minimize the average number of hops between nodes, increase the bisection width, and enhance the overall performance of the network. In this paper, the architectural design of the topology is presented and analyzed against similar 2D topologies in terms of average latency, throughput and power consumption. To verify the actual behavior of the proposed topology, synthetic traffic traces and five different real embedded application workloads are applied to the proposed as well as the competitor network topologies. The simulation results indicate that the Cross-By-Pass-Torus is an efficient candidate among its predecessor and competitor topologies due to its lower average latency and increased throughput, at a slight cost in network power and energy for on-chip communication.
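The metrics named above (network diameter, average hop count, bisection width) are easy to compute for a plain k × k 2D mesh, which gives the baseline the Cross-By-Pass variants aim to beat. A small sketch of the baseline only, not of the paper's topology:

```python
from collections import deque

def mesh_metrics(k):
    """Diameter and average hop count of a k x k 2D mesh via BFS from
    every node; the bisection width of such a mesh is k (the links
    crossing the middle cut)."""
    nodes = [(r, c) for r in range(k) for c in range(k)]

    def neighbors(n):
        r, c = n
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < k and 0 <= c + dc < k:
                yield (r + dr, c + dc)

    total = diameter = 0
    for src in nodes:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in neighbors(u):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        diameter = max(diameter, max(dist.values()))
    avg_hops = total / (k * k * (k * k - 1))
    return diameter, avg_hops, k   # bisection width = k
```

For a 4 × 4 mesh this gives diameter 2(k−1) = 6 and an average of 8/3 hops; added bypass or torus wrap links shrink both, which is the effect the abstract quantifies.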
Merge measuring mesh for complex surface parts
NASA Astrophysics Data System (ADS)
Ye, Jianhua; Gao, Chenghui; Zeng, Shoujin; Xu, Mingsan
2018-04-01
Because most parts self-occlude and scanner range is limited, it is difficult to scan an entire part in a single pass, so multiple measured meshes must be merged to model the part. In this paper, a new merge method is presented. First, a grid voxelization method is used to eliminate most of the non-overlap regions, and a method for retrieving overlap triangles from the mesh topology is proposed to improve efficiency. Then, deletion by overlap distance is discussed as a way to remove overlap triangles with large deviation. After that, this paper puts forward a new method of merging meshes by registering and combining mesh boundary points. Experimental analysis shows that the suggested methods are effective.
Multitasking for flows about multiple body configurations using the chimera grid scheme
NASA Technical Reports Server (NTRS)
Dougherty, F. C.; Morgan, R. L.
1987-01-01
The multitasking of a finite-difference scheme using multiple overset meshes is described. In this chimera, or multiple overset mesh, approach, a multiple body configuration is mapped using a major grid about the main component of the configuration, with minor overset meshes used to map each additional component. This type of code is well suited to multitasking. Both steady and unsteady two-dimensional computations are run on parallel processors on a Cray X-MP/48, usually with one mesh per processor. Flow field results are compared with single processor results to demonstrate the feasibility of running multiple mesh codes on parallel processors and to show the increase in efficiency.
New software developments for quality mesh generation and optimization from biomedical imaging data.
Yu, Zeyun; Wang, Jun; Gao, Zhanheng; Xu, Ming; Hoshijima, Masahiko
2014-01-01
In this paper we present a new software toolkit for generating and optimizing surface and volumetric meshes from three-dimensional (3D) biomedical imaging data, targeted at image-based finite element analysis of some biomedical activities in a single material domain. Our toolkit includes a series of geometric processing algorithms including surface re-meshing and quality-guaranteed tetrahedral mesh generation and optimization. All methods described have been encapsulated into a user-friendly graphical interface for easy manipulation and informative visualization of biomedical images and mesh models. Numerous examples are presented to demonstrate the effectiveness and efficiency of the described methods and toolkit. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
An efficient technique for the numerical solution of the bidomain equations.
Whiteley, Jonathan P
2008-08-01
Computing the numerical solution of the bidomain equations is widely accepted to be a significant computational challenge. In this study we extend a previously published semi-implicit numerical scheme with good stability properties that has been used to solve the bidomain equations (Whiteley, J.P. IEEE Trans. Biomed. Eng. 53:2139-2147, 2006). A new, efficient numerical scheme is developed which utilizes the observation that the only component of the ionic current that must be calculated on a fine spatial mesh and updated frequently is the fast sodium current. Other components of the ionic current may be calculated on a coarser mesh and updated less frequently, and then interpolated onto the finer mesh. Use of this technique to calculate the transmembrane potential and extracellular potential induces very little error in the solution. For the simulations presented in this study an increase in computational efficiency of over two orders of magnitude over standard numerical techniques is obtained.
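The central idea, evaluating only the fast current on the fine mesh and interpolating the slow currents from a coarser one, can be shown in a small 1D sketch. This is a hedged illustration of the two-grid idea only: the `fast` and `slow` current functions below are hypothetical placeholders, not the sodium or other ionic currents of an actual cell model, and the full bidomain solver is omitted.

```python
import numpy as np

def step_currents(v_fine, x_fine, x_coarse, fast, slow):
    """Two-grid ionic current evaluation: the fast current is evaluated
    at every fine-mesh node, the slow currents only at coarse-mesh nodes
    and then linearly interpolated back onto the fine mesh."""
    i_fast = fast(v_fine)                                 # fine mesh, every step
    v_coarse = np.interp(x_coarse, x_fine, v_fine)        # restrict V to coarse mesh
    i_slow = np.interp(x_fine, x_coarse, slow(v_coarse))  # prolong slow currents back
    return i_fast + i_slow
```

When the slowly varying currents are smooth on the scale of the coarse mesh, the interpolation error is small, which is the observation the abstract exploits to cut cost by orders of magnitude.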
Miller, Thomas F.
2017-01-01
We present a coarse-grained simulation model that is capable of simulating the minute-timescale dynamics of protein translocation and membrane integration via the Sec translocon, while retaining sufficient chemical and structural detail to capture many of the sequence-specific interactions that drive these processes. The model includes accurate geometric representations of the ribosome and Sec translocon, obtained directly from experimental structures, and interactions parameterized from nearly 200 μs of residue-based coarse-grained molecular dynamics simulations. A protocol for mapping amino-acid sequences to coarse-grained beads enables the direct simulation of trajectories for the co-translational insertion of arbitrary polypeptide sequences into the Sec translocon. The model reproduces experimentally observed features of membrane protein integration, including the efficiency with which polypeptide domains integrate into the membrane, the variation in integration efficiency upon single amino-acid mutations, and the orientation of transmembrane domains. The central advantage of the model is that it connects sequence-level protein features to biological observables and timescales, enabling direct simulation for the mechanistic analysis of co-translational integration and for the engineering of membrane proteins with enhanced membrane integration efficiency. PMID:28328943
Highly Symmetric and Congruently Tiled Meshes for Shells and Domes
Rasheed, Muhibur; Bajaj, Chandrajit
2016-01-01
We describe the generation of all possible shell and dome shapes that can be uniquely meshed (tiled) using a single type of mesh face (tile), and following a single meshing (tiling) rule that governs the mesh (tile) arrangement with maximal vertex, edge and face symmetries. Such tiling arrangements, or congruently tiled meshed shapes, are frequently found in chemical forms (fullerenes or Bucky balls, crystals, quasi-crystals, virus nano shells or capsids), and synthetic shapes (cages, sports domes, modern architectural facades). Congruently tiled meshes are both aesthetic and complete, as they support maximal mesh symmetries with minimal complexity and possess simple generation rules. Here, we generate congruent tilings and meshed shape layouts that satisfy these optimality conditions. Further, the congruent meshes are uniquely mappable to an almost regular 3D polyhedron (or its dual polyhedron), which exhibits face-transitive (and edge-transitive) congruency with at most two types of vertices (each type transitive to the other). The family of all such congruently meshed polyhedra creates a new class of meshed shapes, beyond the well-studied regular, semi-regular and quasi-regular classes, and their duals (platonic, Catalan and Johnson). While our new mesh class is infinite, we prove that there exists a unique mesh parametrization, where each member of the class can be represented by two integer lattice variables, and is moreover efficiently constructible. PMID:27563368
Guidelines and Parameter Selection for the Simulation of Progressive Delamination
NASA Technical Reports Server (NTRS)
Song, Kyongchan; Davila, Carlos G.; Rose, Cheryl A.
2008-01-01
Turon's methodology for determining optimal analysis parameters for the simulation of progressive delamination is reviewed. Recommended procedures for determining analysis parameters for efficient delamination growth predictions using the Abaqus/Standard cohesive element and relatively coarse meshes are provided for single and mixed-mode loading. The Abaqus cohesive element, COH3D8, and a user-defined cohesive element are used to develop finite element models of the double cantilever beam specimen, the end-notched flexure specimen, and the mixed-mode bending specimen to simulate progressive delamination growth in Mode I, Mode II, and mixed-mode fracture, respectively. The predicted responses are compared with their analytical solutions. The results show that for single-mode fracture, the predicted responses obtained with the Abaqus cohesive element correlate well with the analytical solutions. For mixed-mode fracture, it was found that the response predicted using COH3D8 elements depends on the damage evolution criterion that is used. The energy-based criterion overpredicts the peak loads and load-deflection response. The results predicted using a tabulated form of the BK criterion correlate well with the analytical solution and with the results predicted with the user-written element.
Finite element meshing approached as a global minimization process
DOE Office of Scientific and Technical Information (OSTI.GOV)
WITKOWSKI,WALTER R.; JUNG,JOSEPH; DOHRMANN,CLARK R.
2000-03-01
The ability to generate a suitable finite element mesh in an automatic fashion is becoming the key to automating the entire engineering analysis process. However, placing an all-hexahedron mesh in a general three-dimensional body continues to be an elusive goal. The approach investigated in this research is fundamentally different from any other known to the authors. A physical-analogy viewpoint is used to formulate the actual meshing problem, which constructs a global mathematical description of the problem. The analogy used was that of minimizing the electrical potential of a system of charged particles within a charged domain. The particles in the presented analogy represent duals to mesh elements (i.e., quads or hexes). Particle movement is governed by a mathematical functional which accounts for inter-particle repulsive, attractive and alignment forces. This functional is minimized to find the optimal location and orientation of each particle. Once the particles are connected, a mesh can easily be resolved. The mathematical description for this problem is as easy to formulate in three dimensions as it is in two or one. The meshing algorithm was developed within CoMeT. It can solve the two-dimensional meshing problem for convex and concave geometries in a purely automated fashion. Investigation of the robustness of the technique has shown a success rate of approximately 99% for the two-dimensional geometries tested. Run times to mesh a 100-element complex geometry were typically in the 10-minute range. Efficiency of the technique is still an issue that needs to be addressed. Performance is critical for most engineers generating meshes, but it was not for this project: the primary focus of this work was to investigate and evaluate a meshing algorithm and philosophy, with efficiency issues being secondary. The algorithm was also extended to mesh three-dimensional geometries.
Unfortunately, only simple geometries were tested before this project ended. The primary complexity in the extension was in the connectivity problem formulation. Defining all of the inter-particle interactions that occur in three dimensions and expressing them in mathematical relationships is very difficult.
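The core of the physical analogy above, particles repelling each other inside a charged domain until they settle into a well-spaced arrangement, can be sketched as a toy gradient-descent loop. This is a heavy simplification with an assumed pairwise 1/r potential and a clamped unit-square domain; the actual functional also includes attractive and alignment terms, which are omitted here.

```python
import numpy as np

def relax_particles(pts, n_iters=200, step=1e-3):
    """Move particles to reduce the pairwise repulsive potential sum(1/r_ij),
    clamping positions to the unit square (a crude stand-in for the
    domain-boundary forces in the actual functional)."""
    pts = pts.copy()
    for _ in range(n_iters):
        diff = pts[:, None, :] - pts[None, :, :]        # (n, n, 2) pairwise offsets
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)                  # ignore self-interaction
        # descent direction for U = sum(1/r) is +sum_j (p_i - p_j) / r_ij^3
        force = (diff / dist[..., None] ** 3).sum(axis=1)
        pts += step * force
        np.clip(pts, 0.0, 1.0, out=pts)                 # keep particles in the domain
    return pts
```

Connecting the relaxed particles (e.g., by a Delaunay-type dual construction) would then yield the element layout; that step is not shown.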
Innovations in the flotation of fine and coarse particles
NASA Astrophysics Data System (ADS)
Fornasiero, D.; Filippov, L. O.
2017-07-01
Research on the mechanisms of particle-bubble interaction has provided valuable information on how to improve the flotation of fine (<20 µm) and coarse (>100 µm) particles with novel flotation machines, which provide higher collision and attachment efficiencies between fine particles and bubbles and lower detachment of coarse particles. In addition, new grinding methods and technologies have reduced energy consumption in mining and produced better mineral liberation and, therefore, better flotation performance.
NASA Astrophysics Data System (ADS)
Bradley, A. M.
2013-12-01
My poster will describe dc3dm, a free open source software (FOSS) package that efficiently forms and applies the linear operator relating slip and traction components on a nonuniformly discretized rectangular planar fault in a homogeneous elastic (HE) half space. This linear operator implements what is called the displacement discontinuity method (DDM). The key properties of dc3dm are: 1. The mesh can be nonuniform. 2. Work and memory scale roughly linearly in the number of elements (rather than quadratically). 3. The order of accuracy of my method on a nonuniform mesh is the same as that of the standard method on a uniform mesh. Property 2 is achieved using my FOSS package hmmvp [AGU 2012]. A nonuniform mesh (property 1) is natural for some problems. For example, in a rate-state friction simulation, nucleation length, and so required element size, scales reciprocally with effective normal stress. Property 3 assures that if a nonuniform mesh is more efficient than a uniform mesh (in the sense of accuracy per element) at one level of mesh refinement, it will remain so at all further mesh refinements. I use the routine DC3D of Y. Okada, which calculates the stress tensor at a receiver resulting from a rectangular uniform dislocation source in an HE half space. On a uniform mesh, straightforward application of this Green's function (GF) yields a DDM I refer to as DDMu. On a nonuniform mesh, this same procedure leads to artifacts that degrade the order of accuracy of the DDM. I have developed a method I call IGA that implements the DDM using this GF for a nonuniformly discretized mesh having certain properties. Importantly, IGA's order of accuracy on a nonuniform mesh is the same as DDMu's on a uniform one. Boundary conditions can be periodic in the surface-parallel direction (in both directions if the GF is for a whole space), velocity on any side, and free surface. 
The mesh must have the following main property: each uniquely sized element must tile each element larger than itself. A mesh generated by a family of quadtrees has this property. Using multiple quadtrees that collectively cover the domain enables the elements to have a small aspect ratio. Mathematically, IGA works as follows. Let Mn be the nonuniform mesh. Define Mu to be the uniform mesh that is composed of the smallest element in Mn. Every element e in Mu has associated subelements in Mn that tile e. First, a linear operator Inu mapping data on Mn to Mu implements smooth (C^1) interpolation; I use cubic (Clough-Tocher) interpolation over a triangulation induced by Mn. Second, a linear operator Gu implements DDMu on Mu. Third, a linear operator Aun maps data on Mu to Mn. These three linear operators implement exact IGA (EIGA): Gn = Aun Gu Inu. Computationally, there are several more details. EIGA has the undesirable property that calculating one entry of Gn for receiver ri requires calculating multiple entries of Gu, no matter how far away from ri the smallest element is. Approximate IGA (AIGA) solves this problem by restricting EIGA to a neighborhood around each receiver. Associated with each neighborhood is a minimum element size s^i that indexes a family of operators Gu^i. The order of accuracy of AIGA is the same as that of EIGA and DDMu if each neighborhood is kept constant in spatial extent as the mesh is refined.
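The operator composition Gn = Aun Gu Inu described above can be illustrated on a toy 1D two-level mesh. All three matrices here are placeholders: the real Inu is a C^1 Clough-Tocher interpolant and Gu comes from Okada's DC3D Green's function, neither of which is reproduced.

```python
import numpy as np

# Nonuniform mesh Mn: cells of width [0.5, 0.25, 0.25].
# Uniform mesh Mu: 4 cells of width 0.25 (built from the smallest Mn element).

# Inu maps data on Mn to Mu (piecewise-constant prolongation as a placeholder
# for the smooth interpolation used by IGA).
Inu = np.array([[1.0, 0.0, 0.0],   # uniform cell 0 lies inside nonuniform cell 0
                [1.0, 0.0, 0.0],   # uniform cell 1 lies inside nonuniform cell 0
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])

# Aun maps data on Mu back to Mn (average the subcells tiling each Mn cell).
Aun = np.array([[0.5, 0.5, 0.0, 0.0],
                [0.0, 0.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 1.0]])

# Placeholder dense Green's-function operator on the uniform mesh (DDMu).
i = np.arange(4)
Gu = 1.0 / (1.0 + np.abs(i[:, None] - i[None, :]))

Gn = Aun @ Gu @ Inu                # exact IGA operator on the nonuniform mesh
traction = Gn @ np.array([1.0, 0.0, 0.0])   # response to unit slip on cell 0
```

The AIGA variant described above would avoid forming the full Gu by restricting this composition to a neighborhood of each receiver; that refinement is not shown.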
A unified data representation theory for network visualization, ordering and coarse-graining
Kovács, István A.; Mizsei, Réka; Csermely, Péter
2015-01-01
Representation of large data sets has become a key question in many scientific disciplines over the last decade. Several approaches to network visualization, data ordering and coarse-graining have accomplished this goal. However, there has been no underlying theoretical framework linking these problems. Here we show an elegant, information-theoretic data representation approach as a unified solution to network visualization, data ordering and coarse-graining. The optimal representation is the one hardest to distinguish from the original data matrix, as measured by the relative entropy. The representation of network nodes as probability distributions provides an efficient visualization method and, in one dimension, an ordering of network nodes and edges. Coarse-grained representations of the input network enable both efficient data compression and hierarchical visualization to achieve high-quality representations of larger data sets. Our unified data representation theory will help the analysis of extensive data sets by revealing the large-scale structure of complex networks in a comprehensible form. PMID:26348923
NASA Astrophysics Data System (ADS)
Zhao, Yichao; Xiao, Xinyan; Ye, Zhihao; Ji, Qiang; Xie, Wei
2018-02-01
A mechanically durable superhydrophobic copper-plated stainless steel mesh was successfully fabricated by an electrodeposition process (at a current density of 0.1 A/cm² for 35 s) followed by 1-octadecanethiol modification. The as-prepared superhydrophobic mesh displays a water contact angle of 153° and shows excellent anti-corrosion and water-oil separation properties. Compared with a bare stainless steel mesh, the corrosion current of the as-prepared superhydrophobic mesh is about one-sixth as large. Meanwhile, the as-prepared superhydrophobic mesh can continuously separate oil from oil-water mixtures. The separation efficiency of continuous separation is as high as 96% and decreases by less than 1% after ten cycles.
NASA Astrophysics Data System (ADS)
Guo, Tongqing; Chen, Hao; Lu, Zhiliang
2018-05-01
Aiming at extremely large deformations, a novel predictor-corrector-based dynamic mesh method for multi-block structured grids is proposed. In this work, dynamic mesh generation is completed in three steps. First, some typical dynamic positions are selected and high-quality multi-block grids with the same topology are generated at those positions. Then, Lagrange interpolation is used to predict the dynamic mesh at any dynamic position. Finally, a rapid elastic deformation technique corrects the small deviation between the interpolated geometric configuration and the actual instantaneous one. Compared with traditional methods, the results demonstrate that the present method offers stronger deformation capability and higher dynamic mesh quality.
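The prediction step above, Lagrange interpolation of node coordinates between meshes precomputed at sample positions, can be sketched as follows. This is a minimal illustration with an assumed data layout (a dict from sample position to node-coordinate array); the paper's elastic correction step is omitted.

```python
import numpy as np

def lagrange_predict(t, samples):
    """Predict node coordinates at parameter t by Lagrange interpolation of
    meshes precomputed at distinct sample positions.
    samples: dict mapping sample position -> (n_nodes, dim) coordinate array."""
    ts = sorted(samples)
    coords = [np.asarray(samples[tk], dtype=float) for tk in ts]
    pred = np.zeros_like(coords[0])
    for tj, cj in zip(ts, coords):
        # Lagrange basis weight for sample position tj evaluated at t
        w = np.prod([(t - tk) / (tj - tk) for tk in ts if tk != tj])
        pred = pred + w * cj
    return pred
```

With three sample positions this reproduces any motion that is quadratic in the parameter exactly, which is why the residual left for the elastic corrector stays small for smooth motions.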
Blake, Margaret Lehman; Tompkins, Connie A.; Scharp, Victoria L.; Meigh, Kimberly M.; Wambaugh, Julie
2014-01-01
Coarse coding is the activation of broad semantic fields that can include multiple word meanings and a variety of features, including those peripheral to a word’s core meaning. It is a partially domain-general process related to general discourse comprehension and contributes to both literal and non-literal language processing. Adults with damage to the right cerebral hemisphere (RHD) and a coarse coding deficit are particularly slow to activate features of words that are relatively distant or peripheral. This manuscript reports a pre-efficacy study of Contextual Constraint Treatment (CCT), a novel, implicit treatment designed to increase the efficiency of coarse coding with the goal of improving narrative comprehension and other language performance that relies on coarse coding. Participants were four adults with RHD. The study used a single-subject controlled experimental design across subjects and behaviors. The treatment involves pre-stimulation, using a hierarchy of strong- and moderately-biased contexts, to prime the intended distantly-related features of critical stimulus words. Three of the four participants exhibited gains in auditory narrative discourse comprehension, the primary outcome measure. All participants exhibited generalization to untreated items. No strong generalization to processing nonliteral language was evident. The results indicate that CCT yields both improved efficiency of the coarse coding process and generalization to narrative comprehension. PMID:24983133
Santana, Jose; Marrero, Domingo; Macías, Elsa; Mena, Vicente; Suárez, Álvaro
2017-07-21
Ubiquitous sensing allows smart cities to take control of many parameters (e.g., road traffic, air or noise pollution levels, etc.). An inexpensive Wireless Mesh Network can be used as an efficient way to transport sensed data. When that mesh is autonomously powered (e.g., solar powered), it constitutes an ideal portable network system which can be deployed when needed. Nevertheless, its power consumption must be restrained to extend its operational cycle and for preserving the environment. To this end, our strategy fosters wireless interface deactivation among nodes which do not participate in any route. As we show, this contributes to a significant power saving for the mesh. Furthermore, our strategy is wireless-friendly, meaning that it gives priority to deactivation of nodes receiving (and also causing) interferences from (to) the rest of the smart city. We also show that a routing protocol can adapt to this strategy in which certain nodes deactivate their own wireless interfaces.
Marrero, Domingo; Macías, Elsa; Mena, Vicente
2017-01-01
Ubiquitous sensing allows smart cities to take control of many parameters (e.g., road traffic, air or noise pollution levels, etc.). An inexpensive Wireless Mesh Network can be used as an efficient way to transport sensed data. When that mesh is autonomously powered (e.g., solar powered), it constitutes an ideal portable network system which can be deployed when needed. Nevertheless, its power consumption must be restrained to extend its operational cycle and for preserving the environment. To this end, our strategy fosters wireless interface deactivation among nodes which do not participate in any route. As we show, this contributes to a significant power saving for the mesh. Furthermore, our strategy is wireless-friendly, meaning that it gives priority to deactivation of nodes receiving (and also causing) interferences from (to) the rest of the smart city. We also show that a routing protocol can adapt to this strategy in which certain nodes deactivate their own wireless interfaces. PMID:28754013
Robust moving mesh algorithms for hybrid stretched meshes: Application to moving boundaries problems
NASA Astrophysics Data System (ADS)
Landry, Jonathan; Soulaïmani, Azzeddine; Luke, Edward; Ben Haj Ali, Amine
2016-12-01
A robust Mesh-Mover Algorithm (MMA) approach is designed to adapt the meshes of moving-boundary problems. A new methodology is developed from the best combination of well-known algorithms in order to preserve the quality of the initial meshes. In most situations, MMAs distribute mesh deformation while preserving good mesh quality. However, invalid meshes are generated when the motion is complex and/or involves multiple bodies. After studying a few MMA limitations, we propose the following approach: use the Inverse Distance Weighting (IDW) function to produce the displacement field, then apply the Geometric Element Transformation Method (GETMe) smoothing algorithms to improve the resulting mesh quality, and use an untangler to revert negative elements. The proposed approach has proven efficient for adapting meshes to various realistic aerodynamic motions: a symmetric wing that has undergone large tip bending and twisting, and the high-lift components of a swept wing moving through different flight stages. Finally, the fluid flow problem has been solved on the moved meshes, producing results close to experimental ones. However, for situations where moving boundaries are too close to each other, further improvements need to be made or other approaches should be taken, such as an overset grid method.
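The IDW stage of the pipeline above can be sketched as follows. This is a minimal illustration with an assumed weighting exponent; the GETMe smoothing and untangling stages, which do the quality-preservation work, are omitted.

```python
import numpy as np

def idw_displace(interior, boundary, boundary_disp, power=3.0):
    """Propagate known boundary-node displacements to interior nodes using
    normalized inverse-distance weights.
    interior: (n, d) interior node coordinates
    boundary: (m, d) boundary node coordinates
    boundary_disp: (m, d) prescribed boundary displacements"""
    d = np.linalg.norm(interior[:, None, :] - boundary[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-12) ** power   # clamp avoids division by zero
    w /= w.sum(axis=1, keepdims=True)          # weights sum to 1 per interior node
    return w @ boundary_disp                   # (n, d) interpolated displacements
```

Nodes near the boundary thus follow it almost rigidly while distant nodes barely move, which is what lets IDW distribute large deformations smoothly through the volume.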
Patch-based Adaptive Mesh Refinement for Multimaterial Hydrodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lomov, I; Pember, R; Greenough, J
2005-10-18
We present a patch-based direct Eulerian adaptive mesh refinement (AMR) algorithm for modeling real equation-of-state, multimaterial compressible flow with strength. Our approach to AMR uses the hierarchical, structured grid approach first developed by Berger and Oliger (1984). The grid structure is dynamic in time and is composed of nested uniform rectangular grids of varying resolution. The integration scheme on the grid hierarchy is a recursive procedure in which the coarse grids are advanced, then the fine grids are advanced multiple steps to reach the same time, and finally the coarse and fine grids are synchronized to remove conservation errors introduced during the separate advances. The methodology presented here is based on a single-grid algorithm developed for multimaterial gas dynamics by Colella et al. (1993), refined by Greenough et al. (1995), and extended to the solution of solid mechanics problems with significant strength by Lomov and Rubin (2003). The single-grid algorithm uses a second-order Godunov scheme with an approximate single-fluid Riemann solver and a volume-of-fluid treatment of material interfaces. The method also uses a non-conservative treatment of the deformation tensor and an acoustic approximation for shear waves in the Riemann solver. This departure from a strict application of the higher-order Godunov methodology to the equations of solid mechanics is justified by the fact that highly nonlinear behavior of shear stresses is rare. This algorithm is implemented in two codes, Geodyn and Raptor, the latter of which is a coupled rad-hydro code. The present discussion is solely concerned with hydrodynamics modeling. Results from a number of simulations of flows with and without strength will be presented.
3D CSEM inversion based on goal-oriented adaptive finite element method
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method, which efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt a dual-grid approach in which the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent from the fine meshes generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability is obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library.
We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled source EM data generated for a complex 3D offshore model with significant seafloor topography.
A voxel-based finite element model for the prediction of bladder deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai Xiangfei; Herk, Marcel van; Hulshof, Maarten C. C. M.
2012-01-15
Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent wide application of FE models: (1) the labor required to construct a FE model with a high-quality mesh and (2) the long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and comparing the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in the previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: A FE model with a hexahedral mesh can be quickly and automatically constructed.
No substantial differences were observed between the simulation results of the tetrahedral mesh and the hexahedral meshes (<1% difference in mean Dice similarity coefficient relative to manual contours and <0.02 cm difference in mean standard deviation of residual errors). The average equation-solving time (without manual intervention) for the first two types of hexahedral meshes increased to 2.3 h and 2.6 h compared to the 1.1 h needed for the tetrahedral mesh; however, the low-resolution nonuniform hexahedral mesh dramatically decreased the equation-solving time to 3 min without reducing accuracy. Conclusions: Voxel-based mesh generation allows fast, automatic, and robust creation of finite element bladder models directly from binary segmentation images without user intervention. Even the low-resolution voxel-based hexahedral mesh yields comparable accuracy in bladder shape prediction and is more than 20 times faster computationally than the tetrahedral mesh. This approach makes it more feasible and accessible to apply the FE method to model bladder deformation in adaptive radiotherapy.
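The voxel-based mesh generation idea above, one hexahedral element per filled segmentation voxel with nodes shared on the lattice, can be sketched as follows. This is a uniform-resolution illustration; the low-resolution nonuniform coarsening described in the abstract is not reproduced.

```python
import numpy as np

def voxels_to_hex_mesh(mask, spacing=1.0):
    """Turn a 3D binary segmentation into a hexahedral mesh: one 8-node brick
    per filled voxel, with bricks sharing lattice nodes with their neighbors.
    Returns (nodes, elems): (n_nodes, 3) coordinates and (n_elems, 8) indices."""
    nz, ny, nx = mask.shape

    def nid(i, j, k):
        # global index of the lattice node at slice i, row j, column k
        return (i * (ny + 1) + j) * (nx + 1) + k

    elems = []
    for i, j, k in zip(*np.nonzero(mask)):
        elems.append([nid(i, j, k),     nid(i, j, k + 1),
                      nid(i, j + 1, k + 1), nid(i, j + 1, k),
                      nid(i + 1, j, k), nid(i + 1, j, k + 1),
                      nid(i + 1, j + 1, k + 1), nid(i + 1, j + 1, k)])
    used = sorted({n for e in elems for n in e})
    remap = {n: m for m, n in enumerate(used)}
    # decode global node index back into (x, y, z) lattice coordinates
    nodes = np.array([(n % (nx + 1),
                       (n // (nx + 1)) % (ny + 1),
                       n // ((nx + 1) * (ny + 1))) for n in used], dtype=float) * spacing
    elems = np.array([[remap[n] for n in e] for e in elems])
    return nodes, elems
```

Because the elements come directly from the voxel lattice, the construction needs no user intervention, which is the property the conclusions above emphasize.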
Kim, Wanjung; Kim, Soyeon; Kang, Iljoong; Jung, Myung Sun; Kim, Sung June; Kim, Jung Kyu; Cho, Sung Min; Kim, Jung-Hyun; Park, Jong Hyeok
2016-05-10
Herein, we report a tailored Ag mesh electrode coated with poly(3,4-ethylenedioxythiophene) (PEDOT) polymer on a flexible polyethylene terephthalate (PET) substrate. The introduction of this highly conductive polymer solves the existing problems of Ag mesh-type transparent conductive electrodes, such as high pitch, roughness, current inhomogeneity, and adhesion problems between the Ag mesh grid and PEDOT polymer or PET substrate, to result in excellent electron spreading from the discrete Ag mesh onto the entire surface without sacrificing sheet conductivity and optical transparency. Based on this hybrid anode, we demonstrate highly efficient flexible polymer solar cells (PSCs) with a high fill factor of 67.11 %, which results in a power conversion efficiency (PCE) of 6.9 % based on a poly({4,8-bis[(2-ethylhexyl)oxy]benzo[1,2-b:4,5-b'] dithiophene-2,6-diyl}{3-fluoro-2-[(2-ethylhexyl) carbonyl]thieno[3,4-b]thiophenediyl}):[6,6]-phenyl-C71 -butyric acid methyl ester bulk heterojunction device. Furthermore, the PSC device with the Ag mesh electrode also exhibits a good mechanical bending stability, as indicated by a 70 % retention of the initial PCE after 500 bending cycles compared with the PSC device with a PET/indium tin oxide electrode, which retained 0 % of the initial PCE after 300 bending cycles. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling
NASA Astrophysics Data System (ADS)
Zhong, Zichun; Gu, Xuejun; Mao, Weihua; Wang, Jing
2016-02-01
A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on a voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computation efficiency and motion estimation accuracy of SMEIR for 4D-CBCT through developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This will help with the accurate visualization and motion estimation of tumors on organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh-based strategy outperformed the conventional Feldkamp-Davis-Kress, iterative total variation minimization, original SMEIR and single meshing methods based on both qualitative and quantitative evaluations.
4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling.
Zhong, Zichun; Gu, Xuejun; Mao, Weihua; Wang, Jing
2016-02-07
A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on a voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computation efficiency and motion estimation accuracy of SMEIR for 4D-CBCT through developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This will help with the accurate visualization and motion estimation of tumors on organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh-based strategy outperformed the conventional Feldkamp-Davis-Kress, iterative total variation minimization, original SMEIR and single meshing methods based on both qualitative and quantitative evaluations.
4D cone-beam CT reconstruction using multi-organ meshes for sliding motion modeling
Zhong, Zichun; Gu, Xuejun; Mao, Weihua; Wang, Jing
2016-01-01
A simultaneous motion estimation and image reconstruction (SMEIR) strategy was proposed for 4D cone-beam CT (4D-CBCT) reconstruction and showed excellent results in both phantom and lung cancer patient studies. In the original SMEIR algorithm, the deformation vector field (DVF) was defined on a voxel grid and estimated by enforcing a global smoothness regularization term on the motion fields. The objective of this work is to improve the computation efficiency and motion estimation accuracy of SMEIR for 4D-CBCT through developing a multi-organ meshing model. Feature-based adaptive meshes were generated to reduce the number of unknowns in the DVF estimation and accurately capture the organ shapes and motion. Additionally, the discontinuity in the motion fields between different organs during respiration was explicitly considered in the multi-organ mesh model. This will help with the accurate visualization and motion estimation of tumors on organ boundaries in 4D-CBCT. To further improve the computational efficiency, a GPU-based parallel implementation was designed. The performance of the proposed algorithm was evaluated on a synthetic sliding motion phantom, a 4D NCAT phantom, and four lung cancer patients. The proposed multi-organ mesh-based strategy outperformed the conventional Feldkamp–Davis–Kress, iterative total variation minimization, original SMEIR and single meshing methods based on both qualitative and quantitative evaluations. PMID:26758496
A hybrid framework for coupling arbitrary summation-by-parts schemes on general meshes
NASA Astrophysics Data System (ADS)
Lundquist, Tomas; Malan, Arnaud; Nordström, Jan
2018-06-01
We develop a general interface procedure to couple both structured and unstructured parts of a hybrid mesh in a non-collocated, multi-block fashion. The target is to gain optimal computational efficiency in fluid dynamics simulations involving complex geometries. While guaranteeing stability, the proposed procedure is optimized for accuracy and requires minimal algorithmic modifications to already existing schemes. Initial numerical investigations confirm considerable efficiency gains compared to non-hybrid calculations of up to an order of magnitude.
A Numerical Study of Mesh Adaptivity in Multiphase Flows with Non-Newtonian Fluids
NASA Astrophysics Data System (ADS)
Percival, James; Pavlidis, Dimitrios; Xie, Zhihua; Alberini, Federico; Simmons, Mark; Pain, Christopher; Matar, Omar
2014-11-01
We present an investigation into the computational efficiency benefits of dynamic mesh adaptivity in the numerical simulation of transient multiphase fluid flow problems involving non-Newtonian fluids. Such fluids appear in a range of industrial applications, from printing inks to toothpastes, and introduce new challenges for mesh adaptivity due to the additional ``memory'' of viscoelastic fluids. Nevertheless, the multiscale nature of these flows implies huge potential benefits for a successful implementation. The study is performed using the open source package Fluidity, which couples an unstructured mesh control volume finite element solver for the multiphase Navier-Stokes equations to a dynamic anisotropic mesh adaptivity algorithm, based on estimated solution interpolation error criteria, and a conservative mesh-to-mesh interpolation routine. The code is applied to problems involving rheologies ranging from simple Newtonian to shear-thinning to viscoelastic materials and verified against experimental data for various industrial and microfluidic flows. This work was undertaken as part of the EPSRC MEMPHIS programme grant EP/K003976/1.
NASA Astrophysics Data System (ADS)
Tao, Y. B.; Liu, Y. W.; Gao, F.; Chen, X. Y.; He, Y. L.
2009-09-01
An anisotropic porous media model for the mesh regenerator used in a pulse tube refrigerator (PTR) is established. Formulas for the permeability and Forchheimer coefficient are derived which include the effects of regenerator configuration and geometric parameters, oscillating flow, operating frequency, and cryogenic temperature. Then, the fluid flow and heat transfer performance of the mesh regenerator is numerically investigated under different mesh geometric parameters and material properties. The results indicate that the cooling power of the PTR increases with increasing specific heat capacity and density of the regenerator mesh material, and decreases with increasing penetration depth and thermal conductivity ratio (a). The cooling power at a = 0.1 is 0.5-2.0 W higher than that at a = 1. Optimizing the filling proportions of different mesh configurations (such as 75% #200 twill and 25% #250 twill) and adopting a multi-segment regenerator with stainless steel meshes at the cold end can enhance the regenerator's efficiency and achieve better heat transfer performance.
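The role the permeability K and Forchheimer coefficient play in such a porous-media model can be illustrated with the generic Darcy-Forchheimer pressure-gradient relation. This is the textbook form, not the paper's derived anisotropic formulas, which additionally fold in frequency and cryogenic-temperature effects.

```python
import math

def pressure_gradient(u, mu, rho, K, c_F):
    """Darcy-Forchheimer pressure gradient for flow through a porous matrix:
    a viscous Darcy term linear in velocity plus an inertial Forchheimer term
    quadratic in velocity (u*|u| keeps the sign correct for oscillating flow).
    u: superficial velocity, mu: dynamic viscosity, rho: density,
    K: permeability, c_F: Forchheimer (inertial) coefficient."""
    return -(mu / K) * u - (rho * c_F / math.sqrt(K)) * u * abs(u)
```

The u*|u| form matters for regenerators because the flow oscillates: the resistive gradient must always oppose the instantaneous flow direction.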
A Moving Mesh Finite Element Algorithm for Singular Problems in Two and Three Space Dimensions
NASA Astrophysics Data System (ADS)
Li, Ruo; Tang, Tao; Zhang, Pingwen
2002-04-01
A framework for adaptive meshes based on the Hamilton-Schoen-Yau theory was proposed by Dvinsky. In a recent work (2001, J. Comput. Phys. 170, 562-588), we extended Dvinsky's method to provide an efficient moving mesh algorithm which compared favorably with the previously proposed schemes in terms of simplicity and reliability. In this work, we will further extend the moving mesh methods based on harmonic maps to deal with mesh adaptation in three space dimensions. In obtaining the variational mesh, we will solve an optimization problem with some appropriate constraints, which is in contrast to the traditional method of solving the Euler-Lagrange equation directly. The key idea of this approach is to update the interior and boundary grids simultaneously, rather than considering them separately. Application of the proposed moving mesh scheme is illustrated with some two- and three-dimensional problems with large solution gradients. The numerical experiments show that our methods can accurately resolve detailed features of singular problems in 3D.
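A much simpler one-dimensional analogue of moving-mesh adaptation, equidistributing a monitor function rather than solving the paper's constrained harmonic-map optimization, can be sketched as follows; the monitor choice is an illustrative assumption.

```python
# 1D equidistribution sketch: place nodes so that each cell carries an
# equal share of the integral of a monitor function M(x).  This is a
# toy analogue of moving-mesh adaptation, not the harmonic-map method.

def equidistribute(monitor_vals, xs):
    """Given monitor values on a fine background grid xs, return a new
    mesh of len(xs) points with equal monitor integral per cell."""
    n = len(xs)
    cum = [0.0]                      # cumulative trapezoidal integral
    for i in range(1, n):
        cum.append(cum[-1] + 0.5 * (monitor_vals[i] + monitor_vals[i - 1]) * (xs[i] - xs[i - 1]))
    total = cum[-1]
    targets = [total * k / (n - 1) for k in range(n)]
    new_xs, j = [], 0
    for t in targets:
        while j < n - 2 and cum[j + 1] < t:
            j += 1
        # linear inverse interpolation of the cumulative integral
        span = cum[j + 1] - cum[j]
        frac = 0.0 if span == 0 else (t - cum[j]) / span
        new_xs.append(xs[j] + frac * (xs[j + 1] - xs[j]))
    return new_xs

import math
# Arc-length monitor for a steep tanh front: nodes cluster near x = 0.5.
xs = [i / 200 for i in range(201)]
monitor = [math.sqrt(1 + (20 * (1 - math.tanh(20 * (x - 0.5))**2))**2) for x in xs]
new_xs = equidistribute(monitor, xs)
```

As in the paper's higher-dimensional setting, the boundary nodes stay on the boundary while interior nodes migrate toward the large-gradient region.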
Martínez, Aingeru; Pérez, Javier; Molinero, Jon; Sagarduy, Mikel; Pozo, Jesús
2015-01-15
Although temporary streams represent a high proportion of the total number and length of running waters, the study of intermittent streams has historically received less attention than that of perennial ones. The goal of the present study was to assess the effects of flow cessation on litter decomposition in calcareous streams under oceanic climate conditions. For this, leaf litter of alder was incubated in four streams (S1, S2, S3 and S4) with different flow regimes (S3 and S4 with zero-flow periods) in northern Spain. To distinguish the relative importance and contribution of decomposers and detritivores, fine- and coarse-mesh litter bags were used. We determined processing rates, leaf-C, -N and -P concentrations, invertebrate colonization in coarse bags, and benthic invertebrates. Decomposition rates in fine bags were similar among streams. In coarse bags, only one of the intermittent streams, S4, showed a lower rate than the others, as a consequence of lower invertebrate colonization. The material incubated in fine bags presented higher leaf-N and -P concentrations than that in the coarse ones, except in S4, indicating that decomposition in this stream was driven mainly by microorganisms. Benthic macroinvertebrate and shredder density and biomass were lower in intermittent streams than in perennial ones. However, the bags in S3 held greater numbers of total macroinvertebrates and shredders than the benthos. The most plausible explanation is that the fauna find in the bags a food substrate less affected by calcite precipitation, which is common in the streambed at this site. Decomposition rate in coarse bags was positively related to the associated shredder biomass.
Thus, droughts in streams under oceanic climate conditions mainly affect macroinvertebrate detritivore activity, although macroinvertebrates may show distinct behavior imposed by the physicochemical properties of the water, mainly travertine precipitation, which can override the effects of flow intermittence.
Sander, S; Behnisch, J; Wagner, M
2017-02-01
With the MBBR IFAS (moving bed biofilm reactor integrated fixed-film activated sludge) process, the biomass required for biological wastewater treatment is either suspended or fixed on free-moving plastic carriers in the reactor. Coarse- or fine-bubble aeration systems are used in the MBBR IFAS process. In this study, the oxygen transfer efficiency (OTE) of a coarse-bubble aeration system was improved significantly by the addition of the investigated carriers, even in-process (∼1% per vol-% of added carrier material). In a fine-bubble aeration system, the carriers had little or no effect on OTE. The effect of carriers on OTE strongly depends on the properties of the aeration system, the volumetric filling rate of the carriers, the properties of the carrier media, and the reactor geometry. This study shows that the effect of carriers on OTE is less pronounced in-process than under clean-water conditions. When designing new carriers in order to improve their effect on OTE further, suppliers should take this into account. Although the energy efficiency and cost effectiveness of coarse-bubble aeration systems can be improved significantly by the addition of carriers, fine-bubble aeration systems remain the more efficient and cost-effective alternative for aeration when applying the investigated MBBR IFAS process.
A hierarchical structure for automatic meshing and adaptive FEM analysis
NASA Technical Reports Server (NTRS)
Kela, Ajay; Saxena, Mukul; Perucchio, Renato
1987-01-01
A new algorithm is discussed for automatically generating, from solid models of mechanical parts, finite element meshes organized as spatially addressable quaternary trees (for 2-D work) or octal trees (for 3-D work). Because such meshes are inherently hierarchical as well as spatially addressable, they permit efficient substructuring techniques to be used for both global analysis and incremental remeshing and reanalysis. The global and incremental techniques are summarized, and results are presented from an experimental closed-loop 2-D system in which meshing, analysis, error evaluation, and remeshing and reanalysis are done automatically and adaptively. The implementation of the 3-D work is briefly discussed.
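The spatially addressable tree organization can be sketched with integer cell keys, where children and geometry follow from simple arithmetic on (level, i, j) triples; this toy quadtree is illustrative only, not the paper's mesh generator.

```python
# Sketch of a spatially addressable quadtree: each cell is keyed by
# (level, i, j), so children and geometric bounds are found by integer
# arithmetic instead of pointer chasing.

def children(level, i, j):
    """The four child cells one level down."""
    return [(level + 1, 2 * i + di, 2 * j + dj) for di in (0, 1) for dj in (0, 1)]

def cell_box(level, i, j):
    """Geometric bounds (x0, y0, x1, y1) of a cell in the unit square."""
    h = 1.0 / (1 << level)
    return (i * h, j * h, (i + 1) * h, (j + 1) * h)

def refine(cells, needs_split):
    """Replace every cell flagged by needs_split with its four children."""
    out = []
    for c in cells:
        out.extend(children(*c) if needs_split(c) else [c])
    return out

# Refine the root cell twice near the origin corner.
mesh = [(0, 0, 0)]
for _ in range(2):
    mesh = refine(mesh, lambda c: cell_box(*c)[0] < 0.25 and cell_box(*c)[1] < 0.25)
```

Because the key encodes the position, incremental remeshing only touches the flagged subtrees, which is the property the adaptive reanalysis loop above relies on.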
NASA Astrophysics Data System (ADS)
Bercea, Gheorghe-Teodor; McRae, Andrew T. T.; Ham, David A.; Mitchell, Lawrence; Rathgeber, Florian; Nardi, Luigi; Luporini, Fabio; Kelly, Paul H. J.
2016-10-01
We present a generic algorithm for numbering and then efficiently iterating over the data values attached to an extruded mesh. An extruded mesh is formed by replicating an existing mesh, assumed to be unstructured, to form layers of prismatic cells. Applications of extruded meshes include, but are not limited to, the representation of three-dimensional high aspect ratio domains employed by geophysical finite element simulations. These meshes are structured in the extruded direction. The algorithm presented here exploits this structure to avoid the performance penalty traditionally associated with unstructured meshes. We evaluate the implementation of this algorithm in the Firedrake finite element system on a range of low compute intensity operations which constitute worst cases for data layout performance exploration. The experiments show that having structure along the extruded direction enables the cost of the indirect data accesses to be amortized after 10-20 layers as long as the underlying mesh is well ordered. We characterize the resulting spatial and temporal reuse in a representative set of both continuous-Galerkin and discontinuous-Galerkin discretizations. On meshes with realistic numbers of layers the performance achieved is between 70% and 90% of a theoretical hardware-specific limit.
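The amortization argument can be sketched as follows: one indirect lookup locates a column of the extruded mesh, after which the layers are visited by direct strided offsets. The layout and names here are illustrative assumptions, not Firedrake's implementation.

```python
# Sketch of extruded-mesh iteration: the dof data for one base-mesh
# entity are stored contiguously through the layers, so the indirect
# lookup into the unstructured base mesh is paid once per column and
# amortized over all layers.

def column_sum(data, base_offsets, column, num_layers, dofs_per_cell):
    """Sum all dof values in one extruded column with a single
    indirect access (base_offsets[column])."""
    start = base_offsets[column]          # the only indirect lookup
    total = 0.0
    for layer in range(num_layers):       # direct, strided access
        base = start + layer * dofs_per_cell
        total += sum(data[base: base + dofs_per_cell])
    return total

# Tiny example: 2 columns, 3 layers, 2 dofs per prismatic cell.
data = [float(i) for i in range(12)]
offsets = [0, 6]      # column 0 -> data[0:6], column 1 -> data[6:12]
s0 = column_sum(data, offsets, 0, num_layers=3, dofs_per_cell=2)
```

With many layers, the per-column indirection cost becomes negligible relative to the strided inner loop, which is the 10-20 layer amortization observed in the paper.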
Grouper: a compact, streamable triangle mesh data structure.
Luffel, Mark; Gurung, Topraj; Lindstrom, Peter; Rossignac, Jarek
2014-01-01
We present Grouper: an all-in-one compact file format, random-access data structure, and streamable representation for large triangle meshes. Similarly to the recently published SQuad representation, Grouper represents the geometry and connectivity of a mesh by grouping vertices and triangles into fixed-size records, most of which store two adjacent triangles and a shared vertex. Unlike SQuad, however, Grouper interleaves geometry with connectivity and uses a new connectivity representation to ensure that vertices and triangles can be stored in a coherent order that enables memory-efficient sequential stream processing. We present a linear-time construction algorithm that allows streaming out Grouper meshes using a small memory footprint while preserving the initial ordering of vertices. As a part of this construction, we show how the problem of assigning vertices and triangles to groups reduces to a well-known NP-hard optimization problem, and present a simple yet effective heuristic solution that performs well in practice. Our array-based Grouper representation also doubles as a triangle mesh data structure that allows direct access to vertices and triangles. Storing only about two integer references per triangle--i.e., fewer than the three vertex references stored with each triangle in a conventional indexed mesh format--Grouper answers both incidence and adjacency queries in amortized constant time. Our compact representation enables data-parallel processing on multicore computers, instant partitioning and fast transmission for distributed processing, as well as efficient out-of-core access. We demonstrate the versatility and performance benefits of Grouper using a suite of example meshes and processing kernels.
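The grouping idea, pairing edge-adjacent triangles so most records hold two triangles plus a shared vertex, can be sketched with a greedy matcher; this toy shows the pairing step only and is not Grouper's actual record encoding or its heuristic for the underlying NP-hard assignment problem.

```python
# Toy illustration of the pairing step: greedily pair triangles that
# share an edge, so most output records reference two adjacent
# triangles (a single-triangle record is emitted when no unpaired
# edge-neighbor remains).

def pair_triangles(triangles):
    """triangles: list of (a, b, c) vertex-index tuples.
    Returns a list of records, each a tuple of one or two triangle indices."""
    edge_to_tri = {}
    for t, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_tri.setdefault(frozenset(e), []).append(t)
    paired, records = set(), []
    for t in range(len(triangles)):
        if t in paired:
            continue
        mate = None
        edges = zip(triangles[t], triangles[t][1:] + triangles[t][:1])
        for e in [frozenset(p) for p in edges]:
            for u in edge_to_tri[e]:
                if u != t and u not in paired:
                    mate = u
                    break
            if mate is not None:
                break
        if mate is None:
            records.append((t,))
            paired.add(t)
        else:
            records.append((t, mate))
            paired.update((t, mate))
    return records

records = pair_triangles([(0, 1, 2), (1, 2, 3)])   # -> [(0, 1)]
```

A greedy pass like this is linear in the mesh size, in the spirit of the paper's linear-time, small-footprint construction.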
NASA Astrophysics Data System (ADS)
Lee, S.; Kim, S.; Roh, Y.; Son, Y.
2016-12-01
Tropical forests play a critical role in mitigating climate change, because they sequester more carbon than any other terrestrial ecosystem. In addition, coarse woody debris is one of the main carbon storages, accounting for 10-40% of tropical forest carbon. Carbon in coarse woody debris is released by the activities of various organisms, and termite feeding activity in particular is known to be a main process in tropical forests. Therefore, investigating the effects of termite activities on coarse woody debris decomposition is important to understanding the carbon cycles of tropical forests. This study was conducted in an intact lowland mixed dipterocarp forest (MDF) of Brunei Darussalam, and three main MDF tree species (Dillenia beccariana, Macaranga bancana, and Elateriospermum tapos) were selected. Coarse woody debris samples 10 cm in both diameter and length were prepared, and half of the samples were covered twice with nylon net (mesh size 1.5 mm × 1.5 mm) to exclude termites. Three permanent plots were installed in January 2015, and 36 samples per plot (3 species × 2 treatments × 6 replicates) were placed at the soil surface. Each sample was weighed initially and then at 6-month intervals until July 2016. On average, uncovered and covered samples lost 32.4% and 20.0% of their initial weights, respectively. Weight loss percentage was highest in uncovered samples of M. bancana (43.8%) and lowest in covered samples of E. tapos (14.7%). Two-way ANOVA showed that the effects of tree species and the termite exclusion treatment on coarse woody debris decomposition were statistically significant (P < 0.001). The interaction between tree species and the termite exclusion treatment was also significant (P < 0.001). The results reveal that termite activity promotes coarse woody debris decomposition and that its influence differs among tree species.
In addition, repeated-measures ANOVA showed that weight loss rates accelerated over time and that this acceleration differed significantly among tree species (P < 0.05) and between termite exclusion treatments (P < 0.001). * Supported by research grants from the National Research Foundation of Korea (R1D1A1A01) * Supported by BK21Plus Eco-Leader Education Center.
Finding Regions of Interest on Toroidal Meshes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Kesheng; Sinha, Rishi R; Jones, Chad
2011-02-09
Fusion promises to provide clean and safe energy, and a considerable amount of research effort is underway to turn this aspiration into reality. This work focuses on a building block for analyzing data produced from the simulation of microturbulence in magnetic confinement fusion devices: the task of efficiently extracting regions of interest. Like many other simulations where a large amount of data are produced, the careful study of ``interesting'' parts of the data is critical to gain understanding. In this paper, we present an efficient approach for finding these regions of interest. Our approach takes full advantage of the underlying mesh structure in magnetic coordinates to produce a compact representation of the mesh points inside the regions and an efficient connected component labeling algorithm for constructing regions from points. This approach scales linearly with the surface area of the regions of interest instead of the volume, as shown with both computational complexity analysis and experimental measurements. Furthermore, this new approach is hundreds of times faster than a recently published method based on Cartesian coordinates.
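The construction of regions from flagged points by connected component labeling can be sketched with a generic union-find pass; unlike the paper's method, this sketch does not exploit the toroidal mesh structure in magnetic coordinates, so it scales with region volume rather than surface area.

```python
# Sketch of building regions of interest by connected-component
# labeling over flagged mesh points, using union-find with path halving.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def label_regions(points, neighbors):
    """points: set of flagged mesh points; neighbors(p): adjacent points.
    Returns a dict mapping each point to its region's representative label."""
    parent = {p: p for p in points}
    for p in points:
        for q in neighbors(p):
            if q in parent:                       # union flagged neighbors
                parent[find(parent, p)] = find(parent, q)
    return {p: find(parent, p) for p in points}

# Example on a 1D chain: two separate runs of flagged points.
pts = {0, 1, 2, 5, 6}
labels = label_regions(pts, lambda p: (p - 1, p + 1))
```

On a real mesh, `neighbors` would return the mesh-adjacent points; the paper's contribution is doing this bookkeeping compactly on the toroidal mesh so only region boundaries need to be tracked.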
Implementation of tetrahedral-mesh geometry in Monte Carlo radiation transport code PHITS
NASA Astrophysics Data System (ADS)
Furuta, Takuya; Sato, Tatsuhiko; Han, Min Cheol; Yeom, Yeon Soo; Kim, Chan Hyeong; Brown, Justin L.; Bolch, Wesley E.
2017-06-01
A new function to treat tetrahedral-mesh geometry was implemented in the Particle and Heavy Ion Transport code System (PHITS). To accelerate the computational speed in the transport process, an original algorithm was introduced to initially prepare decomposition maps for the container box of the tetrahedral-mesh geometry. The computational performance was tested by conducting radiation transport simulations of 100 MeV protons and 1 MeV photons in a water phantom represented by a tetrahedral mesh. The simulation was repeated with a varying number of mesh elements, and the required computational times were then compared with those of the conventional voxel representation. Our results show that the computational costs for each boundary crossing of a region mesh are essentially equivalent for both representations. This study suggests that the tetrahedral-mesh representation offers not only a flexible description of the transport geometry but also improved computational efficiency for the radiation transport. Owing to the adaptability of tetrahedrons in both size and shape, dosimetrically equivalent objects can be represented with far fewer tetrahedrons than voxels. Our study additionally included dosimetric calculations using a computational human phantom. A significant acceleration of the computational speed, about 4 times, was confirmed by the adoption of a tetrahedral mesh over the traditional voxel mesh geometry.
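The decomposition-map idea, precomputing which tetrahedra can occupy each cell of a grid over the container box so that particle tracking only tests a few candidates, can be sketched as follows; the bounding-box bucketing here is an illustrative stand-in, not PHITS's actual algorithm.

```python
# Sketch of a "decomposition map": bucket each tetrahedron's bounding
# box into the cells of a uniform grid over the container box, so a
# point-location query only needs to test the candidate tetrahedra
# registered in its cell.

def build_decomposition_map(tets, box_min, box_max, n):
    """tets: list of 4-tuples of (x, y, z) vertices.  Returns a dict
    (i, j, k) -> list of tet indices whose bounding box overlaps the cell."""
    cell = [(box_max[d] - box_min[d]) / n for d in range(3)]
    grid = {}
    for t, verts in enumerate(tets):
        lo = [min(v[d] for v in verts) for d in range(3)]
        hi = [max(v[d] for v in verts) for d in range(3)]
        rng = [range(max(0, int((lo[d] - box_min[d]) / cell[d])),
                     min(n - 1, int((hi[d] - box_min[d]) / cell[d])) + 1)
               for d in range(3)]
        for i in rng[0]:
            for j in rng[1]:
                for k in rng[2]:
                    grid.setdefault((i, j, k), []).append(t)
    return grid

# One small tet in the corner of a unit container box, 4x4x4 grid.
tets = [((0, 0, 0), (0.2, 0, 0), (0, 0.2, 0), (0, 0, 0.2))]
dmap = build_decomposition_map(tets, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0), n=4)
```

The exact point-in-tetrahedron test is then run only against the short candidate list, which is where the speed-up over scanning all tetrahedra comes from.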
ERIC Educational Resources Information Center
Zhao, Weiyi
2011-01-01
Wireless mesh networks (WMNs) have recently emerged to be a cost-effective solution to support large-scale wireless Internet access. They have numerous applications, such as broadband Internet access, building automation, and intelligent transportation systems. One research challenge for Internet-based WMNs is to design efficient mobility…
Arbitrary-level hanging nodes for adaptive hp-FEM approximations in 3D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavel Kus; Pavel Solin; David Andrs
2014-11-01
In this paper we discuss constrained approximation with arbitrary-level hanging nodes in adaptive higher-order finite element methods (hp-FEM) for three-dimensional problems. This technique enables the use of highly irregular meshes and greatly simplifies the design of adaptive algorithms, as it prevents refinements from propagating recursively through the finite element mesh. The technique makes it possible to design efficient adaptive algorithms for purely hexahedral meshes. We present a detailed mathematical description of the method and illustrate it with numerical examples.
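The constrained-approximation idea can be sketched for the lowest-order case: a hanging node's value is not a free unknown but a fixed combination of the endpoint values on the unrefined neighbor's edge, which keeps the field continuous across the irregular interface. The dictionary representation below is an illustrative sketch; hp-FEM uses higher-order constraint coefficients.

```python
# Minimal sketch of a constrained ("hanging") node: its dof is
# overwritten with a weighted combination of the parent dofs, so the
# approximation stays continuous across the irregular refinement
# interface.  For linear elements the midpoint constraint is the
# average of the parent edge's endpoints.

def apply_hanging_node_constraints(values, constraints):
    """constraints: dict hanging_dof -> list of (parent_dof, weight).
    Overwrites each hanging dof with its constrained combination."""
    for dof, combo in constraints.items():
        values[dof] = sum(w * values[parent] for parent, w in combo)
    return values

# Edge with endpoint dofs 0 and 1; dof 2 hangs at the midpoint.
vals = {0: 1.0, 1: 3.0, 2: 999.0}
vals = apply_hanging_node_constraints(vals, {2: [(0, 0.5), (1, 0.5)]})
```

Arbitrary-level hanging nodes simply mean these constraint combinations may reference dofs several refinement levels up, instead of forcing the neighboring element to refine.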
Automated array assembly task development of low-cost polysilicon solar cells
NASA Technical Reports Server (NTRS)
Jones, G. T.
1980-01-01
Development of low-cost, large-area polysilicon solar cells was conducted in this program. Three types of polysilicon material were investigated. A theoretical and experimental comparison between single crystal silicon and polysilicon solar cell efficiency was performed. Significant electrical performance differences were observed between the types of wafer material, i.e., fine-grain and coarse-grain polysilicon and single crystal silicon. Efficiency degradation due to grain boundaries in fine-grain and coarse-grain polysilicon was shown to be small. It was demonstrated that 10-percent-efficient polysilicon solar cells can be produced with spray-on n+ dopants. This result fulfills an important goal of this project: the production of batch quantities of 10-percent-efficient polysilicon solar cells.
Evaluation of the use of a singularity element in finite element analysis of center-cracked plates
NASA Technical Reports Server (NTRS)
Mendelson, A.; Gross, B.; Srawley, J. E.
1972-01-01
Two different methods are applied to the analyses of finite width linear elastic plates with central cracks. Both methods give displacements as a primary part of the solution. One method makes use of Fourier transforms. The second method employs a coarse mesh of triangular second-order finite elements in conjunction with a single singularity element subjected to appropriate additional constraints. The displacements obtained by these two methods are in very good agreement. The results suggest considerable potential for the use of a cracked element for related crack problems, particularly in connection with the extension to nonlinear material behavior.
Summary of results of January climate simulations with the GISS coarse-mesh model
NASA Technical Reports Server (NTRS)
Spar, J.; Cohen, C.; Wu, P.
1981-01-01
The large scale climates generated by extended runs of the model are relatively independent of the initial atmospheric conditions, if the first few months of each simulation are discarded. The perpetual January simulations with a specified SST field produced excessive snow accumulation over the continents of the Northern Hemisphere. Mass exchanges between the cold (warm) continents and the warm (cold) adjacent oceans produced significant surface pressure changes over the oceans as well as over the land. The effect of terrain and terrain elevation on the amount of precipitation was examined. The evaporation of continental moisture was calculated to cause large increases in precipitation over the continents.
The influence of initial and surface boundary conditions on a model-generated January climatology
NASA Technical Reports Server (NTRS)
Wu, K. F.; Spar, J.
1981-01-01
The influence on a model-generated January climate of various surface boundary conditions, as well as initial conditions, was studied by using the GISS coarse-mesh climate model. Four experiments - two with water planets, one with flat continents, and one with mountains - were used to investigate the effects of initial conditions and the thermal and dynamical effects of the surface on the model-generated climate. However, a climatological mean zonally symmetric sea surface temperature is used over the model oceans in all four runs. Moreover, zero ground wetness and uniform ground albedo except for snow are used in the last experiments.
NASA Technical Reports Server (NTRS)
Spar, J.; Cohen, C.
1981-01-01
The effects of terrain elevation, soil moisture, and zonal variations in sea/surface temperature on the mean daily precipitation rates over Australia, Africa, and South America in January were evaluated. It is suggested that evaporation of soil moisture may either increase or decrease the model generated precipitation, depending on the surface albedo. It was found that a flat, dry continent model best simulates the January rainfall over Australia and South America, while over Africa the simulation is improved by the inclusion of surface physics, specifically soil moisture and albedo variations.
NASA Astrophysics Data System (ADS)
Mössinger, Peter; Jester-Zürker, Roland; Jung, Alexander
2015-01-01
Numerical investigations of hydraulic turbomachines under steady-state conditions are state of the art in current product development processes. Nevertheless, increasing computational resources allow refined discretization methods and more sophisticated turbulence models, and therefore better prediction of results as well as quantification of the existing uncertainties. Single-stage investigations are performed using in-house tools for the meshing and set-up procedure. Besides different model domains and a mesh study to reduce mesh dependencies, several eddy-viscosity and Reynolds-stress turbulence models are investigated. All obtained results are compared with available model test data. In addition to global values, magnitudes measured in the vaneless space and at runner blade and draft tube positions, in terms of pressure and velocity, are considered. From there it is possible to estimate the influence and relevance of the various model domains depending on different operating points and numerical variations. Good agreement is found for the pressure and velocity measurements with all model configurations and, except for the BSL-RSM model, all turbulence models. At part load, deviations in hydraulic efficiency are large, whereas at the best-efficiency and high-load operating points the efficiencies are close to the measurements. Considering the runner side-gap geometry, as well as refining the mesh, improves the results with respect to either hydraulic efficiency or velocity distribution, with the drawbacks of less stable numerics and increased computational time.
NASA Astrophysics Data System (ADS)
Salvalaglio, Marco; Backofen, Rainer; Voigt, Axel; Elder, Ken R.
2017-08-01
One of the major difficulties in employing phase-field crystal (PFC) modeling and the associated amplitude (APFC) formulation is the ability to tune model parameters to match experimental quantities. In this work, we address the problem of tuning the defect core and interface energies in the APFC formulation. We show that the addition of a single term to the free-energy functional can be used to increase the solid-liquid interface and defect energies in a well-controlled fashion, without any major change to other features. The influence of the newly added term is explored in two-dimensional triangular and honeycomb structures as well as bcc and fcc lattices in three dimensions. In addition, a finite-element method (FEM) is developed for the model that incorporates a mesh refinement scheme. The combination of the FEM and mesh refinement to simulate amplitude expansion with a new energy term provides a method of controlling microscopic features such as defect and interface energies while simultaneously delivering a coarse-grained examination of the system.
Polycaprolactone electrospun mesh conjugated with an MSC affinity peptide for MSC homing in vivo.
Shao, Zhenxing; Zhang, Xin; Pi, Yanbin; Wang, Xiaokun; Jia, Zhuqing; Zhu, Jingxian; Dai, Linghui; Chen, Wenqing; Yin, Ling; Chen, Haifeng; Zhou, Chunyan; Ao, Yingfang
2012-04-01
Mesenchymal stem cells (MSCs) are a promising cell source in tissue engineering (TE) and regenerative medicine. However, the inability to target MSCs to tissues of interest with high efficiency and engraftment has become a significant barrier to MSC-based therapies. The mobilization and transfer of MSCs to defective/damaged sites in tissues or organs in vivo with high efficacy and efficiency has been a major concern. In the present study, we identified a seven-amino-acid peptide sequence (E7) through phage display technology that has a high specific affinity for bone marrow-derived MSCs. Subsequent analysis suggested that the peptide interacts specifically and efficiently with MSCs, without species specificity. Thereafter, E7 was covalently conjugated onto polycaprolactone (PCL) electrospun meshes to construct an "MSC-homing device" for the recruitment of MSCs both in vitro and in vivo. The E7-conjugated PCL electrospun meshes were implanted into a cartilage defect site of rat knee joints, combined with a microfracture procedure to mobilize endogenous MSCs. After 7 d of implantation, immunofluorescence staining showed that the cells grown into the E7-conjugated PCL electrospun meshes yielded a high positive rate for specific MSC surface markers (CD44, CD90, and CD105) compared with those in arginine-glycine-aspartic acid (RGD)-conjugated PCL electrospun meshes (63.67% vs. 3.03%; 59.37% vs. 2.98%; and 61.45% vs. 3.82%, respectively). Furthermore, the percentage of CD68-positive cells in the E7-conjugated PCL electrospun meshes was much lower than that in the RGD-conjugated meshes (5.57% vs. 53.43%), indicating that the E7-conjugated meshes absorb far fewer inflammatory cells in vivo. The results of the present study suggest that the identified E7 peptide sequence has a high specific affinity for MSCs.
Covalently conjugating this peptide onto the synthetic PCL mesh significantly enhanced MSC recruitment in vivo. This method provides a wide range of potential applications in TE.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tautges, Timothy J.
MOAB is a component for representing and evaluating mesh data. MOAB can store structured and unstructured mesh, consisting of elements in the finite element "zoo". The functional interface to MOAB is simple yet powerful, allowing the representation of many types of metadata commonly found on the mesh. MOAB is optimized for efficiency in space and time, based on access to mesh in chunks rather than through individual entities, while also versatile enough to support individual entity access. The MOAB data model consists of a mesh interface instance, mesh entities (vertices and elements), sets, and tags. Entities are addressed through handles rather than pointers, to allow the underlying representation of an entity to change without changing the handle to that entity. Sets are arbitrary groupings of mesh entities and other sets. Sets also support parent/child relationships as a relation distinct from sets containing other sets. The directed graph provided by set parent/child relationships is useful for modeling topological relations from a geometric model or other metadata. Tags are named data which can be assigned to the mesh as a whole, to individual entities, or to sets. Tags are a mechanism for attaching data to individual entities, and sets are a mechanism for describing relations between entities; the combination of these two mechanisms is a powerful yet simple interface for representing metadata or application-specific data. For example, sets and tags can be used together to describe geometric topology, boundary conditions, and inter-processor interface groupings in a mesh. MOAB is used in several ways in various applications. MOAB serves as the underlying mesh data representation in the VERDE mesh verification code. MOAB can also be used as a mesh input mechanism, using mesh readers included with MOAB, or as a translator between mesh formats, using the readers and writers included with MOAB.
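The handle/set/tag data model can be sketched in miniature; the class and method names below are illustrative only and do not correspond to MOAB's actual interface.

```python
# Toy sketch of the MOAB data-model ideas: opaque handles, sets that
# group entities, and named tags attached to entities or sets.

class TinyMesh:
    def __init__(self):
        self._store = {}      # handle -> entity payload
        self._tags = {}       # (tag_name, handle) -> value
        self._next = 1

    def create(self, payload):
        """Return an opaque integer handle; the payload's internal
        representation can change later without invalidating the handle."""
        h = self._next
        self._next += 1
        self._store[h] = payload
        return h

    def tag_set(self, name, handle, value):
        self._tags[(name, handle)] = value

    def tag_get(self, name, handle):
        return self._tags[(name, handle)]

mesh = TinyMesh()
v = [mesh.create(("vertex", (float(i), 0.0, 0.0))) for i in range(3)]
tri = mesh.create(("triangle", tuple(v)))          # element references handles
boundary = mesh.create(("set", frozenset(v)))      # a set grouping entities
mesh.tag_set("DIRICHLET", boundary, 0.0)           # tag the set, not each vertex
```

Tagging the set rather than each vertex is the pattern described above for boundary conditions: the set names the grouping, the tag carries the data.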
Mapping proteins to disease terminologies: from UniProt to MeSH
Mottaz, Anaïs; Yip, Yum L; Ruch, Patrick; Veuthey, Anne-Lise
2008-01-01
Background Although the UniProt KnowledgeBase is not a medical-oriented database, it contains information on more than 2,000 human proteins involved in pathologies. However, these annotations are not standardized, which impairs the interoperability between biological and clinical resources. In order to make these data easily accessible to clinical researchers, we have developed a procedure to link the diseases described in UniProtKB/Swiss-Prot entries to the MeSH disease terminology. Results We mapped disease names extracted either from the UniProtKB/Swiss-Prot entry comment lines or from the corresponding OMIM entry to MeSH. Different methods were assessed on a benchmark set of 200 disease names manually mapped to MeSH terms. The performance of the retained procedure in terms of precision and recall was 86% and 64%, respectively. Using the same procedure, more than 3,000 disease names in Swiss-Prot were mapped to MeSH with comparable efficiency. Conclusions This study is a first attempt to link proteins in UniProtKB to medical resources. The indexing we provide will help clinicians and researchers navigate from diseases to genes and from genes to diseases in an efficient way. The mapping is available at: . PMID:18460185
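The dictionary-lookup style of term mapping and its precision/recall evaluation can be sketched as follows; the normalization rule, term index, identifiers, and example names are illustrative assumptions, not the paper's procedure or data.

```python
# Sketch of dictionary-based term mapping with light normalization,
# plus the precision/recall evaluation used on a gold-standard set.
# All terms and IDs below are illustrative.

def normalize(name):
    """Lowercase, turn hyphens into spaces, collapse whitespace."""
    return " ".join(name.lower().replace("-", " ").split())

def map_terms(names, mesh_index):
    """mesh_index: normalized term -> MeSH-style ID.  Unmapped names yield None."""
    return {n: mesh_index.get(normalize(n)) for n in names}

def precision_recall(predicted, gold):
    """Precision over mapped names; recall over all gold names."""
    mapped = {n: m for n, m in predicted.items() if m is not None}
    tp = sum(1 for n, m in mapped.items() if gold.get(n) == m)
    precision = tp / len(mapped) if mapped else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

index = {"alzheimer disease": "MESH:0001", "cystic fibrosis": "MESH:0002"}
gold = {"Alzheimer-Disease": "MESH:0001", "Cystic  fibrosis": "MESH:0002",
        "Ulnar-mammary syndrome": "MESH:0003"}
pred = map_terms(gold, index)
p, r = precision_recall(pred, gold)
```

As in the paper's figures, precision can stay high while recall is limited by names the dictionary simply does not contain.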
Bayesian calibration of coarse-grained forces: Efficiently addressing transferability
NASA Astrophysics Data System (ADS)
Patrone, Paul N.; Rosch, Thomas W.; Phelan, Frederick R.
2016-04-01
Generating and calibrating forces that are transferable across a range of state-points remains a challenging task in coarse-grained (CG) molecular dynamics. In this work, we present a coarse-graining workflow, inspired by ideas from uncertainty quantification and numerical analysis, to address this problem. The key idea behind our approach is to introduce a Bayesian correction algorithm that uses functional derivatives of CG simulations to rapidly and inexpensively recalibrate initial estimates f0 of forces anchored by standard methods such as force-matching. Taking density-temperature relationships as a running example, we demonstrate that this algorithm, in concert with various interpolation schemes, can be used to efficiently compute physically reasonable force curves on a fine grid of state-points. Importantly, we show that our workflow is robust to several choices available to the modeler, including the interpolation schemes and tools used to construct f0. In a related vein, we also demonstrate that our approach can speed up coarse-graining by reducing the number of atomistic simulations needed as inputs to standard methods for generating CG forces.
Load Balancing Unstructured Adaptive Grids for CFD Problems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid
1996-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.
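The two decisions in the method above (whether to repartition, and whether to accept a remapping) can be sketched as simple cost tests. This is an illustrative reduction only; the tolerance and cost model below are hypothetical, not the paper's actual criteria.

```python
def needs_repartitioning(loads, tol=1.1):
    """Trigger repartitioning only when the most loaded processor exceeds
    the mean load by a tolerance (tol is a hypothetical knob)."""
    mean = sum(loads) / len(loads)
    return max(loads) > tol * mean

def accept_remap(old_max_load, new_max_load, remap_cost, steps_until_next_adaption):
    """Accept the new partitioning only if the work saved before the next
    mesh adaption outweighs the one-time data-redistribution cost."""
    saved = (old_max_load - new_max_load) * steps_until_next_adaption
    return saved > remap_cost

print(needs_repartitioning([10, 10, 30]))  # True: one processor far above the mean
print(accept_remap(30, 18, 100, 20))       # True: saves 240 work units vs cost 100
```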
Coarsening strategies for unstructured multigrid techniques with application to anisotropic problems
NASA Technical Reports Server (NTRS)
Morano, E.; Mavriplis, D. J.; Venkatakrishnan, V.
1995-01-01
Over the years, multigrid has been demonstrated as an efficient technique for solving inviscid flow problems. However, for viscous flows, convergence rates often degrade. This is generally due to the required use of stretched meshes (i.e., the aspect-ratio AR = delta y/delta x is much less than 1) in order to capture the boundary layer near the body. Usual techniques for generating a sequence of grids that produce proper convergence rates on isotropic meshes are not adequate for stretched meshes. This work focuses on the solution of Laplace's equation, discretized through a Galerkin finite-element formulation on unstructured stretched triangular meshes. A coarsening strategy is proposed and results are discussed.
Lightweight 3.66-meter-diameter conical mesh antenna reflector
NASA Technical Reports Server (NTRS)
Moore, D. M.
1974-01-01
A description is given of a 3.66 m diameter nonfurlable conical mesh antenna incorporating the line source feed principle recently developed. The weight of the mesh reflector and its support structure is 162 N. An area weighted RMS surface deviation of 0.28 mm was obtained. The RF performance measurements show a gain of 48.3 db at 8.448 GHz corresponding to an efficiency of 66%. During the design and development of this antenna, the technology for fabricating the large conical membranes of knitted mesh was developed. As part of this technology a FORTRAN computer program, COMESH, was developed which permits the user to predict the surface accuracy of a stretched conical membrane.
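The quoted efficiency can be cross-checked against the measured gain using the standard circular-aperture relation G = η(πD/λ)². This is an independent sanity check, not a calculation from the report; small differences (e.g. area weighting) are expected.

```python
import math

def aperture_efficiency(gain_db, diameter_m, freq_hz):
    """Back out aperture efficiency from measured gain via the standard
    circular-aperture formula G = eta * (pi * D / lambda)**2."""
    lam = 3.0e8 / freq_hz          # wavelength in meters
    g = 10.0 ** (gain_db / 10.0)   # gain, dB -> linear
    return g / (math.pi * diameter_m / lam) ** 2

eta = aperture_efficiency(48.3, 3.66, 8.448e9)
print(round(eta, 2))  # ~0.64, roughly consistent with the reported 66%
```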
A Generic Mesh Data Structure with Parallel Applications
ERIC Educational Resources Information Center
Cochran, William Kenneth, Jr.
2009-01-01
High performance, massively-parallel multi-physics simulations are built on efficient mesh data structures. Most data structures are designed from the bottom up, focusing on the implementation of linear algebra routines. In this thesis, we explore a top-down approach to design, evaluating the various needs of many aspects of simulation, not just…
Advances in Patch-Based Adaptive Mesh Refinement Scalability
Gunney, Brian T.N.; Anderson, Robert W.
2015-12-18
Patch-based structured adaptive mesh refinement (SAMR) is widely used for high-resolution simulations. Combined with modern supercomputers, it could provide simulations of unprecedented size and resolution. A persistent challenge for this combination has been managing dynamically adaptive meshes on more and more MPI tasks. The distributed mesh management scheme in SAMRAI has made some progress on SAMR scalability, but early algorithms still had trouble scaling past the regime of 10^5 MPI tasks. This work provides two critical SAMR regridding algorithms, which are integrated into that scheme to ensure efficiency of the whole. The clustering algorithm is an extension of the tile-clustering approach, making it more flexible and efficient in both clustering and parallelism. The partitioner is a new algorithm designed to prevent the network congestion experienced by its predecessor. We evaluated performance using weak- and strong-scaling benchmarks designed to be difficult for dynamic adaptivity. Results show good scaling on up to 1.5M cores and 2M MPI tasks. Detailed timing diagnostics suggest scaling would continue well past that.
NASA Astrophysics Data System (ADS)
Guo, Wei; Zhang, Qin; Xiao, Haibo; Xu, Jie; Li, Qintao; Pan, Xiaohui; Huang, Zhiyong
2014-09-01
The super-hydrophobic and super-oleophilic properties of various materials have been utilized to separate oil from water. These properties allow oil to penetrate the material while water slides off. This research demonstrates that a mesh with both super-hydrophobic and oleophobic properties, with a water contact angle (WCA) higher than 150° and an oil contact angle (OCA) near 140°, can also be used to separate oil from water. Oil has a higher probability than water of entering the interstices of the Cu mesh surface and passing through due to the capillarity effect, van der Waals attractions and the effects of gravitational pressure. The modified mesh surface can easily adsorb the oil, which then forms a film, due to the very strong adhesion properties of the oil molecules. The oil film then contributes to the water sliding off. These properties can be used to separate oil from water with separation efficiencies reaching 99.3%. Additionally, the separation of an oil/water mixture using sand permeated with oil yielded separation efficiencies exceeding 90%.
NASA Technical Reports Server (NTRS)
Steinthorsson, E.; Modiano, David; Colella, Phillip
1994-01-01
A methodology for accurate and efficient simulation of unsteady, compressible flows is presented. The cornerstones of the methodology are a special discretization of the Navier-Stokes equations on structured body-fitted grid systems and an efficient solution-adaptive mesh refinement technique for structured grids. The discretization employs an explicit multidimensional upwind scheme for the inviscid fluxes and an implicit treatment of the viscous terms. The mesh refinement technique is based on the AMR algorithm of Berger and Colella. In this approach, cells on each level of refinement are organized into a small number of topologically rectangular blocks, each containing several thousand cells. The small number of blocks leads to small overhead in managing data, while their size and regular topology means that a high degree of optimization can be achieved on computers with vector processors.
Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes
NASA Technical Reports Server (NTRS)
Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak
2004-01-01
High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation is demonstrated by the reduction of the number of elements used and the CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable.
We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks (NPB). In this paper, we present some interesting performance results of our OpenMP parallel implementation on different architectures such as the SGI Origin2000, SGI Altix, and Cray MTA-2.
Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Park, Michael A.
2006-01-01
An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
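The cost argument in the abstract can be illustrated with simple arithmetic: direct linearization needs one linearized mesh-movement solve per design variable, while the adjoint approach needs one adjoint solve (of cost equivalent to a single mesh movement) plus one cheap matrix-vector product per variable. The numbers below are purely illustrative, not from the paper.

```python
def linearization_costs(n_design, mesh_solve_cost, matvec_cost):
    """Cost (arbitrary units) of differentiating the mesh movement directly
    versus the adjoint approach described in the abstract."""
    direct = n_design * mesh_solve_cost                 # one solve per variable
    adjoint = mesh_solve_cost + n_design * matvec_cost  # one solve + cheap matvecs
    return direct, adjoint

# Illustrative numbers: 500 design variables, a mesh-movement solve
# 1000x more expensive than a matrix-vector product.
direct, adjoint = linearization_costs(500, 1000.0, 1.0)
print(direct, adjoint)  # 500000.0 1500.0
```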
On Reducing Delay in Mesh-Based P2P Streaming: A Mesh-Push Approach
NASA Astrophysics Data System (ADS)
Liu, Zheng; Xue, Kaiping; Hong, Peilin
The peer-assisted streaming paradigm has recently been widely employed to distribute live video data on the internet. In general, the mesh-based pull approach is more robust and efficient than the tree-based push approach. However, the pull protocol incurs longer streaming delay, caused by the handshaking process of advertising buffer-map messages, sending request messages, and scheduling data blocks. In this paper, we propose a new approach, mesh-push, to address this issue. Different from the traditional pull approach, mesh-push implements the block scheduling algorithm at the sender side, where block transmission is initiated by the sender rather than by the receiver. We first formulate the optimal upload bandwidth utilization problem, then present the mesh-push approach, in which a token protocol is designed to avoid block redundancy; a min-cost flow model is employed to derive the optimal scheduling for the push peer; and a push peer selection algorithm is introduced to reduce control overhead. Finally, we evaluate mesh-push through simulation; the results show that mesh-push outperforms pull scheduling in streaming delay while achieving a comparable delivery ratio.
A Simplified Mesh Deformation Method Using Commercial Structural Analysis Software
NASA Technical Reports Server (NTRS)
Hsu, Su-Yuen; Chang, Chau-Lyan; Samareh, Jamshid
2004-01-01
Mesh deformation in response to redefined or moving aerodynamic surface geometries is a frequently encountered task in many applications. Most existing methods are either mathematically too complex or computationally too expensive for usage in practical design and optimization. We propose a simplified mesh deformation method based on linear elastic finite element analyses that can be easily implemented by using commercially available structural analysis software. Using a prescribed displacement at the mesh boundaries, a simple structural analysis is constructed based on a spatially varying Young's modulus to move the entire mesh in accordance with the surface geometry redefinitions. A variety of surface movements, such as translation, rotation, or incremental surface reshaping that often takes place in an optimization procedure, may be handled by the present method. We describe the numerical formulation and implementation using the NASTRAN software in this paper. The use of commercial software bypasses tedious reimplementation and takes advantage of the computational efficiency offered by the vendor. A two-dimensional airfoil mesh and a three-dimensional aircraft mesh were used as test cases to demonstrate the effectiveness of the proposed method. Euler and Navier-Stokes calculations were performed for the deformed two-dimensional meshes.
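A minimal 1-D analogue of the elasticity-based idea (a sketch under simplifying assumptions, not the paper's 2-D/3-D NASTRAN formulation): nodes are joined by springs whose stiffness varies in space, here inversely with element length as a stand-in for a spatially varying Young's modulus, so that small cells deform less. End displacements are prescribed and the interior displacements solve a tridiagonal system.

```python
def deform_1d(xs, u_left, u_right):
    """1-D elasticity-based mesh deformation sketch: solve K u = f with
    prescribed end displacements via the Thomas (tridiagonal) algorithm."""
    n = len(xs) - 1                                    # number of elements
    k = [1.0 / (xs[i + 1] - xs[i]) for i in range(n)]  # stiffer where cells are small
    # Assemble the tridiagonal system for interior nodes 1..n-1.
    a = [0.0] * (n - 1); b = [0.0] * (n - 1); c = [0.0] * (n - 1); d = [0.0] * (n - 1)
    for i in range(1, n):
        j = i - 1
        a[j], b[j], c[j] = -k[i - 1], k[i - 1] + k[i], -k[i]
    d[0] += k[0] * u_left            # boundary terms moved to the right-hand side
    d[-1] += k[n - 1] * u_right
    # Forward elimination, then back substitution.
    for j in range(1, n - 1):
        m = a[j] / b[j - 1]
        b[j] -= m * c[j - 1]
        d[j] -= m * d[j - 1]
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / b[-1]
    for j in range(n - 3, -1, -1):
        u[j] = (d[j] - c[j] * u[j + 1]) / b[j]
    return [u_left] + u + [u_right]

# Uniform mesh with the right end moved by 0.3: the displacement blends
# linearly across the interior, close to [0.0, 0.1, 0.2, 0.3].
print(deform_1d([0.0, 1.0, 2.0, 3.0], 0.0, 0.3))
```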
Zhou, Weixin; Chen, Jun; Li, Yi; Wang, Danbei; Chen, Jianyu; Feng, Xiaomiao; Huang, Zhendong; Liu, Ruiqing; Lin, Xiujing; Zhang, Hongmei; Mi, Baoxiu; Ma, Yanwen
2016-05-04
Metal mesh is a significant candidate of flexible transparent electrodes to substitute the current state-of-the-art material indium tin oxide (ITO) for future flexible electronics. However, there remains a challenge to fabricate metal mesh with order patterns by a bottom-up approach. In this work, high-quality Cu mesh transparent electrodes with ordered pore arrays are prepared by using breath-figure polymer films as template. The optimal Cu mesh films present a sheet resistance of 28.7 Ω·sq(-1) at a transparency of 83.5%. The work function of Cu mesh electrode is tuned from 4.6 to 5.1 eV by Ag deposition and the following short-time UV-ozone treatment, matching well with the PSS (5.2 eV) hole extraction layer. The modified Cu mesh electrodes show remarkable potential as a substitute of ITO/PET in the flexible OPV and OLED devices. The OPV cells constructed on our Cu mesh electrodes present a similar power conversion efficiency of 2.04% as those on ITO/PET electrodes. The flexible OLED prototype devices can achieve a brightness of 10 000 cd at an operation voltage of 8 V.
Efficiently computing exact geodesic loops within finite steps.
Xin, Shi-Qing; He, Ying; Fu, Chi-Wing
2012-06-01
Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, so the resultant loops are restricted to mesh edges, which are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of the geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm to iteratively evolve an initial closed path on a given mesh into an exact geodesic loop within finite steps. Our proposed algorithm has only O(k) space complexity and O(mk) time complexity (experimentally), where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to the existing geodesic curvature flow methods, which compute an approximate geodesic loop within a predefined threshold, our method is exact and applies directly to triangular meshes without needing to solve any differential equation with a numerical solver; it can run at interactive speed, e.g., in the order of milliseconds, for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. Our algorithm could run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape could also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.
Cart3D Simulations for the Second AIAA Sonic Boom Prediction Workshop
NASA Technical Reports Server (NTRS)
Anderson, George R.; Aftosmis, Michael J.; Nemec, Marian
2017-01-01
Simulation results are presented for all test cases prescribed in the Second AIAA Sonic Boom Prediction Workshop. For each of the four nearfield test cases, we compute pressure signatures at specified distances and off-track angles, using an inviscid, embedded-boundary Cartesian-mesh flow solver with output-based mesh adaptation. The cases range in complexity from an axisymmetric body to a full low-boom aircraft configuration with a powered nacelle. For efficiency, boom carpets are decomposed into sets of independent meshes and computed in parallel. This also facilitates the use of more effective meshing strategies - each off-track angle is computed on a mesh with good azimuthal alignment, higher aspect ratio cells, and more tailored adaptation. The nearfield signatures generally exhibit good convergence with mesh refinement. We introduce a local error estimation procedure to highlight regions of the signatures most sensitive to mesh refinement. Results are also presented for the two propagation test cases, which investigate the effects of atmospheric profiles on ground noise. Propagation is handled with an augmented Burgers' equation method (NASA's sBOOM), and ground noise metrics are computed with LCASB.
An electrostatic Particle-In-Cell code on multi-block structured meshes
NASA Astrophysics Data System (ADS)
Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; Vernon, Louis J.; Moulton, J. David
2017-12-01
We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. Despite the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma-material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.
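CPIC's per-block mapping from the logical unit cube to physical space can be illustrated in 2-D with a bilinear map. This is a deliberate simplification for exposition: CPIC handles general curvilinear, arbitrarily distorted blocks, and the function below is not part of the CPIC code.

```python
def bilinear_map(corners, xi, eta):
    """Map logical coordinates (xi, eta) in [0,1]^2 to physical space for one
    structured block, given its four corner points ordered (00, 10, 01, 11).
    A 2-D stand-in for a per-block curvilinear mapping."""
    (x00, y00), (x10, y10), (x01, y01), (x11, y11) = corners
    w00, w10, w01, w11 = (1 - xi) * (1 - eta), xi * (1 - eta), (1 - xi) * eta, xi * eta
    x = w00 * x00 + w10 * x10 + w01 * x01 + w11 * x11
    y = w00 * y00 + w10 * y10 + w01 * y01 + w11 * y11
    return x, y

# A block stretched 2x in x: the logical center maps to the physical center.
print(bilinear_map([(0, 0), (2, 0), (0, 1), (2, 1)], 0.5, 0.5))  # (1.0, 0.5)
```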
DISCO: A 3D Moving-mesh Magnetohydrodynamics Code Designed for the Study of Astrophysical Disks
NASA Astrophysics Data System (ADS)
Duffell, Paul C.
2016-09-01
This work presents the publicly available moving-mesh magnetohydrodynamics (MHD) code DISCO. DISCO is efficient and accurate at evolving orbital fluid motion in two and three dimensions, especially at high Mach numbers. DISCO employs a moving-mesh approach utilizing a dynamic cylindrical mesh that can shear azimuthally to follow the orbital motion of the gas. The moving mesh removes diffusive advection errors and allows for longer time-steps than a static grid. MHD is implemented in DISCO using an HLLD Riemann solver and a novel constrained transport (CT) scheme that is compatible with the mesh motion. DISCO is tested against a wide variety of problems, which are designed to test its stability, accuracy, and scalability. In addition, several MHD tests are performed which demonstrate the accuracy and stability of the new CT approach, including two tests of the magneto-rotational instability, one testing the linear growth rate and the other following the instability into the fully turbulent regime.
Hot water-repellent and mechanically durable superhydrophobic mesh for oil/water separation.
Cao, Min; Luo, Xiaomin; Ren, Huijun; Feng, Jianyan
2018-02-15
The leakage of oil or organic pollutants into the ocean can cause a global catastrophe. Superhydrophobic materials have offered a new route to efficient, thorough and automated oil/water separation. However, most such materials lose superhydrophobicity when exposed to hot water (e.g. >55 °C). In this study, a hot water-repellent superhydrophobic mesh for oil/water separation was prepared by one-step spraying of modified polyurethane and hydrophobic silica nanoparticles onto a copper mesh. The as-prepared superhydrophobic mesh could be applied as an effective material for the separation of oil/water mixtures at temperatures up to 100 °C. In addition, the obtained mesh could selectively remove a wide range of organic solvents from water with high absorption capacity and good recyclability. Moreover, the as-prepared superhydrophobic mesh shows excellent mechanical durability, which makes it a promising material for practical oil/water separation.
Single fiber model of particle retention in an acoustically driven porous mesh.
Grossner, Michael T; Penrod, Alan E; Belovich, Joanne M; Feke, Donald L
2003-03-01
A method for the capture of small particles (tens of microns in diameter) from a continuously flowing suspension has recently been reported. This technique relies on a standing acoustic wave resonating in a rectangular chamber filled with a high-porosity mesh. Particles are retained in this chamber via a complex interaction between the acoustic field and the porous mesh. Although the mesh has a pore size two orders of magnitude larger than the particle diameter, collection efficiencies of 90% have been measured. A mathematical model has been developed to understand the experimentally observed phenomena and to be able to predict filtration performance. By examining a small region (a single fiber) of the porous mesh, the model has duplicated several experimental events such as the focusing of particles near an element of the mesh and the levitation of particles within pores. The single-fiber analysis forms the basis of modeling the overall performance of the particle filtration system.
DISCO: A 3D MOVING-MESH MAGNETOHYDRODYNAMICS CODE DESIGNED FOR THE STUDY OF ASTROPHYSICAL DISKS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duffell, Paul C., E-mail: duffell@berkeley.edu
2016-09-01
This work presents the publicly available moving-mesh magnetohydrodynamics (MHD) code DISCO. DISCO is efficient and accurate at evolving orbital fluid motion in two and three dimensions, especially at high Mach numbers. DISCO employs a moving-mesh approach utilizing a dynamic cylindrical mesh that can shear azimuthally to follow the orbital motion of the gas. The moving mesh removes diffusive advection errors and allows for longer time-steps than a static grid. MHD is implemented in DISCO using an HLLD Riemann solver and a novel constrained transport (CT) scheme that is compatible with the mesh motion. DISCO is tested against a wide variety of problems, which are designed to test its stability, accuracy, and scalability. In addition, several MHD tests are performed which demonstrate the accuracy and stability of the new CT approach, including two tests of the magneto-rotational instability, one testing the linear growth rate and the other following the instability into the fully turbulent regime.
Semi-regular remeshing based trust region spherical geometry image for 3D deformed mesh used MLWNN
NASA Astrophysics Data System (ADS)
Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Ben Amar, Chokri
2017-03-01
Triangular surface meshes are now widely used for modeling three-dimensional objects. Because these models are often of very high resolution, with dense geometry, it is necessary to remesh them to reduce their complexity and to improve mesh quality (connectivity regularity). In this paper, we review the main state-of-the-art methods for semi-regular remeshing, which is chiefly relevant to wavelet-based compression. We then present our remeshing method, based on a trust-region spherical geometry image, which provides a good 3D mesh compression scheme for deforming 3D meshes with a Multi-library Wavelet Neural Network (MLWNN) structure. Experimental results show that the progressive remeshing algorithm obtains more compact, semi-regular representations and yields efficient compression with a minimal set of features, giving a good 3D deformation scheme.
Park, Sung-Yun; Cho, Jihyun; Lee, Kyuseok; Yoon, Euisik
2015-12-01
We report a pulse width modulation (PWM) buck converter that is able to achieve a power conversion efficiency (PCE) of >80% in light loads (100 μA) for implantable biomedical systems. In order to achieve a high PCE for the given light loads, the buck converter adaptively reconfigures the size of the power PMOS and NMOS transistors and their gate drivers in accordance with the load current, while operating at a fixed frequency of 1 MHz. The buck converter employs an analog-digital hybrid control scheme for coarse/fine adjustment of the power transistors. The coarse digital control generates an approximate duty cycle necessary for driving a given load and selects an appropriate width of the power transistors to minimize redundant power dissipation. The fine analog control provides the final tuning of the duty cycle to compensate for the error from the coarse digital control. The mode switching between the analog and digital controls is accomplished by a mode arbiter which estimates the average of the duty cycles for the given load condition from limit cycle oscillations (LCO) induced by the coarse adjustment. The fabricated buck converter achieved a peak efficiency of 86.3% at 1.4 mA and >80% efficiency for a wide range of load conditions from 45 μA to 4.1 mA, while generating a 1 V output from a 2.5-3.3 V supply. The converter occupies 0.375 mm(2) in a 0.18 μm CMOS process and requires two external components: a 1.2 μF capacitor and a 6.8 μH inductor.
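The coarse/fine split of the duty cycle described above can be sketched numerically. This is an illustrative toy model, not the paper's circuit: the function name, the 4-bit coarse word, and the quantize-then-correct arithmetic are assumptions; a real converter derives the fine term from an analog feedback loop rather than from subtraction.

```python
def hybrid_duty(v_in, v_out, coarse_bits=4):
    """Toy coarse/fine split of a buck converter's ideal duty cycle
    D = Vout/Vin: a digital word supplies the approximate duty, and an
    'analog' residue supplies the fine correction."""
    d_ideal = v_out / v_in
    step = 1.0 / (1 << coarse_bits)          # coarse quantization step
    d_coarse = round(d_ideal / step) * step  # digital approximation
    d_fine = d_ideal - d_coarse              # fine-tuning residue
    return d_coarse, d_fine

# 1 V output from a 2.5 V supply: ideal duty cycle 0.4
print(hybrid_duty(2.5, 1.0))
```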
NASA Astrophysics Data System (ADS)
Gassmöller, Rene; Bangerth, Wolfgang
2016-04-01
Particle-in-cell methods have a long history and many applications in geodynamic modelling of mantle convection, lithospheric deformation and crustal dynamics. They are primarily used to track material information, such as the strain a material has undergone, the pressure-temperature history a certain material region has experienced, or the amount of volatiles or partial melt present in a region. However, their efficient parallel implementation - in particular combined with adaptive finite-element meshes - is complicated due to the complex communication patterns and frequent reassignment of particles to cells. Consequently, many current scientific software packages accomplish this efficient implementation by specifically designing particle methods for a single purpose, like the advection of scalar material properties that do not evolve over time (e.g., for chemical heterogeneities). Design choices for particle integration, data storage, and parallel communication are then optimized for this single purpose, making the code relatively rigid to changing requirements. Here, we present the implementation of a flexible, scalable and efficient particle-in-cell method for massively parallel finite-element codes with adaptively changing meshes. Using a modular plugin structure, we allow maximum flexibility in the generation of particles, the carried tracer properties, the advection and output algorithms, and the projection of properties to the finite-element mesh. We present scaling tests ranging up to tens of thousands of cores and tens of billions of particles. Additionally, we discuss efficient load-balancing strategies for particles in adaptive meshes with their strengths and weaknesses, local particle transfer between parallel subdomains utilizing existing communication patterns from the finite element mesh, and the use of established parallel output algorithms like the HDF5 library.
Finally, we show some relevant particle application cases, compare our implementation to a modern advection-field approach, and demonstrate under which conditions which method is more efficient. We implemented the presented methods in ASPECT (aspect.dealii.org), a freely available open-source community code for geodynamic simulations. The structure of the particle code is highly modular, and segregated from the PDE solver, and can thus be easily transferred to other programs, or adapted for various application cases.
NASA Astrophysics Data System (ADS)
Kung, Chun Haow; Zahiri, Beniamin; Sow, Pradeep Kumar; Mérida, Walter
2018-06-01
A copper mesh with a dendritic copper-oxide core-shell structure is prepared using an additive-free electrochemical deposition strategy for on-demand oil-water separation. Electrochemical manipulation of the oxidation state of the copper oxide shell phase results in opposite affinities towards water and oil. The copper mesh can be tuned to manifest both superhydrophobic and superoleophilic properties to enable oil removal. Conversely, switching the mesh to a superhydrophilic and underwater-superoleophobic state allows water removal. These changes correspond to the application of small reduction voltages (<1.5 V) and subsequent air drying. In the oil-removal mode, heavy oil selectively passes through the mesh while water is retained; in the water-removal mode, the mesh allows water to permeate but blocks light oil. The smart membrane achieved separation efficiencies higher than 98% for a series of oil-water mixtures. The separation efficiency remains high, with less than 5% variation after 30 cycles of oil-water separation in both modes. The switchable wetting mechanism is demonstrated with the aid of microstructural and electrochemical analysis, based on the well-known Cassie-Baxter and Wenzel theories. The selective removal of water or oil from the oil-water mixtures is driven solely by gravity and yields high efficiency and recyclability. Potential applications for the relevant technologies include oil-spill cleanup, fuel purification, and wastewater treatment.
Lai, Canhai; Xu, Zhijie; Li, Tingwen; ...
2017-08-05
In virtual design and scale-up of pilot-scale carbon capture systems, the coupled reactive multiphase flow problem must be solved to predict the adsorber's performance and capture efficiency under various operating conditions. This paper focuses on detailed computational fluid dynamics (CFD) modeling of a pilot-scale fluidized bed adsorber equipped with vertical cooling tubes. Multiphase Flow with Interphase eXchanges (MFiX), an open-source multiphase flow CFD solver, is used for the simulations, with custom code to simulate the chemical reactions and filtered sub-grid models to capture the effect of unresolved details on the coarser mesh, giving simulations with reasonable accuracy and manageable computational effort. Previously developed filtered models for horizontal-cylinder drag, heat transfer, and reaction kinetics have been modified to derive the 2D filtered models representing vertical cylinders in the coarse-grid CFD simulations. The effects of the heat exchanger configurations (i.e., horizontal or vertical tubes) on the adsorber's hydrodynamics and CO2 capture performance are then examined. A one-dimensional three-region process model is briefly introduced for comparison purposes. The CFD model matches the process model reasonably well while providing additional information about the flow field that is not available from the process model.
A manual for PARTI runtime primitives
NASA Technical Reports Server (NTRS)
Berryman, Harry; Saltz, Joel
1990-01-01
Primitives are presented that are designed to help users efficiently program irregular problems (e.g., unstructured mesh sweeps, sparse matrix codes, adaptive mesh partial differential equations solvers) on distributed memory machines. These primitives are also designed for use in compilers for distributed memory multiprocessors. Communications patterns are captured at runtime, and the appropriate send and receive messages are automatically generated.
A third-order gas-kinetic CPR method for the Euler and Navier-Stokes equations on triangular meshes
NASA Astrophysics Data System (ADS)
Zhang, Chao; Li, Qibing; Fu, Song; Wang, Z. J.
2018-06-01
A third-order accurate gas-kinetic scheme based on the correction procedure via reconstruction (CPR) framework is developed for the Euler and Navier-Stokes equations on triangular meshes. The scheme combines the accuracy and efficiency of the CPR formulation with the multidimensional characteristics and robustness of the gas-kinetic flux solver. Compared with high-order finite volume gas-kinetic methods, the current scheme is more compact and efficient by avoiding wide stencils on unstructured meshes. Unlike the traditional CPR method, where the inviscid and viscous terms are treated differently, the inviscid and viscous fluxes in the current scheme are coupled and computed uniformly through the kinetic evolution model. In addition, the present scheme adopts a fully coupled spatial and temporal gas distribution function for the flux evaluation, achieving high-order accuracy in both space and time within a single step. Numerical tests with a wide range of flow problems, from nearly incompressible to supersonic flows with strong shocks, for both inviscid and viscous problems, demonstrate the high accuracy and efficiency of the present scheme.
Nelson, S.M.; Andersen, D.C.
2007-01-01
We used coarse-mesh and fine-mesh leafpacks to examine the importance of aquatic macroinvertebrates in the breakdown of floodplain tree leaf litter that seasonally entered a sand-bedded reach of the sixth-order Yampa River in semiarid Colorado. Leafpacks were positioned off the easily mobilized channel bed, mimicking litter trapped in debris piles. Organic matter (OM) loss was fastest for leaves collected from the floodplain and placed in the river in spring (k = 0.029/day) and slowest for leaves collected and placed in the river in winter (0.006/day). Macroinvertebrates were most abundant in winter and spring leaves, but seemed important to processing only in spring, when exclusion by fine mesh reduced OM loss by 25% and nitrogen loss by 65% in spring leaves. Macroinvertebrates seemed to have little role in processing of autumn, winter, or summer leaves over the 50-day to 104-day monitoring periods. Desiccation during bouts of low discharge and sediment deposition on leaves limited invertebrate processing in summer and autumn, whereas processing of winter leaves, which supported relatively large numbers of shredders, might have been restricted by ice formation and low water temperatures. These results were consistent with the concept that microbial processing dominates in higher-order rivers, but suggested that macroinvertebrate processing can be locally important in higher-order desert rivers in seasons or years with favorable discharge and water quality conditions.
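The reported breakdown rates can be read as first-order decay coefficients, OM(t) = OM0·exp(−k·t). The sketch below is illustrative (the function name and unit initial mass are assumptions); it shows how the spring and winter rates translate into organic matter remaining over the 50-day monitoring period.

```python
import math

def om_remaining(om0, k, t_days):
    """First-order (exponential) decay: OM(t) = OM0 * exp(-k * t)."""
    return om0 * math.exp(-k * t_days)

# Breakdown rates (per day) reported in the leafpack study
k_spring, k_winter = 0.029, 0.006

# Fraction of organic matter left after the 50-day monitoring period
spring_left = om_remaining(1.0, k_spring, 50)  # ~0.23 (~77% lost)
winter_left = om_remaining(1.0, k_winter, 50)  # ~0.74 (~26% lost)
print(f"spring: {spring_left:.2f}, winter: {winter_left:.2f}")
```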
Subplane-based Control Rod Decusping Techniques for the 2D/1D Method in MPACT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Aaron M; Collins, Benjamin S; Downar, Thomas
2017-01-01
The MPACT transport code is being jointly developed by Oak Ridge National Laboratory and the University of Michigan to serve as the primary neutron transport code for the Virtual Environment for Reactor Applications Core Simulator. MPACT uses the 2D/1D method to solve the transport equation by decomposing the reactor model into a stack of 2D planes. A fine mesh flux distribution is calculated in each 2D plane using the Method of Characteristics (MOC), then the planes are coupled axially through a 1D NEM-P3 calculation. This iterative calculation is then accelerated using the Coarse Mesh Finite Difference method. One problem that arises frequently when using the 2D/1D method is that of control rod cusping. This occurs when the tip of a control rod falls between the boundaries of an MOC plane, requiring that the rodded and unrodded regions be axially homogenized for the 2D MOC calculations. Performing a volume homogenization does not properly preserve the reaction rates, causing an error known as cusping. The most straightforward way of resolving this problem is by refining the axial mesh, but this can significantly increase the computational expense of the calculation. The other way of resolving the partially inserted rod is through the use of a decusping method. This paper presents new decusping methods implemented in MPACT that can dynamically correct the rod cusping behavior for a variety of problems.
Summary of the Third AIAA CFD Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Vassberg, John C.; Tinoco, Edward N.; Mani, Mori; Brodersen, Olaf P.; Eisfeld, Bernhard; Wahls, Richard A.; Morrison, Joseph H.; Zickuhr, Tom; Laflin, Kelly R.; Mavriplis, DImitri J.
2007-01-01
The workshop focused on the prediction of both absolute and differential drag levels for wing-body and wing-alone configurations that are representative of transonic transport aircraft. The baseline DLR-F6 wing-body geometry, previously utilized in DPW-II, is augmented with a side-body fairing to help reduce the complexity of the flow physics in the wing-body juncture region. In addition, two new wing-alone geometries have been developed for DPW-III. Numerical calculations are performed using industry-relevant test cases that include lift-specific and fixed-alpha flight conditions, as well as full drag polars. Drag, lift, and pitching-moment predictions from Reynolds-averaged Navier-Stokes computational fluid dynamics methods are presented, focused on fully turbulent flows. Solutions are performed on structured, unstructured, and hybrid grid systems. The structured grid sets include point-matched multi-block meshes and overset grid systems. The unstructured and hybrid grid sets are comprised of tetrahedral, pyramid, and prismatic elements. Effort was made to provide a high-quality and parametrically consistent family of grids of each grid type about each configuration under study. The wing-body families are comprised of a coarse, medium, and fine grid, while the wing-alone families also include an extra-fine mesh. These mesh sequences are utilized to help determine how the provided flow solutions fare with respect to asymptotic grid convergence, and are used to estimate an absolute drag for each configuration.
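Estimating an "absolute drag" from a coarse/medium/fine mesh family is commonly done by Richardson extrapolation. The sketch below is a generic illustration, not the workshop's actual procedure; the drag values are hypothetical and the meshes are assumed to form a family with a uniform refinement ratio r.

```python
import math

def richardson_extrapolate(f_coarse, f_medium, f_fine, r):
    """Estimate the grid-converged value from three solutions on a mesh
    family with uniform refinement ratio r (h_coarse = r*h_medium = r^2*h_fine).
    Returns the extrapolated value and the observed order of convergence."""
    # Observed order of convergence p from the three solutions
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    # Extrapolated (zero-spacing) estimate
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1)
    return f_exact, p

# Hypothetical drag coefficients on coarse/medium/fine meshes, r = 2
print(richardson_extrapolate(0.0290, 0.0286, 0.0284, 2.0))
```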
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalance among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35% of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives almost a sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are less than 3% off the optimal solutions but requires only 1% of the computational time.
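The heuristic remapping idea (assigning partitions to processors so that redistribution cost is minimized) can be illustrated with a simple greedy scheme. This is not the paper's algorithm, only a sketch of the underlying concept of maximizing the data that is already in place; the function and variable names are assumptions.

```python
def greedy_remap(similarity):
    """Greedily assign each new partition to a distinct processor, taking
    the (partition, processor) pair with the most data already resident
    first, so that little data needs to move.
    similarity[p][q] = amount of partition p's data residing on processor q."""
    n = len(similarity)
    pairs = sorted(
        ((similarity[p][q], p, q) for p in range(n) for q in range(n)),
        reverse=True,
    )
    assignment, used_p, used_q = {}, set(), set()
    for s, p, q in pairs:
        if p not in used_p and q not in used_q:
            assignment[p] = q
            used_p.add(p)
            used_q.add(q)
    return assignment

# Three partitions, three processors: most of partition 0's data already
# lives on processor 1, partition 2's on processor 2, etc.
sim = [[1, 8, 0],
       [5, 2, 1],
       [0, 3, 9]]
print(greedy_remap(sim))  # partition -> processor: 0->1, 1->0, 2->2
```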
Parallel performance optimizations on unstructured mesh-based simulations
Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; ...
2015-06-01
This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
Feasibility of Using Distributed Wireless Mesh Networks for Medical Emergency Response
Braunstein, Brian; Trimble, Troy; Mishra, Rajesh; Manoj, B. S.; Rao, Ramesh; Lenert, Leslie
2006-01-01
Achieving reliable, efficient data communications networks at a disaster site is a difficult task. Network paradigms, such as Wireless Mesh Network (WMN) architectures, form one exemplar for providing high-bandwidth, scalable data communication for medical emergency response activity. WMNs are created by self-organized wireless nodes that use multi-hop wireless relaying for data transfer. In this paper, we describe our experience using a mesh network architecture we developed for homeland security and medical emergency applications. We briefly discuss the architecture and present the traffic behavioral observations made by a client-server medical emergency application tested during a large-scale homeland security drill. We present our traffic measurements, describe lessons learned, and offer functional requirements (based on field testing) for practical 802.11 mesh medical emergency response networks. With certain caveats, the results suggest that 802.11 mesh networks are feasible and scalable systems for field communications in disaster settings. PMID:17238308
Free Mesh Method: fundamental conception, algorithms and accuracy study
YAGAWA, Genki
2011-01-01
The finite element method (FEM) has been commonly employed in a variety of fields as a computer simulation method to solve such problems as solid, fluid, electro-magnetic phenomena and so on. However, creation of a quality mesh for the problem domain is a prerequisite when using FEM, which becomes a major part of the cost of a simulation. It is natural that the concept of meshless method has evolved. The free mesh method (FMM) is among the typical meshless methods intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, especially on parallel processors. FMM is an efficient node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm for the finite element calculations. In this paper, FMM and its variation are reviewed focusing on their fundamental conception, algorithms and accuracy. PMID:21558752
A new procedure for dynamic adaption of three-dimensional unstructured grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1993-01-01
A new procedure is presented for the simultaneous coarsening and refinement of three-dimensional unstructured tetrahedral meshes. This algorithm allows for localized grid adaption that is used to capture aerodynamic flow features such as vortices and shock waves in helicopter flowfield simulations. The mesh-adaption algorithm is implemented in the C programming language and uses a data structure consisting of a series of dynamically-allocated linked lists. These lists allow the mesh connectivity to be rapidly reconstructed when individual mesh points are added and/or deleted. The algorithm allows the mesh to change in an anisotropic manner in order to efficiently resolve directional flow features. The procedure has been successfully implemented on a single processor of a Cray Y-MP computer. Two sample cases are presented involving three-dimensional transonic flow. Computed results show good agreement with conventional structured-grid solutions for the Euler equations.
PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.
NASA Technical Reports Server (NTRS)
Oliker, Leonid
1998-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.
A novel approach in formulation of special transition elements: Mesh interface elements
NASA Technical Reports Server (NTRS)
Sarigul, Nesrin
1991-01-01
The objective of this research program is the development of more accurate and efficient methods for the solution of singular problems encountered in various branches of mechanics. The research program can be categorized under three levels. The first two levels involve the formulation of a new class of elements called 'mesh interface elements' (MIE) to connect meshes of traditional elements either in three dimensions or in three and two dimensions. The finite element formulations are based on boolean sum and blending operators. MIE are being formulated and tested in this research to account for the steep gradients encountered in aircraft and space structure applications. At present, the heat transfer and structural analysis problems are being formulated from an uncoupled-theory point of view. The status report: (1) summarizes the formulation for heat transfer and structural analysis; (2) explains the formulation of MIE; (3) examines computational efficiency; and (4) shows verification examples.
An efficient miniature 120 Hz pulse tube cryocooler using high porosity regenerator material
NASA Astrophysics Data System (ADS)
Yu, Huiqin; Wu, Yinong; Ding, Lei; Jiang, Zhenhua; Liu, Shaoshuai
2017-12-01
A 1.22 kg coaxial miniature pulse tube cryocooler (MPTC) has been fabricated and tested in our laboratory to provide cooling for cryogenic applications demanding compactness, low mass and a rapid cooling rate. The geometrical parameters of the regenerator, pulse tube and phase shifter are optimized. The investigation demonstrates that using a higher mesh number and thinner wire diameter of stainless steel screen (SSS) can improve the coefficient of performance (COP) when the MPTC operates at 120 Hz. In this study, 604-mesh SSS with 17 μm mesh-wire diameter is used as the regenerator filler. The experimental results show that the MPTC operating at 120 Hz achieves a no-load temperature of 53.5 K with 3.8 MPa charging pressure, and delivers a cooling power of 2 W at 80 K with 55 W of input electric power, corresponding to a relative Carnot efficiency of 9.68%.
PLUM: Parallel Load Balancing for Adaptive Unstructured Meshes
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Saini, Subhash (Technical Monitor)
1998-01-01
Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method called PLUM to dynamically balance the processor workloads with a global view. This paper presents the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. A data redistribution model is also presented that predicts the remapping cost on the SP2. This model is required to determine whether the gain from a balanced workload distribution offsets the cost of data movement. Results presented in this paper demonstrate that PLUM is an effective dynamic load balancing strategy which remains viable on a large number of processors.
Spatial adaptation procedures on tetrahedral meshes for unsteady aerodynamic flow calculations
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.
1993-01-01
Spatial adaptation procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaptation procedures were developed and implemented within a three-dimensional, unstructured-grid, upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description of the enrichment and coarsening procedures are presented and comparisons with experimental data for an ONERA M6 wing and an exact solution for a shock-tube problem are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady results, obtained using spatial adaptation procedures, are shown to be of high spatial accuracy, primarily in that discontinuities such as shock waves are captured very sharply.
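The enrichment/coarsening decision can be illustrated with a toy one-dimensional analogue: flag intervals with steep solution gradients for point insertion and nearly flat intervals for point removal. This is not the paper's three-dimensional tetrahedral procedure; the thresholds, names, and data are assumptions.

```python
def flag_intervals(x, u, enrich_tol, coarsen_tol):
    """Mark each interval of a 1D mesh for enrichment (steep gradient),
    coarsening (nearly flat), or no change."""
    flags = []
    for i in range(1, len(x)):
        g = abs(u[i] - u[i - 1]) / (x[i] - x[i - 1])  # local gradient
        flags.append('enrich' if g > enrich_tol
                     else 'coarsen' if g < coarsen_tol
                     else 'keep')
    return flags

# A step-like profile: the steep jump in the middle is flagged for
# enrichment, the flat regions for coarsening
x = [0.0, 1.0, 2.0, 3.0, 4.0]
u = [0.0, 0.01, 1.0, 1.01, 1.02]
print(flag_intervals(x, u, enrich_tol=0.5, coarsen_tol=0.05))
# ['coarsen', 'enrich', 'coarsen', 'coarsen']
```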
The Effectiveness of Shrouding on Reducing Meshed Spur Gear Power Loss - Test Results
NASA Technical Reports Server (NTRS)
Delgado, I. R.; Hurrell, M. J.
2017-01-01
Gearbox efficiency is reduced at high rotational speeds due to windage drag and viscous effects on rotating, meshed gear components. A goal of NASA aeronautics rotorcraft research is aimed at propulsion technologies that improve efficiency while minimizing vehicle weight. Specifically, reducing power losses to rotorcraft gearboxes would allow gains in areas such as vehicle payload, range, mission type, and fuel consumption. To that end, a gear windage rig has been commissioned at NASA Glenn Research Center to measure windage drag on gears and to test methodologies to mitigate windage power losses. One method used in rotorcraft gearbox design attempts to reduce gear windage power loss by utilizing close clearance walls to enclose the gears in both the axial and radial directions. The close clearance shrouds result in reduced drag on the gear teeth, and reduced power loss. For meshed spur gears, the shrouding takes the form of metal side plates and circumferential metal sectors. Variably positioned axial and radial shrouds are incorporated in the NASA rig to study the effect of shroud clearance on gearbox power loss. A number of researchers have given experimental and analytical results for single spur gears, with and without shrouding. Shrouded meshed spur gear test results are sparse in the literature. Windage tests were run at NASA Glenn using meshed spur gears at four shroud configurations: unshrouded, shrouded (max. axial, max radial), and two intermediate shrouding conditions. Results are compared to available meshed spur gear power loss data analyses as well as single spur gear data/analyses. Recommendations are made for future work.
Grouper: A Compact, Streamable Triangle Mesh Data Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luffel, Mark; Gurung, Topraj; Lindstrom, Peter
2014-01-01
Here, we present Grouper: an all-in-one compact file format, random-access data structure, and streamable representation for large triangle meshes. Similarly to the recently published SQuad representation, Grouper represents the geometry and connectivity of a mesh by grouping vertices and triangles into fixed-size records, most of which store two adjacent triangles and a shared vertex. Unlike SQuad, however, Grouper interleaves geometry with connectivity and uses a new connectivity representation to ensure that vertices and triangles can be stored in a coherent order that enables memory-efficient sequential stream processing. We also present a linear-time construction algorithm that allows streaming out Grouper meshes using a small memory footprint while preserving the initial ordering of vertices. In this construction, we show how the problem of assigning vertices and triangles to groups reduces to a well-known NP-hard optimization problem, and present a simple yet effective heuristic solution that performs well in practice. Our array-based Grouper representation also doubles as a triangle mesh data structure that allows direct access to vertices and triangles. Storing only about two integer references per triangle, i.e., less than the three vertex references stored with each triangle in a conventional indexed mesh format, Grouper answers both incidence and adjacency queries in amortized constant time. Our compact representation enables data-parallel processing on multicore computers, instant partitioning and fast transmission for distributed processing, as well as efficient out-of-core access. We demonstrate the versatility and performance benefits of Grouper using a suite of example meshes and processing kernels.
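The fixed-size record idea described above can be sketched in a few lines. The `Record`/`pack` names and the greedy slot assignment below are illustrative stand-ins, not Grouper's actual construction, which solves the (NP-hard) assignment problem heuristically while preserving a streamable ordering:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical sketch of a Grouper-style fixed-size record: each record
# interleaves one vertex's geometry with (up to) two adjacent triangles,
# so most triangles cost about two integer references overall.
@dataclass
class Record:
    x: float
    y: float
    z: float
    tri_a: Optional[Tuple[int, int, int]] = None  # corner refs into record array
    tri_b: Optional[Tuple[int, int, int]] = None

def pack(vertices, triangles):
    """Greedily assign each triangle to a record of one of its vertices."""
    records = [Record(*v) for v in vertices]
    for tri in triangles:
        for v in tri:  # find a host record with a free triangle slot
            r = records[v]
            if r.tri_a is None:
                r.tri_a = tri
                break
            if r.tri_b is None:
                r.tri_b = tri
                break
    return records

def triangles_of(records):
    """Recover the triangle list by scanning the record array."""
    out = []
    for r in records:
        for t in (r.tri_a, r.tri_b):
            if t is not None:
                out.append(t)
    return out
```

A greedy pass like this can fail to place a triangle when all its vertices' slots are full; the paper's heuristic addresses exactly that assignment problem.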
Full-Carpet Design of a Low-Boom Demonstrator Concept
NASA Technical Reports Server (NTRS)
Ordaz, Irian; Wintzer, Mathias; Rallabhandi, Sriram K.
2015-01-01
The Cart3D adjoint-based design framework is used to mitigate the undesirable off-track sonic boom properties of a demonstrator concept designed for low-boom directly under the flight path. First, the requirements of a Cart3D design mesh are determined using a high-fidelity mesh adapted to minimize the discretization error of the CFD analysis. Low-boom equivalent area targets are then generated at the under-track and one off-track azimuthal position for the baseline configuration. The under-track target is generated using a trim-feasible low-boom target generation process, ensuring that the final design is not only low-boom, but also trimmed at the specified flight condition. The off-track equivalent area target is generated by minimizing the A-weighted loudness using an efficient adjoint-based approach. The configuration outer mold line is then parameterized and optimized to match the off-body pressure distributions prescribed by the low-boom targets. The numerical optimizer uses design gradients which are calculated using the Cart3D adjoint-based design capability. Optimization constraints are placed on the geometry to satisfy structural feasibility. The low-boom properties of the final design are verified using the adaptive meshing approach. This analysis quantifies the error associated with the CFD mesh that is used for design. Finally, an alternate mesh construction and target positioning approach offering greater computational efficiency is demonstrated and verified.
NASA Astrophysics Data System (ADS)
Søe-Knudsen, Alf; Sorokin, Sergey
2011-06-01
This rapid communication is concerned with justification of the 'rule of thumb', which is well known to the community of users of the finite element (FE) method in dynamics, for the accuracy assessment of the wave finite element (WFE) method. An explicit formula linking the size of a window in the dispersion diagram, where the WFE method is trustworthy, with the coarseness of a FE mesh employed is derived. It is obtained by the comparison of the exact Pochhammer-Chree solution for an elastic rod having the circular cross-section with its WFE approximations. It is shown that the WFE power flow predictions are also valid within this window.
NASA Technical Reports Server (NTRS)
Cohen, C.
1981-01-01
A hierarchy of experiments was run, starting with an all water planet with zonally symmetric sea surface temperatures, then adding, one at a time, flat continents, mountains, surface physics, and realistic sea surface temperatures. The model was run with the sun fixed at a perpetual January. Ensemble means and standard deviations were computed and the t-test was used to determine the statistical significance of the results. The addition of realistic surface physics does not affect the model climatology to as large an extent as does the addition of mountains. Departures from zonal symmetry of the SST field result in a better simulation of the real atmosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rebay, S.
This work is devoted to the description of an efficient unstructured mesh generation method entirely based on the Delaunay triangulation. The distinctive characteristic of the proposed method is that point positions and connections are computed simultaneously. This result is achieved by taking advantage of the sequential way in which the Bowyer-Watson algorithm computes the Delaunay triangulation. Two methods are proposed which have great geometrical flexibility, in that they allow us to treat domains of arbitrary shape and topology and to generate arbitrarily nonuniform meshes. The methods are computationally efficient and are applicable both in two and three dimensions.
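The sequential insertion that the method above exploits can be illustrated with a minimal 2D Bowyer-Watson implementation (a sketch: points in general position assumed, no floating-point robustness safeguards, and not the paper's simultaneous point-placement scheme):

```python
def circumcircle_contains(tri, p, pts):
    """True if point p lies strictly inside the circumcircle of triangle tri."""
    (ax, ay), (bx, by), (cx, cy) = (pts[i] for i in tri)
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (p[0]-ux)**2 + (p[1]-uy)**2 < (ax-ux)**2 + (ay-uy)**2

def bowyer_watson(points):
    """Incremental Delaunay triangulation of 2D points (sketch)."""
    pts = list(points)
    n = len(pts)
    m = 1000.0 * max(max(abs(x), abs(y)) for x, y in pts) + 1.0
    pts += [(-m, -m), (m, -m), (0.0, m)]       # enclosing super-triangle
    tris = [(n, n + 1, n + 2)]
    for i in range(n):
        # "Bad" triangles: circumcircle contains the new point.
        bad = [t for t in tris if circumcircle_contains(t, pts[i], pts)]
        # Boundary of the cavity: edges belonging to exactly one bad triangle.
        edges = {}
        for t in bad:
            for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
                key = tuple(sorted(e))
                edges[key] = edges.get(key, 0) + 1
        tris = [t for t in tris if t not in bad]
        tris += [(a, b, i) for (a, b), cnt in edges.items() if cnt == 1]
    # Drop triangles that touch the super-triangle vertices.
    return [t for t in tris if all(v < n for v in t)]
```

Each insertion only retriangulates the local cavity, which is what makes interleaving point placement with triangulation attractive.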
Efficient evaluation of wireless real-time control networks.
Horvath, Peter; Yampolskiy, Mark; Koutsoukos, Xenofon
2015-02-11
In this paper, we present a system simulation framework for the design and performance evaluation of complex wireless cyber-physical systems. We describe the simulator architecture and the specific developments that are required to simulate cyber-physical systems relying on multi-channel, multi-hop mesh networks. We introduce realistic and efficient physical layer models and a system simulation methodology, which provides statistically significant performance evaluation results with low computational complexity. The capabilities of the proposed framework are illustrated in the example of WirelessHART, a centralized, real-time, multi-hop mesh network designed for industrial control and monitoring applications.
Adaptive mesh strategies for the spectral element method
NASA Technical Reports Server (NTRS)
Mavriplis, Catherine
1992-01-01
An adaptive spectral method was developed for the efficient solution of time dependent partial differential equations. Adaptive mesh strategies that include resolution refinement and coarsening by three different methods are illustrated on solutions to the 1-D viscous Burgers equation and the 2-D Navier-Stokes equations for driven flow in a cavity. Sharp gradients, singularities, and regions of poor resolution are resolved optimally as they develop in time using error estimators which indicate the choice of refinement to be used. The adaptive formulation presents significant increases in efficiency, flexibility, and general capabilities for high order spectral methods.
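Indicator-driven refinement and coarsening of the kind described above can be sketched in 1D. The jump indicator and tolerances below are illustrative stand-ins for the spectral error estimators in the abstract:

```python
def adapt_mesh(x, u, refine_tol=0.1, coarsen_tol=0.01):
    """One pass of h-adaptation driven by a crude jump error indicator (sketch).

    x: sorted node coordinates, u: solution values at those nodes.
    Cells with a large solution jump are split at their midpoint; runs of
    two adjacent very-smooth cells are merged into one.
    """
    new_x = [x[0]]
    i = 0
    while i < len(x) - 1:
        jump = abs(u[i + 1] - u[i])            # per-cell error indicator
        if jump > refine_tol:
            new_x.append(0.5 * (x[i] + x[i + 1]))  # refine: insert midpoint
            new_x.append(x[i + 1])
            i += 1
        elif (jump < coarsen_tol and i + 2 < len(x)
              and abs(u[i + 2] - u[i + 1]) < coarsen_tol):
            new_x.append(x[i + 2])             # coarsen: merge two smooth cells
            i += 2
        else:
            new_x.append(x[i + 1])
            i += 1
    return new_x
```

In a real adaptive solver this pass would alternate with solution transfer and re-solution at each time step.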
Rib fractures under anterior-posterior dynamic loads: experimental and finite-element study.
Li, Zuoping; Kindig, Matthew W; Kerrigan, Jason R; Untaroiu, Costin D; Subit, Damien; Crandall, Jeff R; Kent, Richard W
2010-01-19
The purpose of this study was to investigate whether using a finite-element (FE) mesh composed entirely of hexahedral elements to model cortical and trabecular bone (all-hex model) would provide more accurate simulations than those with variable thickness shell elements for cortical bone and hexahedral elements for trabecular bone (hex-shell model) in modeling human ribs. First, quasi-static non-injurious and dynamic injurious experiments were performed using the second, fourth, and tenth human thoracic ribs to record the structural behavior and fracture tolerance of individual ribs under anterior-posterior bending loads. Then, all-hex and hex-shell FE models for the three ribs were developed using an octree-based and multi-block hex meshing approach, respectively. Material properties of cortical bone were optimized using dynamic experimental data and the hex-shell model of the fourth rib and trabecular bone properties were taken from the literature. Overall, the reaction force-displacement relationship predicted by both all-hex and hex-shell models with nodes in the offset middle-cortical surfaces compared well with those measured experimentally for all three ribs. With the exception of fracture locations, the predictions from all-hex and offset hex-shell models of the second and fourth ribs agreed better with experimental data than those from the tenth rib models in terms of reaction force at fracture (difference <15.4%), ultimate failure displacement and time (difference <7.3%), and cortical bone strains. The hex-shell models with shell nodes in outer cortical surfaces increased static reaction forces up to 16.6%, compared to offset hex-shell models. These results indicated that both all-hex and hex-shell modeling strategies were applicable for simulating rib responses and bone fractures for the loading conditions considered, but coarse hex-shell models with constant or variable shell thickness were more computationally efficient and therefore preferred.
Copyright 2009 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sanyal, Tanmoy; Shell, M. Scott
2016-07-01
Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
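The local-density idea above can be sketched as an EAM-style energy: each site's energy gets a contribution from an embedding function of its local density, which in turn is a smooth sum over neighbors. The indicator weight and quadratic embedding function below are assumed forms for illustration, not the paper's relative-entropy parameterization:

```python
import math

def indicator(r, cutoff=1.5):
    """Smooth weight counting neighbors within a cutoff (assumed form)."""
    if r >= cutoff:
        return 0.0
    t = r / cutoff
    return (1 - t * t) ** 2  # smooth, vanishes at the cutoff

def local_density_energy(positions, embed=lambda rho: 0.5 * rho ** 2):
    """Manybody CG energy from per-site local densities (sketch).

    E = sum_i F(rho_i), with rho_i = sum_{j != i} w(r_ij); the pair-potential
    part of a full CG model is omitted here for brevity.
    """
    n = len(positions)
    energy = 0.0
    for i in range(n):
        rho_i = 0.0
        for j in range(n):
            if i != j:
                rho_i += indicator(math.dist(positions[i], positions[j]))
        energy += embed(rho_i)  # embedding function F(rho), EAM-style
    return energy
```

Because the density is a mean-field sum, the extra cost over a pair potential is modest, while the energy now responds to the local environment rather than to pairs alone.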
NASA Technical Reports Server (NTRS)
Stapleton, Scott; Gries, Thomas; Waas, Anthony M.; Pineda, Evan J.
2014-01-01
Enhanced finite elements are elements with an embedded analytical solution that can capture detailed local fields, enabling more efficient, mesh independent finite element analysis. The shape functions are determined based on the analytical model rather than prescribed. This method was applied to adhesively bonded joints to model joint behavior with one element through the thickness. This study demonstrates two methods of maintaining the fidelity of such elements during adhesive non-linearity and cracking without increasing the mesh needed for an accurate solution. The first method uses adaptive shape functions, where the shape functions are recalculated at each load step based on the softening of the adhesive. The second method is internal mesh adaption, where cracking of the adhesive within an element is captured by further discretizing the element internally to represent the partially cracked geometry. By keeping mesh adaptations within an element, a finer mesh can be used during the analysis without affecting the global finite element model mesh. Examples are shown which highlight when each method is most effective in reducing the number of elements needed to capture adhesive nonlinearity and cracking. These methods are validated against analogous finite element models utilizing cohesive zone elements.
NASA Technical Reports Server (NTRS)
Ashford, Gregory A.; Powell, Kenneth G.
1995-01-01
A method for generating high quality unstructured triangular grids for high Reynolds number Navier-Stokes calculations about complex geometries is described. Careful attention is paid in the mesh generation process to resolving efficiently the disparate length scales which arise in these flows. First the surface mesh is constructed in a way which ensures that the geometry is faithfully represented. The volume mesh generation then proceeds in two phases thus allowing the viscous and inviscid regions of the flow to be meshed optimally. A solution-adaptive remeshing procedure which allows the mesh to adapt itself to flow features is also described. The procedure for tracking wakes and refinement criteria appropriate for shock detection are described. Although at present it has only been implemented in two dimensions, the grid generation process has been designed with the extension to three dimensions in mind. An implicit, higher-order, upwind method is also presented for computing compressible turbulent flows on these meshes. Two recently developed one-equation turbulence models have been implemented to simulate the effects of the fluid turbulence. Results for flow about a RAE 2822 airfoil and a Douglas three-element airfoil are presented which clearly show the improved resolution obtainable.
Chen, Xiaolian; Guo, Wenrui; Xie, Liming; Wei, Changting; Zhuang, Jinyong; Su, Wenming; Cui, Zheng
2017-10-25
Metal-mesh is one of the contenders to replace indium tin oxide (ITO) as transparent conductive electrodes (TCEs) for optoelectronic applications. However, considerable surface roughness accompanying metal-mesh type of transparent electrodes has been the root cause of electrical short-circuiting for optoelectronic devices, such as organic light-emitting diode (OLED) and organic photovoltaic (OPV). In this work, a novel approach to making metal-mesh TCE has been proposed that is based on hybrid printing of silver (Ag) nanoparticle ink and electroplating of nickel (Ni). By polishing back the electroplated Ni, an extremely smooth surface was achieved. The fabricated Ag/Ni metal-mesh TCE has a surface roughness of 0.17 nm, a low sheet resistance of 2.1 Ω/□, and a high transmittance of 88.6%. The figure of merit is 1450, which is 30 times better than ITO. In addition, the Ag/Ni metal-mesh TCE shows outstanding mechanical flexibility and environmental stability at high temperature and humidity. Using the polished Ag/Ni metal-mesh TCE, a flexible quantum dot light-emitting diode (QLED) was fabricated with an efficiency of 10.4 cd/A and 3.2 lm/W at 1000 cd/m².
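The quoted figure of merit is consistent with the standard σ_dc/σ_opt metric for transparent electrodes (assuming that is the definition the authors used; the abstract does not say):

```python
def tce_figure_of_merit(sheet_resistance, transmittance):
    """sigma_dc / sigma_opt figure of merit for a transparent electrode:

        FoM = (Z0 / 2) / (Rs * (T**(-1/2) - 1))

    where Z0 is the impedance of free space, Rs the sheet resistance in
    ohms/square, and T the optical transmittance (0..1).
    """
    Z0 = 376.73  # ohms, impedance of free space
    return (Z0 / 2.0) / (sheet_resistance * (transmittance ** -0.5 - 1.0))
```

Plugging in the reported Rs = 2.1 Ω/□ and T = 88.6% gives roughly 1440, in line with the stated value of 1450.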
NASA Technical Reports Server (NTRS)
Steger, J. L.; Rizk, Y. M.
1985-01-01
An efficient numerical mesh generation scheme capable of creating orthogonal or nearly orthogonal grids about moderately complex three dimensional configurations is described. The mesh is obtained by marching outward from a user specified grid on the body surface. Using spherical grid topology, grids have been generated about full span rectangular wings and a simplified space shuttle orbiter.
A manual for PARTI runtime primitives, revision 1
NASA Technical Reports Server (NTRS)
Das, Raja; Saltz, Joel; Berryman, Harry
1991-01-01
Primitives are presented that are designed to help users efficiently program irregular problems (e.g., unstructured mesh sweeps, sparse matrix codes, adaptive mesh partial differential equations solvers) on distributed memory machines. These primitives are also designed for use in compilers for distributed memory multiprocessors. Communication patterns are captured at runtime, and the appropriate send and receive messages are automatically generated.
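Runtime capture of communication patterns is the classic inspector/executor idea. A minimal sketch of the inspector phase follows; the names are hypothetical and no actual message passing is shown (the executor would post the matching sends and receives once, then reuse the schedule across loop iterations):

```python
def build_gather_schedule(accessed_indices, owner_of, my_rank):
    """Inspector phase: scan the global indices an irregular loop will touch
    and record, per remote process, which elements must be fetched.

    accessed_indices: global indices referenced by the loop body
    owner_of: function mapping a global index to the rank that owns it
    my_rank: this process's rank
    Returns {remote_rank: sorted list of indices to request}.
    """
    wanted = {}
    for g in accessed_indices:
        owner = owner_of(g)
        if owner != my_rank:
            wanted.setdefault(owner, set()).add(g)
    # Deduplicated, ordered schedule: each off-processor element is
    # fetched once no matter how often the loop references it.
    return {rank: sorted(idx) for rank, idx in wanted.items()}
```

The payoff is amortization: the inspector runs once, and every subsequent sweep reuses the precomputed schedule instead of re-deriving the communication pattern.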
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hewett, D.W.; Yu-Jiuan Chen
The authors describe how they hold onto orthogonal mesh discretization when dealing with curved boundaries. Special difference operators were constructed to approximate numerical zones split by the domain boundary; the operators are particularly simple for this rectangular mesh. The authors demonstrated that this simple numerical approach, termed Dynamic Alternating Direction Implicit, turned out to be considerably more efficient than more complex grid-adaptive algorithms that were tried previously.
Binary mesh partitioning for cache-efficient visualization.
Tchiboukdjian, Marc; Danjean, Vincent; Raffin, Bruno
2010-01-01
One important bottleneck when visualizing large data sets is the data transfer between processor and memory. Cache-aware (CA) and cache-oblivious (CO) algorithms take into consideration the memory hierarchy to design cache-efficient algorithms. CO approaches have the advantage of adapting to unknown and varying memory hierarchies. Recent CA and CO algorithms developed for 3D mesh layouts significantly improve performance of previous approaches, but they lack theoretical performance guarantees. We present in this paper an O(N log N) algorithm to compute a CO layout for unstructured but well-shaped meshes. We prove that a coherent traversal of an N-size mesh in dimension d induces less than N/B + O(N/M^{1/d}) cache misses, where B and M are the block size and the cache size, respectively. Experiments show that our layout computation is faster and significantly less memory-consuming than the best known CO algorithm. Performance is comparable to this algorithm for classical visualization algorithm access patterns, or better when the BSP tree produced while computing the layout is used as an acceleration data structure adjusted to the layout. We also show that cache-oblivious approaches lead to significant performance increases on recent GPU architectures.
Laser Ray Tracing in a Parallel Arbitrary Lagrangian-Eulerian Adaptive Mesh Refinement Hydrocode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masters, N D; Kaiser, T B; Anderson, R W
2009-09-28
ALE-AMR is a new hydrocode that we are developing as a predictive modeling tool for debris and shrapnel formation in high-energy laser experiments. In this paper we present our approach to implementing laser ray-tracing in ALE-AMR. We present the equations of laser ray tracing, our approach to efficient traversal of the adaptive mesh hierarchy in which we propagate computational rays through a virtual composite mesh consisting of the finest resolution representation of the modeled space, and anticipate simulations that will be compared to experiments for code validation.
Turbine component cooling channel mesh with intersection chambers
Lee, Ching-Pang; Marra, John J
2014-05-06
A mesh (35) of cooling channels (35A, 35B) with an array of cooling channel intersections (42) in a wall (21, 22) of a turbine component. A mixing chamber (42A-C) at each intersection is wider (W1, W2) than a width (W) of each of the cooling channels connected to the mixing chamber. The mixing chamber promotes swirl, and slows the coolant for more efficient and uniform cooling. A series of cooling meshes (M1, M2) may be separated by mixing manifolds (44), which may have film cooling holes (46) and/or coolant refresher holes (48).
NASA Astrophysics Data System (ADS)
Gill, Stuart P. D.; Knebe, Alexander; Gibson, Brad K.; Flynn, Chris; Ibata, Rodrigo A.; Lewis, Geraint F.
2003-04-01
An adaptive multigrid approach to simulating the formation of structure from collisionless dark matter is described. MLAPM (Multi-Level Adaptive Particle Mesh) is one of the most efficient serial codes available on the cosmological "market" today. As part of Swinburne University's role in the development of the Square Kilometer Array, we are implementing hydrodynamics, feedback, and radiative transfer within the MLAPM adaptive mesh, in order to simulate baryonic processes relevant to the interstellar and intergalactic media at high redshift. We will outline our progress to date in applying the existing MLAPM to a study of the decay of satellite galaxies within massive host potentials.
Time-dependent grid adaptation for meshes of triangles and tetrahedra
NASA Technical Reports Server (NTRS)
Rausch, Russ D.
1993-01-01
This paper presents in viewgraph form a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich in regions of relatively large errors and to locally coarsen in regions of relatively small errors. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high aspect ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown, but in general requires user supervision for a more efficient solution.
Unstructured Euler flow solutions using hexahedral cell refinement
NASA Technical Reports Server (NTRS)
Melton, John E.; Cappuccio, Gelsomina; Thomas, Scott D.
1991-01-01
An attempt is made to extend grid refinement into three dimensions by using unstructured hexahedral grids. The flow solver is developed using TIGER (Topologically Independent Grid, Euler Refinement) as the starting point. The program uses an unstructured hexahedral mesh and a modified version of the Jameson four-stage, finite-volume Runge-Kutta algorithm for integration of the Euler equations. The unstructured mesh allows for local refinement appropriate for each freestream condition, thereby concentrating mesh cells in the regions of greatest interest. This increases the computational efficiency because the refinement is not required to extend throughout the entire flow field.
Gondal, Mohammed A; Sadullah, Muhammad S; Qahtan, Talal F; Dastageer, Mohamed A; Baig, Umair; McKinley, Gareth H
2017-05-10
Superhydrophilic and underwater superoleophobic surfaces were fabricated by facile spray coating of nanostructured WO3 on stainless steel meshes, and their oil-water separation performance was compared with that of ZnO-coated meshes. A gravity-driven oil-water separation system was designed using these surfaces as the separation media; the WO3-coated stainless steel mesh showed high separation efficiency (99%) with pore sizes as large as 150 µm, whereas ZnO-coated surfaces failed in oil-water separation when the pore size exceeded 50 µm. Since nanostructured WO3 is a well-known catalyst, simultaneous photocatalytic degradation of organic pollutants in the water separated during the oil-water separation process was tested using the WO3-coated surfaces under UV radiation, and the degradation efficiency was found to be quite significant. These results suggest that, with little modification of the oil-water separation system, these surfaces can be made multifunctional, working simultaneously for oil-water separation and removal of organic pollutants from the separated water. Fabrication of the separating surfaces, their morphological characteristics, wettability, oil-water separation efficiency, and photocatalytic degradation efficiency are enunciated.
Efficient morse decompositions of vector fields.
Chen, Guoning; Mischaikow, Konstantin; Laramee, Robert S; Zhang, Eugene
2008-01-01
Existing topology-based vector field analysis techniques rely on the ability to extract the individual trajectories such as fixed points, periodic orbits, and separatrices that are sensitive to noise and errors introduced by simulation and interpolation. This can make such vector field analysis unsuitable for rigorous interpretations. We advocate the use of Morse decompositions, which are robust with respect to perturbations, to encode the topological structures of a vector field in the form of a directed graph, called a Morse connection graph (MCG). While an MCG exists for every vector field, it need not be unique. Previous techniques for computing MCGs, while fast, are overly conservative and usually result in MCGs that are too coarse to be useful for the applications. To address this issue, we present a new technique for performing Morse decomposition based on the concept of tau-maps, which typically provides finer MCGs than existing techniques. Furthermore, the choice of tau provides a natural tradeoff between the fineness of the MCGs and the computational costs. We provide efficient implementations of Morse decomposition based on tau-maps, which include the use of forward and backward mapping techniques and an adaptive approach in constructing better approximations of the images of the triangles in the meshes used for simulation. Furthermore, we propose the use of spatial tau-maps in addition to the original temporal tau-maps. These techniques provide additional trade-offs between the quality of the MCGs and the speed of computation. We demonstrate the utility of our technique with various examples in the plane and on surfaces including engine simulation data sets.
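Whatever tau-map is used to build the directed graph over mesh cells, the Morse sets are its non-trivial strongly connected components. A minimal sketch follows, using a generic Kosaraju SCC pass rather than the paper's implementation; the graph is assumed to map each cell to the cells its tau-image overlaps:

```python
def strongly_connected_components(graph):
    """Kosaraju's algorithm (iterative); graph: node -> list of successors."""
    order, seen = [], set()
    for s in graph:                          # pass 1: record finish order
        if s in seen:
            continue
        stack = [(s, iter(graph[s]))]
        seen.add(s)
        while stack:
            node, it = stack[-1]
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(graph[w])))
                    break
            else:                            # all successors done: finish node
                order.append(node)
                stack.pop()
    rev = {v: [] for v in graph}             # pass 2: reversed graph
    for v, succs in graph.items():
        for w in succs:
            rev[w].append(v)
    sccs, assigned = [], set()
    for s in reversed(order):
        if s in assigned:
            continue
        comp, stack = set(), [s]
        assigned.add(s)
        while stack:
            v = stack.pop()
            comp.add(v)
            for w in rev[v]:
                if w not in assigned:
                    assigned.add(w)
                    stack.append(w)
        sccs.append(comp)
    return sccs

def morse_sets(graph):
    """Morse sets of the discretized dynamics: SCCs that are non-trivial
    (more than one cell, or a single cell mapping into itself)."""
    return [c for c in strongly_connected_components(graph)
            if len(c) > 1 or next(iter(c)) in graph[next(iter(c))]]
```

The robustness claim in the abstract comes from this construction: small perturbations change individual trajectories but rarely the recurrent SCC structure.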
NASA Astrophysics Data System (ADS)
He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry
2008-04-01
The concept surrounding super-resolution image reconstruction is to recover a highly-resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so that these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. This algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bi-cubic interpolation between a fixed reference frame and every additional frame. It is well known that the median filter is robust to outliers. If we calculate pixel-wise medians in the coarsely super-resolved image sequence, we can restore a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on highly-computational iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bi-cubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
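The pixel-wise median fusion step at the heart of the coarse-to-fine approach can be stated in a few lines (a sketch; frame registration and bi-cubic upsampling are assumed to have been done already, and frames are plain nested lists):

```python
from statistics import median

def median_fuse(frames):
    """Fuse registered, upsampled frames by pixel-wise median.

    frames: list of equally sized 2D arrays (lists of rows). The median is
    robust to outliers, so misregistered or corrupted pixels in a few
    frames do not contaminate the refined super-resolved image.
    """
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[r][c] for f in frames) for c in range(w)]
            for r in range(h)]
```

This is what makes the method non-iterative: one pass over the coarsely super-resolved stack produces the refined image.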
NASA Astrophysics Data System (ADS)
Kifonidis, K.; Müller, E.
2012-08-01
Aims: We describe and study a family of new multigrid iterative solvers for the multidimensional, implicitly discretized equations of hydrodynamics. Schemes of this class are free of the Courant-Friedrichs-Lewy condition. They are intended for simulations in which widely differing wave propagation timescales are present. A preferred solver in this class is identified. Applications to some simple stiff test problems that are governed by the compressible Euler equations, are presented to evaluate the convergence behavior, and the stability properties of this solver. Algorithmic areas are determined where further work is required to make the method sufficiently efficient and robust for future application to difficult astrophysical flow problems. Methods: The basic equations are formulated and discretized on non-orthogonal, structured curvilinear meshes. Roe's approximate Riemann solver and a second-order accurate reconstruction scheme are used for spatial discretization. Implicit Runge-Kutta (ESDIRK) schemes are employed for temporal discretization. The resulting discrete equations are solved with a full-coarsening, non-linear multigrid method. Smoothing is performed with multistage-implicit smoothers. These are applied here to the time-dependent equations by means of dual time stepping. Results: For steady-state problems, our results show that the efficiency of the present approach is comparable to the best implicit solvers for conservative discretizations of the compressible Euler equations that can be found in the literature. The use of red-black as opposed to symmetric Gauss-Seidel iteration in the multistage-smoother is found to have only a minor impact on multigrid convergence. This should enable scalable parallelization without having to seriously compromise the method's algorithmic efficiency. For time-dependent test problems, our results reveal that the multigrid convergence rate degrades with increasing Courant numbers (i.e. time step sizes). 
Beyond a Courant number of nine thousand, even complete multigrid breakdown is observed. Local Fourier analysis indicates that the degradation of the convergence rate is associated with the coarse-grid correction algorithm. An implicit scheme for the Euler equations that makes use of the present method was, nevertheless, able to outperform a standard explicit scheme on a time-dependent problem with a Courant number of order 1000. Conclusions: For steady-state problems, the described approach enables the construction of parallelizable, efficient, and robust implicit hydrodynamics solvers. The applicability of the method to time-dependent problems is presently restricted to cases with moderately high Courant numbers. This is due to an insufficient coarse-grid correction of the employed multigrid algorithm for large time steps. Further research will be required to help us to understand and overcome the observed multigrid convergence difficulties for time-dependent problems.
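The coarse-grid correction whose behavior is analyzed above can be illustrated on the 1D Poisson problem with a damped-Jacobi smoother. This is a generic textbook two-grid sketch, not the paper's ESDIRK/Roe discretization or multistage-implicit smoothing:

```python
def solve_tridiag(sub, diag, sup, rhs):
    """Thomas algorithm for a tridiagonal system (sub[0], sup[-1] unused)."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def poisson_residual(u, f, h):
    """Residual of -u'' = f on a uniform grid, zero Dirichlet boundaries."""
    n = len(u)
    r = [0.0] * n
    for i in range(1, n - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def two_grid_cycle(u, f, h, w=2.0 / 3.0, sweeps=3):
    """One two-grid cycle: damped Jacobi smoothing + exact coarse correction."""
    n = len(u)                                   # n = 2^k + 1 grid points
    def smooth(v, m_sweeps):
        for _ in range(m_sweeps):
            nv = v[:]
            for i in range(1, n - 1):
                nv[i] = (1 - w) * v[i] + w * 0.5 * (v[i-1] + v[i+1] + h*h*f[i])
            v = nv
        return v
    u = smooth(u, sweeps)                        # pre-smoothing
    r = poisson_residual(u, f, h)
    nc = (n - 1) // 2 + 1                        # coarse grid, H = 2h
    rc = [0.0] * nc
    for I in range(1, nc - 1):                   # full-weighting restriction
        rc[I] = 0.25 * r[2*I - 1] + 0.5 * r[2*I] + 0.25 * r[2*I + 1]
    H = 2 * h
    m = nc - 2                                   # coarse interior unknowns
    ec = [0.0] + solve_tridiag([-1/(H*H)] * m, [2/(H*H)] * m,
                               [-1/(H*H)] * m, rc[1:nc - 1]) + [0.0]
    e = [0.0] * n                                # linear prolongation
    for I in range(nc):
        e[2 * I] = ec[I]
    for i in range(1, n - 1, 2):
        e[i] = 0.5 * (e[i - 1] + e[i + 1])
    u = [u[i] + e[i] for i in range(n)]          # coarse-grid correction
    return smooth(u, sweeps)                     # post-smoothing
```

A full multigrid solver recurses on the coarse solve instead of inverting it directly; the convergence difficulties discussed above arise when the coarse operator no longer corrects the smooth error components well.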
Design of an essentially non-oscillatory reconstruction procedure on finite-element type meshes
NASA Technical Reports Server (NTRS)
Abgrall, R.
1991-01-01
An essentially non-oscillatory reconstruction for functions defined on finite-element type meshes was designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes and the reconstruction of a function from its average in the control volumes surrounding the nodes of the mesh. Concerning the first problem, we have studied the behavior of the highest-degree coefficients of the Lagrange interpolation of functions which may admit discontinuities along locally regular curves. This enables us to choose the best stencil for the interpolation. The choice of the smallest possible number of stencils is addressed. Concerning the reconstruction problem, because of the very nature of the mesh, the only method that may work is the so-called reconstruction-via-deconvolution method. Unfortunately, it is well suited only for regular meshes, as we show, but we also show how to overcome this difficulty. The global method has the expected order of accuracy but is conservative up to a high-order quadrature formula only. Some numerical examples are given which demonstrate the efficiency of the method.
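The stencil-selection idea behind ENO interpolation can be sketched in 1D: grow the stencil one point at a time toward the side whose difference is smaller, so the final stencil avoids crossing a discontinuity. This is an illustrative uniform-grid version with undivided differences, far simpler than the multivariate, arbitrary-mesh setting of the abstract:

```python
def eno_stencil(u, i, order=3):
    """Choose the smoothest `order`-point stencil containing point i.

    u: samples on a uniform grid. Returns (left, right) index bounds of
    the selected stencil.
    """
    def diff(start, k):
        # k-th undivided difference of u on points start..start+k
        vals = list(u[start:start + k + 1])
        for _ in range(k):
            vals = [b - a for a, b in zip(vals, vals[1:])]
        return abs(vals[0])
    left = i
    for k in range(1, order):
        # Candidate extensions: one point to the left vs. one to the right;
        # take the left one if forced by the boundary or if it is smoother.
        if left - 1 >= 0 and (left + k >= len(u)
                              or diff(left - 1, k) < diff(left, k)):
            left -= 1
    return left, left + order - 1
```

Near a jump the differences on stencils crossing it are large, so the selection is biased to the smooth side, which is what suppresses the oscillations of a fixed-stencil interpolant.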
Adjoint-Based Mesh Adaptation for the Sonic Boom Signature Loudness
NASA Technical Reports Server (NTRS)
Rallabhandi, Sriram K.; Park, Michael A.
2017-01-01
The mesh adaptation functionality of FUN3D is utilized to obtain a mesh optimized to calculate sonic boom ground signature loudness. During this process, the coupling between the discrete-adjoints of the computational fluid dynamics tool FUN3D and the atmospheric propagation tool sBOOM is exploited to form the error estimate. This new mesh adaptation methodology will allow generation of suitable meshes adapted to reduce the estimated errors in the ground loudness, which is an optimization metric employed in supersonic aircraft design. This new output-based adaptation could allow new insights into meshing for sonic boom analysis and design, and complements existing output-based adaptation techniques such as adaptation to reduce estimated errors in off-body pressure functional. This effort could also have implications for other coupled multidisciplinary adjoint capabilities (e.g., aeroelasticity) as well as inclusion of propagation specific parameters such as prevailing winds or non-standard atmospheric conditions. Results are discussed in the context of existing methods and appropriate conclusions are drawn as to the efficacy and efficiency of the developed capability.
Bessel smoothing filter for spectral-element mesh
NASA Astrophysics Data System (ADS)
Trinh, P. T.; Brossier, R.; Métivier, L.; Virieux, J.; Wellington, P.
2017-06-01
Smoothing filters are extremely important tools in seismic imaging and inversion, such as for traveltime tomography, migration and waveform inversion. For efficiency, and as they can be used a number of times during inversion, it is important that these filters can easily incorporate prior information on the geological structure of the investigated medium, through variable coherent lengths and orientation. In this study, we promote the use of the Bessel filter to achieve these purposes. Instead of considering the direct application of the filter, we demonstrate that we can rely on the equation associated with its inverse filter, which amounts to the solution of an elliptic partial differential equation. This enhances the efficiency of the filter application, and also its flexibility. We apply this strategy within a spectral-element-based elastic full waveform inversion framework. Taking advantage of this formulation, we apply the Bessel filter by solving the associated partial differential equation directly on the spectral-element mesh through the standard weak formulation. This avoids cumbersome projection operators between the spectral-element mesh and a regular Cartesian grid, or expensive explicit windowed convolution on the finite-element mesh, which is often used for applying smoothing operators. The associated linear system is solved efficiently through a parallel conjugate gradient algorithm, in which the matrix vector product is factorized and highly optimized with vectorized computation. Significant scaling behaviour is obtained when comparing this strategy with the explicit convolution method. The theoretical numerical complexity of this approach increases linearly with the coherent length, whereas a sublinear relationship is observed practically. Numerical illustrations are provided here for schematic examples, and for a more realistic elastic full waveform inversion gradient smoothing on the SEAM II benchmark model. 
These examples illustrate the efficiency and flexibility of the proposed approach.
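The inverse-filter formulation above amounts to solving an elliptic system of the form (I − L²∇²)s = f, where L is the coherent length. A minimal 1-D finite-difference sketch with a hand-rolled conjugate gradient illustrates the idea (the grid, boundary treatment, and tolerance here are illustrative assumptions; the paper solves the weak form directly on the spectral-element mesh):

```python
import numpy as np

def bessel_smooth_1d(f, coherent_len, dx):
    """Smooth f by solving the inverse-filter elliptic problem
    (I - L^2 d2/dx2) s = f with conjugate gradient.
    1-D finite-difference sketch with zero-flux ends."""
    n = len(f)
    c2 = (coherent_len / dx) ** 2

    def apply_A(u):
        # (I - L^2 * Laplacian) u with reflecting (zero-flux) ends,
        # which makes A symmetric positive definite and sum-preserving
        lap = np.zeros_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        lap[0] = u[1] - u[0]
        lap[-1] = u[-2] - u[-1]
        return u - c2 * lap

    # plain conjugate gradient on the SPD system A s = f
    s = np.zeros_like(f)
    r = f - apply_A(s)
    p = r.copy()
    rr = r @ r
    for _ in range(10 * n):
        Ap = apply_A(p)
        alpha = rr / (p @ Ap)
        s += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < 1e-10:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return s
```

Because the discrete operator preserves the sum of the field, the smoothing conserves the integral of the input, a property one typically wants when smoothing a gradient.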
3D Tensorial Elastodynamics for Isotropic Media on Vertically Deformed Meshes
NASA Astrophysics Data System (ADS)
Shragge, J. C.
2017-12-01
Solutions of the 3D elastodynamic wave equation are sometimes required in industrial and academic applications of elastic reverse-time migration (E-RTM) and full waveform inversion (E-FWI) that involve vertically deformed meshes. Examples include incorporating irregular free-surface topography and handling internal boundaries (e.g., the water bottom) directly in the computational meshes. In 3D E-RTM and E-FWI applications, the number of forward modeling simulations can reach the tens of thousands (per iteration), which necessitates the development of stable, accurate and efficient 3D elastodynamics solvers. For topographic scenarios, most finite-difference solution approaches use a change-of-variable strategy that has a number of associated computational challenges, including difficulties in handling the free-surface boundary condition. In this study, I follow a tensorial approach and use a generalized family of analytic transforms to develop a set of analytic equations for 3D elastodynamics that directly incorporates vertical grid deformations. Importantly, this analytic approach allows for the specification of an analytic free-surface boundary condition appropriate for vertically deformed meshes. These equations are both straightforward and efficient to solve using a velocity-stress formulation with mimetic finite-difference (MFD) operators implemented on a fully staggered grid. Moreover, I demonstrate that the use of MFD methods allows stable, accurate, and efficient numerical solutions to be simulated for typical topographic scenarios. Examples demonstrate that high-quality elastic wavefields can be generated for surfaces exhibiting significant topographic relief.
Operator induced multigrid algorithms using semirefinement
NASA Technical Reports Server (NTRS)
Decker, Naomi; Vanrosendale, John
1989-01-01
A variant of multigrid based on zebra relaxation and a new family of restriction/prolongation operators is described. Using zebra relaxation in combination with an operator-induced prolongation leads to fast convergence, since the coarse grid can correct all error components. The resulting algorithms are not only fast but also robust, in the sense that the convergence rate is insensitive to the mesh aspect ratio. This is true even though line relaxation is performed in only one direction. Multigrid becomes a direct method if an operator-induced prolongation is used together with the induced coarse grid operators. Unfortunately, this approach leads to stencils which double in size on each coarser grid. An implicit three-point restriction can be used to factor these large stencils, retaining the usual five- or nine-point stencils while still achieving fast convergence. This algorithm achieves a V-cycle convergence rate of 0.03 on Poisson's equation using 1.5 zebra sweeps per level, and the convergence rate improves to 0.003 if optimal nine-point stencils are used. Numerical results for two- and three-dimensional model problems are presented, together with a two-level analysis explaining these results.
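Zebra relaxation solves alternating grid lines exactly, which is what makes the smoother insensitive to anisotropy in the line direction. A small illustrative sketch for the 2-D Poisson problem is below (plain Dirichlet boundaries and the standard 5-point stencil are assumptions; this is a generic zebra sweep, not the operator-induced variant described above):

```python
import numpy as np

def zebra_relax(u, f, h):
    """One zebra (alternating-line) relaxation sweep for -Laplace(u) = f
    on a uniform grid with Dirichlet boundary values held fixed in u.
    Each interior row j is solved exactly as a tridiagonal system
    (Thomas algorithm): even rows first, then odd rows."""
    ny, nx = u.shape
    for parity in (0, 1):
        for j in range(1 + parity, ny - 1, 2):
            # tridiagonal system along row j:
            # -u[j,i-1] + 4 u[j,i] - u[j,i+1] = h^2 f[j,i] + u[j-1,i] + u[j+1,i]
            n = nx - 2
            a = np.full(n, -1.0)   # sub-diagonal
            b = np.full(n, 4.0)    # diagonal
            c = np.full(n, -1.0)   # super-diagonal
            d = h * h * f[j, 1:-1] + u[j - 1, 1:-1] + u[j + 1, 1:-1]
            d[0] += u[j, 0]
            d[-1] += u[j, -1]
            # Thomas algorithm: forward elimination, back substitution
            for i in range(1, n):
                w = a[i] / b[i - 1]
                b[i] -= w * c[i - 1]
                d[i] -= w * d[i - 1]
            x = np.empty(n)
            x[-1] = d[-1] / b[-1]
            for i in range(n - 2, -1, -1):
                x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
            u[j, 1:-1] = x
    return u
```

Repeated sweeps on a homogeneous problem drive the interior error to zero, which is the smoothing behavior a multigrid cycle relies on.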
Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model
NASA Astrophysics Data System (ADS)
O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.
2015-12-01
Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
Multiscale geometric modeling of macromolecules II: Lagrangian representation
Feng, Xin; Xia, Kelin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei
2013-01-01
Geometric modeling of biomolecules plays an essential role in the conceptualization of biomolecular structure, function, dynamics and transport. Qualitatively, geometric modeling offers a basis for molecular visualization, which is crucial for the understanding of molecular structure and interactions. Quantitatively, geometric modeling bridges the gap between molecular information, such as that from X-ray, NMR and cryo-EM, and theoretical/mathematical models, such as molecular dynamics, the Poisson-Boltzmann equation and the Nernst-Planck equation. In this work, we present a family of variational multiscale geometric models for macromolecular systems. Our models are able to combine multiresolution geometric modeling with multiscale electrostatic modeling in a unified variational framework. We discuss a suite of techniques for molecular surface generation, molecular surface meshing, molecular volumetric meshing, and the estimation of Hadwiger’s functionals. Emphasis is given to the multiresolution representations of biomolecules and the associated multiscale electrostatic analyses, as well as multiresolution curvature characterizations. The resulting fine-resolution representations of a biomolecular system enable the detailed analysis of solvent-solute interactions and ion channel dynamics, while our coarse-resolution representations highlight the compatibility of protein-ligand bindings and the possibility of protein-protein interactions. PMID:23813599
Jiang, Shuyong; Zhou, Tao; Tu, Jian; Shi, Laixin; Chen, Qiang; Yang, Mingbo
2017-01-01
Numerical modeling of microstructure evolution in various regions during uniaxial compression and canning compression of NiTi shape memory alloy (SMA) are studied through combined macroscopic and microscopic finite element simulation in order to investigate plastic deformation of NiTi SMA at 400 °C. In this approach, the macroscale material behavior is modeled with a relatively coarse finite element mesh, and then the corresponding deformation history in some selected regions in this mesh is extracted by the sub-model technique of finite element code ABAQUS and subsequently used as boundary conditions for the microscale simulation by means of crystal plasticity finite element method (CPFEM). Simulation results show that NiTi SMA exhibits an inhomogeneous plastic deformation at the microscale. Moreover, regions that suffered canning compression sustain more homogeneous plastic deformation by comparison with the corresponding regions subjected to uniaxial compression. The mitigation of inhomogeneous plastic deformation contributes to reducing the statistically stored dislocation (SSD) density in polycrystalline aggregation and also to reducing the difference of stress level in various regions of deformed NiTi SMA sample, and therefore sustaining large plastic deformation in the canning compression process. PMID:29027925
An Examination of Parameters Affecting Large Eddy Simulations of Flow Past a Square Cylinder
NASA Technical Reports Server (NTRS)
Mankbadi, M. R.; Georgiadis, N. J.
2014-01-01
Separated flow over a bluff body is analyzed via large eddy simulations. The turbulent flow around a square cylinder features a variety of complex flow phenomena, such as highly unsteady vortical structures, reverse flow in the near-wall region, and wake turbulence. The formation of spanwise vortices is often artificially suppressed in computations by either insufficient depth or coarse spanwise resolution. As the resolution is refined and the domain extended, the artificial turbulent energy exchange between spanwise and streamwise turbulence is eliminated within the wake region. A parametric study is performed highlighting the effects of spanwise vortices as the spanwise computational domain's resolution and depth are varied. For Re = 22,000, the mean and turbulent statistics computed from the numerical large eddy simulations (NLES) are in good agreement with experimental data. Von Kármán shedding is observed in the wake of the cylinder. Mesh independence is illustrated by comparing a mesh of 2 million cells to one of 16 million. Sensitivities to time stepping were minimized, and no sensitivity to sampling frequency was present. While increasing the spanwise depth and resolution can be costly, this practice was found to be necessary to eliminate the artificial turbulent energy exchange.
Summary of the Fourth AIAA CFD Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Vassberg, John C.; Tinoco, Edward N.; Mani, Mori; Rider, Ben; Zickuhr, Tom; Levy, David W.; Brodersen, Olaf P.; Eisfeld, Bernhard; Crippa, Simone; Wahls, Richard A.;
2010-01-01
Results from the Fourth AIAA Drag Prediction Workshop (DPW-IV) are summarized. The workshop focused on the prediction of both absolute and differential drag levels for wing-body and wing-body-horizontal-tail configurations that are representative of transonic transport aircraft. Numerical calculations are performed using industry-relevant test cases that include lift-specific flight conditions, trimmed drag polars, downwash variations, drag rises and Reynolds-number effects. Drag, lift and pitching moment predictions from numerous Reynolds-Averaged Navier-Stokes computational fluid dynamics methods are presented. Solutions are performed on structured, unstructured and hybrid grid systems. The structured-grid sets include point-matched multi-block meshes and over-set grid systems. The unstructured and hybrid grid sets are comprised of tetrahedral, pyramid, prismatic, and hexahedral elements. Effort is made to provide a high-quality and parametrically consistent family of grids for each grid type about each configuration under study. The wing-body-horizontal families are comprised of a coarse, medium and fine grid; an optional extra-fine grid augments several of the grid families. These mesh sequences are utilized to determine asymptotic grid-convergence characteristics of the solution sets, and to estimate grid-converged absolute drag levels of the wing-body-horizontal configuration using Richardson extrapolation.
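Given drag values from a coarse/medium/fine grid family with a uniform refinement ratio, Richardson extrapolation estimates the grid-converged level as sketched below (the function and its inputs are illustrative; no workshop data are used):

```python
import math

def richardson_drag(c_coarse, c_medium, c_fine, r):
    """Estimate the grid-converged value of a drag coefficient from a
    coarse/medium/fine mesh family with uniform refinement ratio r.
    Assumes monotone, asymptotic convergence across the three grids."""
    # observed order of convergence from the three solutions
    p_obs = math.log((c_coarse - c_medium) / (c_medium - c_fine)) / math.log(r)
    # extrapolate to zero mesh spacing using the two finest solutions
    c_exact = c_fine + (c_fine - c_medium) / (r ** p_obs - 1.0)
    return c_exact, p_obs
```

For a second-order method the observed order p_obs should approach 2 as the family enters the asymptotic range; a p_obs far from the scheme's formal order signals that the grids are not yet converging asymptotically.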
End-point diameter and total length coarse woody debris models for the United States
C.W. Woodall; J.A. Westfall; D.C. Lutes; S.N. Oswalt
2008-01-01
Coarse woody debris (CWD) may be defined as dead and down trees of a certain minimum size that are an important forest ecosystem component (e.g., wildlife habitat, carbon stocks, and fuels). Due to field-efficiency concerns, some natural resource inventories only measure the attributes of CWD pieces at their point of intersection with a sampling transect (e.g., transect...
Distance-limited perpendicular distance sampling for coarse woody debris: theory and field results
Mark J. Ducey; Micheal S. Williams; Jeffrey H. Gove; Steven Roberge; Robert S. Kenning
2013-01-01
Coarse woody debris (CWD) has been identified as an important component in many forest ecosystem processes. Perpendicular distance sampling (PDS) is one of the several efficient new methods that have been proposed for CWD inventory. One drawback of PDS is that the maximum search distance can be very large, especially if CWD diameters are large or the volume factor...
3-D modeling of ductile tearing using finite elements: Computational aspects and techniques
NASA Astrophysics Data System (ADS)
Gullerud, Arne Stewart
This research focuses on the development and application of computational tools to perform large-scale, 3-D modeling of ductile tearing in engineering components under quasi-static to mild loading rates. Two standard models for ductile tearing---the computational cell methodology and crack growth controlled by the crack tip opening angle (CTOA)---are described and their 3-D implementations are explored. For the computational cell methodology, quantification of the effects of several numerical issues---computational load step size, procedures for force release after cell deletion, and the porosity for cell deletion---enables construction of computational algorithms to remove the dependence of predicted crack growth on these issues. This work also describes two extensions of the CTOA approach into 3-D: a general 3-D method and a constant front technique. Analyses compare the characteristics of the extensions, and a validation study explores the ability of the constant front extension to predict crack growth in thin aluminum test specimens over a range of specimen geometries, absolute sizes, and levels of out-of-plane constraint. To provide a computational framework suitable for the solution of these problems, this work also describes the parallel implementation of a nonlinear, implicit finite element code. The implementation employs an explicit message-passing approach using the MPI standard to maintain portability, a domain decomposition of element data to provide parallel execution, and a master-worker organization of the computational processes to enhance future extensibility. A linear preconditioned conjugate gradient (LPCG) solver serves as the core of the solution process.
The parallel LPCG solver utilizes an element-by-element (EBE) structure of the computations to permit a dual-level decomposition of the element data: domain decomposition of the mesh provides efficient coarse-grain parallel execution, while decomposition of the domains into blocks of similar elements (same type, constitutive model, etc.) provides fine-grain parallel computation on each processor. A major focus of the LPCG solver is a new implementation of the Hughes-Winget element-by-element (HW) preconditioner. The implementation employs a weighted dependency graph combined with a new coloring algorithm to provide load-balanced scheduling for the preconditioner and overlapped communication/computation. This approach enables efficient parallel application of the HW preconditioner for arbitrary unstructured meshes.
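The scheduling idea — color a weighted dependency graph so that same-color element blocks share no nodes and can be processed concurrently — can be sketched with a generic greedy coloring (a simple heuristic, not the paper's specific load-balanced coloring algorithm; the heaviest-first traversal order is an assumption):

```python
def greedy_color(adjacency, weights):
    """Greedy coloring of a weighted dependency graph. Vertices
    (element blocks) joined by an edge (shared mesh nodes) receive
    different colors, so all blocks of one color can be processed in
    parallel without write conflicts. Visiting heavier vertices first
    is a crude load-balancing heuristic."""
    order = sorted(adjacency, key=lambda v: weights[v], reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in used:       # smallest color not used by a neighbor
            c += 1
        color[v] = c
    return color
```

Each color class then becomes one parallel stage of the element-by-element preconditioner application; fewer colors means fewer synchronization points.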
3-D Modeling of a Nearshore Dye Release
NASA Astrophysics Data System (ADS)
Maxwell, A. R.; Hibler, L. F.; Miller, L. M.
2006-12-01
The use of computer modeling software to predict the behavior of a plume discharged into deep water is well established. Nearfield plume spreading in coastal areas with complex bathymetry is less commonly studied; in addition to geometry, difficulties of this environment include tidal exchange and temperature and salinity gradients. Although some researchers have applied complex hydrodynamic models to this problem, nearfield regions are typically modeled by calibration of an empirical or expert system model. In the present study, the 3D hydrodynamic model Delft3D-FLOW was used to predict the advective transport from a point release in Sequim Bay, Washington. A nested model approach was used, wherein a coarse model using a mesh extending to nearby tide gages (cell sizes up to 1 km) was run over several tidal cycles in order to provide boundary conditions to a smaller area. The nested mesh (cell sizes up to 30 m) was forced on two open boundaries using the water surface elevation derived from the coarse model. Initial experiments with the uncalibrated model were conducted in order to predict plume propagation based on the best available field data. Field experiments were subsequently carried out by releasing rhodamine dye into the bay at near-peak flood tidal current and near high slack tidal conditions. Surface and submerged releases were carried out from an anchored vessel. Concurrently collected data from the experiment include temperature, salinity, dye concentration, and hyperspectral imagery, collected from boats and aircraft. A REMUS autonomous underwater vehicle was used to measure current velocity and dye concentration at varying depths, as well as to acquire additional bathymetric information. Preliminary results indicate that the 3D hydrodynamic model offers a reasonable prediction of plume propagation speed and shape.
A sensitivity analysis is underway to determine the significant factors in effectively using the model as a predictive tool for plume tracking in data-limited environments. The Delft-PART stochastic particle transport model is also being examined to determine its utility for the present study.
Keuken, Menno; Denier van der Gon, Hugo; van der Valk, Karin
2010-09-15
From research on PM(2.5) and PM(10) in 2007/2008 in the Netherlands, it was concluded that the coarse fraction (PM(2.5-10)) contributed 60% and 50%, respectively, to the urban-regional and street-urban increments of PM(10). Contrary to Scandinavian and Mediterranean countries, which exhibit significant seasonal variation in the coarse fraction of particulate matter (PM), in the Netherlands the coarse fraction in PM at a street location is rather constant over the year. Non-exhaust emissions by road traffic are identified as the main source of coarse PM in urban areas. Non-exhaust emissions mainly originate from re-suspension of accumulated deposited PM and road-wear-related particles, while primary tire and brake wear hardly contribute to the mass of non-exhaust emissions. However, tire and brake wear can clearly be identified in the total mass through the presence of heavy metals: zinc, a tracer for tire wear, and copper, a tracer for brake wear. The efficiency of road sweeping and washing in reducing non-exhaust emissions in a street canyon in Amsterdam was investigated. The increments of the coarse fraction at a kerbside location and a housing-façade location versus the urban background were measured on days with and without sweeping and washing. It was concluded that this measure did not significantly reduce non-exhaust emissions.
High-throughput single-molecule force spectroscopy for membrane proteins
NASA Astrophysics Data System (ADS)
Bosshart, Patrick D.; Casagrande, Fabio; Frederix, Patrick L. T. M.; Ratera, Merce; Bippes, Christian A.; Müller, Daniel J.; Palacin, Manuel; Engel, Andreas; Fotiadis, Dimitrios
2008-09-01
Atomic force microscopy-based single-molecule force spectroscopy (SMFS) is a powerful tool for studying the mechanical properties, intermolecular and intramolecular interactions, unfolding pathways, and energy landscapes of membrane proteins. One limiting factor for the large-scale applicability of SMFS on membrane proteins is its low efficiency in data acquisition. We have developed a semi-automated high-throughput SMFS (HT-SMFS) procedure for efficient data acquisition. In addition, we present a coarse filter to efficiently extract protein unfolding events from large data sets. The HT-SMFS procedure and the coarse filter were validated using the proton pump bacteriorhodopsin (BR) from Halobacterium salinarum and the L-arginine/agmatine antiporter AdiC from the bacterium Escherichia coli. To screen for molecular interactions between AdiC and its substrates, we recorded data sets in the absence and in the presence of L-arginine, D-arginine, and agmatine. Altogether ~400 000 force-distance curves were recorded. Application of coarse filtering to this wealth of data yielded six data sets with ~200 (AdiC) and ~400 (BR) force-distance spectra in each. Importantly, the raw data for most of these data sets were acquired in one to two days, opening new perspectives for HT-SMFS applications.
Fog water collection effectiveness: Mesh intercomparisons
Fernandez, Daniel; Torregrosa, Alicia; Weiss-Penzias, Peter; Zhang, Bong June; Sorensen, Deckard; Cohen, Robert; McKinley, Gareth; Kleingartner, Justin; Oliphant, Andrew; Bowman, Matthew
2018-01-01
To explore fog water harvesting potential in California, we conducted long-term measurements involving three types of mesh using standard fog collectors (SFC). Volumetric fog water measurements from SFCs and wind data were collected and recorded in 15-minute intervals over three summertime fog seasons (2014–2016) at four California sites. SFCs were deployed with: standard 1.00 m2 double-layer 35% shade coefficient Raschel; stainless steel mesh coated with the MIT-14 hydrophobic formulation; and FogHa-Tin, a German-manufactured, 3-dimensional spacer fabric deployed in two orientations. Analysis of 3419 volumetric samples from all sites showed strong relationships between mesh efficiency and wind speed. Raschel mesh collected 160% more fog water than FogHa-Tin at wind speeds less than 1 m s–1 and 45% less for wind speeds greater than 5 m s–1. MIT-14-coated stainless-steel mesh collected more fog water than Raschel mesh at all wind speeds. At low wind speeds of < 1 m s–1 the coated stainless steel mesh collected 3% more, and at wind speeds of 4–5 m s–1 it collected 41% more. FogHa-Tin collected 5% more fog water when the warp of the weave was oriented vertically, per manufacturer specification, than when the warp of the weave was oriented horizontally. Time-series measurements of the three distinct meshes across similar wind regimes revealed inconsistent lags in fog water collection and inconsistent performance. Since such differences occurred under similar wind-speed regimes, we conclude that other factors play important roles in mesh performance, including in-situ fog event and aerosol dynamics that affect droplet-size spectra and droplet-to-mesh surface interactions.
Loeffler, Troy David; Chan, Henry; Narayanan, Badri; Cherukara, Mathew J; Gray, Stephen K; Sankaranarayanan, Subramanian K R S
2018-06-20
Coarse-grained molecular dynamics (MD) simulations represent a powerful approach to simulate longer time scales and larger length scales than those accessible to all-atom models. The gain in efficiency, however, comes at the cost of atomistic detail. The reverse transformation, also known as back-mapping, of coarse-grained beads into their atomistic constituents represents a major challenge. Most existing approaches are limited to specific molecules or specific force fields, and often rely on running a long atomistic MD simulation of the back-mapped configuration to arrive at an optimal solution. Such approaches are problematic when dealing with systems with high diffusion barriers. Here, we introduce a new extension of the configurational-bias Monte Carlo (CBMC) algorithm, which we term the crystalline configurational-bias Monte Carlo (C-CBMC) algorithm, that allows rapid and efficient conversion of a coarse-grained model back into its atomistic representation. Although the method is generic, we use a coarse-grained water model as a representative example and demonstrate the back-mapping, or reverse transformation, for model systems ranging from the ice-liquid water interface to amorphous and crystalline ice configurations. A series of simulations using the TIP4P/Ice model are performed to compare the new method to several other standard Monte Carlo and molecular-dynamics-based back-mapping techniques. In all cases, the C-CBMC algorithm is able to find an optimal hydrogen-bonded configuration many thousands of evaluations/steps sooner than the other methods compared in this paper. For crystalline ice structures such as hexagonal, cubic, and cubic-hexagonal stacking-disordered structures, C-CBMC was able to find structures that were between 0.05 and 0.1 eV/water molecule lower in energy than the ground-state energies predicted by the other methods. Detailed analysis of the atomistic structures shows significantly better global hydrogen positioning when contrasted with the existing, simpler back-mapping methods. Our results demonstrate the efficiency and efficacy of our new back-mapping approach, especially for crystalline systems, where simple force-field-based relaxations have a tendency to get trapped in local minima.
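The core CBMC ingredient is selecting one of several trial placements with probability proportional to its Boltzmann factor, while accumulating the Rosenbluth weight used later in the acceptance rule. A minimal sketch of that step (generic CBMC, not the crystalline C-CBMC extension itself; the energies fed in would come from the force field):

```python
import math
import random

def cbmc_select(trial_energies, beta, rng=None):
    """Configurational-bias step: choose one of k trial placements with
    probability proportional to exp(-beta * E), and return the chosen
    index together with the Rosenbluth weight of the step (the sum of
    Boltzmann factors, needed for the acceptance rule)."""
    rng = rng or random.Random()
    boltz = [math.exp(-beta * e) for e in trial_energies]
    w = sum(boltz)
    # roulette-wheel selection proportional to the Boltzmann factors
    x = rng.random() * w
    acc = 0.0
    for i, b in enumerate(boltz):
        acc += b
        if x <= acc:
            return i, w
    return len(boltz) - 1, w
```

Biasing the choice toward low-energy trials and correcting for the bias through the Rosenbluth weight is what lets CBMC-type moves find good configurations in far fewer evaluations than unbiased sampling.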
NASA Astrophysics Data System (ADS)
Li, Jian; Long, Yifei; Xu, Changcheng; Tian, Haifeng; Wu, Yanxia; Zha, Fei
2018-03-01
To resolve the drawbacks of single-mesh oil/water separation, such as batch-processing operation, purification of only one phase, and a rapid decrease in flux, a two-way separation T-tube device was designed by integrating a pair of meshes with opposite wettability, i.e., underwater-superoleophobic and superhydrophobic/superoleophilic properties. This integrated system can continuously separate both the oil and the water phase from oil/water mixtures simultaneously in a one-step procedure, with high flux (above 3.675 L m-2 s-1) and separation efficiency higher than 99.8%, regardless of whether heavy or light oil is involved in the mixture. Moreover, the two as-prepared meshes still maintained separation efficiencies higher than 98.9% even after 50 cycles of use. It is worth mentioning that this two-way separation mode essentially solves the liquid-accumulation problem, in which a single separation membrane must tolerate a large hydrostatic pressure caused by the accumulated liquid. We believe this two-way separation system provides a new strategy toward practical oil-spill clean-up applications in a continuous mode.
Constructing Optimal Coarse-Grained Sites of Huge Biomolecules by Fluctuation Maximization.
Li, Min; Zhang, John Zenghui; Xia, Fei
2016-04-12
Coarse-grained (CG) models are valuable tools for the study of functions of large biomolecules on large length and time scales. The definition of CG representations for huge biomolecules is always a formidable challenge. In this work, we propose a new method called fluctuation maximization coarse-graining (FM-CG) to construct the CG sites of biomolecules. The defined residual in FM-CG converges to a maximal value as the number of CG sites increases, allowing an optimal CG model to be rigorously defined on the basis of the maximum. More importantly, we developed a robust algorithm called stepwise local iterative optimization (SLIO) to accelerate the process of coarse-graining large biomolecules. By means of the efficient SLIO algorithm, the computational cost of coarse-graining large biomolecules is reduced to within the time scale of seconds, which is far lower than that of conventional simulated annealing. The coarse-graining of two huge systems, chaperonin GroEL and lengsin, indicates that our new methods can coarse-grain huge biomolecular systems with up to 10,000 residues within the time scale of minutes. The further parametrization of CG sites derived from FM-CG allows us to construct the corresponding CG models for studies of the functions of huge biomolecular systems.
Parallel goal-oriented adaptive finite element modeling for 3D electromagnetic exploration
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.; Ovall, J.; Holst, M.
2014-12-01
We present a parallel goal-oriented adaptive finite element method for accurate and efficient electromagnetic (EM) modeling of complex 3D structures. An unstructured tetrahedral mesh allows this approach to accommodate arbitrarily complex 3D conductivity variations and a priori known boundaries. The total electric field is approximated by the lowest order linear curl-conforming shape functions and the discretized finite element equations are solved by a sparse LU factorization. Accuracy of the finite element solution is achieved through adaptive mesh refinement that is performed iteratively until the solution converges to the desired accuracy tolerance. Refinement is guided by a goal-oriented error estimator that uses a dual-weighted residual method to optimize the mesh for accurate EM responses at the locations of the EM receivers. As a result, the mesh refinement is highly efficient since it only targets the elements where the inaccuracy of the solution corrupts the response at the possibly distant locations of the EM receivers. We compare the accuracy and efficiency of two approaches for estimating the primary residual error required at the core of this method: one uses local element and inter-element residuals and the other relies on solving a global residual system using a hierarchical basis. For computational efficiency our method follows the Bank-Holst algorithm for parallelization, where solutions are computed in subdomains of the original model. To resolve the load-balancing problem, this approach applies a spectral bisection method to divide the entire model into subdomains that have approximately equal error and the same number of receivers. The finite element solutions are then computed in parallel with each subdomain carrying out goal-oriented adaptive mesh refinement independently. 
We validate the newly developed algorithm by comparison with controlled-source EM solutions for 1D layered models and with 2D results from our earlier 2D goal oriented adaptive refinement code named MARE2DEM. We demonstrate the performance and parallel scaling of this algorithm on a medium-scale computing cluster with a marine controlled-source EM example that includes a 3D array of receivers located over a 3D model that includes significant seafloor bathymetry variations and a heterogeneous subsurface.
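The goal-oriented marking strategy described above can be sketched in a few lines. The per-element `residuals` and `adjoint_weights` arrays and the fixed marking fraction below are illustrative assumptions, not the paper's actual implementation:

```python
def dwr_mark(residuals, adjoint_weights, fraction=0.1):
    """Dual-weighted residual marking: weight each element's residual by
    the adjoint solution (the sensitivity of the receiver output to local
    error), then mark the largest contributors for refinement."""
    eta = [abs(r * w) for r, w in zip(residuals, adjoint_weights)]
    n_mark = max(1, int(fraction * len(eta)))
    ranked = sorted(range(len(eta)), key=lambda k: eta[k], reverse=True)
    return set(ranked[:n_mark])

# An element with a large residual but a tiny adjoint weight (far from the
# EM receivers) is skipped in favor of elements that actually affect the
# response at the receiver locations.
marked = dwr_mark([1.0, 10.0, 1.0, 1.0], [1.0, 0.01, 1.0, 5.0], fraction=0.25)
```

This is what makes the refinement "highly efficient": a purely residual-based indicator would refine element 1 above, even though its error barely corrupts the output of interest.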
High efficiency virtual impactor
Loo, B.W.
1980-03-27
Environmental monitoring of atmospheric air is facilitated by a single-stage virtual impactor for separating an inlet flow (Q₀) having particulate contaminants into a coarse particle flow (Q₁) and a fine particle flow (Q₂) to enable collection of such particles on different filters for separate analysis. An inlet particle acceleration nozzle and a coarse particle collection probe member having a virtual impaction opening are aligned along a single axis and spaced apart to define a flow separation region at which the fine particle flow (Q₂) is drawn radially outward into a chamber while the coarse particle flow (Q₁) enters the virtual impaction opening.
Global Load Balancing with Parallel Mesh Adaption on Distributed-Memory Systems
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Oliker, Leonid; Sohn, Andrew
1996-01-01
Dynamic mesh adaptation on unstructured grids is a powerful tool for efficiently computing unsteady problems to resolve solution features of interest. Unfortunately, this causes load imbalances among processors on a parallel machine. This paper describes the parallel implementation of a tetrahedral mesh adaption scheme and a new global load balancing method. A heuristic remapping algorithm is presented that assigns partitions to processors such that the redistribution cost is minimized. Results indicate that the parallel performance of the mesh adaption code depends on the nature of the adaption region and show a 35.5X speedup on 64 processors of an SP2 when 35 percent of the mesh is randomly adapted. For large-scale scientific computations, our load balancing strategy gives an almost sixfold reduction in solver execution times over non-balanced loads. Furthermore, our heuristic remapper yields processor assignments that are within 3 percent of the optimal solution, but requires only 1 percent of the computational time.
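The abstract does not spell out the remapping heuristic, but a minimal greedy sketch of the idea, assigning each new partition to the processor that already holds most of its data so that little is moved, might look like this (the `overlap` matrix and the one-partition-per-processor assumption are illustrative):

```python
def greedy_remap(overlap):
    """overlap[i][j]: amount of new partition i's data already resident on
    processor j. Greedily pick the largest overlaps first, so the data that
    must be redistributed across the machine is kept small."""
    n = len(overlap)
    pairs = sorted(((overlap[i][j], i, j)
                    for i in range(n) for j in range(n)), reverse=True)
    taken_parts, taken_procs, mapping = set(), set(), {}
    for _, i, j in pairs:
        if i not in taken_parts and j not in taken_procs:
            mapping[i] = j
            taken_parts.add(i)
            taken_procs.add(j)
    return mapping

# Partition 0 mostly lives on processor 0, partition 1 on processor 1,
# so the greedy pass keeps both in place and moves almost nothing.
assignment = greedy_remap([[5, 1], [2, 4]])
```

A greedy pass like this is not optimal in general, which is consistent with the paper's finding that the heuristic lands within a few percent of the optimum at a tiny fraction of the cost.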
Parallel Performance Optimizations on Unstructured Mesh-based Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas
2015-01-01
This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2x. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
Geometrical and topological issues in octree based automatic meshing
NASA Technical Reports Server (NTRS)
Saxena, Mukul; Perucchio, Renato
1987-01-01
Finite element meshes derived automatically from solid models through recursive spatial subdivision schemes (octrees) can be made to inherit the hierarchical structure and the spatial addressability intrinsic to the underlying grid. These two properties, together with the geometric regularity that can also be built into the mesh, make octree based meshes ideally suited for efficient analysis and self-adaptive remeshing and reanalysis. The element decomposition of the octal cells that intersect the boundary of the domain is discussed. The problem, central to octree based meshing, is solved by combining template mapping and element extraction into a procedure that utilizes both constructive solid geometry and boundary representation techniques. Boundary cells that are not intersected by the edge of the domain boundary are easily mapped to predefined element topology. Cells containing edges (and vertices) are first transformed into a planar polyhedron and then triangulated via an element extractor. The modeling environments required for the derivation of planar polyhedra and for element extraction are analyzed.
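The recursive spatial subdivision at the heart of octree meshing can be sketched as follows. The cell representation, the corner-sampling classifier, and the sphere boundary are illustrative assumptions (a corner test can miss thin intersections, which a production mesher would handle more carefully):

```python
def subdivide(cell, depth, max_depth, classify):
    """Recursively split cells that straddle the domain boundary.
    cell = (x, y, z, size); classify returns 'in', 'out', or 'boundary'."""
    status = classify(cell)
    if status != 'boundary' or depth == max_depth:
        return [(cell, status)]
    x, y, z, s = cell
    h = s / 2.0
    leaves = []
    for dx in (0, h):
        for dy in (0, h):
            for dz in (0, h):
                leaves += subdivide((x + dx, y + dy, z + dz, h),
                                    depth + 1, max_depth, classify)
    return leaves

def classify_sphere(cell, radius=0.75):
    """Classify a cubic cell against a sphere at the origin by testing its
    eight corners: all inside -> 'in', none inside -> 'out', else 'boundary'."""
    x, y, z, s = cell
    inside = [(x + dx) ** 2 + (y + dy) ** 2 + (z + dz) ** 2 <= radius ** 2
              for dx in (0, s) for dy in (0, s) for dz in (0, s)]
    if all(inside):
        return 'in'
    if not any(inside):
        return 'out'
    return 'boundary'
```

Only the 'boundary' leaves then need the template-mapping and element-extraction treatment the abstract describes; interior cells map directly to predefined element topology.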
Octree based automatic meshing from CSG models
NASA Technical Reports Server (NTRS)
Perucchio, Renato
1987-01-01
Finite element meshes derived automatically from solid models through recursive spatial subdivision schemes (octrees) can be made to inherit the hierarchical structure and the spatial addressability intrinsic to the underlying grid. These two properties, together with the geometric regularity that can also be built into the mesh, make octree based meshes ideally suited for efficient analysis and self-adaptive remeshing and reanalysis. The element decomposition of the octal cells that intersect the boundary of the domain is emphasized. The problem, central to octree based meshing, is solved by combining template mapping and element extraction into a procedure that utilizes both constructive solid geometry and boundary representation techniques. Boundary cells that are not intersected by the edge of the domain boundary are easily mapped to predefined element topology. Cells containing edges (and vertices) are first transformed into a planar polyhedron and then triangulated via element extractors. The modeling environments required for the derivation of planar polyhedra and for element extraction are analyzed.
Cart3D Simulations for the First AIAA Sonic Boom Prediction Workshop
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Nemec, Marian
2014-01-01
Simulation results for the First AIAA Sonic Boom Prediction Workshop (LBW1) are presented using an inviscid, embedded-boundary Cartesian mesh method. The method employs adjoint-based error estimation and adaptive meshing to automatically determine resolution requirements of the computational domain. Results are presented for both mandatory and optional test cases. These include an axisymmetric body of revolution, a 69° delta wing model and a complete model of the Lockheed N+2 supersonic tri-jet with V-tail and flow-through nacelles. In addition to formal mesh refinement studies and examination of the adjoint-based error estimates, mesh convergence is assessed by presenting simulation results for meshes at several resolutions which are comparable in size to the unstructured grids distributed by the workshop organizers. Data provided includes both the pressure signals required by the workshop and information on code performance in both memory and processing time. Various enhanced techniques offering improved simulation efficiency will be demonstrated and discussed.
Hyper-Resolution Groundwater Modeling using MODFLOW 6
NASA Astrophysics Data System (ADS)
Hughes, J. D.; Langevin, C.
2017-12-01
MODFLOW 6 is the latest version of the U.S. Geological Survey's modular hydrologic model. MODFLOW 6 was developed to synthesize many of the recent versions of MODFLOW into a single program, improve the way different process models are coupled, and to provide an object-oriented framework for adding new types of models and packages. The object-oriented framework and underlying numerical solver make it possible to tightly couple any number of hyper-resolution models within coarser regional models. The hyper-resolution models can be used to evaluate local-scale groundwater issues that may be affected by regional-scale forcings. In MODFLOW 6, hyper-resolution meshes can be maintained as separate model datasets, similar to MODFLOW-LGR, which simplifies the development of embedded hyper-resolution models within a coarse regional model. For example, the South Atlantic Coastal Plain regional water availability model was converted from a MODFLOW-2000 model to a MODFLOW 6 model. The horizontal discretization of the original model is approximately 3,218 m x 3,218 m. Hyper-resolution models of the Aiken and Sumter County water budget areas in South Carolina with a horizontal discretization of approximately 322 m x 322 m were developed and were tightly coupled to a modified version of the original coarse regional model that excluded these areas. Hydraulic property and aquifer geometry data from the coarse model were mapped to the hyper-resolution models. The discretization of the hyper-resolution models is fine enough to make detailed analyses of the effect that changes in groundwater withdrawals in the production aquifers have on the water table and surface-water/groundwater interactions. The approach used in this analysis could be applied to other regional water availability models that have been developed by the U.S. Geological Survey to evaluate local scale groundwater issues.
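The roughly 10:1 refinement described above (3,218 m coarse cells to 322 m fine cells) amounts to replicating each coarse cell's hydraulic properties over a block of fine cells. The sketch below illustrates that mapping for a 2-D property array; it is an assumption about the mapping step, not MODFLOW 6 code:

```python
def refine_property(coarse, factor):
    """Map a 2-D array of cell properties (e.g. hydraulic conductivity)
    onto a grid refined by `factor` in each horizontal direction: each
    coarse cell becomes a factor-by-factor block of fine cells carrying
    the same value."""
    rows = len(coarse) * factor
    cols = len(coarse[0]) * factor
    return [[coarse[i // factor][j // factor] for j in range(cols)]
            for i in range(rows)]

# A 2x2 coarse field refined by a factor of 2 becomes a 4x4 fine field.
fine = refine_property([[1, 2], [3, 4]], 2)
```

In practice the embedded model would be coupled back to the regional model through boundary exchanges rather than run standalone, but the index arithmetic for the property mapping is the same.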
An optimization-based framework for anisotropic simplex mesh adaptation
NASA Astrophysics Data System (ADS)
Yano, Masayuki; Darmofal, David L.
2012-09-01
We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.
Kim, Dong-Ju; Kim, Hyo-Joong; Seo, Ki-Won; Kim, Ki-Hyun; Kim, Tae-Wong; Kim, Han-Ki
2015-01-01
We report on an indium-free and cost-effective Cu2O/Cu/Cu2O multilayer mesh electrode grown by room temperature roll-to-roll sputtering as a viable alternative to ITO electrodes for the cost-effective production of large-area flexible touch screen panels (TSPs). By using a low resistivity metallic Cu interlayer and a patterned mesh structure, we obtained Cu2O/Cu/Cu2O multilayer mesh electrodes with a low sheet resistance of 15.1 Ohm/square and high optical transmittance of 89% as well as good mechanical flexibility. Outer/inner bending test results showed that the Cu2O/Cu/Cu2O mesh electrode had a mechanical flexibility superior to that of conventional ITO films. Using the diamond-patterned Cu2O/Cu/Cu2O multilayer mesh electrodes, we successfully demonstrated both flexible film-film type and rigid glass-film-film type TSPs. The TSPs with Cu2O/Cu/Cu2O mesh electrode were used to perform zoom in/out functions and multi-touch writing, indicating that these electrodes are promising cost-efficient transparent electrodes to substitute for conventional ITO electrodes in large-area flexible TSPs. PMID:26582471
Parallel performance investigations of an unstructured mesh Navier-Stokes solver
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
2000-01-01
A Reynolds-averaged Navier-Stokes solver based on unstructured mesh techniques for analysis of high-lift configurations is described. The method makes use of an agglomeration multigrid solver for convergence acceleration. Implicit line-smoothing is employed to relieve the stiffness associated with highly stretched meshes. A GMRES technique is also implemented to speed convergence at the expense of additional memory usage. The solver is cache efficient and fully vectorizable, and is parallelized using a two-level hybrid MPI-OpenMP implementation suitable for shared and/or distributed memory architectures, as well as clusters of shared memory machines. Convergence and scalability results are illustrated for various high-lift cases.
An advancing front Delaunay triangulation algorithm designed for robustness
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
A new algorithm is described for generating an unstructured mesh about an arbitrary two-dimensional configuration. Mesh points are generated automatically by the algorithm in a manner which ensures a smooth variation of elements, and the resulting triangulation constitutes the Delaunay triangulation of these points. The algorithm combines the mathematical elegance and efficiency of Delaunay triangulation algorithms with the desirable point placement features, boundary integrity, and robustness traditionally associated with advancing-front-type mesh generation strategies. The method offers increased robustness over previous algorithms in that it cannot fail regardless of the initial boundary point distribution and the prescribed cell size distribution throughout the flow-field.
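A Delaunay triangulation is maintained by the empty-circumcircle property: no mesh point may lie inside the circumcircle of any triangle. The standard determinant test below (a textbook predicate, not code from the paper) is the check such an algorithm applies when inserting a new point; it assumes the triangle's vertices are given in counter-clockwise order:

```python
def in_circumcircle(a, b, c, d):
    """Return True if point d lies strictly inside the circumcircle of
    triangle (a, b, c), with a, b, c in counter-clockwise order. This is
    the test that decides whether inserting d invalidates the triangle."""
    ax, ay = a[0] - d[0], a[1] - d[1]
    bx, by = b[0] - d[0], b[1] - d[1]
    cx, cy = c[0] - d[0], c[1] - d[1]
    det = ((ax * ax + ay * ay) * (bx * cy - cx * by)
           - (bx * bx + by * by) * (ax * cy - cx * ay)
           + (cx * cx + cy * cy) * (ax * by - bx * ay))
    return det > 0.0
```

Robust implementations evaluate this determinant with exact or adaptive-precision arithmetic, since near-degenerate configurations are exactly where naive floating point breaks triangulation algorithms.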
Non-Markovian closure models for large eddy simulations using the Mori-Zwanzig formalism
NASA Astrophysics Data System (ADS)
Parish, Eric J.; Duraisamy, Karthik
2017-01-01
This work uses the Mori-Zwanzig (M-Z) formalism, a concept originating from nonequilibrium statistical mechanics, as a basis for the development of coarse-grained models of turbulence. The mechanics of the generalized Langevin equation (GLE) are considered, and insight gained from the orthogonal dynamics equation is used as a starting point for model development. A class of subgrid models is considered which represent nonlocal behavior via a finite memory approximation [Stinis, arXiv:1211.4285 (2012)], the length of which is determined using a heuristic that is related to the spectral radius of the Jacobian of the resolved variables. The resulting models are intimately tied to the underlying numerical resolution and are capable of approximating non-Markovian effects. Numerical experiments on the Burgers equation demonstrate that the M-Z-based models can accurately predict the temporal evolution of the total kinetic energy and the total dissipation rate at varying mesh resolutions. The trajectory of each resolved mode in phase space is accurately predicted for cases where the coarse graining is moderate. Large eddy simulations (LESs) of homogeneous isotropic turbulence and the Taylor-Green Vortex show that the M-Z-based models are able to provide excellent predictions, accurately capturing the subgrid contribution to energy transfer. Last, LESs of fully developed channel flow demonstrate the applicability of M-Z-based models to nondecaying problems. It is notable that the form of the closure is not imposed by the modeler, but is rather derived from the mathematics of the coarse graining, highlighting the potential of M-Z-based techniques to define LES closures.
Drag Prediction for the DLR-F6 Wing/Body and DPW Wing using CFL3D and OVERFLOW Overset Mesh
NASA Technical Reports Server (NTRS)
Sclanfani, Anthony J.; Vassberg, John C.; Harrison, Neal A.; DeHaan, Mark A.; Rumsey, Christopher L.; Rivers, S. Melissa; Morrison, Joseph H.
2007-01-01
A series of overset grids was generated in response to the 3rd AIAA CFD Drag Prediction Workshop (DPW-III) which preceded the 25th Applied Aerodynamics Conference in June 2006. DPW-III focused on accurate drag prediction for wing/body and wing-alone configurations. The grid series built for each configuration consists of a coarse, medium, fine, and extra-fine mesh. The medium mesh is first constructed using the current state of best practices for overset grid generation. The medium mesh is then coarsened and enhanced by applying a factor of 1.5 to each (I,J,K) dimension. The resulting set of parametrically equivalent grids increase in size by a factor of roughly 3.5 from one level to the next denser level. CFD simulations were performed on the overset grids using two different RANS flow solvers: CFL3D and OVERFLOW. The results were post-processed using Richardson extrapolation to approximate grid converged values of lift, drag, pitching moment, and angle-of-attack at the design condition. This technique appears to work well if the solution does not contain large regions of separated flow (similar to that seen in the DLR-F6 results) and appropriate grid densities are selected. The extra-fine grid data helped to establish asymptotic grid convergence for both the OVERFLOW FX2B wing/body results and the OVERFLOW DPW-W1/W2 wing-alone results. More CFL3D data is needed to establish grid convergence trends. The medium grid was utilized beyond the grid convergence study by running each configuration at several angles-of-attack so drag polars and lift/pitching moment curves could be evaluated. The alpha sweep results are used to compare data across configurations as well as across flow solvers. With the exception of the wing/body drag polar, the two codes compare well qualitatively showing consistent incremental trends and similar wing pressure comparisons.
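Richardson extrapolation of three-mesh data, as used above for lift and drag, can be sketched as follows. The refinement ratio of 1.5 per dimension matches the workshop grids; the synthetic second-order data in the example is an assumption for illustration:

```python
import math

def richardson(f_coarse, f_medium, f_fine, r):
    """Estimate the observed order of accuracy p and the grid-converged
    value from an output computed on three meshes related by a uniform
    refinement ratio r."""
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    f_extrap = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    return p, f_extrap

# Synthetic second-order data converging to 0.5: f(h) = 0.5 + 0.1 * h**2
# sampled at h = 1, 0.5, 0.25 (refinement ratio r = 2).
p, f = richardson(0.6, 0.525, 0.50625, r=2.0)
```

The caveat in the abstract shows up directly here: if separated flow makes the sequence non-monotone, the logarithm's argument goes non-positive and no meaningful order can be extracted.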
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.
1991-01-01
Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description is given of the enrichment and coarsening procedures and comparisons with alternative results and experimental data are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.
An adaptively refined XFEM with virtual node polygonal elements for dynamic crack problems
NASA Astrophysics Data System (ADS)
Teng, Z. H.; Sun, F.; Wu, S. C.; Zhang, Z. B.; Chen, T.; Liao, D. M.
2018-02-01
By introducing the shape functions of virtual node polygonal (VP) elements into the standard extended finite element method (XFEM), a conforming elemental mesh can be created for the cracking process. Moreover, an adaptively refined meshing with the quadtree structure only at a growing crack tip is proposed without inserting hanging nodes into the transition region. A novel dynamic crack growth method termed VP-XFEM is thus formulated in the framework of fracture mechanics. To verify the newly proposed VP-XFEM, both quasi-static and dynamic crack problems are investigated in terms of computational accuracy, convergence, and efficiency. The research results show that the present VP-XFEM can achieve good agreement in stress intensity factor and crack growth path with the exact solutions or experiments. Furthermore, better accuracy, convergence, and efficiency of different models can be acquired, in contrast to standard XFEM and mesh-free methods. Therefore, VP-XFEM provides a suitable alternative to XFEM for engineering applications.
Li, Wen-Wei; Wang, Yun-Kun; Sheng, Guo-Ping; Gui, Yong-Xin; Yu, Lei; Xie, Tong-Qing; Yu, Han-Qing
2012-10-01
Conventional MBR has been mostly based on floc sludge and the use of costly microfiltration membranes. Here, a novel aerobic granule (AG)-mesh filter MBR (MMBR) process was developed for cost-effective wastewater treatment. During 32-day continuous operation, a predominance of granules was maintained in the system, and good filtration performance was achieved at a low trans-membrane pressure (TMP) of below 0.025 m. The granules showed a lower fouling propensity than sludge flocs, attributed to the formation of a more porous biocake layer at the mesh surface. A low-flux and low-TMP filtration favored a stable system operation. In addition, the reactor had high pollutant removal efficiencies, with a 91.4% chemical oxygen demand removal, 95.7% NH(4)(+) removal, and a low effluent turbidity of 4.1 NTU at the stable stage. This AG-MMBR process offers a promising technology for low-cost and efficient treatment of wastewaters.
Multiphase Interface Tracking with Fast Semi-Lagrangian Contouring.
Li, Xiaosheng; He, Xiaowei; Liu, Xuehui; Zhang, Jian J; Liu, Baoquan; Wu, Enhua
2016-08-01
We propose a semi-Lagrangian method for multiphase interface tracking. In contrast to previous methods, our method maintains an explicit polygonal mesh, which is reconstructed from an unsigned distance function and an indicator function, to track the interfaces of an arbitrary number of phases. The surface mesh is reconstructed at each step using an efficient multiphase polygonization procedure with precomputed stencils while the distance and indicator function are updated with an accurate semi-Lagrangian path tracing from the meshes of the last step. Furthermore, we provide an adaptive data structure, the multiphase distance tree, to accelerate the updating of both the distance function and the indicator function. In addition, the adaptive structure also enables us to contour the distance tree accurately with simple bisection techniques. The major advantage of our method is that it can easily handle topological changes without ambiguities and preserve both the sharp features and the volume well. We evaluate its efficiency, accuracy and robustness in the results section with several examples.
NASA Technical Reports Server (NTRS)
Rausch, Russ D.; Yang, Henry T. Y.; Batina, John T.
1991-01-01
Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. The paper gives a detailed description of the enrichment and coarsening procedures and presents comparisons with alternative results and experimental data to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.
G. J. Jordan; M. J. Ducey; J. H. Gove
2004-01-01
We present the results of a timed field trial comparing the bias characteristics and relative sampling efficiency of line-intersect, fixed-area, and point relascope sampling for downed coarse woody material. Seven stands in a managed northern hardwood forest in New Hampshire were inventoried. Significant differences were found among estimates in some stands, indicating...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanyal, Tanmoy; Shell, M. Scott, E-mail: shell@engineering.ucsb.edu
Bottom-up multiscale techniques are frequently used to develop coarse-grained (CG) models for simulations at extended length and time scales but are often limited by a compromise between computational efficiency and accuracy. The conventional approach to CG nonbonded interactions uses pair potentials which, while computationally efficient, can neglect the inherently multibody contributions of the local environment of a site to its energy, due to degrees of freedom that were coarse-grained out. This effect often causes the CG potential to depend strongly on the overall system density, composition, or other properties, which limits its transferability to states other than the one at which it was parameterized. Here, we propose to incorporate multibody effects into CG potentials through additional nonbonded terms, beyond pair interactions, that depend in a mean-field manner on local densities of different atomic species. This approach is analogous to embedded atom and bond-order models that seek to capture multibody electronic effects in metallic systems. We show that the relative entropy coarse-graining framework offers a systematic route to parameterizing such local density potentials. We then characterize this approach in the development of implicit solvation strategies for interactions between model hydrophobes in an aqueous environment.
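The mean-field local-density idea can be sketched minimally: each site contributes an energy that is a function of how crowded its neighborhood is, in addition to any pair potential. The hard cutoff and unit-weight neighbor counting below are simplifications of the smoothed indicator functions such models actually use, and `f` is an arbitrary stand-in for the parameterized density function:

```python
import math

def local_density_energy(positions, cutoff, f):
    """Sum over sites of f(rho_i), where rho_i counts the neighbors of
    site i within `cutoff` -- a crude stand-in for a local density.
    This term supplements, not replaces, the usual pair potential."""
    energy = 0.0
    for i, pi in enumerate(positions):
        rho = 0.0
        for j, pj in enumerate(positions):
            if i != j and math.dist(pi, pj) < cutoff:
                rho += 1.0
        energy += f(rho)
    return energy

# Two close sites and one isolated site: the isolated site sees rho = 0.
e = local_density_energy([(0, 0, 0), (0.5, 0, 0), (5, 0, 0)],
                         cutoff=1.0, f=lambda rho: rho ** 2)
```

Because the energy is a nonlinear function of a density, the resulting forces are genuinely multibody even though the density itself is assembled from pairwise distances, which is exactly what lets such terms capture environment dependence that pair potentials miss.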
Creating wi-fi bluetooth mesh network for crisis management applications
NASA Astrophysics Data System (ADS)
Al-Tekreeti, Safa; Adams, Christopher; Al-Jawad, Naseer
2010-04-01
This paper proposes a wireless mesh network implementation consisting of both Wi-Fi ad-hoc networks as well as Bluetooth piconet/scatternet networks, organised in an energy and throughput efficient structure. This type of network can be easily constructed for crisis management applications, for example an earthquake disaster. The motivation of this research is to form a mesh network from the mass availability of Wi-Fi and Bluetooth enabled electronic devices such as mobile phones and PCs that are normally present in most regions where major crises occur. The target of this study is to achieve an effective solution that will enable Wi-Fi and/or Bluetooth nodes to seamlessly configure themselves to act as a bridge between their own network and that of the other network to achieve continuous routing for our proposed mesh networks.
Zhang, Yuwei; Cao, Zexing; Zhang, John Zenghui; Xia, Fei
2017-02-27
Construction of coarse-grained (CG) models for large biomolecules used in multiscale simulations demands a rigorous definition of their CG sites. Several coarse-graining methods, such as simulated annealing and steepest descent (SASD) based on essential dynamics coarse-graining (ED-CG) or the stepwise local iterative optimization (SLIO) based on fluctuation maximization coarse-graining (FM-CG), have been developed for this purpose. However, the practical application of methods such as SASD based on ED-CG is limited because they are too expensive. In this work, we extend the applicability of ED-CG by combining it with the SLIO algorithm. A comprehensive comparison of the optimized results and accuracy of various ED-CG-based algorithms shows that SLIO is the fastest as well as the most accurate among them. ED-CG combined with SLIO gives converged results as the number of CG sites increases, which demonstrates that it is another efficient method for coarse-graining large biomolecules. The construction of CG sites for the Ras protein using MD fluctuations demonstrates that CG sites derived from FM-CG accurately reflect the fluctuation properties of the secondary structures in Ras.
Brownian dynamics simulations of lipid bilayer membrane with hydrodynamic interactions in LAMMPS
NASA Astrophysics Data System (ADS)
Fu, Szu-Pei; Young, Yuan-Nan; Peng, Zhangli; Yuan, Hongyan
2016-11-01
Lipid bilayer membranes have been extensively studied by coarse-grained molecular dynamics simulations. Numerical efficiencies have been reported in the cases of aggressive coarse-graining, where several lipids are coarse-grained into a particle of size 4-6 nm so that there is only one particle in the thickness direction. Yuan et al. proposed a pair-potential between these one-particle-thick coarse-grained lipid particles to capture the mechanical properties of a lipid bilayer membrane (such as gel-fluid-gas phase transitions of lipids, diffusion, and bending rigidity). In this work we implement such an interaction potential in LAMMPS to simulate large-scale lipid systems such as vesicles and red blood cells (RBCs). We also consider the effect of the cytoskeleton on the lipid membrane dynamics as a model for red blood cell (RBC) dynamics, and incorporate coarse-grained water molecules to account for hydrodynamic interactions. The interaction between the coarse-grained water molecules (explicit solvent molecules) is modeled as a Lennard-Jones (L-J) potential. We focus on two sets of LAMMPS simulations: 1. Vesicle shape transitions with varying enclosed volume; 2. RBC shape transitions with different enclosed volume. This work is funded by NSF under Grant DMS-1222550.
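The Lennard-Jones potential used here for the coarse-grained solvent has the standard 12-6 form; a minimal sketch (default `epsilon` and `sigma` of 1 are reduced units, not values from the paper):

```python
def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones 12-6 pair potential: a steep r**-12 repulsive core
    plus an r**-6 attractive tail, with well depth epsilon and zero
    crossing at r = sigma."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The potential crosses zero at r = sigma and reaches its minimum of
# -epsilon at r = 2**(1/6) * sigma.
```

In a LAMMPS input this corresponds to `pair_style lj/cut` for the solvent particles, typically with a finite cutoff beyond which the tail is truncated.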
Improved methods of vibration analysis of pretwisted, airfoil blades
NASA Technical Reports Server (NTRS)
Subrahmanyam, K. B.; Kaza, K. R. V.
1984-01-01
Vibration analysis of pretwisted blades of asymmetric airfoil cross section is performed by using two mixed variational approaches. Numerical results obtained from these two methods are compared to those obtained from an improved finite difference method and also to those given by the ordinary finite difference method. The relative merits, convergence properties and accuracies of all four methods are studied and discussed. The effects of asymmetry and pretwist on natural frequencies and mode shapes are investigated. The improved finite difference method is shown to be far superior to the conventional finite difference method in several respects. Close lower bound solutions are provided by the improved finite difference method for untwisted blades with a relatively coarse mesh while the mixed methods have not indicated any specific bound.
Model comparisons of the reactive burn model SURF in three ASC codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitley, Von Howard; Stalsberg, Krista Lynn; Reichelt, Benjamin Lee
A study of the SURF reactive burn model was performed in FLAG, PAGOSA and XRAGE. In this study, three different shock-to-detonation transition experiments were modeled in each code. All three codes produced similar model results for all the experiments modeled and at all resolutions. Buildup-to-detonation time, particle velocities and resolution dependence of the models were notably similar between the codes. Given the current PBX 9502 equations of state and SURF calibrations, each code is equally capable of predicting the correct detonation time and distance when impacted by a 1D impactor at pressures ranging from 10-16 GPa, as long as the resolution of the mesh is not too coarse.
Comparison of Several Dissipation Algorithms for Central Difference Schemes
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Radespiel, R.; Turkel, E.
1997-01-01
Several algorithms for introducing artificial dissipation into a central difference approximation to the Euler and Navier-Stokes equations are considered. The focus of the paper is on the convective upwind and split pressure (CUSP) scheme, which is designed to support single interior point discrete shock waves. This scheme is analyzed and compared in detail with scalar and matrix dissipation (MATD) schemes. Resolution capability is determined by solving subsonic, transonic, and hypersonic flow problems. A finite-volume discretization and a multistage time-stepping scheme with multigrid are used to compute solutions to the flow equations. Numerical results are also compared with either theoretical solutions or experimental data. For transonic airfoil flows, the best accuracy on coarse meshes for aerodynamic coefficients is obtained with a simple MATD scheme.
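The CUSP and MATD schemes themselves are not reproduced here, but the general idea of blended scalar artificial dissipation for a central scheme can be sketched in one dimension. The sensor and the coefficients below are illustrative (JST-style second/fourth-difference blending), not the paper's formulation:

```python
import numpy as np

def dissipative_fluxes(u, k2=0.5, k4=1.0 / 32.0):
    """Interface dissipation for a 1D central scheme (JST-style sketch).

    Returns d[i], the dissipative flux at interface i+1/2, blending a
    2nd-difference term (switched on near discontinuities by a sensor
    built from second differences of u) with a background 4th-difference
    term. k2 and k4 are illustrative coefficients, not calibrated values.
    """
    n = len(u)
    up = np.pad(u, 1, mode="edge")  # clamp end values for the sensor
    # sensor: normalized second difference at each node
    num = np.abs(up[2:] - 2.0 * up[1:-1] + up[:-2])
    den = np.abs(up[2:]) + 2.0 * np.abs(up[1:-1]) + np.abs(up[:-2]) + 1e-12
    nu = num / den
    d = np.empty(n - 1)
    for i in range(n - 1):
        eps2 = k2 * max(nu[i], nu[i + 1])       # strong near shocks
        eps4 = max(0.0, k4 - eps2)              # background smoothing
        d2 = u[i + 1] - u[i]
        jm1, jp2 = max(i - 1, 0), min(i + 2, n - 1)
        d4 = u[jp2] - 3.0 * u[i + 1] + 3.0 * u[i] - u[jm1]
        d[i] = eps2 * d2 - eps4 * d4
    return d
```

On a smooth (linear) profile the sensor vanishes and the third-difference term is zero in the interior, so the scheme adds no dissipation there; across a step the second-difference term activates.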
NASA Astrophysics Data System (ADS)
Minakov, A.; Platonov, D.; Sentyabov, A.; Gavrilov, A.
2017-01-01
We performed numerical simulations of the flow in a laboratory model of a Francis hydroturbine at three regimes, using two eddy-viscosity RANS models (EVM: realizable k-ɛ and k-ω SST), a Reynolds stress RANS model (RSM: LRR), detached-eddy simulation (DES), and large-eddy simulation (LES). The calculation results were compared with experimental data. Unlike the linear EVMs, the RSM, DES, and LES reproduced well the mean velocity components and pressure pulsations in the diffuser draft tube. Despite relatively coarse meshes and insufficient resolution of the near-wall region, LES and DES also reproduced well the intrinsic flow unsteadiness, the dominant flow structures, and the associated pressure pulsations in the draft tube.
MC21 analysis of the MIT PWR benchmark: Hot zero power results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly Iii, D. J.; Aviles, B. N.; Herman, B. R.
2013-07-01
MC21 Monte Carlo results have been compared with hot zero power measurements from an operating pressurized water reactor (PWR), as specified in a new full core PWR performance benchmark from the MIT Computational Reactor Physics Group. Included in the comparisons are axially integrated full core detector measurements, axial detector profiles, control rod bank worths, and temperature coefficients. Power depressions from grid spacers are seen clearly in the MC21 results. Application of Coarse Mesh Finite Difference (CMFD) acceleration within MC21 has been accomplished, resulting in a significant reduction of inactive batches necessary to converge the fission source. CMFD acceleration has also been shown to work seamlessly with the Uniform Fission Site (UFS) variance reduction method.
2016-09-08
Accomplishments reported in this fragment include: application of the Smoothness-Increasing Accuracy-Conserving (SIAC) filter to nonuniform meshes; theoretical and numerical demonstration of the 2k+1 order accuracy of the SIAC filter; a more theoretical and numerical understanding of a computationally efficient scaling for the SIAC filter on nonuniform meshes; and the presentation by Li, "SIAC Filtering of DG Methods – Boundary and Nonuniform Mesh", at the International Conference on Spectral and Higher Order Methods (ICOSAHOM).
Promoting Wired Links in Wireless Mesh Networks: An Efficient Engineering Solution
Barekatain, Behrang; Raahemifar, Kaamran; Ariza Quintana, Alfonso; Triviño Cabrera, Alicia
2015-01-01
Wireless Mesh Networks (WMNs) cannot completely guarantee good performance of traffic sources such as video streaming. To improve the network performance, this study proposes an efficient engineering solution named Wireless-to-Ethernet-Mesh-Portal-Passageway (WEMPP) that allows effective use of wired communication in WMNs. WEMPP permits transmitting data through wired and stable paths even when the destination is in the same network as the source (Intra-traffic). Tested with four popular routing protocols (Optimized Link State Routing or OLSR as a proactive protocol, Dynamic MANET On-demand or DYMO as a reactive protocol, DYMO with spanning tree ability and HWMP), WEMPP considerably decreases the end-to-end delay, jitter, contentions and interferences on nodes, even when the network size or density varies. WEMPP is also cost-effective and increases the network throughput. Moreover, in contrast to solutions proposed by previous studies, WEMPP is easily implemented by modifying the firmware of the actual Ethernet hardware without altering the routing protocols and/or the functionality of the IP/MAC/Upper layers. In fact, there is no need for modifying the functionalities of other mesh components in order to work with WEMPPs. The results of this study show that WEMPP significantly increases the performance of all routing protocols, thus leading to better video quality on nodes. PMID:25793516
Spectral turning bands for efficient Gaussian random fields generation on GPUs and accelerators
NASA Astrophysics Data System (ADS)
Hunger, L.; Cosenza, B.; Kimeswenger, S.; Fahringer, T.
2015-11-01
A random field (RF) is a set of correlated random variables associated with different spatial locations. RF generation algorithms are of crucial importance for many scientific areas, such as astrophysics, geostatistics, computer graphics, and many others. Current approaches commonly make use of 3D fast Fourier transform (FFT), which does not scale well for RF bigger than the available memory; they are also limited to regular rectilinear meshes. We introduce random field generation with the turning band method (RAFT), an RF generation algorithm based on the turning band method that is optimized for massively parallel hardware such as GPUs and accelerators. Our algorithm replaces the 3D FFT with a lower-order, one-dimensional FFT followed by a projection step and is further optimized with loop unrolling and blocking. RAFT can easily generate RF on non-regular (non-uniform) meshes and efficiently produce fields with mesh sizes bigger than the available device memory by using a streaming, out-of-core approach. Our algorithm generates RF with the correct statistical behavior and is tested on a variety of modern hardware, such as NVIDIA Tesla, AMD FirePro and Intel Phi. RAFT is faster than the traditional methods on regular meshes and has been successfully applied to two real case scenarios: planetary nebulae and cosmological simulations.
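As a rough illustration of the turning band idea (not the RAFT implementation), a 3D field at arbitrary, possibly non-uniform points can be approximated by summing 1D random-phase cosine processes along randomly oriented lines, replacing a full 3D FFT with per-line 1D work. The frequency distribution below is an arbitrary placeholder, chosen only so the field has roughly unit variance:

```python
import numpy as np

def turning_bands(points, n_lines=200, corr_len=1.0, rng=None):
    """Toy spectral turning-bands generator (sketch, not RAFT).

    Approximates a zero-mean, roughly unit-variance 3D Gaussian random
    field at arbitrary points by summing 1D random-phase cosines along
    randomly oriented lines. The Gaussian frequency law is a placeholder
    covariance model, not the one used in the paper.
    """
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=float)
    field = np.zeros(len(points))
    for _ in range(n_lines):
        # random unit direction, uniform on the sphere
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)
        # project the 3D points onto the line
        t = points @ v
        # 1D process along the line: a single random-phase cosine
        w = rng.normal() / corr_len          # random frequency ~ 1/corr_len
        phi = rng.uniform(0.0, 2.0 * np.pi)  # random phase
        field += np.sqrt(2.0) * np.cos(w * t + phi)
    return field / np.sqrt(n_lines)
```

Each cosine contributes unit variance on average, so dividing by sqrt(n_lines) keeps the summed field near unit variance; by the central limit theorem the field becomes approximately Gaussian as the number of lines grows.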
Berrevoet, F; Tollens, T; Berwouts, L; Bertrand, C; Muysoms, F; De Gols, J; Meir, E; De Backer, A
2014-01-01
A variety of anti-adhesive composite mesh products have become available for use inside the peritoneal cavity. However, reimbursement of these meshes by the Belgian Governmental Health Agency (RIZIV/INAMI) can only be obtained after conducting a prospective study with at least one year of clinical follow-up. This Belgian multicentric cohort study evaluated the experience with the use of Proceed®-mesh in laparoscopic ventral hernia repair. During a 25-month period, 210 adult patients underwent a laparoscopic primary or incisional hernia repair using intra-abdominal placement of Proceed®-mesh. According to RIZIV/INAMI criteria, recurrence rate after 1 year was the primary objective, while postoperative morbidity, including seroma formation, wound and mesh infections, quality of life and recurrences after 2 years were evaluated as secondary endpoints (NCT00572962). In total 97 primary ventral and 103 incisional hernias were repaired, of which 28 (13%) were recurrent. There were no conversions to open repair, no enterotomies, no mesh infections and no mortality. One-year cumulative follow-up showed 10 recurrences (n = 192, 5.2%) and chronic discomfort or pain in 4.7% of the patients. Quality of life could not be analyzed due to an incomplete data set. More than 5 years after introduction of this mesh to the market, this prospective multicentric study documents a favorable experience with the Proceed mesh in laparoscopic ventral hernia repair. However, it remains to be discussed whether reimbursement of these meshes in Belgium should be limited by the current strict criteria and therefore can only be obtained after at least 3-4 years of clinical data gathering and necessary follow-up. Copyright © Acta Chirurgica Belgica.
Pascual, Gemma; Hernández-Gascón, Belén; Rodríguez, Marta; Sotomayor, Sandra; Peña, Estefania; Calvo, Begoña; Bellón, Juan M
2012-11-01
Although heavyweight (HW) or lightweight (LW) polypropylene (PP) meshes are widely used for hernia repair, other alternatives have recently appeared. They have the same large-pore structure yet are composed of polytetrafluoroethylene (PTFE). This study compares the long-term (3 and 6 months) behavior of meshes of different pore size (HW compared with LW) and composition (PP compared with PTFE). Partial defects were created in the lateral wall of the abdomen in New Zealand White rabbits and then repaired by the use of a HW or LW PP mesh or a new monofilament, large-pore PTFE mesh (Infinit). At 90 and 180 days after implantation, tissue incorporation, gene and protein expression of neocollagens (reverse transcription-polymerase chain reaction/immunofluorescence), macrophage response (immunohistochemistry), and biomechanical strength were determined. Shrinkage was measured at 90 days. All three meshes induced good host tissue ingrowth, yet the macrophage response was significantly greater in the PTFE implants (P < .05). Collagen 1/3 mRNA levels failed to vary at 90 days yet in the longer term, the LW meshes showed the reduced genetic expression of both collagens (P < .05) accompanied by increased neocollagen deposition, indicating more efficient mRNA translation. After 90-180 days of implant, tensile strengths and elastic modulus values were similar for all 3 implants (P > .05). Host collagen deposition is mesh pore size dependent whereas the macrophage response induced is composition dependent with a greater response shown by PTFE. In the long term, macroporous meshes show comparable biomechanical behavior regardless of their pore size or composition. Copyright © 2012 Mosby, Inc. All rights reserved.
Parallel Tetrahedral Mesh Adaptation with Dynamic Load Balancing
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Biswas, Rupak; Gabow, Harold N.
1999-01-01
The ability to dynamically adapt an unstructured grid is a powerful tool for efficiently solving computational problems with evolving physical features. In this paper, we report on our experience parallelizing an edge-based adaptation scheme, called 3D_TAG, using message passing. Results show excellent speedup when a realistic helicopter rotor mesh is randomly refined. However, performance deteriorates when the mesh is refined using a solution-based error indicator, since mesh adaptation for practical problems occurs in a localized region, creating a severe load imbalance. To address this problem, we have developed PLUM, a global dynamic load balancing framework for adaptive numerical computations. Even though PLUM primarily balances processor workloads for the solution phase, it reduces the load imbalance problem within mesh adaptation by repartitioning the mesh after targeting edges for refinement but before the actual subdivision. This dramatically improves the performance of parallel 3D_TAG since refinement occurs in a more load-balanced fashion. We also present optimal and heuristic algorithms that, when applied to the default mapping of a parallel repartitioner, significantly reduce the data redistribution overhead. Finally, portability is examined by comparing performance on three state-of-the-art parallel machines.
Transparent thin shield for radio frequency transmit coils.
Rivera, Debra S; Schulz, Jessica; Siegert, Thomas; Zuber, Verena; Turner, Robert
2015-02-01
To identify a shielding material compatible with optical head-motion tracking for prospective motion correction that minimizes radio frequency (RF) radiation losses at 7 T without sacrificing line-of-sight to an imaging target, we evaluated a polyamide mesh coated with silver. The thickness of the coating was approximated from the composition ratio provided by the material vendor and validated by an estimate derived from electrical conductivity and light transmission measurements. The performance of the shield is compared to a split-copper shield in the context of a four-channel transmit-only loop array. The mesh contains less than a skin-depth of silver coating (at 300 MHz) and attenuates light by 15%. Elements of the array vary less in the presence of the mesh shield than with the split-copper shield, indicating that the array behaves more symmetrically with the mesh shield. No degradation of transmit efficiency was observed for the mesh as compared to the split-copper shield. We present a shield compatible with future integration of camera-based motion-tracking systems. Based on transmit performance and eddy-current evaluations, the mesh shield is appropriate for use at 7 T.
Mammoth grazers on the ocean's minuteness: a review of selective feeding using mucous meshes
2018-01-01
Mucous-mesh grazers (pelagic tunicates and thecosome pteropods) are common in oceanic waters and efficiently capture, consume and repackage particles many orders of magnitude smaller than themselves. They feed using an adhesive mucous mesh to capture prey particles from ambient seawater. Historically, their grazing process has been characterized as non-selective, depending only on the size of the prey particle and the pore dimensions of the mesh. The purpose of this review is to reverse this assumption by reviewing recent evidence that shows mucous-mesh feeding can be selective. We focus on large planktonic microphages as a model of selective mucus feeding because of their important roles in the ocean food web: as bacterivores, prey for higher trophic levels, and exporters of carbon via mucous aggregates, faecal pellets and jelly-falls. We identify important functional variations in the filter mechanics and hydrodynamics of different taxa. We review evidence that shows this feeding strategy depends not only on the particle size and dimensions of the mesh pores, but also on particle shape and surface properties, filter mechanics, hydrodynamics and grazer behaviour. As many of these organisms remain critically understudied, we conclude by suggesting priorities for future research. PMID:29720410
NASA Astrophysics Data System (ADS)
Jones, Adam; Utyuzhnikov, Sergey
2017-08-01
Turbulent flow in a ribbed channel is studied using an efficient near-wall domain decomposition (NDD) method. The NDD approach is formulated by splitting the computational domain into an inner and outer region, with an interface boundary between the two. The computational mesh covers the outer region, and the flow in this region is solved using the open-source CFD code Code_Saturne with special boundary conditions on the interface boundary, called interface boundary conditions (IBCs). The IBCs are of Robin type and incorporate the effect of the inner region on the flow in the outer region. IBCs are formulated in terms of the distance from the interface boundary to the wall in the inner region. It is demonstrated that up to 90% of the region between the ribs in the ribbed passage can be removed from the computational mesh with an error on the friction factor within 2.5%. In addition, computations with NDD are faster than computations based on low Reynolds number (LRN) models by a factor of five. Different rib heights can be studied with the same mesh in the outer region without affecting the accuracy of the friction factor. This is tested with six different rib heights in an example of a design optimisation study. It is found that the friction factors computed with NDD are almost identical to the fully-resolved results. When used for inverse problems, NDD is considerably more efficient than LRN computations because only one computation needs to be performed and only one mesh needs to be generated.
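The Robin-type interface condition can be illustrated with a steady 1D diffusion problem: if the unresolved inner (near-wall) region is source-free, the exact interface condition at wall distance d is u - d u' = 0, so solving the outer region alone reproduces the full solution. The sketch below uses NumPy rather than Code_Saturne, and the discretization choices are illustrative:

```python
import numpy as np

def solve_outer_with_robin(d, L=1.0, uL=1.0, n=101):
    """Steady 1D diffusion u'' = 0 on the outer region [d, L] only, with
    a Robin interface condition u(d) - d * u'(d) = 0 standing in for the
    removed inner region [0, d], where u(0) = 0 at the wall.

    Sketch of the near-wall domain decomposition idea: the Robin
    coefficient is just the wall distance d, so different inner-region
    depths can be tried without regenerating the outer mesh.
    """
    y = np.linspace(d, L, n)
    h = y[1] - y[0]
    A = np.zeros((n, n))
    b = np.zeros(n)
    # interior: central difference for u'' = 0
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    # Robin condition at y = d, one-sided difference for u'(d)
    A[0, 0] = 1.0 + d / h
    A[0, 1] = -d / h
    # Dirichlet condition at y = L
    A[-1, -1] = 1.0
    b[-1] = uL
    return y, np.linalg.solve(A, b)
```

For this linear problem the truncated solve recovers the fully resolved solution u(y) = y/L exactly, which is the 1D analogue of removing the inter-rib region with little loss of accuracy.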
Li, Na; Hu, Yi; Lu, Yong-Ze; Zeng, Raymond J; Sheng, Guo-Ping
2016-07-01
In recent years, anaerobic membrane bioreactor (AnMBR) technology has been considered a very attractive alternative for wastewater treatment due to striking advantages such as upgraded effluent quality. However, fouling control is still a problem for the application of AnMBRs. This study investigated the performance of an AnMBR using a mesh filter as support material to treat low-strength wastewater via in-situ biogas sparging. The mesh AnMBR exhibited high and stable chemical oxygen demand (COD) removal efficiencies of 95 ± 5% and an average methane yield of 0.24 L CH4/g CODremoved. Variation of the transmembrane pressure (TMP) during operation indicated that mesh fouling was mitigated by in-situ biogas sparging, and the fouling rate was comparable to that of aerobic membrane bioreactors with mesh filters reported in previous research. The fouling layer formed on the mesh exhibited a non-uniform structure, with porosity increasing from the bottom layer to the top layer. Biogas sparging did not change the composition of the cake layer but made it thinner, which might be beneficial for reducing the membrane fouling rate. It was also found that ultrasonic cleaning of the fouled mesh was able to remove most foulants on the surface or in the pores. This study demonstrated that in-situ biogas sparging enhances the performance of AnMBRs with mesh filters in low-strength wastewater treatment. AnMBRs with mesh filters can thus be used as a promising and sustainable technology for wastewater treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tong; Gu, YuanTong, E-mail: yuantong.gu@qut.edu.au
As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the length scale of soft matters in the modeling of mechanical behaviors. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics would underestimate the thermodynamic behaviors of soft matters (e.g. microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of thermodynamic properties of microfilaments at the extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. This new stochastic thermostat algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matters.
NASA Astrophysics Data System (ADS)
Yu, Peng; Lian, Zhongxu; Xu, Jinkai; Yu, Zhanjiang; Ren, Wanfei; Yu, Huadong
2018-04-01
In this paper, micron-sized sand-like granular structures were formed on the substrate of a stainless steel mesh (SSM) by laser treatment. The rough surface with these granular structures was superhydrophilic in air and superoleophobic under water. With this special wettability, the laser-treated SSM could achieve separation of oil/water mixtures, showing good durability and high separation efficiency, which is very useful in practical large-scale oil/water separation facilities for reducing the environmental impact of leaked oil. In addition, the laser-treated SSM showed a very high separation rate. The development of the laser-treated SSM is a simple, environmentally friendly, economical and highly efficient method, which provides a new approach to the production of high-efficiency facilities for oil/water separation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, David J., E-mail: dhardy@illinois.edu; Schulten, Klaus; Wolff, Matthew A.
2016-03-21
The multilevel summation method for calculating electrostatic interactions in molecular dynamics simulations constructs an approximation to a pairwise interaction kernel and its gradient, which can be evaluated at a cost that scales linearly with the number of atoms. The method smoothly splits the kernel into a sum of partial kernels of increasing range and decreasing variability, with the longer-range parts interpolated from grids of increasing coarseness. Multilevel summation is especially appropriate in the context of dynamics and minimization, because it can produce continuous gradients. This article explores the use of B-splines to increase the accuracy of the multilevel summation method (for nonperiodic boundaries) without incurring additional computation other than a preprocessing step (whose cost also scales linearly). To obtain accurate results efficiently involves technical difficulties, which are overcome by a novel preprocessing algorithm. Numerical experiments demonstrate that the resulting method offers substantial improvements in accuracy and that its performance is competitive with an implementation of the fast multipole method in general and markedly better for Hamiltonian formulations of molecular dynamics. The improvement is great enough to establish multilevel summation as a serious contender for calculating pairwise interactions in molecular dynamics simulations. In particular, the method appears to be uniquely capable for molecular dynamics in two situations, nonperiodic boundary conditions and massively parallel computation, where the fast Fourier transform employed in the particle-mesh Ewald method falls short.
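The kernel splitting at the heart of multilevel summation can be sketched for the 1/r kernel: below a cutoff a, the long-range part replaces 1/r by a smooth even polynomial, leaving a short-range remainder that vanishes beyond the cutoff. The polynomial below is one common C² Taylor-based smoothing; the paper's B-spline refinement is not shown:

```python
import numpy as np

def gamma(rho):
    """Even-polynomial smoothing of 1/rho (C^2 Taylor expansion about
    rho = 1), a common choice in multilevel summation."""
    return 15.0 / 8.0 - 5.0 / 4.0 * rho**2 + 3.0 / 8.0 * rho**4

def split_kernel(r, a):
    """Split 1/r into a short-range part (zero for r >= a) and a smooth,
    bounded long-range part suitable for interpolation from coarse grids.

    Returns (short, long) with short + long == 1/r everywhere (r > 0).
    """
    r = np.asarray(r, dtype=float)
    long_part = np.where(r < a, gamma(r / a) / a, 1.0 / r)
    short_part = np.where(r < a, 1.0 / r - gamma(r / a) / a, 0.0)
    return short_part, long_part
```

Since gamma(1) = 1, gamma'(1) = -1 and gamma''(1) = 2 match the value and first two derivatives of 1/rho at rho = 1, the long-range part joins 1/r smoothly at r = a while remaining finite at r = 0, which is what makes it interpolable from a coarse grid.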
Mora-Gómez, Juanita; Elosegi, Arturo; Duarte, Sofia; Cássio, Fernanda; Pascoal, Cláudia; Romaní, Anna M
2016-08-01
Microorganisms are key drivers of leaf litter decomposition; however, the mechanisms underlying the dynamics of different microbial groups are poorly understood. We investigated the effects of seasonal variation and invertebrates on fungal and bacterial dynamics, and on leaf litter decomposition. We followed the decomposition of Populus nigra litter in a Mediterranean stream through an annual cycle, using fine and coarse mesh bags. Irrespective of the season, microbial decomposition followed two stages. Initially, bacterial contribution to total microbial biomass was higher compared to later stages, and it was related to disaccharide and lignin degradation; in a later stage, bacteria were less important and were associated with hemicellulose and cellulose degradation, while fungi were related to lignin decomposition. The relevance of microbial groups in decomposition differed among seasons: fungi were more important in spring, whereas in summer, water quality changes seemed to favour bacteria and slowed down lignin and hemicellulose degradation. Invertebrates influenced litter-associated microbial assemblages (especially bacteria), stimulated enzyme efficiencies and reduced fungal biomass. We conclude that bacterial and fungal assemblages play distinctive roles in microbial decomposition and differ in their sensitivity to environmental changes, ultimately affecting litter decomposition, which might be particularly relevant in highly seasonal ecosystems, such as intermittent streams. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Neurosurgery simulation using non-linear finite element modeling and haptic interaction
NASA Astrophysics Data System (ADS)
Lee, Huai-Ping; Audette, Michel; Joldes, Grand R.; Enquobahrie, Andinet
2012-02-01
Real-time surgical simulation is becoming an important component of surgical training. To meet the real-time requirement, however, the accuracy of the biomechanical modeling of soft tissue is often compromised due to computing resource constraints. Furthermore, haptic integration presents an additional challenge with its requirement for a high update rate. As a result, most real-time surgical simulation systems employ a linear elasticity model, simplified numerical methods such as the boundary element method or spring-particle systems, and coarse volumetric meshes. However, these systems are not clinically realistic. We present here ongoing work aimed at developing an efficient and physically realistic neurosurgery simulator using a non-linear finite element method (FEM) with haptic interaction. Real-time finite element analysis is achieved by utilizing the total Lagrangian explicit dynamic (TLED) formulation and GPU acceleration of per-node and per-element operations. We employ a virtual coupling method for separating deformable body simulation and collision detection from haptic rendering, which needs to be updated at a much higher rate than the visual simulation. The system provides accurate biomechanical modeling of soft tissue while retaining real-time performance with haptic interaction. However, our experiments showed that the stability of the simulator depends heavily on the material properties of the tissue and the speed of colliding objects. Hence, additional efforts, including dynamic relaxation, are required to improve the stability of the system.
Satellite-Scale Snow Water Equivalent Assimilation into a High-Resolution Land Surface Model
NASA Technical Reports Server (NTRS)
De Lannoy, Gabrielle J.M.; Reichle, Rolf H.; Houser, Paul R.; Arsenault, Kristi R.; Verhoest, Niko E.C.; Pauwels, Valentijn R.N.
2009-01-01
An ensemble Kalman filter (EnKF) is used in a suite of synthetic experiments to assimilate coarse-scale (25 km) snow water equivalent (SWE) observations (typical of satellite retrievals) into fine-scale (1 km) model simulations. Coarse-scale observations are assimilated directly using an observation operator for mapping between the coarse and fine scales or, alternatively, after disaggregation (re-gridding) to the fine-scale model resolution prior to data assimilation. In either case, observations are assimilated either simultaneously or independently for each location. Results indicate that assimilating disaggregated fine-scale observations independently (method 1D-F1) is less efficient than assimilating a collection of neighboring disaggregated observations (method 3D-Fm). Direct assimilation of coarse-scale observations is superior to a priori disaggregation. Independent assimilation of individual coarse-scale observations (method 3D-C1) can bring the overall mean analyzed field close to the truth, but does not necessarily improve estimates of the fine-scale structure. There is a clear benefit to simultaneously assimilating multiple coarse-scale observations (method 3D-Cm) even when the entire domain is observed, indicating that underlying spatial error correlations can be exploited to improve SWE estimates. Method 3D-Cm avoids artificial transitions at the coarse observation pixel boundaries and can reduce the RMSE by 60% when compared to the open loop in this study.
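A minimal sketch of the coarse-to-fine EnKF update described above, with an averaging observation operator mapping the fine-scale state to one coarse observation; the grid sizes, error levels and synthetic truth are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fine-scale domain: 25 cells; one coarse observation = mean over all cells.
n_fine, n_ens = 25, 50
H = np.full((1, n_fine), 1.0 / n_fine)      # observation operator: fine -> coarse

truth = np.linspace(10.0, 20.0, n_fine)     # synthetic "true" SWE field (mm)
obs_err = 0.5                               # observation error std dev (mm)
y = H @ truth + rng.normal(0.0, obs_err, 1)

# Biased prior ensemble (columns are members).
X = truth[:, None] + 5.0 + rng.normal(0.0, 3.0, (n_fine, n_ens))
rmse_prior = np.sqrt(np.mean((X.mean(axis=1) - truth) ** 2))

# EnKF update: K = P H^T (H P H^T + R)^(-1), with P the ensemble covariance.
A = X - X.mean(axis=1, keepdims=True)
HA = H @ A
PHt = A @ HA.T / (n_ens - 1)                # P H^T, shape (n_fine, 1)
HPHt = HA @ HA.T / (n_ens - 1)              # H P H^T, shape (1, 1)
K = PHt / (HPHt + obs_err**2)

# Perturbed-observation update for each ensemble member.
for j in range(n_ens):
    innov = y + rng.normal(0.0, obs_err, 1) - H @ X[:, j]
    X[:, j] += (K * innov).ravel()

rmse_post = np.sqrt(np.mean((X.mean(axis=1) - truth) ** 2))
```

Because all fine cells correlate with the coarse observation through the ensemble covariance, one coarse measurement shifts the whole analyzed field toward the truth, which is the mechanism the 3D methods above exploit.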
Parallel implementation of an adaptive scheme for 3D unstructured grids on the SP2
NASA Technical Reports Server (NTRS)
Strawn, Roger C.; Oliker, Leonid; Biswas, Rupak
1996-01-01
Dynamic mesh adaption on unstructured grids is a powerful tool for computing unsteady flows that require local grid modifications to efficiently resolve solution features. For this work, we consider an edge-based adaption scheme that has shown good single-processor performance on the C90. We report on our experience parallelizing this code for the SP2. Results show a 47.0X speedup on 64 processors when 10 percent of the mesh is randomly refined. Performance deteriorates to 7.7X when the same number of edges are refined in a highly-localized region. This is because almost all the mesh adaption is confined to a single processor. However, this problem can be remedied by repartitioning the mesh immediately after targeting edges for refinement but before the actual adaption takes place. With this change, the speedup improves dramatically to 43.6X.
Wang, Tianyang; Chu, Fulei; Han, Qinkai
2017-03-01
Identifying the differences between the spectra or envelope spectra of a faulty signal and a healthy baseline signal is an efficient planetary gearbox local fault detection strategy. However, causes other than local faults can also generate the characteristic frequency of a ring gear fault; this may further affect the detection of a local fault. To address this issue, a new filtering algorithm based on the meshing resonance phenomenon is proposed. In detail, the raw signal is first decomposed into different frequency bands and levels. Then, a new meshing index and an MRgram are constructed to determine which bands belong to the meshing resonance frequency band. Furthermore, an optimal filter band is selected from this MRgram. Finally, the ring gear fault can be detected according to the envelope spectrum of the band-pass filtering result. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Research on Finite Element Model Generating Method of General Gear Based on Parametric Modelling
NASA Astrophysics Data System (ADS)
Lei, Yulong; Yan, Bo; Fu, Yao; Chen, Wei; Hou, Liguo
2017-06-01
To address the low efficiency and poor mesh quality of gear meshing in current mainstream finite element software, a three-dimensional model of a universal gear was established and the rules of element and node arrangement were explored. In this paper, a parametric finite element model generation method for universal gears is proposed. A Visual Basic program is used to perform the finite element meshing, assign material properties, and set the boundary/load conditions and other pre-processing work. The dynamic meshing analysis of the gears is carried out with the proposed method and compared with calculated values to verify its correctness. The method greatly reduces the workload of gear finite element pre-processing, improves the quality of the gear mesh, and provides a new idea for FEM pre-processing.
Research on regional numerical weather prediction
NASA Technical Reports Server (NTRS)
Kreitzberg, C. W.
1976-01-01
Extension of the predictive power of dynamic weather forecasting to scales below the conventional synoptic or cyclonic scales in the near future is assessed. Lower costs per computation, more powerful computers, and a 100 km mesh over the North American area (with coarser mesh extending beyond it) are noted at present. Doubling the resolution even locally (to 50 km mesh) would entail a 16-fold increase in costs (including vertical resolution and halving the time interval), and constraints on domain size and length of forecast. Boundary conditions would be provided by the surrounding 100 km mesh, and time-varying lateral boundary conditions can be considered to handle moving phenomena. More physical processes to treat, more efficient numerical techniques, and faster computers (improved software and hardware) backing up satellite and radar data could produce further improvements in forecasting in the 1980s. Boundary layer modeling, initialization techniques, and quantitative precipitation forecasting are singled out among key tasks.
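The 16-fold figure follows from a simple scaling count: halving the 100 km spacing doubles the number of points in each horizontal direction, and the abstract also counts doubled vertical resolution and a halved time interval. A one-line check:

```python
# Cost of halving the mesh spacing from 100 km to 50 km, as counted in the
# abstract: 2x points in each of two horizontal directions, 2x vertical
# levels, and 2x time steps (halved time interval).
refinement = 2
cost_factor = refinement**2 * refinement * refinement
print(cost_factor)  # 16
```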
Validation of 3D RANS-SA Calculations on Strand/Cartesian Meshes
2014-01-07
a parallel environment. This allows for significant gains in efficiency and scalability of domain connectivity, effectively eliminating inter... The equation of state, p = ρRT, is used to close the equations.
Quantum decimation in Hilbert space: Coarse graining without structure
NASA Astrophysics Data System (ADS)
Singh, Ashmeet; Carroll, Sean M.
2018-03-01
We present a technique to coarse grain quantum states in a finite-dimensional Hilbert space. Our method is distinguished from other approaches by not relying on structures such as a preferred factorization of Hilbert space or a preferred set of operators (local or otherwise) in an associated algebra. Rather, we use the data corresponding to a given set of states, either specified independently or constructed from a single state evolving in time. Our technique is based on principal component analysis (PCA), and the resulting coarse-grained quantum states live in a lower-dimensional Hilbert space whose basis is defined using the underlying (isometric embedding) transformation of the set of fine-grained states we wish to coarse grain. Physically, the transformation can be interpreted to be an "entanglement coarse-graining" scheme that retains most of the global, useful entanglement structure of each state, while needing fewer degrees of freedom for its reconstruction. This scheme could be useful for efficiently describing collections of states whose number is much smaller than the dimension of Hilbert space, or a single state evolving over time.
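The PCA step at the heart of the scheme (finding a lower-dimensional space that approximately isometrically embeds a given set of states) can be sketched with real vectors standing in for quantum states; the dimensions, noise level and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "fine-grained" states: n = 8 unit vectors in a d = 64 dimensional
# space that, up to small noise, live in a k = 3 dimensional subspace.
# (Real vectors stand in for complex quantum states in this sketch.)
d, n, k = 64, 8, 3
basis = np.linalg.qr(rng.normal(size=(d, k)))[0]   # orthonormal k-frame
states = basis @ rng.normal(size=(k, n))
states += 1e-3 * rng.normal(size=(d, n))           # small out-of-subspace noise
states /= np.linalg.norm(states, axis=0)

# PCA via SVD of the state matrix: the leading left singular vectors span
# the coarse-grained space; V (k x d) is the isometric embedding map.
U, s, _ = np.linalg.svd(states, full_matrices=False)
V = U[:, :k].T

coarse = V @ states              # coarse-grained states (k x n)
reconstructed = V.T @ coarse     # lift back to the fine-grained space

err = np.linalg.norm(states - reconstructed) / np.linalg.norm(states)
```

The map V has orthonormal rows, so it acts isometrically on the retained subspace, and the reconstruction error measures how much of the fine-grained data the coarse description discards.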
NASA Astrophysics Data System (ADS)
Jiang, Bin; Zhang, Hongjie; Sun, Yongli; Zhang, Luhong; Xu, Lidong; Hao, Li; Yang, Huawei
2017-06-01
A superhydrophobic and superoleophilic stainless steel (SS) mesh for oil/water separation has been developed using a novel, facile and inexpensive covalent layer-by-layer grafting (LBLG) method. A hierarchical micro/nanostructured surface was formed by grafting (3-aminopropyl)triethoxysilane (SCA), polyethylenimine (PEI) and trimesoyl chloride (TMC) onto the mesh in sequence, with SiO2 nanoparticles subtly and firmly anchored in the multilayers. The superhydrophobic characteristic was realized by self-assembly grafting of hydrophobic groups onto the surface. The as-prepared mesh exhibits excellent superhydrophobicity with a water contact angle of 159°. Moreover, with a low sliding angle of 4°, it shows the "lotus effect" for self-cleaning. In application evaluation, the as-prepared mesh can be used for large-scale separation of oil/water mixtures with a relatively high separation efficiency even after 30 reuse cycles (99.88% for an n-octane/water mixture) and a high intrusion pressure (3.58 kPa). More importantly, the mesh exhibited excellent stability under vibration, long-term storage and saline corrosion conditions, and the compatible pH range was determined to be 1-13. In summary, this work provides a new method of modifying SS mesh via covalent LBLG and makes it possible to introduce various functional groups onto the surface.
NASA Astrophysics Data System (ADS)
Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan
2015-10-01
Both the propagation simulation method and the choice of mesh grid are very important for obtaining correct propagation results in wave-optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer constrained by the propagation method but can be chosen freely. However, the choice of mesh grid on the target board directly influences the validity of the simulation results. An adaptive mesh-choosing method based on wave characteristics is therefore proposed for the introduced propagation method, allowing appropriate mesh grids on the target board to be calculated so as to obtain satisfactory results. For a complex initial wave field, or for propagation through inhomogeneous media, the mesh grid can likewise be calculated and set rationally according to this method. Comparison with theoretical results shows that simulations using the proposed method coincide with theory, and comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method adapts to a wider range of Fresnel-number conditions. That is to say, the method can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity, providing better support for wave-propagation applications such as atmospheric optics and laser propagation.
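The traditional angular spectrum method that the proposed alterable-grid scheme builds on can be sketched as follows; the beam and sampling parameters are illustrative assumptions, and the alterable mesh grid itself is not implemented here.

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate a sampled complex field u0 (n x n, spacing dx) over a
    distance z with the classic angular spectrum (FFT) method."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # H = exp(i 2 pi z / lambda * sqrt(1 - (lambda fx)^2 - (lambda fy)^2));
    # evanescent components (negative argument) are set to zero.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Gaussian beam: all of its angular content propagates, so the transfer
# function is a pure phase and the total power must be conserved.
n, dx, wl = 256, 5e-6, 633e-9
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
u0 = np.exp(-(X**2 + Y**2) / (2 * (20 * dx) ** 2)).astype(complex)
u1 = angular_spectrum(u0, wl, dx, z=0.01)
power_in = float(np.sum(np.abs(u0) ** 2))
power_out = float(np.sum(np.abs(u1) ** 2))
```

Note that the output sample spacing here is fixed at the input spacing dx, which is exactly the limitation the paper's alterable-grid extension removes.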
NASA Technical Reports Server (NTRS)
Hoerz, Friedrich; Cintala, Mark J.; Bernhard, Ronald P.; Cardenas, Frank; Davidson, William; Haynes, Gerald; See, Thomas H.; Winkler, Jerry; Gray, Barry
1993-01-01
The utility of multiple-mesh targets as potential lightweight shields to protect spacecraft in low-Earth orbit against collisional damage is explored. Earlier studies revealed that single meshes comminute hypervelocity impactors with efficiencies comparable to contiguous targets. Multiple interaction of projectile fragments with any number of meshes should lead to increased comminution, deceleration, and dispersion of the projectile, such that all debris exiting the mesh stack possesses low specific energies (ergs/sq cm) that would readily be tolerated by many flight systems. The study conceptually explores the sensitivity to major variables such as impact velocity, the specific areal mass (g/sq cm) of the total mesh stack (SM), and the separation distance (S) between individual meshes. Most experiments employed five or ten meshes with total SM typically less than 0.5 the specific mass of the impactor, and silicate glass impactors rather than metal projectiles. While projectile comminution increases with increasing impact velocity due to progressively higher shock stresses, encounters with multiple meshes at low velocity (1-2 km/s) already lead to significant disruption of the glass impactors; the resulting fragments are additionally decelerated and dispersed by subsequent meshes, leading, unlike with most contiguous single-plate bumpers, to respectable performance at low velocity. Total specific bumper mass must be the subject of careful trade-off studies; relatively massive bumpers will dislodge too much debris from the bumper itself, while exceptionally lightweight designs will not cause sufficient comminution, deceleration, or dispersion of the impactor. Separation distance was found to be a crucial design parameter, as it controls the dispersion of the fragment cloud. Substantial mass savings could result if maximum separation distances were employed.
The total mass of debris dislodged by multiple-mesh stacks is modestly smaller than that of single, contiguous-membrane shields. The cumulative surface area of all penetration holes in multiple mesh stacks is an order of magnitude smaller than that in analog multiple-foil shields, suggesting good long-term performance of the mesh designs. Due to different experimental conditions, direct and quantitative comparison with other lightweight shields is not possible at present.
Pressure Mapping and Efficiency Analysis of an EPPLER 857 Hydrokinetic Turbine
NASA Astrophysics Data System (ADS)
Clark, Tristan
A conceptual energy ship is presented to provide renewable energy. The ship, driven by the wind, drags a hydrokinetic turbine through the water. The power generated is used to run electrolysis on board, and the resulting hydrogen is taken back to shore to be used as an energy source. The basin efficiency (power/(thrust × velocity)) of the hydrokinetic turbine (HKT) plays a vital role in this process. In order to extract the maximum allowable power from the flow, the blades need to be optimized. The structural analysis of the blade is important, as the blade will undergo high pressure loads from the water. A procedure for the analysis of a preliminary hydrokinetic turbine blade design is developed. The blade was designed by a non-optimized Blade Element Momentum Theory (BEMT) code. Six simulations were run, with varying mesh resolution, turbulence models, and flow region size. The procedure provides a detailed explanation of the entire process, from geometry and mesh generation to post-processing analysis tools. The efficiency results from the simulations are used to study the effects of mesh resolution, flow region size, and turbulence model. The results are compared to the BEMT model design targets. Static pressure maps are created that can be used for structural analysis of the blades.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yuzhou, E-mail: yuzhousun@126.com; Chen, Gensheng; Li, Dongxia
2016-06-08
This paper studies the application of mesh-free methods to the numerical simulation of higher-order continuum structures. A high-order bending beam considers the effect of the third-order derivative of deflections, and can be viewed as a one-dimensional higher-order continuum structure. The moving least-squares method is used to construct a shape function with the high-order continuum property; the curvature and the third-order derivative of deflections are directly interpolated from nodal variables using the second- and third-order derivatives of the shape function, and a mesh-free computational scheme is established for beams. The couple stress theory is introduced to describe the special constitutive response of the layered rock mass, in which the bending effect of thin layers is considered. The strain and the curvature are directly interpolated from the nodal variables, and the mesh-free method is established for the layered rock mass. Good computational efficiency is achieved with the developed mesh-free method, and some key issues are discussed.
NASA Astrophysics Data System (ADS)
Wang, Qibin; Zhao, Bo; Fu, Yang; Kong, Xianguang; Ma, Hui
2018-06-01
An improved time-varying mesh stiffness (TVMS) model of a helical gear pair is proposed, in which the total mesh stiffness contains not only the common transverse tooth bending stiffness, transverse tooth shear stiffness, transverse tooth radial compressive stiffness, transverse gear foundation stiffness and Hertzian contact stiffness, but also the axial tooth bending stiffness, axial tooth torsional stiffness and axial gear foundation stiffness proposed in this paper. In addition, a rapid TVMS calculation method is proposed. Considering each stiffness component, the TVMS can be calculated by the integration along the tooth width direction. Then, three cases are applied to validate the developed model. The results demonstrate that the proposed analytical method is accurate, effective and efficient for helical gear pairs and the axial mesh stiffness should be taken into consideration in the TVMS of a helical gear pair. Finally, influences of the helix angle on TVMS are studied. The results show that the improved TVMS model is effective for any helix angle and the traditional TVMS model is only effective under a small helix angle.
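The idea of obtaining the total mesh stiffness by integration along the tooth width can be sketched as below; the component stiffness values, their series combination and their variation along the width are made-up illustrations, not the paper's analytical expressions.

```python
import numpy as np

def series(*k):
    """Stiffness components acting in series: compliances add."""
    return 1.0 / sum(1.0 / ki for ki in k)

def total_mesh_stiffness(k_density, width, n_slices=200):
    """Total stiffness of a tooth pair as the integral of the per-unit-width
    stiffness density k_density(z) over the tooth width: thin slices carry
    the load side by side, so their stiffnesses add."""
    z = (np.arange(n_slices) + 0.5) * width / n_slices   # slice midpoints
    return float(np.sum(k_density(z)) * width / n_slices)

# Made-up component densities (N/m per metre of width) for illustration:
width = 0.02                                                  # tooth width (m)
kb = lambda z: 4e8 * (1.0 + 0.2 * np.cos(np.pi * z / width))  # bending
ks, ka, kh = 6e8, 9e8, 1.2e9                                  # shear, axial, Hertz

K = total_mesh_stiffness(lambda z: series(kb(z), ks, ka, kh), width)
```

Each slice combines its component stiffnesses in series (they share the same load path), while the slices themselves act in parallel along the width, which is what the integral expresses.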
Application of closed-form solutions to a mesh point field in silicon solar cells
NASA Technical Reports Server (NTRS)
Lamorte, M. F.
1985-01-01
A computer simulation method is discussed that provides equivalent simulation accuracy but exhibits significantly lower CPU running time per bias point compared to other techniques. This new method is applied to a mesh point field, as is customary in numerical integration (NI) techniques. The assumption of a linear approximation for the dependent variable, which is typically used in the finite difference and finite element NI methods, is not required. Instead, the set of device transport equations is applied to, and closed-form solutions are obtained for, each mesh point. The mesh point field is generated so that the coefficients in the set of transport equations exhibit small changes between adjacent mesh points. Application of this method to high-efficiency silicon solar cells is described, along with the treatment of Auger recombination, ambipolar considerations, built-in and induced electric fields, bandgap narrowing, carrier confinement, and carrier diffusivities. Bandgap narrowing has been investigated using Fermi-Dirac statistics, and these results show that bandgap narrowing is more pronounced and temperature-dependent, in contrast to results based on Boltzmann statistics.
Towards a large-scale scalable adaptive heart model using shallow tree meshes
NASA Astrophysics Data System (ADS)
Krause, Dorian; Dickopf, Thomas; Potse, Mark; Krause, Rolf
2015-10-01
Electrophysiological heart models are sophisticated computational tools that place high demands on the computing hardware due to the high spatial resolution required to capture the steep depolarization front. To address this challenge, we present a novel adaptive scheme for resolving the depolarization front accurately using adaptivity in space. Our adaptive scheme is based on locally structured meshes. These tensor meshes in space are organized in a parallel forest of trees, which allows us to resolve complicated geometries and to realize high variations in the local mesh sizes with a minimal memory footprint in the adaptive scheme. We discuss both a non-conforming mortar element approximation and a conforming finite element space and present an efficient technique for the assembly of the respective stiffness matrices using matrix representations of the inclusion operators into the product space on the so-called shallow tree meshes. We analyzed the parallel performance and scalability for a two-dimensional ventricle slice as well as for a full large-scale heart model. Our results demonstrate that the method has good performance and high accuracy.
NASA Astrophysics Data System (ADS)
de Zelicourt, Diane; Ge, Liang; Sotiropoulos, Fotis; Yoganathan, Ajit
2008-11-01
Image-guided computational fluid dynamics has recently gained attention as a tool for predicting the outcome of different surgical scenarios. Cartesian Immersed-Boundary methods constitute an attractive option to tackle the complexity of real-life anatomies. However, when such methods are applied to the branching, multi-vessel configurations typically encountered in cardiovascular anatomies the majority of the grid nodes of the background Cartesian mesh end up lying outside the computational domain, increasing the memory and computational overhead without enhancing the numerical resolution in the region of interest. To remedy this situation, the method presented here superimposes local mesh refinement onto an unstructured Cartesian grid formulation. A baseline unstructured Cartesian mesh is generated by eliminating all nodes that reside in the exterior of the flow domain from the grid structure, and is locally refined in the vicinity of the immersed-boundary. The potential of the method is demonstrated by carrying out systematic mesh refinement studies for internal flow problems ranging in complexity from a 90 deg pipe bend to an actual, patient-specific anatomy reconstructed from magnetic resonance.
Collisionless stellar hydrodynamics as an efficient alternative to N-body methods
NASA Astrophysics Data System (ADS)
Mitchell, Nigel L.; Vorobyov, Eduard I.; Hensler, Gerhard
2013-01-01
The dominant constituents of the Universe's matter are believed to be collisionless in nature and thus their modelling in any self-consistent simulation is extremely important. For simulations that deal only with dark matter or stellar systems, the conventional N-body technique is fast, memory efficient and relatively simple to implement. However when extending simulations to include the effects of gas physics, mesh codes are at a distinct disadvantage compared to Smooth Particle Hydrodynamics (SPH) codes. Whereas implementing the N-body approach into SPH codes is fairly trivial, the particle-mesh technique used in mesh codes to couple collisionless stars and dark matter to the gas on the mesh has a series of significant scientific and technical limitations. These include spurious entropy generation resulting from discreteness effects, poor load balancing and increased communication overhead which spoil the excellent scaling in massively parallel grid codes. In this paper we propose the use of the collisionless Boltzmann moment equations as a means to model the collisionless material as a fluid on the mesh, implementing it into the massively parallel FLASH Adaptive Mesh Refinement (AMR) code. This approach which we term `collisionless stellar hydrodynamics' enables us to do away with the particle-mesh approach and since the parallelization scheme is identical to that used for the hydrodynamics, it preserves the excellent scaling of the FLASH code already demonstrated on peta-flop machines. We find that the classic hydrodynamic equations and the Boltzmann moment equations can be reconciled under specific conditions, allowing us to generate analytic solutions for collisionless systems using conventional test problems. We confirm the validity of our approach using a suite of demanding test problems, including the use of a modified Sod shock test. 
By deriving the relevant eigenvalues and eigenvectors of the Boltzmann moment equations, we are able to use high order accurate characteristic tracing methods with Riemann solvers to generate numerical solutions which show excellent agreement with our analytic solutions. We conclude by demonstrating the ability of our code to model complex phenomena by simulating the evolution of a two-armed spiral galaxy whose properties agree with those predicted by the swing amplification theory.
NASA Astrophysics Data System (ADS)
Chen, Ying; Lowengrub, John; Shen, Jie; Wang, Cheng; Wise, Steven
2018-07-01
We develop efficient energy stable numerical methods for solving isotropic and strongly anisotropic Cahn-Hilliard systems with the Willmore regularization. The scheme, which involves adaptive mesh refinement and a nonlinear multigrid finite difference method, is constructed based on a convex splitting approach. We prove that, for the isotropic Cahn-Hilliard system with the Willmore regularization, the total free energy of the system is non-increasing for any time step and mesh sizes. A straightforward modification of the scheme is then used to solve the regularized strongly anisotropic Cahn-Hilliard system, and it is numerically verified that the discrete energy of the anisotropic system is also non-increasing, and can be efficiently solved by using the modified stable method. We present numerical results in both two and three dimensions that are in good agreement with those in earlier work on the topics. Numerical simulations are presented to demonstrate the accuracy and efficiency of the proposed methods.
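A convex-splitting-style time step for the (isotropic, unregularized) Cahn-Hilliard equation can be sketched in one dimension with a spectral solve; the linearly stabilized scheme, parameters and stabilization constant below are illustrative assumptions, not the paper's nonlinear multigrid finite difference method.

```python
import numpy as np

# 1D Cahn-Hilliard on a periodic domain: phi_t = (phi^3 - phi - eps^2 phi_xx)_xx.
# Linearly stabilized semi-implicit step in the convex-splitting spirit:
# the stiff fourth-order term is implicit, the expansive part explicit,
# with a stabilization constant S.
N, L, eps, dt, S = 128, 2 * np.pi, 0.1, 1e-3, 2.0
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # wavenumbers
k2 = k**2

def energy(phi):
    """Discrete Ginzburg-Landau free energy: double well + gradient term."""
    grad = np.fft.ifft(1j * k * np.fft.fft(phi)).real
    return float(np.sum(0.25 * (phi**2 - 1) ** 2 + 0.5 * eps**2 * grad**2) * (L / N))

def step(phi):
    g_hat = np.fft.fft(phi**3 - phi - S * phi)  # explicit (non-stiff) part
    phi_hat = (np.fft.fft(phi) - dt * k2 * g_hat) / (1 + dt * S * k2 + dt * eps**2 * k2**2)
    return np.fft.ifft(phi_hat).real

rng = np.random.default_rng(2)
phi = 0.1 * rng.standard_normal(N)             # random initial mixture
E0 = energy(phi)
for _ in range(500):
    phi = step(phi)
E1 = energy(phi)
```

As in the paper's scheme, the point of the splitting is that the discrete free energy does not increase from step to step, which the sketch checks numerically rather than proves.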
Efficient Use of Distributed Systems for Scientific Applications
NASA Technical Reports Server (NTRS)
Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques
2000-01-01
Distributed computing has been regarded as the future of high performance computing. Nationwide high speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency by up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes with number of elements ranging from 30,269 elements for the Barth5 mesh to 11,451 elements for the Barth4 mesh. 
Future work with PART entails using the tool with an integrated application requiring distributed systems. In particular this application, illustrated in the document entails an integration of finite element and fluid dynamic simulations to address the cooling of turbine blades of a gas turbine engine design. It is not uncommon to encounter high-temperature, film-cooled turbine airfoils with 1,000,000s of degrees of freedom. This results because of the complexity of the various components of the airfoils, requiring fine-grain meshing for accuracy. Additional information is contained in the original.
Bouda, Martin; Caplan, Joshua S.; Saiers, James E.
2016-01-01
Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. 
FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
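The box-counting procedure with grid-translation optimization can be sketched as follows; random translations stand in for the paper's pattern-search algorithm, and the test set (a densely sampled straight segment, FD = 1) is an illustrative assumption.

```python
import numpy as np

def box_count(points, scale, offset):
    """Number of grid boxes of side `scale` (grid shifted by `offset`)
    occupied by a 3D point cloud (rows are points)."""
    idx = np.floor((points + offset) / scale).astype(int)
    return len({tuple(i) for i in idx})

def min_count(points, scale, n_trials=32, seed=0):
    """Reduce quantization error by taking the minimum count over random
    grid translations (a cheap stand-in for a pattern-search optimizer;
    QE is strictly positive, so smaller counts are closer to the truth)."""
    rng = np.random.default_rng(seed)
    return min(box_count(points, scale, rng.uniform(0.0, scale, 3))
               for _ in range(n_trials))

# Dense sampling of a straight segment: a 1-dimensional set, so the
# box-count slope should give an FD estimate close to 1.
t = np.linspace(0.0, 1.0, 4000)
pts = np.column_stack([t, 0.5 * t, 0.25 * t])

scales = np.array([0.2, 0.1, 0.05, 0.025, 0.0125])
counts = np.array([min_count(pts, s) for s in scales])
fd = -np.polyfit(np.log(scales), np.log(counts), 1)[0]
```

Checking the regression residuals and local slopes across `scales`, as the study does, is what reveals whether the slope is genuinely constant, i.e. whether the set is statistically self-similar over the measured range.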
High-Resolution Coarse-Grained Modeling Using Oriented Coarse-Grained Sites.
Haxton, Thomas K
2015-03-10
We introduce a method to bring nearly atomistic resolution to coarse-grained models, and we apply the method to proteins. Using a small number of coarse-grained sites (about one per eight atoms) but assigning an independent three-dimensional orientation to each site, we preferentially integrate out stiff degrees of freedom (bond lengths and angles, as well as dihedral angles in rings) that are accurately approximated by their average values, while retaining soft degrees of freedom (unconstrained dihedral angles) mostly responsible for conformational variability. We demonstrate that our scheme retains nearly atomistic resolution by mapping all experimental protein configurations in the Protein Data Bank onto coarse-grained configurations and then analytically backmapping those configurations back to all-atom configurations. This roundtrip mapping throws away all information associated with the eliminated (stiff) degrees of freedom except for their average values, which we use to construct optimal backmapping functions. Despite the 4:1 reduction in the number of degrees of freedom, we find that heavy atoms move only 0.051 Å on average during the roundtrip mapping, while hydrogens move 0.179 Å on average, an unprecedented combination of efficiency and accuracy among coarse-grained protein models. We discuss the advantages of such a high-resolution model for parametrizing effective interactions and accurately calculating observables through direct or multiscale simulations.
[Skeleton extractions and applications].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quadros, William Roshan
2010-05-01
This paper focuses on the extraction of skeletons of CAD models and its applications in finite element (FE) mesh generation. The term 'skeleton of a CAD model' can be visualized as analogous to the 'skeleton of a human body'. The skeletal representations covered in this paper include medial axis transform (MAT), Voronoi diagram (VD), chordal axis transform (CAT), mid surface, digital skeletons, and disconnected skeletons. In the literature, the properties of a skeleton have been utilized in developing various algorithms for extracting skeletons. Three main approaches include: (1) the bisection method, where the skeleton lies equidistant from at least two points on the boundary, (2) the grassfire propagation method, in which the skeleton exists where the opposing fronts meet, and (3) the duality method, where the skeleton is a dual of the object. In the last decade, the author has applied different skeletal representations in all-quad meshing, hex meshing, mid-surface meshing, mesh size function generation, defeaturing, and decomposition. A brief discussion on the related work from other researchers in the area of tri meshing, tet meshing, and anisotropic meshing is also included. This paper concludes by summarizing the strengths and weaknesses of the skeleton-based approaches in solving various geometry-centered problems in FE mesh generation. The skeletons have proved to be a great shape abstraction tool in analyzing the geometric complexity of CAD models as they are symmetric, simpler (reduced dimension), and provide local thickness information. However, skeletons generally require some cleanup, and the stability and sensitivity of the skeletons should be controlled during extraction. Also, selecting a suitable application-specific skeleton and a computationally efficient method of extraction is critical.
2016-09-01
Hernia formation occurs at closed stoma sites in up to 30% of patients. The Reinforcement of Closure of Stoma Site (ROCSS) randomized controlled trial is evaluating whether placement of biological mesh during stoma closure safely reduces hernia rates compared with closure without mesh, without increasing surgical or wound complications. This paper aims to report recruitment, deliverability and safety from the internal feasibility study. A multicentre, patient and assessor blinded, randomized controlled trial, delivered through surgical trainee research networks. A 90-patient internal feasibility study assessed recruitment, randomization, deliverability and early (30 day) safety of the novel surgical technique (ClinicalTrials.gov registration number NCT02238964). The feasibility study recruited 90 patients from the 104 considered for entry (45 to mesh, 45 to no mesh). Seven of eight participating centres randomized patients within 30 days of opening. Overall, 41% of stomas were created for malignant disease and 73% were ileostomies. No mesh-specific complications occurred. Thirty-one postoperative adverse events were experienced by 31 patients, including surgical site infection (9%) and postoperative ileus (6%). One mesh was removed for re-access to the abdominal cavity, for reasons unrelated to the mesh. Independent review by the Data Monitoring and Ethics Committee of adverse event data by treatment allocation found no safety concerns. Multicentre randomization to this trial of biological mesh is feasible, with no early safety concerns. Progression to the full Phase III trial has continued. ROCSS shows that trainee research networks can efficiently develop and deliver complex interventional surgical trials. Colorectal Disease © 2016 The Association of Coloproctology of Great Britain and Ireland.
A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components
NASA Astrophysics Data System (ADS)
Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa
2016-10-01
Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique is more efficient than conventional searching methods for coarse frequency estimation (locating the peak of the FFT amplitude). Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
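A minimal sketch of the two-stage idea: a coarse estimate from the FFT amplitude peak, refined by interpolated zero-crossing timing. The paper's modified zero-crossing technique differs in detail, and the function names here are hypothetical; in this example the third harmonic does not add extra zero crossings because the fundamental dominates:

```python
import numpy as np

def coarse_fft(x, fs):
    """Coarse estimate: frequency of the largest FFT magnitude bin."""
    X = np.abs(np.fft.rfft(x))
    X[0] = 0.0                                   # ignore the DC bin
    return np.argmax(X) * fs / len(x)

def fine_zero_crossing(x, fs):
    """Fine estimate from zero crossings: average period between
    upward crossings, with linear interpolation of crossing times."""
    s = np.sign(x)
    idx = np.where((s[:-1] < 0) & (s[1:] >= 0))[0]   # upward crossings
    t = idx - x[idx] / (x[idx + 1] - x[idx])         # sub-sample times
    return fs / np.mean(np.diff(t))

fs, f0 = 1000.0, 50.3
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 3 * f0 * t)

print(round(coarse_fft(x, fs), 2))          # near 50.3, limited by bin width
print(round(fine_zero_crossing(x, fs), 2))  # refined, close to 50.3
```

The coarse stage is quantized to the FFT bin width fs/N; the fine stage averages over every period in the record, so its error shrinks as the data length grows, consistent with the efficiency claim above.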
NASA Astrophysics Data System (ADS)
Fu, S.-P.; Peng, Z.; Yuan, H.; Kfoury, R.; Young, Y.-N.
2017-01-01
Lipid bilayer membranes have been extensively studied by coarse-grained molecular dynamics simulations. Numerical efficiencies have been reported in cases of aggressive coarse-graining, where several lipids are coarse-grained into a particle of size 4-6 nm so that there is only one particle in the thickness direction. Yuan et al. (2010) proposed a pair potential between these one-particle-thick coarse-grained lipid particles to capture the mechanical properties of a lipid bilayer membrane, such as gel-fluid-gas phase transitions of lipids, diffusion, and bending rigidity. In this work we implement such an interaction potential in LAMMPS to simulate large-scale lipid systems such as a giant unilamellar vesicle (GUV) and red blood cells (RBCs). We also consider the effect of the cytoskeleton on the lipid membrane dynamics as a model for RBC dynamics, and incorporate coarse-grained water molecules to account for hydrodynamic interactions. The interaction between the coarse-grained water molecules (explicit solvent molecules) is modeled as a Lennard-Jones (L-J) potential. To demonstrate that the proposed methods do capture the observed dynamics of vesicles and RBCs, we focus on two sets of LAMMPS simulations: (1) vesicle shape transitions with varying enclosed volume and (2) RBC shape transitions with different enclosed volumes. Finally, utilizing the parallel computing capability in LAMMPS, we provide some timing results for parallel coarse-grained simulations to illustrate that it is possible to use LAMMPS to simulate large-scale realistic complex biological membranes for more than 1 ms.
Mesoscale Fracture Analysis of Multiphase Cementitious Composites Using Peridynamics
Yaghoobi, Amin; Chorzepa, Mi G.; Kim, S. Sonny; Durham, Stephan A.
2017-01-01
Concrete is a complex heterogeneous material, and thus, it is important to develop numerical modeling methods to enhance the prediction accuracy of the fracture mechanism. In this study, a two-dimensional mesoscale model is developed using a non-ordinary state-based peridynamic (NOSBPD) method. Fracture in a concrete cube specimen subjected to pure tension is studied. The presence of heterogeneous materials consisting of coarse aggregates, interfacial transition zones, air voids and cementitious matrix is characterized as particle points in a two-dimensional mesoscale model. Coarse aggregates and voids are generated using uniform probability distributions, while a statistical study is provided to assess the effect of random distributions of constituent materials. In obtaining the steady-state response, an incremental and iterative solver is adopted for the dynamic relaxation method. Load-displacement curves and damage patterns are compared with available experimental and finite element analysis (FEA) results. Although the proposed model uses much simpler material damage models and discretization schemes, the load-displacement curves show no difference from the FEA results. Furthermore, no mesh refinement is necessary, as fracture is inherently characterized by bond breakages. Finally, a sensitivity study is conducted to understand the effect of aggregate volume fraction and porosity on the load capacity of the proposed mesoscale model. PMID:28772518
A dispersion minimizing scheme for the 3-D Helmholtz equation based on ray theory
NASA Astrophysics Data System (ADS)
Stolk, Christiaan C.
2016-06-01
We develop a new dispersion minimizing compact finite difference scheme for the Helmholtz equation in 2 and 3 dimensions. The scheme is based on a newly developed ray theory for difference equations. A discrete Helmholtz operator and a discrete operator to be applied to the source and the wavefields are constructed. Their coefficients are piecewise polynomial functions of hk, chosen such that phase and amplitude errors are minimal. The phase errors of the scheme are very small, approximately as small as those of the 2-D quasi-stabilized FEM method and substantially smaller than those of alternatives in 3-D, assuming the same number of gridpoints per wavelength is used. In numerical experiments, accurate solutions are obtained in constant and smoothly varying media using meshes with only five to six points per wavelength and wave propagation over hundreds of wavelengths. When used as a coarse level discretization in a multigrid method the scheme can even be used with down to three points per wavelength. Tests on 3-D examples with up to 10^8 degrees of freedom show that with a recently developed hybrid solver, the use of coarser meshes can lead to corresponding savings in computation time, resulting in good simulation times compared to the literature.
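As background for why dispersion minimization matters, the phase error of the standard second-order stencil can be computed from its discrete dispersion relation; at five points per wavelength the numerical wavenumber is already off by roughly 8%. This is a sketch of the baseline error, not of the paper's optimized scheme:

```python
import numpy as np

def numerical_wavenumber(k, h):
    """Discrete wavenumber admitted by the standard 2nd-order stencil
    (u[j-1] - 2 u[j] + u[j+1]) / h^2 + k^2 u[j] = 0,
    obtained by substituting the plane wave exp(i * kt * x)."""
    return np.arccos(1.0 - (k * h) ** 2 / 2.0) / h

k = 2 * np.pi                       # one wavelength per unit length
for ppw in (5, 10, 20):
    h = 2 * np.pi / (k * ppw)       # ppw grid points per wavelength
    rel = abs(numerical_wavenumber(k, h) - k) / k
    print(ppw, f"{rel:.4f}")        # relative phase error shrinks as (kh)^2
```

The error decays only quadratically in h, which is why long propagation distances accumulate large phase lag on coarse meshes; the dispersion-minimized coefficients in the paper push this error down far enough that five to six points per wavelength suffice.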
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jie; Ni, Ming-Jiu, E-mail: mjni@ucas.ac.cn
2014-01-01
The numerical simulation of magnetohydrodynamic (MHD) flows with complex boundaries has been a topic of great interest in the development of a fusion reactor blanket, owing to the difficulty of accurately simulating the Hartmann layers and side layers along arbitrary geometries. An adaptive version of a consistent and conservative scheme has been developed for simulating MHD flows. In addition, the present study forms the first attempt to apply the cut-cell approach to irregular wall-bounded MHD flows, which is more flexible and conveniently implemented under the adaptive mesh refinement (AMR) technique. It employs a Volume-of-Fluid (VOF) approach to represent the fluid-conducting wall interface, which makes it possible to solve fluid-solid coupled magnetic problems, with emphasis on how the electric field solver is implemented when the conductivity is discontinuous in a cut-cell. For the irregular cut-cells, the conservative interpolation technique is applied to calculate the Lorentz force at the cell center. It is also shown how the consistent and conservative scheme is implemented on fine/coarse mesh boundaries when using the AMR technique. The applied numerical schemes are validated by five test simulations; excellent agreement was obtained for all cases considered, with good consistency and conservation properties demonstrated throughout.
Exploratory High-Fidelity Aerostructural Optimization Using an Efficient Monolithic Solution Method
NASA Astrophysics Data System (ADS)
Zhang, Jenmy Zimi
This thesis is motivated by the desire to discover fuel efficient aircraft concepts through exploratory design. An optimization methodology based on tightly integrated high-fidelity aerostructural analysis is proposed, which has the flexibility, robustness, and efficiency to contribute to this goal. The present aerostructural optimization methodology uses an integrated geometry parameterization and mesh movement strategy, which was initially proposed for aerodynamic shape optimization. This integrated approach provides the optimizer with a large amount of geometric freedom for conducting exploratory design, while allowing for efficient and robust mesh movement in the presence of substantial shape changes. In extending this approach to aerostructural optimization, this thesis has addressed a number of important challenges. A structural mesh deformation strategy has been introduced to translate consistently the shape changes described by the geometry parameterization to the structural model. A three-field formulation of the discrete steady aerostructural residual couples the mesh movement equations with the three-dimensional Euler equations and a linear structural analysis. Gradients needed for optimization are computed with a three-field coupled adjoint approach. A number of investigations have been conducted to demonstrate the suitability and accuracy of the present methodology for use in aerostructural optimization involving substantial shape changes. Robustness and efficiency in the coupled solution algorithms is crucial to the success of an exploratory optimization. This thesis therefore also focuses on the design of an effective monolithic solution algorithm for the proposed methodology. This involves using a Newton-Krylov method for the aerostructural analysis and a preconditioned Krylov subspace method for the coupled adjoint solution. Several aspects of the monolithic solution method have been investigated. 
These include appropriate strategies for scaling and matrix-vector product evaluation, as well as block preconditioning techniques that preserve the modularity between subproblems. The monolithic solution method is applied to problems with varying degrees of fluid-structural coupling, as well as a wing span optimization study. The monolithic solution algorithm typically requires 20%-70% less computing time than its partitioned counterpart, an advantage that increases with increasing wing flexibility. The performance of the monolithic solution method is also much less sensitive to the choice of solution parameters.
Fonseca, T C Ferreira; Bogaerts, R; Lebacq, A L; Mihailescu, C L; Vanhavere, F
2014-04-01
A realistic computational 3D human body library, called MaMP and FeMP (Male and Female Mesh Phantoms), based on polygonal mesh surface geometry, has been created to be used for numerical calibration of the whole body counter (WBC) system of the nuclear power plant (NPP) in Doel, Belgium. The main objective was to create flexible computational models varying in gender, body height, and mass for studying the morphology-induced variation of the detector counting efficiency (CE) and reducing the measurement uncertainties. First, the counting room and an HPGe detector were modeled using MCNPX (Monte Carlo radiation transport code). The validation of the model was carried out for different sample-detector geometries with point sources and a physical phantom. Second, CE values were calculated for a total of 36 different mesh phantoms in a seated position using the validated Monte Carlo model. This paper reports on the validation process of the in vivo whole body system and the CE calculated for different body heights and weights. The results reveal that the CE is strongly dependent on the individual body shape, size, and gender and may vary by a factor of 1.5 to 3 depending on the morphology aspects of the individual to be measured.
Semi-automatic sparse preconditioners for high-order finite element methods on non-uniform meshes
NASA Astrophysics Data System (ADS)
Austin, Travis M.; Brezina, Marian; Jamroz, Ben; Jhurani, Chetan; Manteuffel, Thomas A.; Ruge, John
2012-05-01
High-order finite elements often have a higher accuracy per degree of freedom than the classical low-order finite elements. However, in the context of implicit time-stepping methods, high-order finite elements present challenges to the construction of efficient simulations due to the high cost of inverting the denser finite element matrix. There are many cases where simulations are limited by the memory required to store the matrix and/or the algorithmic components of the linear solver. We are particularly interested in preconditioned Krylov methods for linear systems generated by discretization of elliptic partial differential equations with high-order finite elements. Using a preconditioner like Algebraic Multigrid can be costly in terms of memory due to the need to store matrix information at the various levels. We present a novel method for defining a preconditioner for systems generated by high-order finite elements that is based on a much sparser system than the original high-order finite element system. We investigate the performance for non-uniform meshes on a cube and a cubed sphere mesh, showing that the sparser preconditioner is more efficient and uses significantly less memory. Finally, we explore new methods to construct the sparse preconditioner and examine their effectiveness for non-uniform meshes. We compare results to a direct use of Algebraic Multigrid as a preconditioner and to a two-level additive Schwarz method.
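The core idea, preconditioning a Krylov solve of a denser system with a factorization of a much sparser surrogate, can be sketched with a standard pair of Laplacian stencils: a 9-point operator preconditioned by the 5-point one. This stands in for the high-order/sparse pairing in the paper; the specific operators and sizes are illustrative:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 40                                   # interior grid points per side
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
I = sp.identity(n, format="csc")

# denser "high-order" operator: 9-point Laplacian
A = sp.kron(I, T) + sp.kron(T, I) - sp.kron(T, T) / 6.0
# sparser surrogate: 5-point Laplacian
P = (sp.kron(I, T) + sp.kron(T, I)).tocsc()

solve_P = spla.splu(P)                   # factor the sparse surrogate once
M = spla.LinearOperator(A.shape, solve_P.solve)

b = np.ones(A.shape[0])
it = []
x, info = spla.cg(A, b, M=M, callback=lambda xk: it.append(1))
print(info, len(it))                     # info 0 = converged, in few iterations
```

For this pair the preconditioned spectrum is tightly clustered, so CG converges in a handful of iterations while only the 5-point matrix is ever factored; the memory saving relative to factoring or multigrid-coarsening the denser operator is the same effect the paper exploits.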
Yin, Kai; Du, Haifeng; Dong, Xinran; Wang, Cong; Duan, Ji-An; He, Jun
2017-10-05
Fog collection is receiving increasing attention for providing water in semi-arid deserts and inland areas. Inspired by the fog harvesting ability of the hydrophobic-hydrophilic surface of Namib desert beetles, we present a simple, low-cost method to prepare a hybrid superhydrophobic-hydrophilic surface. The surface contains micro/nanopatterns, and is prepared by incorporating femtosecond-laser fabricated polytetrafluoroethylene nanoparticles deposited on superhydrophobic copper mesh with a pristine hydrophilic copper sheet. The as-prepared surface exhibits enhanced fog collection efficiency compared with uniform (super)hydrophobic or (super)hydrophilic surfaces. This enhancement can be tuned by controlling the mesh number, inclination angle, and fabrication structure. Moreover, the surface shows excellent anti-corrosion ability after immersing in 1 M HCl, 1 M NaOH, and 10 wt% NaCl solutions for 2 hours. This work may provide insight into fabricating hybrid superhydrophobic-hydrophilic surfaces for efficient atmospheric water collection.
MOAB : a mesh-oriented database.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tautges, Timothy James; Ernst, Corey; Stimpson, Clint
A finite element mesh is used to decompose a continuous domain into a discretized representation. The finite element method solves PDEs on this mesh by modeling complex functions as a set of simple basis functions with coefficients at mesh vertices and prescribed continuity between elements. The mesh is one of the fundamental types of data linking the various tools in the FEA process (mesh generation, analysis, visualization, etc.). Thus, the representation of mesh data and operations on those data play a very important role in FEA-based simulations. MOAB is a component for representing and evaluating mesh data. MOAB can store structured and unstructured mesh, consisting of elements in the finite element 'zoo'. The functional interface to MOAB is simple yet powerful, allowing the representation of many types of metadata commonly found on the mesh. MOAB is optimized for efficiency in space and time, based on access to mesh in chunks rather than through individual entities, while also versatile enough to support individual entity access. The MOAB data model consists of a mesh interface instance, mesh entities (vertices and elements), sets, and tags. Entities are addressed through handles rather than pointers, to allow the underlying representation of an entity to change without changing the handle to that entity. Sets are arbitrary groupings of mesh entities and other sets. Sets also support parent/child relationships as a relation distinct from sets containing other sets. The directed-graph provided by set parent/child relationships is useful for modeling topological relations from a geometric model or other metadata. Tags are named data which can be assigned to the mesh as a whole, individual entities, or sets.
Tags are a mechanism for attaching data to individual entities and sets are a mechanism for describing relations between entities; the combination of these two mechanisms is a powerful yet simple interface for representing metadata or application-specific data. For example, sets and tags can be used together to describe geometric topology, boundary condition, and inter-processor interface groupings in a mesh. MOAB is used in several ways in various applications. MOAB serves as the underlying mesh data representation in the VERDE mesh verification code. MOAB can also be used as a mesh input mechanism, using mesh readers included with MOAB, or as a translator between mesh formats, using readers and writers included with MOAB. The remainder of this report is organized as follows. Section 2, 'Getting Started', provides a few simple examples of using MOAB to perform simple tasks on a mesh. Section 3 discusses the MOAB data model in more detail, including some aspects of the implementation. Section 4 summarizes the MOAB function API. Section 5 describes some of the tools included with MOAB, and the implementation of mesh readers/writers for MOAB. Section 6 contains a brief description of MOAB's relation to the TSTT mesh interface. Section 7 gives a conclusion and future plans for MOAB development. Section 8 gives references cited in this report. A reference description of the full MOAB API is contained in Section 9.
Kinetic solvers with adaptive mesh in phase space
NASA Astrophysics Data System (ADS)
Arslanbekov, Robert R.; Kolobov, Vladimir I.; Frolova, Anna A.
2013-12-01
An adaptive mesh in phase space (AMPS) methodology has been developed for solving multidimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a "tree of trees" (ToT) data structure. The r mesh is automatically generated around embedded boundaries, and is dynamically adapted to local solution properties. The v mesh is created on-the-fly in each r cell. Mappings between neighboring v-space trees are implemented for the advection operator in r space. We have developed algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations were used to calculate the discrete Boltzmann collision integral with dynamically adaptive v mesh: the importance sampling, multipoint projection, and variance reduction methods. We have developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic collisions of hot light particles in a Lorentz gas. Our AMPS technique has been demonstrated for simulations of hypersonic rarefied gas flows, ion and electron kinetics in weakly ionized plasma, radiation and light-particle transport through thin films, and electron streaming in semiconductors. We have shown that AMPS allows minimizing the number of cells in phase space to reduce the computational cost and memory usage for solving challenging kinetic problems.
Bayesian segmentation of atrium wall using globally-optimal graph cuts on 3D meshes.
Veni, Gopalkrishna; Fu, Zhisong; Awate, Suyash P; Whitaker, Ross T
2013-01-01
Efficient segmentation of the left atrium (LA) wall from delayed enhancement MRI is challenging due to inconsistent contrast, combined with noise, and high variation in atrial shape and size. We present a surface-detection method that is capable of extracting the atrial wall by computing an optimal a-posteriori estimate. This estimation is done on a set of nested meshes, constructed from an ensemble of segmented training images, and graph cuts on an associated multi-column, proper-ordered graph. The graph/mesh is a part of a template/model that has an associated set of learned intensity features. When this mesh is overlaid onto a test image, it produces a set of costs which lead to an optimal segmentation. The 3D mesh has an associated weighted, directed multi-column graph with edges that encode smoothness and inter-surface penalties. Unlike previous graph-cut methods that impose hard constraints on the surface properties, the proposed method follows from a Bayesian formulation resulting in soft penalties on spatial variation of the cuts through the mesh. The novelty of this method also lies in the construction of proper-ordered graphs on complex shapes for choosing among distinct classes of base shapes for automatic LA segmentation. We evaluate the proposed segmentation framework on simulated and clinical cardiac MRI.
NASA Astrophysics Data System (ADS)
Wu, Shijia; He, Weihua; Yang, Wulin; Ye, Yaoli; Huang, Xia; Logan, Bruce E.
2017-07-01
Microbial fuel cells (MFCs) need to have a compact architecture, but power generation using low strength domestic wastewater is unstable for closely-spaced electrode designs using thin anodes (flat mesh or small diameter graphite fiber brushes) due to oxygen crossover from the cathode. A composite anode configuration was developed to improve performance, by joining the mesh and brushes together, with the mesh used to block oxygen crossover to the brushes, and the brushes used to stabilize mesh potentials. In small, fed-batch MFCs (28 mL), the composite anode produced 20% higher power densities than MFCs using only brushes, and 150% higher power densities than MFCs using carbon mesh anodes. In continuous flow tests at short hydraulic retention times (HRTs, 2 or 4 h) using larger MFCs (100 mL), composite anodes had stable performance, while brush anode MFCs exhibited power overshoot in polarization tests. Both configurations exhibited power overshoot at a longer HRT of 8 h due to lower effluent CODs. The use of composite anodes reduced biomass growth on the cathode (1.9 ± 0.2 mg) compared to only brushes (3.1 ± 0.3 mg), and increased coulombic efficiencies, demonstrating that they successfully reduced oxygen contamination of the anode and the bio-fouling of the cathode.
Influence of reinforcement mesh configuration for improvement of concrete durability
NASA Astrophysics Data System (ADS)
Pan, Chong-gen; Jin, Wei-liang; Mao, Jiang-hong; Zhang, Hua; Sun, Li-hao; Wei, Dong
2017-10-01
Corrosion of steel bars in concrete structures under harsh environmental conditions, such as chloride attack, seriously shortens their service life. Bidirectional electromigration rehabilitation (BIEM) is a new repair technology for reinforced concrete structures in such chloride-contaminated environments. By applying the BIEM, chloride ions can be removed from the concrete and a migrating corrosion inhibitor can be driven to the steel surface. In conventional engineering, the concrete structure is often configured with a multi-layer steel mesh. However, the effect of the BIEM in such structures has not yet been investigated. In this paper, a simulation test is carried out to study the migration of chloride ions and the migrating corrosion inhibitor in a concrete specimen with a complex steel mesh under different energizing modes. The results show that the efficiency of the BIEM increases by 50% in both the monolayer steel mesh and the double-layer steel mesh. By using the single-sided BIEM, 87% of the chloride ions are removed from the steel surface. The different step modes can affect chloride ion removal. The chloride ions within the range of the reinforcement protective cover are easier to remove than those in the concrete between the two layers of steel mesh; however, the amount of migrating corrosion inhibitor is larger in the latter case.
Digital relief generation from 3D models
NASA Astrophysics Data System (ADS)
Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian
2016-09-01
It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.
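The smoothing-and-unsharp-masking step can be sketched on a toy vertex set. This uses uniform Laplacian weights rather than the combination of average and Laplacian smoothing described above, and all names are illustrative:

```python
import numpy as np

def laplacian_smooth(verts, neighbors, rounds=1, alpha=0.5):
    """Uniform Laplacian smoothing: move each vertex a fraction
    alpha of the way toward the average of its neighbors."""
    v = verts.copy()
    for _ in range(rounds):
        avg = np.array([v[nb].mean(axis=0) for nb in neighbors])
        v = v + alpha * (avg - v)
    return v

def unsharp_mask(verts, neighbors, strength=1.0):
    """3D unsharp masking: amplify the detail that smoothing removes,
    enhanced = verts + strength * (verts - smoothed)."""
    smooth = laplacian_smooth(verts, neighbors, rounds=3)
    return verts + strength * (verts - smooth)

# a tiny 1-D "ridge" as a toy mesh: a path of vertices with one bump
verts = np.array([[x, 0.0, 1.0 if x == 2 else 0.0] for x in range(5)])
neighbors = [[1], [0, 2], [1, 3], [2, 4], [3]]
enhanced = unsharp_mask(verts, neighbors, strength=0.8)
print(enhanced[2, 2] > verts[2, 2])  # the bump height is amplified: True
```

After enhancement, a nonlinear scaling of the height field (compressing large depths more than small ones) is what distinguishes a bas-relief from a high-relief in the pipeline above.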
Efficient low-bit-rate adaptive mesh-based motion compensation technique
NASA Astrophysics Data System (ADS)
Mahmoud, Hanan A.; Bayoumi, Magdy A.
2001-08-01
This paper proposes a two-stage global motion estimation method using a novel quadtree block-based motion estimation technique and an active mesh model. In the first stage, motion parameters are estimated by fitting block-based motion vectors computed using a new efficient quadtree technique that divides a frame into equilateral triangle blocks using the quadtree structure. Arbitrary partition shapes are achieved by allowing 4-to-1, 3-to-1, and 2-to-1 merging of sibling blocks having the same motion vector. In the second stage, the mesh is constructed using an adaptive triangulation procedure that places more triangles over areas with high motion content; these areas are estimated during the first stage. Finally, motion compensation is achieved using a novel algorithm, carried out by both the encoder and the decoder, that determines the optimal triangulation of the resultant partitions, followed by affine mapping at the encoder. Computer simulation results show that the proposed method gives better performance than conventional ones in terms of the peak signal-to-noise ratio (PSNR) and the compression ratio (CR).
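The first-stage block-based motion estimation can be sketched with plain exhaustive block matching; the quadtree splitting and sibling merging described above would operate on top of vectors like these. A minimal illustration with hypothetical names:

```python
import numpy as np

def block_motion(prev, cur, bs=8, search=4):
    """Exhaustive block matching: for each bs x bs block of `cur`,
    find the offset into `prev` (within +/- search) minimizing the
    sum of absolute differences (SAD)."""
    H, W = cur.shape
    mvs = np.zeros((H // bs, W // bs, 2), dtype=int)
    for by in range(H // bs):
        for bx in range(W // bs):
            y, x = by * bs, bx * bs
            blk = cur[y:y + bs, x:x + bs]
            best = (np.inf, 0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - bs and 0 <= xx <= W - bs:
                        sad = np.abs(prev[yy:yy + bs, xx:xx + bs] - blk).sum()
                        if sad < best[0]:
                            best = (sad, dy, dx)
            mvs[by, bx] = best[1], best[2]
    return mvs

# a frame shifted by (2, 3) pixels: interior blocks report that vector
rng = np.random.default_rng(0)
prev = rng.random((32, 32))
cur = np.roll(np.roll(prev, -2, axis=0), -3, axis=1)
mvs = block_motion(prev, cur)
print(np.all(mvs[1:-1, 1:-1] == (2, 3)))  # True for interior blocks
```

Neighboring blocks that end up with identical vectors, as all of them do here, are exactly the sibling blocks the quadtree stage would merge into larger partitions.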
Zhang, Zhi-Hui; Wang, Hu-Jun; Liang, Yun-Hong; Li, Xiu-Juan; Ren, Lu-Quan; Cui, Zhen-Quan; Luo, Cheng
2018-03-01
Superhydrophobic surfaces have great potential for application in self-cleaning and oil/water separation. However, the large-scale practical applications of superhydrophobic coating surfaces are impeded by many factors, such as complicated fabrication processes, the use of fluorinated reagents and noxious organic solvents and poor mechanical stability. Herein, we describe the successful preparation of a fluorine-free multifunctional coating without noxious organic solvents that was brushed, dipped or sprayed onto glass slides and stainless-steel meshes as substrates. The obtained multifunctional superhydrophobic and superoleophilic surfaces (MSHOs) demonstrated self-cleaning abilities even when contaminated with or immersed in oil. The superhydrophobic surfaces were robust and maintained their water repellency after being scratched with a knife or abraded with sandpaper for 50 cycles. In addition, stainless-steel meshes sprayed with the coating quickly separated various oil/water mixtures with a high separation efficiency (>93%). Furthermore, the coated mesh maintained a high separation efficiency above 95% over 20 cycles of separation. This simple and effective strategy will inspire the large-scale fabrication of multifunctional surfaces for practical applications in self-cleaning and oil/water separation.
Dynamic subfilter-scale stress model for large-eddy simulations
NASA Astrophysics Data System (ADS)
Rouhi, A.; Piomelli, U.; Geurts, B. J.
2016-08-01
We present a modification of the integral length-scale approximation (ILSA) model originally proposed by Piomelli et al. [Piomelli et al., J. Fluid Mech. 766, 499 (2015), 10.1017/jfm.2015.29] and apply it to plane channel flow and a backward-facing step. In the ILSA models the length scale is expressed in terms of the integral length scale of turbulence and is determined by the flow characteristics, decoupled from the simulation grid. In the original formulation the model coefficient was constant, determined by requiring a desired global contribution of the unresolved subfilter scales (SFSs) to the dissipation rate, known as SFS activity; its value was found by a set of coarse-grid calculations. Here we develop two modifications. We define a measure of SFS activity (based on turbulent stresses), which adds to the robustness of the model, particularly at high Reynolds numbers, and removes the need for the prior coarse-grid calculations: the model coefficient can be computed dynamically and adapt to large-scale unsteadiness. Furthermore, the desired level of SFS activity is now enforced locally (and not integrated over the entire volume, as in the original model), providing better control over model activity and also improving the near-wall behavior of the model. Application of the local ILSA to channel flow and a backward-facing step and comparison with the original ILSA and with the dynamic model of Germano et al. [Germano et al., Phys. Fluids A 3, 1760 (1991), 10.1063/1.857955] show better control over the model contribution in the local ILSA, while the positive properties of the original formulation (including its higher accuracy compared to the dynamic model on coarse grids) are maintained. The backward-facing step also highlights the advantage of the decoupling of the model length scale from the mesh.
An Approach to Quad Meshing Based On Cross Valued Maps and the Ginzburg-Landau Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viertel, Ryan; Osting, Braxton
2017-08-01
A generalization of vector fields, referred to as N-direction fields or cross fields when N=4, has been recently introduced and studied for geometry processing, with applications in quadrilateral (quad) meshing, texture mapping, and parameterization. We make the observation that cross field design for two-dimensional quad meshing is related to the well-known Ginzburg-Landau problem from mathematical physics. This identification yields a variety of theoretical tools for efficiently computing boundary-aligned quad meshes, with provable guarantees on the resulting mesh, for example, the number of mesh defects and bounds on the defect locations. The procedure for generating the quad mesh is to (i) find a complex-valued "representation" field that minimizes the Dirichlet energy subject to a boundary constraint, (ii) convert the representation field into a boundary-aligned, smooth cross field, (iii) use separatrices of the cross field to partition the domain into four-sided regions, and (iv) mesh each of these four-sided regions using standard techniques. Under certain assumptions on the geometry of the domain, we prove that this procedure can be used to produce a cross field whose separatrices partition the domain into four-sided regions. To solve the energy minimization problem for the representation field, we use an extension of the Merriman-Bence-Osher (MBO) threshold dynamics method, originally conceived as an algorithm to simulate motion by mean curvature, to minimize the Ginzburg-Landau energy for the optimal representation field. Lastly, we demonstrate the method on a variety of test domains.
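A minimal grid-based sketch of an MBO-style iteration for the representation field: alternate a short diffusion step with pointwise projection onto the unit circle, holding boundary values fixed. The grid, time step, and periodic boundary handling here are illustrative assumptions, not the paper's discretization:

```python
import numpy as np

def mbo_field(u, boundary_mask, boundary_vals, steps=50, dt=0.1):
    """MBO-style threshold dynamics for a complex representation field:
    diffuse (one explicit heat step on a periodic grid), re-impose the
    boundary data, then project each value back onto the unit circle."""
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u = u + dt * lap                       # diffusion step
        u[boundary_mask] = boundary_vals       # boundary constraint
        mag = np.abs(u)
        u = np.where(mag > 1e-12, u / np.maximum(mag, 1e-12), 1.0 + 0j)
    return u
```

The cross field then follows by taking fourth roots: each unit complex value u = e^{4i\theta} encodes the four directions \theta + k\pi/2.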
NASA Astrophysics Data System (ADS)
Al-Chalabi, Rifat M. Khalil
1997-09-01
Development of an improvement to the computational efficiency of the existing nested iterative solution strategy of the Nodal Expansion Method (NEM) nodal-based neutron diffusion code NESTLE is presented. The improvement in the solution strategy is the result of developing a multilevel acceleration scheme that does not suffer from the numerical stalling associated with a number of iterative solution methods. The acceleration scheme is based on the multigrid method, which is specifically adapted for incorporation into the NEM nonlinear iterative strategy. This scheme optimizes the computational interplay between the spatial discretization and the NEM nonlinear iterative solution process through the use of the multigrid method. The combination of the NEM nodal method, calculation of the homogenized neutron nodal balance coefficients (i.e. restriction operator), efficient underlying smoothing algorithm (power method of NESTLE), and the finer mesh reconstruction algorithm (i.e. prolongation operator), all operating on a sequence of coarser spatial nodes, constitutes the multilevel acceleration scheme employed in this research. Two implementations of the multigrid method into the NESTLE code were examined: the Imbedded NEM Strategy and the Imbedded CMFD Strategy. The main difference in implementation between the two methods is that in the Imbedded NEM Strategy, the NEM solution is required at every MG level. Numerical tests have shown that the Imbedded NEM Strategy suffers from divergence at coarse-grid levels, hence all the results for the different benchmarks presented here were obtained using the Imbedded CMFD Strategy. The novelties in the developed MG method are as follows: the formulation of the restriction and prolongation operators, and the selection of the relaxation method. The restriction operator utilizes a variation of the reactor physics, consistent homogenization technique.
The prolongation operator is based upon a variant of the pin power reconstruction methodology. The relaxation method, which is the power method, utilizes a constant coefficient matrix within the NEM non-linear iterative strategy. The choice of the MG nesting within the nested iterative strategy enables the incorporation of other non-linear effects with no additional coding effort. In addition, if an eigenvalue problem is being solved, it remains an eigenvalue problem at all grid levels, simplifying coding implementation. The merit of the developed MG method was tested by incorporating it into the NESTLE iterative solver, and employing it to solve four different benchmark problems. In addition to the base cases, three different sensitivity studies are performed, examining the effects of number of MG levels, homogenized coupling coefficients correction (i.e. restriction operator), and fine-mesh reconstruction algorithm (i.e. prolongation operator). The multilevel acceleration scheme developed in this research provides the foundation for developing adaptive multilevel acceleration methods for steady-state and transient NEM nodal neutron diffusion equations. (Abstract shortened by UMI.)
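The smoother/restriction/coarse-solve/prolongation interplay described above can be sketched with a generic two-grid cycle for a 1D model problem, with weighted Jacobi playing the smoother role. This is a textbook analogue, not NESTLE's problem-specific operators:

```python
import numpy as np

def apply_A(u, h):
    """1D Poisson operator -u'' with zero Dirichlet boundaries."""
    Au = 2.0 * u
    Au[1:] -= u[:-1]
    Au[:-1] -= u[1:]
    return Au / h**2

def jacobi(u, f, h, sweeps):
    """Weighted-Jacobi smoothing (the relaxation-method role)."""
    for _ in range(sweeps):
        u = u + (2.0 / 3.0) * (h**2 / 2.0) * (f - apply_A(u, h))
    return u

def two_grid(u, f, h, nu=3):
    """Pre-smooth, restrict the residual (restriction operator), solve
    exactly on the coarse grid, interpolate the correction back
    (prolongation operator), then post-smooth."""
    u = jacobi(u, f, h, nu)
    r = f - apply_A(u, h)
    rc = 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])    # full weighting
    nc = len(rc)
    hc = 2.0 * h
    Ac = (2.0 * np.eye(nc) - np.eye(nc, k=1) - np.eye(nc, k=-1)) / hc**2
    ec = np.linalg.solve(Ac, rc)                          # exact coarse solve
    e = np.zeros_like(u)
    e[1::2] = ec                                          # coarse points
    pad = np.concatenate(([0.0], ec, [0.0]))
    e[0::2] = 0.5 * (pad[:-1] + pad[1:])                  # linear interpolation
    return jacobi(u + e, f, h, nu)
```

On -u'' = 1 with 15 interior points, a handful of cycles drives the residual down by many orders of magnitude, illustrating why multilevel acceleration avoids the stalling of single-grid iteration.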
Hower, J.C.; Trimble, A.S.; Eble, C.F.; Palmer, C.A.; Kolker, A.
1999-01-01
Fly ash samples were collected in November and December of 1994, from generating units at a Kentucky power station using high- and low-sulfur feed coals. The samples are part of a two-year study of the coal and coal combustion byproducts from the power station. The ashes were wet screened at 100, 200, 325, and 500 mesh (150, 75, 42, and 25 μm, respectively). The size fractions were then dried, weighed, split for petrographic and chemical analysis, and analyzed for ash yield and carbon content. The low-sulfur "heavy side" and "light side" ashes each have a similar size distribution in the November samples. In contrast, the December fly ashes showed the trend observed in later months, the light-side ash being finer (over 20% more ash in the -500 mesh [-25 μm] fraction) than the heavy-side ash. Carbon tended to be concentrated in the coarse fractions in the December samples. The dominance of the -325 mesh (-42 μm) fractions in the overall size analysis implies, though, that carbon in the fine sizes may be an important consideration in the utilization of the fly ash. Element partitioning follows several patterns. Volatile elements, such as Zn and As, are enriched in the finer sizes, particularly in fly ashes collected at cooler, light-side electrostatic precipitator (ESP) temperatures. The latter trend is a function of precipitation at the cooler-ESP temperatures and of increasing concentration with the increased surface area of the finest fraction. Mercury concentrations are higher in high-carbon fly ashes, suggesting Hg adsorption on the fly ash carbon. Ni and Cr are associated, in part, with the spinel minerals in the fly ash. Copyright © 1999 Taylor & Francis.
Emperador, Agustí; Sfriso, Pedro; Villarreal, Marcos Ariel; Gelpí, Josep Lluis; Orozco, Modesto
2015-12-08
Molecular dynamics simulations of proteins are usually performed on a single molecule, and coarse-grained protein models are calibrated using single-molecule simulations, therefore ignoring intermolecular interactions. We present here a new coarse-grained force field for the study of many-protein systems. The force field, which is implemented in the context of the discrete molecular dynamics algorithm, is able to reproduce the properties of folded and unfolded proteins in isolation, complexed to form well-defined quaternary structures, or aggregated, thanks to its proper evaluation of protein-protein interactions. The accuracy and computational efficiency of the method make it a universal tool for the study of the structure, dynamics, and association/dissociation of proteins.
Zhan, Yijian; Meschke, Günther
2017-07-08
The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense.
Zhan, Yijian
2017-01-01
The effective analysis of the nonlinear behavior of cement-based engineering structures not only demands physically-reliable models, but also computationally-efficient algorithms. Based on a continuum interface element formulation that is suitable to capture complex cracking phenomena in concrete materials and structures, an adaptive mesh processing technique is proposed for computational simulations of plain and fiber-reinforced concrete structures to progressively disintegrate the initial finite element mesh and to add degenerated solid elements into the interfacial gaps. In comparison with the implementation where the entire mesh is processed prior to the computation, the proposed adaptive cracking model allows simulating the failure behavior of plain and fiber-reinforced concrete structures with remarkably reduced computational expense. PMID:28773130
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann
1993-01-01
A general solution adaptive scheme-based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann; Usab, William J., Jr.
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
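The equidistribution principle above ("estimated local numerical error is equally distributed over the whole domain") can be sketched as follows, assuming a generic error indicator that scales like C·h^p on each cell (a model assumption, not the report's exact estimator):

```python
def equidistributed_sizes(h, err, p=2):
    """Given per-cell sizes h and error indicators err ~ C * h**p, return
    new sizes so that every cell carries the same share of the total
    estimated error (the mean of the current indicators)."""
    target = sum(err) / len(err)
    return [hi * (target / ei) ** (1.0 / p) for hi, ei in zip(h, err)]
```

Cells with above-average error are refined and low-error cells are coarsened; by construction, each resized cell's predicted error equals the mean, so solution points are not wasted where they are not required.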
HARP: A Dynamic Inertial Spectral Partitioner
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Sohn, Andrew; Biswas, Rupak
1997-01-01
Partitioning unstructured graphs is central to the parallel solution of computational science and engineering problems. Spectral partitioners, such as recursive spectral bisection (RSB), have proven effective in generating high-quality partitions of realistically-sized meshes. The major problem which hindered their widespread use was their long execution times. This paper presents a new inertial spectral partitioner, called HARP. The main objective of the proposed approach is to quickly partition the meshes at runtime in a manner that works efficiently for real applications in the context of distributed-memory machines. The underlying principle of HARP is to find the eigenvectors of the unpartitioned vertices and then project them onto the eigenvectors of the original mesh. Results for various meshes ranging in size from 1000 to 100,000 vertices indicate that HARP can indeed partition meshes rapidly at runtime. Experimental results show that our largest mesh can be partitioned sequentially in only a few seconds on an SP2, which is several times faster than other spectral partitioners, while maintaining the solution quality of the proven RSB method. A parallel version of HARP has also been implemented on the IBM SP2 and Cray T3E. Parallel HARP, running on 64 processors of the SP2 and T3E, can partition a mesh containing more than 100,000 vertices into 64 subgrids in about half a second. These results indicate that graph partitioning can now be truly embedded in dynamically-changing real-world applications.
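The spectral core that RSB applies recursively, splitting on the Fiedler vector of the graph Laplacian, can be sketched directly (HARP's inertial projection and runtime machinery are beyond this toy):

```python
import numpy as np

def spectral_bisect(adj):
    """Bisect a connected graph: build the Laplacian L = D - A, take the
    eigenvector for the second-smallest eigenvalue (the Fiedler vector),
    and split the vertices at its median value."""
    adj = np.asarray(adj, float)
    L = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return fiedler <= np.median(fiedler)  # boolean partition of vertices
```

The median split guarantees two equal halves; recursing on each half yields 2^k balanced subgrids, which is why minimizing the cost of this eigenvector computation dominates spectral-partitioner runtime.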
Burner liner thermal-structural load modeling
NASA Technical Reports Server (NTRS)
Maffeo, R.
1986-01-01
The software package Transfer Analysis Code to Interface Thermal/Structural Problems (TRANCITS) was developed. The TRANCITS code is used to interface temperature data between thermal and structural analytical models. The use of this transfer module allows the heat transfer analyst to select the thermal mesh density and thermal analysis code best suited to solve the thermal problem and gives the same freedoms to the stress analyst, without the efficiency penalties associated with common meshes and the accuracy penalties associated with the manual transfer of thermal data.
Gamma motes for detection of radioactive materials in shipping containers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harold McHugh; William Quam; Stephan Weeks
Shipping containers can be effectively monitored for radiological materials using gamma (and neutron) motes in distributed mesh networks. The mote platform is ideal for collecting data for integration into operational management systems required for efficiently and transparently monitoring international trade. Significant reductions in size and power requirements have been achieved for room-temperature cadmium zinc telluride (CZT) gamma detectors. Miniaturization of radio modules and microcontroller units are paving the way for low-power, deeply-embedded, wireless sensor distributed mesh networks.
Wang, Dongbin; Shafer, Martin M; Schauer, James J; Sioutas, Constantinos
2015-04-01
This study presents a novel system for online, field measurement of copper (Cu) in ambient coarse (2.5-10 μm) particulate matter (PM). This new system utilizes two virtual impactors combined with a modified liquid impinger (BioSampler) to collect coarse PM directly as concentrated slurry samples. The total and water-soluble Cu concentrations are subsequently measured by a copper Ion Selective Electrode (ISE). Laboratory evaluation results indicated excellent collection efficiency (over 85%) for particles in the coarse PM size ranges. In the field evaluations, very good agreements for both total and water-soluble Cu concentrations were obtained between online ISE-based monitor measurements and those analyzed by means of inductively coupled plasma mass spectrometry (ICP-MS). Moreover, the field tests indicated that the Cu monitor could achieve near-continuous operation for at least 6 consecutive days (a time resolution of 2-4 h) without obvious shortcomings. Copyright © 2015 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Zhen; Voth, Gregory A., E-mail: gavoth@uchicago.edu
It is essential to be able to systematically construct coarse-grained (CG) models that can efficiently and accurately reproduce key properties of higher-resolution models such as all-atom. To fulfill this goal, a mapping operator is needed to transform the higher-resolution configuration to a CG configuration. Certain mapping operators, however, may lose information related to the underlying electrostatic properties. In this paper, a new mapping operator based on the centers of charge of CG sites is proposed to address this issue. Four example systems are chosen to demonstrate this concept. Within the multiscale coarse-graining framework, CG models that use this mapping operator are found to better reproduce the structural correlations of atomistic models. The present work also demonstrates the flexibility of the mapping operator and the robustness of the force matching method. For instance, important functional groups can be isolated and emphasized in the CG model.
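The mapping-operator idea can be sketched as a charge-weighted centroid. The weighting convention here (absolute charges, with a geometric-center fallback for neutral groups) is an assumption of this sketch, not necessarily the paper's definition:

```python
import numpy as np

def center_of_charge(positions, charges):
    """Map an atomistic group to a single CG site position.
    Uses a |q|-weighted centroid; falls back to the geometric center
    when the group carries essentially no charge."""
    x = np.asarray(positions, float)
    w = np.abs(np.asarray(charges, float))
    if w.sum() < 1e-8:
        return x.mean(axis=0)            # neutral group: geometric center
    return (w[:, None] * x).sum(axis=0) / w.sum()
```

Unlike a center-of-mass mapping, this places the CG site where the charge actually sits, which is what lets the CG model retain the group's electrostatic signature.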
Unstructured Mesh Methods for the Simulation of Hypersonic Flows
NASA Technical Reports Server (NTRS)
Peraire, Jaime; Bibb, K. L. (Technical Monitor)
2001-01-01
This report describes the research work undertaken at the Massachusetts Institute of Technology. The aim of this research is to identify effective algorithms and methodologies for the efficient and routine solution of hypersonic viscous flows about re-entry vehicles. For over ten years we have received support from NASA to develop unstructured mesh methods for Computational Fluid Dynamics. As a result of this effort, a methodology based on the use of unstructured adapted meshes of tetrahedra and finite volume flow solvers has been developed. A number of gridding algorithms, flow solvers, and adaptive strategies have been proposed. The most successful algorithms developed form the basis of the unstructured mesh system FELISA. The FELISA system has been used extensively for the analysis of transonic and hypersonic flows about complete vehicle configurations. The system is highly automatic and allows for the routine aerodynamic analysis of complex configurations starting from CAD data. The code has been parallelized and utilizes efficient solution algorithms. For hypersonic flows, a version of the code which incorporates real gas effects has been produced. One of the latest developments before the start of this grant was to extend the system to include viscous effects. This required the development of viscous generators, capable of generating the anisotropic grids required to represent boundary layers, and viscous flow solvers. In Figures 1 and 2, we show some sample hypersonic viscous computations using the developed viscous generators and solvers. Although these initial results were encouraging, it became apparent that, in order to develop a fully functional capability for viscous flows, several advances in gridding, solution accuracy, robustness, and efficiency were required.
As part of this research we have developed: 1) automatic meshing techniques, and the corresponding computer codes have been delivered to NASA and implemented into the GridEx system; 2) a finite element algorithm for the solution of the viscous compressible flow equations which can solve flows all the way down to the incompressible limit and can use higher order (quadratic) approximations leading to highly accurate answers; and 3) iterative algebraic multigrid solution techniques.
A geostatistical approach to estimate mining efficiency indicators with flexible meshes
NASA Astrophysics Data System (ADS)
Freixas, Genis; Garriga, David; Fernàndez-Garcia, Daniel; Sanchez-Vila, Xavier
2014-05-01
Geostatistics is a branch of statistics developed originally to predict probability distributions of ore grades for mining operations by considering the attributes of a geological formation at unknown locations as a set of correlated random variables. Mining exploitations typically aim to maintain acceptable mineral laws to produce commercial products based upon demand. In this context, we present a new geostatistical methodology to estimate strategic efficiency maps that incorporate hydraulic test data, the evolution of concentrations with time obtained from chemical analysis (packer tests and production wells), as well as hydraulic head variations. The methodology is applied to a salt basin in South America. The exploitation is based on the extraction of brines through vertical and horizontal wells. Thereafter, brines are precipitated in evaporation ponds to obtain target potassium and magnesium salts of economic interest. Lithium carbonate is obtained as a byproduct of the production of potassium chloride. Aside from assembling traditional geostatistical methods, the strength of this study lies in the new methodology developed, which focuses on finding the best sites to exploit the brines while maintaining efficiency criteria. Thus, strategic efficiency indicator maps have been developed under the specific criteria imposed by exploitation standards to incorporate new extraction wells in new areas that would allow production to be maintained or improved. Results show that the uncertainty quantification of the efficiency plays a dominant role and that the use of flexible meshes, which properly describe the curvilinear features associated with vertical stratification, provides a more consistent estimation of the geological processes. Moreover, we demonstrate that the vertical correlation structure at the given salt basin is essentially linked to variations in the formation thickness, which calls for flexible meshes and non-stationary stochastic processes.
NASA Astrophysics Data System (ADS)
Khouider, B.; Majda, A.; Deng, Q.; Ravindran, A. M.
2015-12-01
Global climate models (GCMs) are large computer codes based on the discretization of the equations of atmospheric and oceanic motions coupled to various processes of transfer of heat, moisture and other constituents between land, atmosphere, and oceans. Because of computing power limitations, typical GCM grid resolution is on the order of 100 km, and the effects on the climate system of many physical processes occurring on smaller scales are represented through various closure recipes known as parameterizations. The parameterization of convective motions and of many processes associated with cumulus clouds, such as the exchange of latent heat and cloud radiative forcing, is believed to be behind much of the uncertainty in GCMs. Based on a lattice particle interacting system, the stochastic multicloud model (SMCM) provides a novel and efficient representation of the unresolved variability in GCMs due to organized tropical convection and the cloud cover. It is widely recognized that stratiform heating contributes significantly to tropical rainfall and to the dynamics of tropical convective systems by inducing a front-to-rear tilt in the heating profile. Stratiform anvils forming in the wake of deep convection play a central role in the dynamics of tropical mesoscale convective systems. Here, aquaplanet simulations with a warm-pool-like surface forcing, based on a coarse-resolution GCM (~170 km grid mesh) coupled with the SMCM, are used to demonstrate the importance of stratiform heating for the organization of convection on planetary and intraseasonal scales. When some key model parameters are set to produce higher stratiform heating fractions, the model produces low-frequency and planetary-scale Madden-Julian oscillation (MJO)-like wave disturbances, while lower to moderate stratiform heating fractions yield mainly synoptic-scale convectively coupled Kelvin-like waves.
Rooted in the stratiform instability, it is conjectured here that the strength and extent of stratiform downdrafts are key contributors to the scale selection of convective organization, perhaps through mechanisms that are in essence similar to those of mesoscale convective systems.
A coarse-to-fine kernel matching approach for mean-shift based visual tracking
NASA Astrophysics Data System (ADS)
Liangfu, L.; Zuren, F.; Weidong, C.; Ming, J.
2009-03-01
Mean shift is an efficient pattern-matching algorithm. It is widely used in visual tracking since it does not need to perform an exhaustive search of the image space. It employs gradient optimization to reduce feature-matching time and achieve rapid object localization, and uses the Bhattacharyya coefficient as the similarity measure between the object template and candidate templates. This paper presents a mean shift algorithm based on a coarse-to-fine search for the best kernel match, targeting objects with large inter-frame motion. If the object's positions in two consecutive frames are far apart and the corresponding windows do not overlap in image space, the traditional mean shift method only reaches a local optimum by iterating within the old object window, so the true position cannot be found and tracking fails. The proposed algorithm first uses a similarity measure to roughly locate the moving object, then applies mean shift iterations to obtain an accurate local optimum, successfully tracking objects with large motion. Experimental results show good performance in accuracy and speed when compared with the background-weighted histogram algorithm in the literature.
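The two ingredients, a Bhattacharyya similarity between histograms and a coarse scan followed by local refinement, can be sketched on a precomputed similarity map. This is a toy stand-in: the greedy hill-climb below plays the role that kernel-weighted mean shift iterations play on real images:

```python
import numpy as np

def bhattacharyya(p, q):
    """Similarity between two normalized histograms (1.0 = identical)."""
    return float(np.sum(np.sqrt(p * q)))

def coarse_to_fine_locate(score, coarse_step=8):
    """Coarse-to-fine search: scan the similarity map on a coarse grid,
    then hill-climb from the best coarse cell to the local optimum."""
    h, w = score.shape
    ys, xs = np.mgrid[0:h:coarse_step, 0:w:coarse_step]
    i = int(np.argmax(score[ys, xs]))          # best coarse cell
    y, x = int(ys.ravel()[i]), int(xs.ravel()[i])
    while True:                                 # greedy local refinement
        nbrs = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if 0 <= y + dy < h and 0 <= x + dx < w]
        by, bx = max(nbrs, key=lambda t: score[t])
        if score[by, bx] <= score[y, x]:
            return y, x
        y, x = by, bx
```

Because the coarse scan lands within one grid cell of the true peak, the refinement stage cannot be trapped in a distant local optimum, which is exactly the failure mode of plain mean shift under large motion.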
Pang, Liping; Close, Murray; Goltz, Mark; Noonan, Mike; Sinton, Lester
2005-04-01
Filtration of Bacillus subtilis spores and the F-RNA phage MS2 (MS2) on a field scale in a coarse alluvial gravel aquifer was evaluated from the authors' previously published data. An advection-dispersion model that is coupled with first-order attachment kinetics was used in this study to interpret microbial concentration vs. time breakthrough curves (BTC) at sampling wells. Based on attachment rates (katt) that were determined by applying the model to the breakthrough data, filter factors (f) were calculated and compared with f values estimated from the slopes of log (cmax/co) vs. distance plots. These two independent approaches resulted in nearly identical filter factors, suggesting that both approaches are useful in determining reductions in microbial concentrations over transport distance. Applying the graphic approach to analyse spatial data, we have also estimated the f values for different aquifers using information provided by some other published field studies. The results show that values of f, in units of log (cmax/co) m^-1, are consistently in the order of 10^-2 for clean coarse gravel aquifers, 10^-3 for contaminated coarse gravel aquifers, and generally 10^-1 for sandy fine gravel aquifers and river and coastal sand aquifers. For each aquifer category, the f values for bacteriophages and bacteria are in the same order-of-magnitude. The f values estimated in this study indicate that for every one-log reduction in microbial concentration in groundwater, it requires a few tens of meters of travel in clean coarse gravel aquifers, but a few hundreds of meters in contaminated coarse gravel aquifers. In contrast, a one-log reduction generally only requires a few meters of travel in sandy fine gravel aquifers and sand aquifers. 
Considering the highest concentration in human effluent is in the order of 10^4 pfu/l for enteroviruses and 10^6 cfu/100 ml for faecal coliform bacteria, a 7-log reduction in microbial concentration would comply with the drinking water standards for the downgradient wells under natural gradient conditions. Based on the results of this study, a 7-log reduction would require 125-280 m travel in clean coarse gravel aquifers, 1.7-3.9 km travel in contaminated coarse gravel aquifers, 33-61 m travel in clean sandy fine gravel aquifers, 33-129 m travel in contaminated sandy fine gravel aquifers, and 37-44 m travel in contaminated river and coastal sand aquifers. These recommended setback distances are for a worst-case scenario, assuming direct discharge of raw effluent into the saturated zone of an aquifer. Filtration theory was applied to calculate collision efficiency (alpha) from model-derived attachment rates (katt), and the results are compared with those reported in the literature. The calculated alpha values vary by two orders-of-magnitude, depending on whether collision efficiency is estimated from the effective particle size (d10) or the mean particle size (d50). Collision efficiency values for MS2 are similar to those previously reported in the literature (e.g. DeBorde, D.C., Woessner, W.W., Kiley, Q.T., Ball, P., 1999. Rapid transport of viruses in a floodplain aquifer. Water Res. 33 (10), 2229-2238). However, the collision efficiency values calculated for Bacillus subtilis spores were unrealistic, suggesting that filtration theory is not appropriate for theoretically estimating filtration capacity for poorly sorted coarse gravel aquifer media. This is not surprising, as filtration theory was developed for uniform sand filters and does not consider particle size distribution. Thus, we do not recommend the use of filtration theory to estimate the filter factor or setback distances. Either of the methods applied in this work (BTC or concentration vs. 
distance analyses), which takes into account aquifer heterogeneities and site-specific conditions, appear to be most useful in determining filter factors and setback distances.
NASA Astrophysics Data System (ADS)
Pang, Liping; Close, Murray; Goltz, Mark; Noonan, Mike; Sinton, Lester
2005-04-01
Filtration of Bacillus subtilis spores and the F-RNA phage MS2 on a field scale in a coarse alluvial gravel aquifer was evaluated from the authors' previously published data. An advection-dispersion model coupled with first-order attachment kinetics was used to interpret microbial concentration vs. time breakthrough curves (BTCs) at sampling wells. Based on attachment rates (katt) determined by fitting the model to the breakthrough data, filter factors (f) were calculated and compared with f values estimated from the slopes of log(cmax/co) vs. distance plots. These two independent approaches yielded nearly identical filter factors, suggesting that both are useful in determining reductions in microbial concentration over transport distance. Applying the graphical approach to spatial data, we also estimated f values for different aquifers using information provided by other published field studies. The results show that values of f, in units of log(cmax/co) m^-1, are consistently on the order of 10^-2 for clean coarse gravel aquifers, 10^-3 for contaminated coarse gravel aquifers, and generally 10^-1 for sandy fine gravel aquifers and for river and coastal sand aquifers. For each aquifer category, the f values for bacteriophages and bacteria are of the same order of magnitude. The f values estimated in this study indicate that every one-log reduction in microbial concentration in groundwater requires a few tens of meters of travel in clean coarse gravel aquifers, but a few hundred meters in contaminated coarse gravel aquifers. In contrast, a one-log reduction generally requires only a few meters of travel in sandy fine gravel aquifers and sand aquifers. 
Considering that the highest concentration in human effluent is on the order of 10^4 pfu/L for enteroviruses and 10^6 cfu/100 mL for faecal coliform bacteria, a 7-log reduction in microbial concentration would comply with drinking water standards at downgradient wells under natural gradient conditions. Based on the results of this study, a 7-log reduction would require 125-280 m of travel in clean coarse gravel aquifers, 1.7-3.9 km in contaminated coarse gravel aquifers, 33-61 m in clean sandy fine gravel aquifers, 33-129 m in contaminated sandy fine gravel aquifers, and 37-44 m in contaminated river and coastal sand aquifers. These recommended setback distances are for a worst-case scenario, assuming direct discharge of raw effluent into the saturated zone of an aquifer. Filtration theory was applied to calculate collision efficiency (α) from model-derived attachment rates (katt), and the results are compared with those reported in the literature. The calculated α values vary by two orders of magnitude, depending on whether collision efficiency is estimated from the effective particle size (d10) or the mean particle size (d50). Collision efficiency values for MS2 are similar to those previously reported in the literature (e.g. DeBorde, D.C., Woessner, W.W., Kiley, Q.T., Ball, P., 1999. Rapid transport of viruses in a floodplain aquifer. Water Res. 33 (10), 2229-2238). However, the collision efficiency values calculated for Bacillus subtilis spores were unrealistic, suggesting that filtration theory is not appropriate for theoretically estimating the filtration capacity of poorly sorted coarse gravel aquifer media. This is not surprising, as filtration theory was developed for uniform sand filters and does not consider particle size distribution. Thus, we do not recommend the use of filtration theory to estimate filter factors or setback distances. 
Either of the methods applied in this work (BTC analysis or concentration vs. distance analysis), both of which take into account aquifer heterogeneities and site-specific conditions, appears to be most useful in determining filter factors and setback distances.
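The relationship between the filter factor and setback distance can be made concrete with a short calculation: the travel distance needed for a target log-reduction is simply the target divided by f. The f values below are illustrative order-of-magnitude figures consistent with the ranges quoted above; site-specific values will differ.

```python
# Setback distance for a target log-reduction, given a filter factor f
# in units of log10(cmax/co) per metre of travel:
#   distance = target_log_reduction / f

def setback_distance(target_log_reduction, f):
    """Metres of travel needed to achieve the target log-reduction."""
    return target_log_reduction / f

# Illustrative f values (log/m); actual values are site-specific.
aquifers = {
    "clean coarse gravel": 2.5e-2,         # f ~ 10^-2
    "contaminated coarse gravel": 2.5e-3,  # f ~ 10^-3
    "sandy fine gravel / sand": 1.0e-1,    # f ~ 10^-1
}

for name, f in aquifers.items():
    d = setback_distance(7, f)  # 7-log reduction target
    print(f"{name}: ~{d:.0f} m")
```

With f = 2.5e-2 log/m, a 7-log reduction requires ~280 m, matching the upper end of the 125-280 m range reported for clean coarse gravel aquifers.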
Selective separation of oil and water with mesh membranes by capillarity.
Yu, Yuanlie; Chen, Hua; Liu, Yun; Craig, Vincent S J; Lai, Zhiping
2016-09-01
The separation of oil and water from wastewater generated by the oil-production industries, as well as in frequent oil spillage events, is important in mitigating severe environmental and ecological damage. Additionally, a wide range of industrial processes require oils or fats to be removed from aqueous systems. The immiscibility of oil and water allows the wettability of solid surfaces to be engineered to achieve the separation of oil and water through capillarity. Mesh membranes with extreme, selective wettability can efficiently remove oil or water from oil/water mixtures through a simple gravity-driven filtration process. A wide range of mesh membranes have been successfully rendered with extreme wettability and applied to oil/water separation in the laboratory. These mesh materials have typically shown good durability, stability, and reusability, which makes them promising candidates for an ever-widening range of practical applications. Copyright © 2016 Elsevier B.V. All rights reserved.
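The capillarity mechanism behind selective mesh separation can be sketched with the standard Young-Laplace estimate: for a cylindrical pore of radius r, the intrusion (breakthrough) pressure of a liquid with surface tension γ and contact angle θ is ΔP = -2γcosθ/r. This is a textbook approximation rather than the model used in the abstract, and the pore radius and contact angle below are purely illustrative assumptions.

```python
import math

def breakthrough_pressure(gamma, theta_deg, pore_radius):
    """Young-Laplace intrusion pressure (Pa) for a cylindrical pore.

    Positive result: the pore resists intrusion (liquid is held out);
    negative result: the liquid wicks in spontaneously.
    """
    return -2.0 * gamma * math.cos(math.radians(theta_deg)) / pore_radius

# Illustrative values (assumptions, not from the paper):
gamma_water = 0.072   # N/m, surface tension of water
r_pore = 50e-6        # 50 um effective pore radius of the mesh
theta_on_mesh = 150.0 # degrees; a superhydrophobic coating repels water

p = breakthrough_pressure(gamma_water, theta_on_mesh, r_pore)
print(f"Water breakthrough pressure: {p:.0f} Pa")
```

A positive breakthrough pressure of a few kPa means a superhydrophobic mesh can support a modest head of water while oil (which wets the coating, θ near 0) passes through freely, which is the basis of gravity-driven selective separation.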
Poly(ε-caprolactone) Microfiber Meshes for Repeated Oil Retrieval
Hersey, J. S.; Yohe, S. T.; Grinstaff, M. W.
2016-01-01
Electrospun non-woven poly(ε-caprolactone) (PCL) microfiber meshes are described as biodegradable, mechanically robust, and reusable polymeric oil sorbents capable of selectively retrieving oil from simulated oil spills in both fresh- and seawater scenarios. Hydrophobic PCL meshes have >99.5% oil selectivity (oil over water) and oil absorption capacities of ~10 grams of oil per gram of sorbent material, which is shown to be a volumetrically driven process. Both the oil selectivity and the absorption capacity remained constant over several cycles of oil absorption and vacuum-assisted retrieval when removing crude oil or mechanical pump oil from deionized water or simulated seawater mixtures. Finally, when challenged with surfactant-stabilized water-in-oil emulsions, the PCL meshes continued to show selective oil absorption. These studies add to the knowledge base of synthetic oil sorbents, highlighting the need for biodegradable sorbents that balance porosity with the mechanical integrity required for reuse, allowing efficient recovery of oil after an accidental spill. PMID:26989490
NASA Technical Reports Server (NTRS)
Ralph, E. L.; Linder, E. B.
1996-01-01
Solar panel designs that utilize new high-efficiency solar cells and lightweight rigid panel technologies are described. The resulting designs increase the specific power (W/kg) achievable in the near term and are well suited to meet the demands of higher-performance small satellites (smallsats). Advanced solar panel designs have been developed and demonstrated on two NASA SBIR contracts at Applied Solar. The first used 19% efficient, large-area (5.5 cm x 6.5 cm) GaAs/Ge solar cells with a lightweight rigid graphite-epoxy isogrid substrate configuration. A 1,445 cm^2 coupon was fabricated and tested to demonstrate 60 W/kg, with a high potential of achieving 80 W/kg. The second panel design used new 22% efficient, dual-junction GaInP2/GaAs/Ge solar cells combined with a lightweight aluminum core/graphite fiber mesh facesheet substrate. A 1,445 cm^2 coupon was fabricated and tested to demonstrate 105 W/kg, with the potential of achieving 115 W/kg. This paper will address the construction details for the GaAs/isogrid and dual-junction GaAs/carbon mesh panel configurations. These are ultimately sized to provide 75 W and 119 W, respectively, for smallsats, or may be used as modular building blocks for larger systems. GaAs/isogrid and dual-junction GaAs/carbon mesh coupons have been fabricated and tested to successfully demonstrate critical performance parameters, and results are also provided here.
Comparative efficiency of different methods of gluten extraction in indigenous varieties of wheat.
Imran, Samra; Hussain, Zaib; Ghafoor, Farkhanda; Nagra, Saeedahmad; Ziai, Naheeda Ashbeal
2013-06-01
The present study investigated six varieties of locally grown wheat (Lasani, Sehar, Miraj-08, Chakwal-50, Faisalabad-08 and Inqlab), procured from Punjab Seed Corporation, Lahore, Pakistan, for their proximate contents. On the basis of protein content and ready availability, Faisalabad-08 (FD-08) was selected for assessing the comparative efficiency of various gluten extraction methods. Three methods, mechanical, chemical and microbiological, were used for the extraction of gluten from FD-08. Each method was carried out under ambient conditions using a drying temperature of 55 °C. The mechanical method utilized four different processes, viz. the dough process, dough batter process, batter process and ethanol washing process, using a standard 150 mesh. The starch thus obtained was analyzed for its proximate contents. The dough batter process proved to be the most efficient mechanical method and was further investigated using 200 and 300 mesh. Gluten content was determined using a sandwich omega-gliadin enzyme-linked immunosorbent assay (ELISA). The results of the dough batter process using 200 mesh indicated a starch product with a gluten content of 678 ppm. The chemical method yielded a high gluten content of more than 5000 ppm, and the microbiological method reduced the gluten content from 2500 ppm to 398 ppm. From the results it was observed that no gluten extraction method was able to produce starch fulfilling the criterion for a gluten-free product (<20 ppm).
Liu, Peter X.; Lai, Pinhua; Xu, Shaoping; Zou, Yanni
2018-01-01
At present, the majority of implemented virtual surgery simulation systems are based on either a mesh or a meshless strategy for soft tissue modelling. To take full advantage of both the mesh and meshless models, a novel coupled soft tissue cutting model is proposed. Specifically, the reconstructed virtual soft tissue consists of two essential components: one is associated with a surface mesh, which is convenient for surface rendering, and the other with internal meshless point elements, which are used to calculate the force feedback during cutting. To combine the two components seamlessly, virtual points are introduced. During the simulation of cutting, a Bezier curve is used to characterize a smooth and vivid incision on the surface mesh. At the same time, the deformation of internal soft tissue caused by the cutting operation is treated as displacements of the internal point elements. Furthermore, we discuss and prove the stability and convergence of the proposed approach theoretically. Real biomechanical tests verified the validity of the introduced model, and simulation experiments show that the proposed approach offers high computational efficiency and good visual effect, enabling cutting of soft tissue with high stability. PMID:29850006
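The abstract mentions that a Bezier curve characterizes the incision on the surface mesh. A minimal sketch of evaluating such a curve with de Casteljau's algorithm follows; the control points are hypothetical surface positions chosen for illustration, not data from the paper.

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve of any degree at parameter t in [0, 1]
    by repeated linear interpolation of the control polygon."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical incision endpoints and handles on the surface mesh:
ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]

# Sample the smooth incision path at 21 points along the curve.
incision = [de_casteljau(ctrl, i / 20) for i in range(21)]
print(incision[0], incision[10], incision[-1])
```

The sampled points trace a smooth arc from the first to the last control point, which is the property exploited for rendering a vivid incision.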
Lin, Fu; Leyffer, Sven; Munson, Todd
2016-04-12
We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking a convex combination of each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound in a provably finite number of steps. This is achieved by adding violated constraints to the coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach on cogeneration for buildings, where the coarsened models allow us to obtain good approximate solutions in a fraction of the time required to solve the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
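The constraint-aggregation step can be illustrated on a toy system: partition the rows of A x ≤ b into groups and replace each group by a convex combination of its rows (here, a plain average, i.e. equal weights). This is a generic sketch of the idea, not the authors' implementation; note that any x feasible for the original system is automatically feasible for the aggregated one, which is why the coarse model is a relaxation.

```python
import numpy as np

def aggregate_constraints(A, b, groups):
    """Coarsen the system A x <= b by replacing each group of rows
    with its average, a convex combination with equal weights.

    groups: list of row-index lists partitioning the rows of A.
    """
    A_c = np.array([A[g].mean(axis=0) for g in groups])
    b_c = np.array([b[g].mean() for g in groups])
    return A_c, b_c

# Toy system: 4 constraints coarsened into 2.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 0.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])

A_c, b_c = aggregate_constraints(A, b, groups=[[0, 1], [2, 3]])
print(A_c)  # averaged constraint rows
print(b_c)  # averaged right-hand sides
```

In the two-level scheme described above, rows of the semi-coarse model violated by the coarse solution would then be added back individually until the semi-coarse model's constraints are all satisfied.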