Sample records for mesh superposition method

  1. A wave superposition method formulated in digital acoustic space

    NASA Astrophysics Data System (ADS)

    Hwang, Yong-Sin

    In this thesis, a new formulation of the Wave Superposition method is proposed in which the conventional mesh approach is replaced by a simple 3-D digital work space that easily accommodates shape optimization for minimizing or maximizing radiation efficiency. Because sound quality is in demand in almost all product designs, and because of fierce competition between product manufacturers, a fast and accurate computational method for shape optimization is always desired. Because the conventional Wave Superposition method relies solely on mesh geometry, it cannot accommodate rapid shape changes in the design stage of a consumer product or machine, where many iterations of shape changes are required. Since the use of a mesh hinders easy shape changes, a new approach for representing geometry is introduced by constructing a uniform lattice in a 3-D digital work space. A voxel (a portmanteau of the words volumetric and pixel) is essentially a volume element defined by the uniform lattice, and it does not require the separate connectivity information that a mesh element does. In the presented method, geometry is represented with voxels that easily adapt to shape changes, making it more suitable for shape optimization. The new method was validated by computing the radiated sound power of structures with simple and complex geometries and complex mode shapes. It was shown that matching volume velocity is a key component of an accurate analysis. A sensitivity study showed that at least 6 elements per acoustic wavelength are required, and a complexity study showed a minimal reduction in computational time.
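
    As a rough illustration of the equivalent-source idea behind wave superposition (interior monopole sources whose strengths are fitted so that the prescribed surface normal velocity, i.e. the volume velocity, is matched, after which the radiated pressure follows by summation), the sketch below sets up a toy problem in Python. The lattice spacing, frequency, source and collocation positions are all invented for illustration and do not reproduce the thesis' voxel formulation.

      # Toy sketch of the wave-superposition (equivalent-source) idea on a uniform lattice.
      # Geometry, frequency and source placement are illustrative assumptions.
      import numpy as np

      rho, c = 1.21, 343.0
      k = 2 * np.pi * 500 / c                  # wavenumber at 500 Hz

      def greens(r_field, r_src):
          """Free-space Green's function e^{-jkR}/(4 pi R)."""
          R = np.linalg.norm(r_field[:, None, :] - r_src[None, :, :], axis=-1)
          return np.exp(-1j * k * R) / (4 * np.pi * R)

      # Interior equivalent sources on a small lattice and surface collocation points (toy box).
      src = np.array([[x, y, z] for x in (0.02, 0.06) for y in (0.02, 0.06) for z in (0.02, 0.06)])
      surf = np.array([[x, y, 0.1] for x in np.linspace(0, 0.08, 5) for y in np.linspace(0, 0.08, 5)])
      n_hat = np.tile([0.0, 0.0, 1.0], (len(surf), 1))     # outward normals of the top face
      v_n = np.ones(len(surf))                             # prescribed normal velocity (m/s)

      # Normal velocity produced by a unit monopole, from Euler's equation
      # (sign depends on the e^{+j w t} time convention assumed here).
      eps = 1e-4
      dGdn = (greens(surf + eps * n_hat, src) - greens(surf - eps * n_hat, src)) / (2 * eps)
      V = -dGdn / (1j * rho * c * k)

      q = np.linalg.lstsq(V, v_n, rcond=None)[0]           # source strengths that match the volume velocity

      field_pt = np.array([[0.04, 0.04, 1.0]])             # observation point 1 m away
      p = greens(field_pt, src) @ q
      print("radiated pressure magnitude at 1 m:", abs(p[0]))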

  2. Numerical evaluation of moiré pattern in touch sensor module with electrode mesh structure in oblique view

    NASA Astrophysics Data System (ADS)

    Pournoury, M.; Zamiri, A.; Kim, T. Y.; Yurlov, V.; Oh, K.

    2016-03-01

    Capacitive touch sensor screens with metal materials have recently become qualified as substitutes for ITO; however, several obstacles still have to be solved. One of the most important issues is the moiré phenomenon. The visibility problem of the metal mesh in a touch sensor module (TSM) is considered numerically in this paper. Based on the human eye contrast sensitivity function (CSF), the moiré pattern of the TSM electrode mesh structure is simulated with MATLAB software for an 8-inch screen display in oblique view. The standard deviation of the moiré generated by the superposition of the electrode mesh and the screen image is calculated to find the optimal parameters that provide the minimum moiré visibility. To create the screen pixel array and mesh electrode, a rectangular function is used. The filtered image, in the frequency domain, is obtained by multiplying the Fourier transform of the finite mesh pattern (the product of screen pixels and mesh electrode) with the calculated CSF for three observer distances (L = 200, 300 and 400 mm). It is observed that the discrepancy between the analytical and numerical results is less than 0.6% for a 400 mm viewer distance. Moreover, in the case of oblique view, because the thickness of the finite film between the mesh electrodes and the screen is taken into account, the points of minimum standard deviation of the moiré pattern differ from those predicted for normal view.
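
    A minimal sketch of the simulation pipeline summarized above, assuming a generic rectangular screen/mesh pattern and the Mannos-Sakrison form of the CSF; the paper's exact CSF, pitches, and oblique-view geometry are not reproduced.

      # Superpose a screen pixel array and an electrode mesh, filter the spectrum with a
      # contrast sensitivity function (CSF), and use the standard deviation of the filtered
      # image as a moire-visibility metric. Pitches, CSF form and viewing distance are assumptions.
      import numpy as np

      N, pitch_screen, pitch_mesh = 1024, 8, 11             # samples; pitches in samples
      x = np.arange(N)
      screen = ((x % pitch_screen) < pitch_screen // 2).astype(float)
      mesh = ((x % pitch_mesh) < 2).astype(float)            # thin electrode lines
      pattern = np.outer(screen, screen) * np.outer(mesh, mesh)

      # Spatial frequency in cycles/degree for a viewer at distance L (mm), sample size dx (mm).
      L, dx = 300.0, 0.05
      fx = np.fft.fftfreq(N, d=dx)                           # cycles/mm
      f_cpd = np.sqrt(fx[:, None] ** 2 + fx[None, :] ** 2) * (L * np.pi / 180.0)

      def csf(f):
          """Mannos-Sakrison empirical contrast sensitivity function."""
          return 2.6 * (0.0192 + 0.114 * f) * np.exp(-((0.114 * f) ** 1.1))

      spectrum = np.fft.fft2(pattern) * csf(f_cpd)
      filtered = np.real(np.fft.ifft2(spectrum))
      print("moire visibility metric (std of filtered image):", filtered.std())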

  3. Tooth-meshing-harmonic static-transmission-error amplitudes of helical gears

    NASA Astrophysics Data System (ADS)

    Mark, William D.

    2018-01-01

    The static transmission errors of meshing gear pairs arise from deviations of loaded tooth working surfaces from equispaced perfect involute surfaces. Such deviations consist of tooth-pair elastic deformations and geometric deviations (modifications) of tooth working surfaces. To a very good approximation, the static-transmission-error tooth-meshing-harmonic amplitudes of helical gears are herein expressed by superposition of Fourier transforms of the quantities: (1) the combination of tooth-pair elastic deformations and geometric tooth-pair modifications and (2) fractional mesh-stiffness fluctuations, each quantity (1) and (2) expressed as a function of involute "roll distance." Normalization of the total roll-distance single-tooth contact span to unity allows tooth-meshing-harmonic amplitudes to be computed for different shapes of the above-described quantities (1) and (2). Tooth-meshing harmonics p = 1, 2, … are shown to occur at Fourier-transform harmonic values of Qp, p = 1, 2, …, where Q is the actual (total) contact ratio, thereby verifying its importance in minimizing transmission-error tooth-meshing-harmonic amplitudes. Two individual shapes and two series of shapes of the quantities (1) and (2) are chosen to illustrate a wide variety of shapes. In most cases representative of helical gears, tooth-meshing-harmonic values p = 1, 2, … are shown to occur in Fourier-transform harmonic regions governed by discontinuities arising from tooth-pair-contact initiation and termination, thereby showing the importance of minimizing such discontinuities. Plots and analytical expressions for all such Fourier transforms are presented, thereby illustrating the effects of various types of tooth-working-surface modifications and tooth-pair stiffnesses on transmission-error generation.
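
    The sampling property stated above (harmonic p of the tooth-meshing frequency picks out the Fourier transform of a single tooth-pair contribution at frequency pQ once the single-tooth contact span is normalized to unity) can be checked numerically with a short sketch; the half-sine tooth-pair shape and Q = 1.4 below are illustrative assumptions, not the paper's shapes.

      # Evaluate the Fourier transform of one tooth-pair contribution at frequencies p*Q;
      # these values are proportional to the tooth-meshing harmonic amplitudes.
      import numpy as np

      Q = 1.4                                    # total contact ratio (assumed)
      s = np.linspace(0.0, 1.0, 4001)            # normalized roll distance over one tooth-pair contact
      shape = np.sin(np.pi * s)                  # illustrative tooth-pair deflection shape

      def fourier_transform(f, s, freq):
          """Continuous Fourier transform of f(s) at one frequency (simple Riemann sum)."""
          ds = s[1] - s[0]
          return np.sum(f * np.exp(-2j * np.pi * freq * s)) * ds

      for p in range(1, 6):
          amp = abs(fourier_transform(shape, s, p * Q))
          print(f"meshing harmonic p={p}: |F(pQ)| = {amp:.4e}")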

  4. Comparison of modal superposition methods for the analytical solution to moving load problems.

    DOT National Transportation Integrated Search

    1994-01-01

    The response of bridge structures to moving loads is investigated using modal superposition methods. Two distinct modal superposition methods are available: the mode-displacement method and the mode-acceleration method. While the mode-displacement met...

  5. Quantum mushroom billiards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, Alex H.; Betcke, Timo; School of Mathematics, University of Manchester, Manchester, M13 9PL

    2007-12-15

    We report the first large-scale statistical study of very high-lying eigenmodes (quantum states) of the mushroom billiard proposed by L. A. Bunimovich [Chaos 11, 802 (2001)]. The phase space of this mixed system is unusual in that it has a single regular region and a single chaotic region, and no KAM hierarchy. We verify Percival's conjecture to high accuracy (1.7%). We propose a model for dynamical tunneling and show that it predicts well the chaotic components of predominantly regular modes. Our model explains our observed density of such superpositions dying as E^(-1/3) (E is the eigenvalue). We compare eigenvalue spacing distributions against Random Matrix Theory expectations, using 16,000 odd modes (an order of magnitude more than any existing study). We outline new variants of mesh-free boundary collocation methods which enable us to achieve high accuracy and high mode numbers (~10^5) orders of magnitude faster than with competing methods.

  6. Thermalization as an invisibility cloak for fragile quantum superpositions

    NASA Astrophysics Data System (ADS)

    Hahn, Walter; Fine, Boris V.

    2017-07-01

    We propose a method for protecting fragile quantum superpositions in many-particle systems from dephasing by external classical noise. We call superpositions "fragile" if dephasing occurs particularly fast, because the noise couples very differently to the superposed states. The method consists of letting a quantum superposition evolve under the internal thermalization dynamics of the system, followed by a time-reversal manipulation known as Loschmidt echo. The thermalization dynamics makes the superposed states almost indistinguishable during most of the above procedure. We validate the method by applying it to a cluster of spins ½.

  7. A modified homotopy perturbation method and the axial secular frequencies of a non-linear ion trap.

    PubMed

    Doroudi, Alireza

    2012-01-01

    In this paper, a modified version of the homotopy perturbation method, which has been applied to non-linear oscillations by V. Marinca, is used to calculate the axial secular frequencies of a non-linear ion trap with hexapole and octopole superpositions. The axial equation of ion motion in the rapidly oscillating field of an ion trap can be transformed to a Duffing-like equation. With only octopole superposition the resulting non-linear equation is symmetric; however, in the presence of both hexapole and octopole superpositions, it is asymmetric. This modified homotopy perturbation method is used to solve the resulting non-linear equations. As a result, the ion secular frequencies are obtained as functions of the non-linear field parameters. The calculated secular frequencies are compared with the results of the homotopy perturbation method and the exact results. With only hexapole superposition, the results of this paper and those of the homotopy perturbation method are the same, whereas with both hexapole and octopole superpositions, the results of this paper are much closer to the exact results than those of the homotopy perturbation method.

  8. Estimating Concentrations of Road-Salt Constituents in Highway-Runoff from Measurements of Specific Conductance

    USGS Publications Warehouse

    Granato, Gregory E.; Smith, Kirk P.

    1999-01-01

    Discrete or composite samples of highway runoff may not adequately represent in-storm water-quality fluctuations because continuous records of water stage, specific conductance, pH, and temperature of the runoff indicate that these properties fluctuate substantially during a storm. Continuous records of water-quality properties can be used to maximize the information obtained about the stormwater runoff system being studied and can provide the context needed to interpret analyses of water samples. Concentrations of the road-salt constituents calcium, sodium, and chloride in highway runoff were estimated from theoretical and empirical relations between specific conductance and the concentrations of these ions. These relations were examined using the analysis of 233 highway-runoff samples collected from August 1988 through March 1995 at four highway-drainage monitoring stations along State Route 25 in southeastern Massachusetts. Theoretically, the specific conductance of a water sample is the sum of the individual conductances attributed to each ionic species in solution (the product of the concentration of each ion in milliequivalents per liter (meq/L) and its equivalent ionic conductance at infinite dilution), thereby establishing the principle of superposition. Superposition provides an estimate of actual specific conductance that is within measurement error throughout the conductance range of many natural waters, with errors of less than ±5 percent below 1,000 microsiemens per centimeter (µS/cm) and ±10 percent between 1,000 and 4,000 µS/cm if all major ionic constituents are accounted for. A semi-empirical method (adjusted superposition) was used to adjust for concentration effects (superposition-method prediction errors at high and low concentrations) and to relate measured specific conductance to that calculated using superposition. The adjusted superposition method, which was developed to interpret the State Route 25 highway-runoff records, accounts for contributions of constituents other than calcium, sodium, and chloride in dilute waters. The adjusted superposition method also accounts for the attenuation of each constituent's contribution to conductance as ionic strength increases. Use of the adjusted superposition method generally reduced predictive error to within measurement error throughout the range of specific conductance (from 37 to 51,500 µS/cm) in the highway-runoff samples. The effects of pH, temperature, and organic constituents on the relation between concentrations of dissolved constituents and measured specific conductance were examined, but these properties did not substantially affect interpretation of the Route 25 data set. The predictive abilities of the adjusted superposition method were similar to results obtained by standard regression techniques, but the adjusted superposition method has several advantages. Adjusted superposition can be applied using available published data about the constituents in precipitation, highway runoff, and the deicing chemicals applied to a highway. This semi-empirical method can be used as a predictive and diagnostic tool before a substantial number of samples are collected, whereas the power of the regression method rests on a large number of water-quality analyses that may be affected by bias in the data.
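
    A minimal sketch of the superposition calculation described above, using commonly tabulated equivalent ionic conductances at infinite dilution (S cm2/eq at 25 °C) and made-up concentrations; with concentrations in meq/L, the products come out directly in µS/cm.

      # Estimate specific conductance by superposition: sum of concentration (meq/L) times
      # equivalent ionic conductance at infinite dilution. Concentrations are example values;
      # the conductance table lists commonly tabulated literature values.
      LAMBDA_0 = {"Ca": 59.5, "Na": 50.1, "Cl": 76.3}   # S cm^2/eq, i.e. uS/cm per meq/L

      def superposition_conductance(conc_meq_per_L):
          """Estimated specific conductance (uS/cm) from ionic concentrations in meq/L."""
          return sum(conc_meq_per_L[ion] * LAMBDA_0[ion] for ion in conc_meq_per_L)

      sample = {"Ca": 0.5, "Na": 4.0, "Cl": 4.3}        # hypothetical runoff sample, meq/L
      print("estimated specific conductance:", superposition_conductance(sample), "uS/cm")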

  9. Optimal simultaneous superpositioning of multiple structures with missing data.

    PubMed

    Theobald, Douglas L; Steindel, Phillip A

    2012-08-01

    Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu. Supplementary data are available at Bioinformatics online.

  10. Electrical Resistivity Tomography using a finite element based BFGS algorithm with algebraic multigrid preconditioning

    NASA Astrophysics Data System (ADS)

    Codd, A. L.; Gross, L.

    2018-03-01

    We present a new inversion method for Electrical Resistivity Tomography which, in contrast to established approaches, minimizes the cost function prior to finite element discretization for the unknown electric conductivity and electric potential. Minimization is performed with the Broyden-Fletcher-Goldfarb-Shanno method (BFGS) in an appropriate function space. BFGS is self-preconditioning and avoids construction of the dense Hessian, which is the major obstacle to solving large 3-D problems on parallel computers. In addition to the forward problem predicting the measurement from the injected current, the so-called adjoint problem also needs to be solved. For this problem a virtual current is injected through the measurement electrodes and an adjoint electric potential is obtained. The magnitude of the injected virtual current is equal to the misfit at the measurement electrodes. This new approach has the advantage that the solution process of the optimization problem remains independent of the meshes used for discretization and allows for mesh adaptation during inversion. Computation time is reduced by using superposition of pole loads for the forward and adjoint problems. A smoothed aggregation algebraic multigrid (AMG) preconditioned conjugate gradient is applied to construct the potentials for a given electric conductivity estimate and to construct a first-level BFGS preconditioner. Through the additional reuse of AMG operators and coarse grid solvers, inversion time for large 3-D problems can be reduced further. We apply our new inversion method to synthetic survey data created from a resistivity profile representing the characteristics of subsurface fluid injection. We further test it on data obtained from a 2-D surface electrode survey on Heron Island, a small tropical island off the east coast of central Queensland, Australia.

  11. Numerical predictions and experiments for optimizing hidden corrosion detection in aircraft structures using Lamb modes.

    PubMed

    Terrien, N; Royer, D; Lepoutre, F; Déom, A

    2007-06-01

    To increase the sensitivity of Lamb waves to hidden corrosion in aircraft structures, a preliminary step is to understand the phenomena governing this interaction. A hybrid model combining a finite element approach and a modal decomposition method is used to investigate the interaction of Lamb modes with corrosion pits. The finite element mesh is used to describe the region surrounding the corrosion pits, while the modal decomposition method determines the waves reflected and transmitted by the damaged area. Simulations ease the interpretation of the parts of the measured waveform corresponding to the superposition of waves diffracted by the corroded area. Numerical results allow significant information to be extracted from the transmitted waveform and thus the signal processing to be optimized for the detection of corrosion at an early stage. We are now able to detect corrosion pits down to 80-µm depth distributed randomly over a square centimeter of an aluminum plate. Moreover, thickness variations present on aircraft structures can be discriminated from a slightly corroded area. Finally, using this experimental setup, aircraft structures have been tested.

  12. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures

    PubMed Central

    Theobald, Douglas L.; Wuttke, Deborah S.

    2008-01-01

    Summary THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. PMID:16777907

  13. Optimal simultaneous superpositioning of multiple structures with missing data

    PubMed Central

    Theobald, Douglas L.; Steindel, Phillip A.

    2012-01-01

    Motivation: Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually ‘missing’ from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Results: Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation–maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. Availability and implementation: The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22543369

  14. On the Use of Material-Dependent Damping in ANSYS for Mode Superposition Transient Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, J.; Wei, X.

    The mode superposition method is often used for dynamic analysis of complex structures, such as the seismic Category I structures in nuclear power plants, in place of the less efficient full method, which uses the full system matrices for calculation of the transient responses. In such applications, specification of material-dependent damping is usually desirable because complex structures can consist of multiple types of materials that may have different energy dissipation capabilities. A recent review of the ANSYS manual for several releases found that the use of material-dependent damping is not clearly explained for performing a mode superposition transient dynamic analysis. This paper includes several mode superposition transient dynamic analyses using different ways to specify damping in ANSYS, in order to determine how material-dependent damping can be specified conveniently in a mode superposition transient dynamic analysis.
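
    A hedged sketch (in Python, not ANSYS input) of one common way material-dependent damping enters a mode-superposition transient analysis: each mode receives an effective damping ratio equal to the strain-energy-weighted average of the material damping ratios, and the decoupled modal equations are then integrated. The two-DOF system, material split, and damping ratios below are invented for illustration.

      # Composite modal damping from material damping ratios, then mode-superposition transient.
      import numpy as np
      from scipy.linalg import eigh
      from scipy.integrate import solve_ivp

      # Two-DOF chain: element 1 is "steel-like", element 2 is "concrete-like" (assumed values).
      k1, k2, m = 4.0e6, 1.0e6, 100.0
      K = np.array([[k1 + k2, -k2], [-k2, k2]])
      M = np.diag([m, m])
      K_mat = {"steel": np.array([[k1, 0.0], [0.0, 0.0]]),        # per-material stiffness parts
               "concrete": np.array([[k2, -k2], [-k2, k2]])}
      zeta_mat = {"steel": 0.02, "concrete": 0.05}                # material damping ratios (assumed)

      w2, Phi = eigh(K, M)                                        # mass-normalized modes
      omega = np.sqrt(w2)

      # Effective modal damping: weight each material's ratio by its share of the modal strain energy.
      zeta = np.zeros(len(omega))
      for i in range(len(omega)):
          e = {mat: Phi[:, i] @ K_mat[mat] @ Phi[:, i] for mat in K_mat}
          zeta[i] = sum(zeta_mat[mat] * e[mat] for mat in e) / sum(e.values())

      # Decoupled modal equations q'' + 2 zeta w q' + w^2 q = Phi^T f, step load on DOF 2.
      f = np.array([0.0, 1.0e4])
      def rhs(t, y):
          q, qd = y[:2], y[2:]
          qdd = Phi.T @ f - 2 * zeta * omega * qd - omega**2 * q
          return np.concatenate([qd, qdd])

      sol = solve_ivp(rhs, (0.0, 0.5), np.zeros(4), max_step=1e-3)
      u = Phi @ sol.y[:2]                                         # back to physical coordinates
      print("peak displacement of DOF 2:", np.abs(u[1]).max())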

  15. Numerical benchmarking of a Coarse-Mesh Transport (COMET) Method for medical physics applications

    NASA Astrophysics Data System (ADS)

    Blackburn, Megan Satterfield

    2009-12-01

    Radiation therapy has become a very important method for treating cancer patients. Thus, it is extremely important to accurately determine the location of energy deposition during these treatments, maximizing dose to the tumor region and minimizing it to healthy tissue. A Coarse-Mesh Transport Method (COMET) has been developed in the Computational Reactor and Medical Physics Group at the Georgia Institute of Technology and used very successfully with neutron transport to analyze whole-core criticality. COMET works by decomposing a large, heterogeneous system into a set of smaller fixed-source problems. For each unique local problem that exists, a solution is obtained that we call a response function. These response functions are pre-computed and stored in a library for future use. The overall solution to the global problem can then be found by a linear superposition of these local problems. This method has now been extended to the transport of photons and electrons for use in medical physics problems to determine energy deposition from radiation therapy treatments. The main goal of this work was to develop benchmarks for testing in order to evaluate the COMET code and determine its strengths and weaknesses for these medical physics applications. For the response function calculations, Legendre polynomial expansions are necessary in space, energy, and the polar and azimuthal angles. An initial sensitivity study was done to determine the best orders for future testing. After the expansion orders were found, three simple benchmarks were tested: a water phantom, a simplified lung phantom, and a non-clinical slab phantom. Each of these benchmarks was decomposed into 1 cm x 1 cm and 0.5 cm x 0.5 cm coarse meshes. Three more clinically relevant problems were developed from patient CT scans. These benchmarks modeled a lung patient, a prostate patient, and a beam re-entry situation. As before, the problems were divided into 1 cm x 1 cm, 0.5 cm x 0.5 cm, and 0.25 cm x 0.25 cm coarse-mesh cases. Multiple beam energies were also tested for each case. The COMET solutions for each case were compared to a reference solution obtained by pure Monte Carlo results from EGSnrc. When comparing the COMET results to the reference cases, a pattern of differences appeared in each phantom case. It was found that better results were obtained for lower-energy incident photon beams as well as for larger mesh sizes. Changes may need to be made to the expansion orders used for energy and angle to better model high-energy secondary electrons. Heterogeneity also did not pose a problem for the COMET methodology; heterogeneous results were obtained in a time comparable to that for the homogeneous water phantom. The COMET results were typically obtained in minutes to hours of computational time, whereas the reference cases typically required hundreds or thousands of hours. A second sensitivity study was also performed on a more stringent problem and with smaller coarse meshes. Previously, the same expansion order had been used for each incident photon beam energy so that better comparisons could be made; from this second study, it was found that it is optimal to have different expansion orders based on the incident beam energy. Recommendations for future work with this method include more testing of higher expansion orders and possible code modification to better handle secondary electrons. The method also needs to handle more clinically relevant beam descriptions with energy and angular distributions associated with them.
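
    A toy illustration of the response-function superposition idea described above: each coarse mesh is reduced to precomputed responses (reflection, transmission, and a fixed-source emission per face), and the global solution follows by superposing these responses while iterating the interface partial currents. The 1-D two-mesh setting and coefficients are invented; COMET's actual space/angle expansions are far richer.

      # Two coarse meshes A|B with vacuum outer boundaries; iterate the interface partial currents.
      meshes = {                       # per-mesh response data (illustrative numbers)
          "A": {"R": 0.20, "T": 0.60, "S": 1.0},
          "B": {"R": 0.30, "T": 0.50, "S": 0.5},
      }

      j_ab, j_ba = 0.0, 0.0            # partial currents A -> B and B -> A at the interface
      for _ in range(200):             # fixed-point iteration; T terms multiply the (zero) vacuum inflow
          new_ab = meshes["A"]["T"] * 0.0 + meshes["A"]["R"] * j_ba + meshes["A"]["S"]
          new_ba = meshes["B"]["T"] * 0.0 + meshes["B"]["R"] * j_ab + meshes["B"]["S"]
          if abs(new_ab - j_ab) + abs(new_ba - j_ba) < 1e-12:
              break
          j_ab, j_ba = new_ab, new_ba

      print("converged interface currents A->B, B->A:", j_ab, j_ba)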

  16. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir

    PubMed Central

    Sadeghi, Mohammad Hosein; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-01-01

    Purpose: The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is an estimation of the effects of the tandem and ovoid applicator on dose distribution inside the phantom by MCNP5 Monte Carlo simulations. Material and methods: In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared with the results of superposition. The exact dwell positions and times of the source and the positions of the dosimetry points were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. Results: The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. Conclusions: According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, i.e., adding the dose from each source obtained by the MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy. PMID:29619061

  17. Investigation on the Accuracy of Superposition Predictions of Film Cooling Effectiveness

    NASA Astrophysics Data System (ADS)

    Meng, Tong; Zhu, Hui-ren; Liu, Cun-liang; Wei, Jian-sheng

    2018-05-01

    Film cooling effectiveness on flat plates with double rows of holes has been studied experimentally and numerically in this paper. This configuration is widely used to simulate multi-row film cooling on a turbine vane. The film cooling effectiveness of the double rows of holes and of each single row was used to study the accuracy of superposition predictions. A stable infrared measurement technique was used to measure the surface temperature on the flat plate. This paper analyzes the factors that affect film cooling effectiveness, including hole shape, hole arrangement, row-to-row spacing, and blowing ratio. Numerical simulations were performed to analyze the flow structure and film cooling mechanisms between the film cooling rows. Results show that the blowing ratio, within the range of 0.5 to 2, has a significant influence on the accuracy of superposition predictions. At low blowing ratios, results obtained by the superposition method agree well with the experimental data, while at high blowing ratios the accuracy of the superposition prediction decreases. Another significant factor is hole arrangement. For staggered arrangements, the superposition predictions are nearly the same as the experimental values, whereas for in-line configurations the superposed film cooling effectiveness is much higher than the experimental data. For different hole shapes, the accuracy of superposition predictions is better for converging-expanding holes than for cylindrical holes and compound angle holes. For the two different hole-spacing structures considered in this paper, the predictions show good agreement with the experimental results.
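
    For reference, a minimal sketch of the classical (Sellers-type) superposition estimate usually meant by "superposition predictions" of multi-row film cooling effectiveness; the single-row values below are made-up, not the paper's measurements.

      # Combine single-row adiabatic film cooling effectiveness values: eta = 1 - prod(1 - eta_i).
      def superposed_effectiveness(single_row_etas):
          """Superposed film cooling effectiveness from single-row values."""
          combined = 1.0
          for eta in single_row_etas:
              combined *= (1.0 - eta)
          return 1.0 - combined

      rows = [0.25, 0.18]                  # hypothetical single-row effectiveness at one location
      print("superposed two-row effectiveness:", superposed_effectiveness(rows))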

  18. The effect of tandem-ovoid titanium applicator on points A, B, bladder, and rectum doses in gynecological brachytherapy using 192Ir.

    PubMed

    Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani

    2018-02-01

    The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is an estimation of the effects of the tandem and ovoid applicator on dose distribution inside the phantom by MCNP5 Monte Carlo simulations. In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A, B, bladder, and rectum were compared with the results of superposition. The exact dwell positions and times of the source and the positions of the dosimetry points were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. The results of this study showed no significant differences between the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, i.e., adding the dose from each source obtained by the MC simulation, can estimate the dose to points A, B, bladder, and rectum with good accuracy.

  19. THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.

    PubMed

    Theobald, Douglas L; Wuttke, Deborah S

    2006-09-01

    THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.

  20. A new modal superposition method for nonlinear vibration analysis of structures using hybrid mode shapes

    NASA Astrophysics Data System (ADS)

    Ferhatoglu, Erhan; Cigeroglu, Ender; Özgüven, H. Nevzat

    2018-07-01

    In this paper, a new modal superposition method based on a hybrid mode shape concept is developed for the determination of the steady-state vibration response of nonlinear structures. The method is developed specifically for systems having nonlinearities where the stiffness of the system may take different limiting values. The stiffness variation of these nonlinear systems enables one to define different linear systems corresponding to each value of the limiting equivalent stiffness. Moreover, the response of the nonlinear system is bounded by these limiting linear systems. In this study, a modal superposition method utilizing novel hybrid mode shapes, which are defined as linear combinations of the modal vectors of the limiting linear systems, is proposed to determine the periodic response of nonlinear systems. In this method the response of the nonlinear system is written in terms of hybrid modes instead of the modes of the underlying linear system. This decreases the number of modes that must be retained for an accurate solution, which in turn reduces the number of nonlinear equations to be solved. In this way, the computational time for response calculation is directly curtailed. In the solution, the equations of motion are converted to a set of nonlinear algebraic equations by using the describing function approach, and the numerical solution is obtained by using Newton's method with arc-length continuation. The developed method is applied to two different systems: a lumped parameter model and a finite element model. Several case studies are performed, and the accuracy and computational efficiency of the proposed modal superposition method with hybrid mode shapes are compared with those of the classical modal superposition method, which utilizes the mode shapes of the underlying linear system.

  1. Method and system for mesh network embedded devices

    NASA Technical Reports Server (NTRS)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for managing mesh network devices. A mesh network device with integrated features creates an N-way mesh network with a full mesh network topology or a partial mesh network topology.

  2. A novel method for pair-matching using three-dimensional digital models of bone: mesh-to-mesh value comparison.

    PubMed

    Karell, Mara A; Langstaff, Helen K; Halazonetis, Demetrios J; Minghetti, Caterina; Frelat, Mélanie; Kranioti, Elena F

    2016-09-01

    The commingling of human remains often hinders forensic/physical anthropologists during the identification process, as there are limited methods to accurately sort these remains. This study investigates a new method for pair-matching, a common individualization technique, which uses digital three-dimensional models of bone: mesh-to-mesh value comparison (MVC). The MVC method digitally compares the entire three-dimensional geometry of two bones at once to produce a single value to indicate their similarity. Two different versions of this method, one manual and the other automated, were created and then tested for how well they accurately pair-matched humeri. Each version was assessed using sensitivity and specificity. The manual mesh-to-mesh value comparison method was 100 % sensitive and 100 % specific. The automated mesh-to-mesh value comparison method was 95 % sensitive and 60 % specific. Our results indicate that the mesh-to-mesh value comparison method overall is a powerful new tool for accurately pair-matching commingled skeletal elements, although the automated version still needs improvement.
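
    An illustrative proxy (not the published MVC workflow, which relies on dedicated mesh-registration software) for reducing the comparison of two digitized bones to a single value; here the value is a symmetric mean nearest-neighbour distance between sampled surface points.

      # Single dissimilarity value between two point-sampled surfaces (lower = more alike).
      import numpy as np
      from scipy.spatial import cKDTree

      def mesh_value(points_a, points_b):
          """Symmetric mean nearest-neighbour distance between two point clouds."""
          d_ab, _ = cKDTree(points_b).query(points_a)
          d_ba, _ = cKDTree(points_a).query(points_b)
          return 0.5 * (d_ab.mean() + d_ba.mean())

      rng = np.random.default_rng(0)
      left = rng.normal(size=(2000, 3))                       # stand-in for a left humerus surface
      right = left + rng.normal(scale=0.02, size=left.shape)  # its antimere, slightly perturbed
      other = rng.normal(size=(2000, 3)) * 1.3                # an unrelated bone

      print("pair-match value:", mesh_value(left, right))
      print("mismatch value:  ", mesh_value(left, other))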

  3. Optical information encryption based on incoherent superposition with the help of the QR code

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Gong, Qiong

    2014-01-01

    In this paper, a novel optical information encryption approach is proposed with the help of the QR code. This method is based on the concept of incoherent superposition, which we introduce for the first time. The information to be encrypted is first transformed into the corresponding QR code, and thereafter the QR code is further encrypted into two phase-only masks analytically by use of the intensity superposition of two diffraction wave fields. The proposed method has several advantages over the previous interference-based method, such as a higher security level, better robustness against noise attack, a more relaxed working condition, and so on. Numerical simulation results and results captured with an actual smartphone are shown to validate our proposal.

  4. Application of the superposition principle to solar-cell analysis

    NASA Technical Reports Server (NTRS)

    Lindholm, F. A.; Fossum, J. G.; Burgess, E. L.

    1979-01-01

    The superposition principle of differential-equation theory, which applies if and only if the relevant boundary-value problems are linear, is used to derive the widely used shifting approximation that the current-voltage characteristic of an illuminated solar cell is the dark current-voltage characteristic shifted by the short-circuit photocurrent. Analytical methods are presented to treat cases where shifting is not strictly valid. Well-defined conditions necessary for superposition to apply are established. For high injection in the base region, the method of analysis accurately yields the dependence of the open-circuit voltage on the short-circuit current (or the illumination level).
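
    A worked sketch of the shifting approximation discussed above: under superposition the illuminated I-V curve is the dark diode characteristic shifted by the short-circuit photocurrent, which directly links the open-circuit voltage to the short-circuit current. The diode parameters are illustrative.

      # Shifted-dark-curve (superposition) model of an illuminated solar cell.
      import numpy as np

      q_over_kT = 1.0 / 0.02585        # 1/(kT/q) at about 300 K
      I0, Isc = 1e-12, 0.035           # dark saturation current and photocurrent (A), assumed

      def illuminated_current(V):
          """I(V) = Isc - I_dark(V): dark curve shifted by the short-circuit photocurrent."""
          return Isc - I0 * (np.exp(q_over_kT * V) - 1.0)

      Voc = np.log(Isc / I0 + 1.0) / q_over_kT     # follows from setting I(Voc) = 0
      print("open-circuit voltage under superposition:", round(Voc, 4), "V")
      print("current at 0.5 V:", illuminated_current(0.5), "A")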

  5. SU-E-T-91: Accuracy of Dose Calculation Algorithms for Patients Undergoing Stereotactic Ablative Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajaldeen, A; Ramachandran, P; Geso, M

    2015-06-15

    Purpose: The purpose of this study was to investigate and quantify the variation in dose distributions in small-field lung cancer radiotherapy using seven different dose calculation algorithms. Methods: The study was performed in 21 lung cancer patients who underwent Stereotactic Ablative Body Radiotherapy (SABR). Two different methods, (i) the same dose coverage to the target volume (the "same dose" method) and (ii) the same monitor units in all algorithms (the "same monitor units" method), were used to study the performance of seven different dose calculation algorithms in the XiO and Eclipse treatment planning systems. The seven dose calculation algorithms include Superposition, Fast Superposition, Fast Fourier Transform (FFT) Convolution, Clarkson, Anisotropic Analytical Algorithm (AAA), Acuros XB, and pencil beam (PB). Prior to this, a phantom study was performed to assess the accuracy of these algorithms. The Superposition algorithm was used as the reference algorithm in this study. The treatment plans were compared using different dosimetric parameters including conformity, heterogeneity, and dose fall-off index. In addition, the doses to critical structures such as the lungs, heart, oesophagus, and spinal cord were also studied. Statistical analysis was performed using Prism software. Results: The mean±SD conformity index for the Superposition, Fast Superposition, Clarkson, and FFT Convolution algorithms was 1.29±0.13, 1.31±0.16, 2.2±0.7, and 2.17±0.59, respectively, whereas for AAA, pencil beam, and Acuros XB it was 1.4±0.27, 1.66±0.27, and 1.35±0.24, respectively. Conclusion: Our study showed significant variations among the seven different algorithms. The Superposition and Acuros XB algorithms showed similar values for most of the dosimetric parameters. The Clarkson, FFT Convolution, and pencil beam algorithms showed large differences compared with the Superposition algorithm. Based on our study, we recommend the Superposition and Acuros XB algorithms as the first choice of algorithms in lung cancer radiotherapy involving small fields. However, further investigation by Monte Carlo simulation is required to confirm our results.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkoff, T. J., E-mail: adidasty@gmail.com

    We motivate and introduce a class of “hierarchical” quantum superposition states of N coupled quantum oscillators. Unlike other well-known multimode photonic Schrödinger-cat states such as entangled coherent states, the hierarchical superposition states are characterized as two-branch superpositions of tensor products of single-mode Schrödinger-cat states. In addition to analyzing the photon statistics and quasiprobability distributions of prominent examples of these nonclassical states, we consider their usefulness for high-precision quantum metrology of nonlinear optical Hamiltonians and quantify their mode entanglement. We propose two methods for generating hierarchical superpositions in N = 2 coupled microwave cavities, exploiting currently existing quantum optical technology for generating entanglement between spatially separated electromagnetic field modes.

  7. Sagnac interferometry with coherent vortex superposition states in exciton-polariton condensates

    NASA Astrophysics Data System (ADS)

    Moxley, Frederick Ira; Dowling, Jonathan P.; Dai, Weizhong; Byrnes, Tim

    2016-05-01

    We investigate prospects of using counter-rotating vortex superposition states in nonequilibrium exciton-polariton Bose-Einstein condensates for the purposes of Sagnac interferometry. We first investigate the stability of vortex-antivortex superposition states, and show that they survive at steady state in a variety of configurations. Counter-rotating vortex superpositions are of potential interest to gyroscope and seismometer applications for detecting rotations. Methods of improving the sensitivity are investigated by targeting high momentum states via metastable condensation, and the application of periodic lattices. The sensitivity of the polariton gyroscope is compared to its optical and atomic counterparts. Due to the large interferometer areas in optical systems and small de Broglie wavelengths for atomic BECs, the sensitivity per detected photon is found to be considerably less for the polariton gyroscope than with competing methods. However, polariton gyroscopes have an advantage over atomic BECs in a high signal-to-noise ratio, and have other practical advantages such as room-temperature operation, area independence, and robust design. We estimate that the final sensitivities including signal-to-noise aspects are competitive with existing methods.

  8. Real-time dose computation: GPU-accelerated source modeling and superposition/convolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacques, Robert; Wong, John; Taylor, Russell

    Purpose: To accelerate dose calculation to interactive rates using highly parallel graphics processing units (GPUs). Methods: The authors have extended their prior work in GPU-accelerated superposition/convolution with a modern dual-source model and have enhanced performance. The primary source algorithm supports both focused leaf ends and asymmetric rounded leaf ends. The extra-focal algorithm uses a discretized, isotropic area source and models multileaf collimator leaf height effects. The spectral and attenuation effects of static beam modifiers were integrated into each source's spectral function. The authors introduce the concepts of arc superposition and delta superposition. Arc superposition utilizes separate angular sampling for the total energy released per unit mass (TERMA) and superposition computations to increase accuracy and performance. Delta superposition allows single beamlet changes to be computed efficiently. The authors extended their concept of multi-resolution superposition to include kernel tilting. Multi-resolution superposition approximates solid angle ray-tracing, improving performance and scalability with a minor loss in accuracy. Superposition/convolution was implemented using the inverse cumulative-cumulative kernel and exact radiological path ray-tracing. The accuracy analyses were performed using multiple kernel ray samplings, both with and without kernel tilting and multi-resolution superposition. Results: Source model performance was <9 ms (data dependent) for a high resolution (400²) field using an NVIDIA (Santa Clara, CA) GeForce GTX 280. Computation of the physically correct multispectral TERMA attenuation was improved by a material centric approach, which increased performance by over 80%. Superposition performance was improved by ~24% to 0.058 and 0.94 s for 64³ and 128³ water phantoms; a speed-up of 101-144x over the highly optimized Pinnacle³ (Philips, Madison, WI) implementation. Pinnacle³ times were 8.3 and 94 s, respectively, on an AMD (Sunnyvale, CA) Opteron 254 (two cores, 2.8 GHz). Conclusions: The authors have completed a comprehensive, GPU-accelerated dose engine in order to provide a substantial performance gain over CPU based implementations. Real-time dose computation is feasible with the accuracy levels of the superposition/convolution algorithm.
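
    A toy CPU sketch of the superposition/convolution dose model the paper accelerates on GPUs: dose is obtained by convolving the TERMA distribution with an energy-deposition kernel (taken here as spatially invariant). The grid, attenuation coefficient, and kernel shape are invented; the ray-traced, tilted-kernel GPU implementation is far more involved.

      # Simplified superposition/convolution: dose = TERMA convolved with a deposition kernel.
      import numpy as np
      from scipy.signal import fftconvolve

      nx, ny, nz, dx = 64, 64, 64, 0.25                  # voxels and voxel size (cm)
      mu = 0.05                                          # effective attenuation coefficient (1/cm)
      depth = np.arange(nz) * dx
      terma = np.zeros((nx, ny, nz))
      terma[24:40, 24:40, :] = np.exp(-mu * depth)       # a 4x4 cm "beam" attenuating with depth

      r = np.linalg.norm(np.mgrid[-4:5, -4:5, -4:5], axis=0) * dx
      kernel = np.exp(-3.0 * r)
      kernel /= kernel.sum()                             # crude normalized deposition kernel

      dose = fftconvolve(terma, kernel, mode="same")     # superposition/convolution step
      print("dose at field centre, 4 cm depth:", dose[32, 32, 16])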

  9. A methodology for quadrilateral finite element mesh coarsening

    DOE PAGES

    Staten, Matthew L.; Benzley, Steven; Scott, Michael

    2008-03-27

    High fidelity finite element modeling of continuum mechanics problems often requires using all quadrilateral or all hexahedral meshes. The efficiency of such models is often dependent upon the ability to adapt a mesh to the physics of the phenomena. Adapting a mesh requires the ability to both refine and/or coarsen the mesh. The algorithms available to refine and coarsen triangular and tetrahedral meshes are very robust and efficient. However, the ability to locally and conformally refine or coarsen all quadrilateral and all hexahedral meshes presents many difficulties. Some research has been done on localized conformal refinement of quadrilateral and hexahedral meshes. However, little work has been done on localized conformal coarsening of quadrilateral and hexahedral meshes. A general method which provides both localized conformal coarsening and refinement for quadrilateral meshes is presented in this paper. This method is based on restructuring the mesh with simplex manipulations to the dual of the mesh. Finally, this method appears to be extensible to hexahedral meshes in three dimensions.

  10. Method and apparatus for connecting finite element meshes and performing simulations therewith

    DOEpatents

    Dohrmann, Clark R.; Key, Samuel W.; Heinstein, Martin W.

    2003-05-06

    The present invention provides a method of connecting dissimilar finite element meshes. A first mesh, designated the master mesh, and a second mesh, designated the slave mesh, each have interface surfaces proximal the other. Each interface surface has a corresponding interface mesh comprising a plurality of interface nodes. Each slave interface node is assigned new coordinates locating the interface node on the interface surface of the master mesh. The slave interface surface is further redefined to be the projection of the slave interface mesh onto the master interface surface.
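
    A minimal sketch of the projection step described in this abstract: each slave interface node is assigned new coordinates by closest-point projection onto the master interface surface. A flat master surface keeps the example short; real master surfaces are faceted and the projection is done element by element.

      # Closest-point projection of slave interface nodes onto a planar master surface.
      import numpy as np

      def project_onto_plane(nodes, point_on_plane, normal):
          """Project nodes onto the plane through point_on_plane with the given normal."""
          n = normal / np.linalg.norm(normal)
          return nodes - ((nodes - point_on_plane) @ n)[:, None] * n

      slave_nodes = np.array([[0.1, 0.2, 0.013],
                              [0.4, 0.1, -0.007],
                              [0.7, 0.6, 0.021]])       # slightly off the master surface (z = 0)
      projected = project_onto_plane(slave_nodes, np.zeros(3), np.array([0.0, 0.0, 1.0]))
      print(projected)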

  11. Automatic superposition of drug molecules based on their common receptor site

    NASA Astrophysics Data System (ADS)

    Kato, Yuichi; Inoue, Atsushi; Yamada, Miho; Tomioka, Nobuo; Itai, Akiko

    1992-10-01

    We have previously developed a new rational method for superposing molecules in terms of submolecular physical and chemical properties, rather than in terms of atom positions or chemical structures as has been done in conventional methods. The program was originally developed for interactive use on a three-dimensional graphic display, providing goodness-of-fit indices on molecular shape, hydrogen bonds, electrostatic interactions and others. Here, we report a new unbiased searching method for the best superposition of molecules, covering all the superposing modes and conformational freedom, as an additional function of the program. The function is based on a novel least-squares method which superposes the expected positions and orientations of hydrogen bonding partners in the receptor that are deduced from both molecules. The method not only gives reliability and reproducibility to the result of the superposition, but also allows us to save labor and time. It is demonstrated that this method is very efficient for finding the correct superposing mode in systems where hydrogen bonds play important roles.
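
    A sketch of the rigid-body least-squares superposition that underlies the approach above: given corresponding points deduced from both molecules (here, hypothetical hydrogen-bond partner positions), the optimal rotation is found with the Kabsch/SVD method. The coordinates are invented for illustration.

      # Least-squares rigid-body superposition of corresponding point sets (Kabsch/SVD).
      import numpy as np

      def kabsch_superpose(P, Q):
          """Rotate/translate P onto Q in the least-squares sense; returns fitted P and RMSD."""
          Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
          U, _, Vt = np.linalg.svd(Pc.T @ Qc)
          d = np.sign(np.linalg.det(Vt.T @ U.T))
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # proper rotation (no reflection)
          fitted = Pc @ R.T + Q.mean(axis=0)
          return fitted, np.sqrt(((fitted - Q) ** 2).sum(axis=1).mean())

      P = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.2, 0.0], [0.3, 1.0, 0.9]])
      theta = np.deg2rad(40.0)
      Rz = np.array([[np.cos(theta), -np.sin(theta), 0], [np.sin(theta), np.cos(theta), 0], [0, 0, 1]])
      Q = P @ Rz.T + np.array([2.0, -1.0, 0.5])           # same points, rotated and shifted

      fitted, rmsd = kabsch_superpose(P, Q)
      print("RMSD after superposition:", rmsd)            # ~0 for exactly corresponding points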

  12. Past, Present and Future of Surgical Meshes: A Review.

    PubMed

    Baylón, Karen; Rodríguez-Camarillo, Perla; Elías-Zúñiga, Alex; Díaz-Elizondo, Jose Antonio; Gilkerson, Robert; Lozano, Karen

    2017-08-22

    Surgical meshes, in particular those used to repair hernias, have been in use since 1891. Since then, research in the area has expanded, given the vast number of post-surgery complications such as infection, fibrosis, adhesions, mesh rejection, and hernia recurrence. Researchers have focused on the analysis and implementation of a wide range of materials: meshes with different fiber size and porosity, a variety of manufacturing methods, and certainly a variety of surgical and implantation procedures. Currently, surface modification methods and development of nanofiber based systems are actively being explored as areas of opportunity to retain material strength and increase biocompatibility of available meshes. This review summarizes the history of surgical meshes and presents an overview of commercial surgical meshes, their properties, manufacturing methods, and observed biological response, as well as the requirements for an ideal surgical mesh and potential manufacturing methods.

  13. Optimal Superpositioning of Flexible Molecule Ensembles

    PubMed Central

    Gapsys, Vytautas; de Groot, Bert L.

    2013-01-01

    Analysis of the internal dynamics of a biological molecule requires the successful removal of overall translation and rotation. Particularly for flexible or intrinsically disordered peptides, this is a challenging task due to the absence of a well-defined reference structure that could be used for superpositioning. In this work, we started the analysis with a widely known formulation of an objective for the problem of superimposing a set of multiple molecules as variance minimization over an ensemble. A negative effect of this superpositioning method is the introduction of ambiguous rotations, where different rotation matrices may be applied to structurally similar molecules. We developed two algorithms to resolve the suboptimal rotations. The first approach minimizes the variance together with the distance of a structure to a preceding molecule in the ensemble. The second algorithm seeks for minimal variance together with the distance to the nearest neighbors of each structure. The newly developed methods were applied to molecular-dynamics trajectories and normal-mode ensembles of the Aβ peptide, RS peptide, and lysozyme. These new (to our knowledge) superpositioning methods combine the benefits of variance and distance between nearest-neighbor(s) minimization, providing a solution for the analysis of intrinsic motions of flexible molecules and resolving ambiguous rotations. PMID:23332072
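
    A minimal sketch of the variance-minimization objective that this work starts from: every frame of a synthetic ensemble is iteratively superposed onto the current ensemble mean until the mean stops changing. The paper's refinements (distance to the preceding structure or to nearest neighbours) are not included.

      # Iterative variance minimization over an ensemble: fit all frames to the running mean.
      import numpy as np

      def kabsch_fit(P, Q):
          """Least-squares rigid-body fit of P onto Q."""
          Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
          U, _, Vt = np.linalg.svd(Pc.T @ Qc)
          d = np.sign(np.linalg.det(Vt.T @ U.T))
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          return Pc @ R.T + Q.mean(axis=0)

      rng = np.random.default_rng(1)
      base = rng.normal(size=(20, 3))                        # a 20-"atom" reference structure
      def random_rotation():
          Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
          return Q * np.sign(np.linalg.det(Q))
      frames = [(base + rng.normal(scale=0.05, size=base.shape)) @ random_rotation().T
                for _ in range(6)]                           # noisy, randomly rotated copies

      mean = frames[0].copy()
      for _ in range(50):                                    # alternate: fit frames, update mean
          frames = [kabsch_fit(f, mean) for f in frames]
          new_mean = np.mean(frames, axis=0)
          if np.linalg.norm(new_mean - mean) < 1e-10:
              break
          mean = new_mean

      variance = float(np.mean([((f - mean) ** 2).sum() for f in frames]))
      print("ensemble variance after superposition:", variance)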

  14. Method of modifying a volume mesh using sheet extraction

    DOEpatents

    Borden, Michael J [Albuquerque, NM; Shepherd, Jason F [Albuquerque, NM

    2007-02-20

    A method and machine-readable medium provide a technique to modify a hexahedral finite element volume mesh using dual generation and sheet extraction. After generating a dual of a volume stack (mesh), a predetermined algorithm may be followed to modify the volume mesh of hexahedral elements. The predetermined algorithm may include the steps of determining a sheet of hexahedral mesh elements, generating nodes for merging, and merging the nodes to delete the sheet of hexahedral mesh elements and modify the volume mesh.

  15. Method for generating a mesh representation of a region characterized by a trunk and a branch thereon

    DOEpatents

    Shepherd, Jason [Albuquerque, NM; Mitchell, Scott A [Albuquerque, NM; Jankovich, Steven R [Anaheim, CA; Benzley, Steven E [Provo, UT

    2007-05-15

    The present invention provides a meshing method, called grafting, that lifts the prior art constraint on abutting surfaces, including surfaces that are linking, source/target, or other types of surfaces of the trunk volume. The grafting method locally modifies the structured mesh of the linking surfaces, allowing the mesh to conform to additional surface features. Thus, the grafting method can provide a transition between multiple sweep directions, extending sweeping algorithms to 2¾-D solids. The method is also suitable for use with non-sweepable volumes; the method provides a transition between meshes generated by methods other than sweeping as well.

  16. Past, Present and Future of Surgical Meshes: A Review

    PubMed Central

    Baylón, Karen; Rodríguez-Camarillo, Perla; Elías-Zúñiga, Alex; Díaz-Elizondo, Jose Antonio; Gilkerson, Robert; Lozano, Karen

    2017-01-01

    Surgical meshes, in particular those used to repair hernias, have been in use since 1891. Since then, research in the area has expanded, given the vast number of post-surgery complications such as infection, fibrosis, adhesions, mesh rejection, and hernia recurrence. Researchers have focused on the analysis and implementation of a wide range of materials: meshes with different fiber size and porosity, a variety of manufacturing methods, and certainly a variety of surgical and implantation procedures. Currently, surface modification methods and development of nanofiber based systems are actively being explored as areas of opportunity to retain material strength and increase biocompatibility of available meshes. This review summarizes the history of surgical meshes and presents an overview of commercial surgical meshes, their properties, manufacturing methods, and observed biological response, as well as the requirements for an ideal surgical mesh and potential manufacturing methods. PMID:28829367

  17. Assessment of Hybrid High-Order methods on curved meshes and comparison with discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Botti, Lorenzo; Di Pietro, Daniele A.

    2018-10-01

    We propose and validate a novel extension of Hybrid High-Order (HHO) methods to meshes featuring curved elements. HHO methods are based on discrete unknowns that are broken polynomials on the mesh and its skeleton. We propose here the use of physical frame polynomials over mesh elements and reference frame polynomials over mesh faces. With this choice, the degree of face unknowns must be suitably selected in order to recover on curved meshes the same convergence rates as on straight meshes. We provide an estimate of the optimal face polynomial degree depending on the element polynomial degree and on the so-called effective mapping order. The estimate is numerically validated through specifically crafted numerical tests. All test cases are conducted considering two- and three-dimensional pure diffusion problems, and include comparisons with discontinuous Galerkin discretizations. The extension to agglomerated meshes with curved boundaries is also considered.

  18. Mesh refinement in finite element analysis by minimization of the stiffness matrix trace

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1989-01-01

    Most finite element packages provide means to generate meshes automatically. However, the user is usually confronted with the problem of not knowing whether the mesh generated is appropriate for the problem at hand. Since the accuracy of the finite element results is mesh dependent, mesh selection forms a very important step in the analysis. Indeed, in accurate analyses, meshes need to be refined or rezoned until the solution converges to a value such that the error is below a predetermined tolerance. A-posteriori methods use error indicators, developed from interpolation and approximation theory, for mesh refinement. Others use different criteria, such as strain energy density variation and stress contours, to obtain near-optimal meshes. Although these methods are adaptive, they are expensive. Alternatively, the a-priori methods available until now use geometrical parameters, for example element aspect ratio, and are therefore not adaptive by nature. An adaptive a-priori method is developed here. The criterion is that minimization of the trace of the stiffness matrix with respect to the nodal coordinates leads to a minimization of the potential energy and, as a consequence, provides a good starting mesh. In a few examples the method is shown to provide the optimal mesh. The method is also shown to be relatively simple and amenable to the development of computer algorithms. When the procedure is used in conjunction with a-posteriori methods of grid refinement, it is shown that fewer refinement iterations and fewer degrees of freedom are required for convergence than when the procedure is not used. The mesh obtained is shown to have a uniform distribution of stiffness among the nodes and elements, which in turn leads to a uniform error distribution. Thus the mesh obtained meets the optimality criterion of uniform error distribution.
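
    A small worked example of the trace-minimization criterion described above, assuming a 1-D bar meshed with two-node elements: each element of length L_i contributes 2EA/L_i to the trace of the assembled stiffness matrix, so the interior node positions minimizing the trace can be found numerically (and tend toward a uniform mesh for a uniform bar).

      # Minimize the trace of the assembled stiffness matrix over interior node positions.
      import numpy as np
      from scipy.optimize import minimize

      EA, total_length = 1.0, 1.0

      def stiffness_trace(interior_nodes):
          """Trace of the global stiffness matrix of a 1-D bar as a function of node positions."""
          x = np.concatenate(([0.0], np.sort(interior_nodes), [total_length]))
          lengths = np.diff(x)
          if np.any(lengths <= 1e-6):          # penalize degenerate or inverted elements
              return 1e12
          return np.sum(2.0 * EA / lengths)    # each two-node element contributes 2EA/L

      x0 = np.array([0.1, 0.2, 0.3, 0.4])      # deliberately non-uniform starting mesh
      res = minimize(stiffness_trace, x0, method="Nelder-Mead")
      print("optimized interior nodes:", np.sort(res.x))   # tends toward a uniform mesh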

  19. Static-transmission-error vibratory-excitation contributions from plastically deformed gear teeth caused by tooth bending-fatigue damage

    NASA Astrophysics Data System (ADS)

    Mark, W. D.; Reagor, C. P.

    2007-02-01

    To assess gear health and detect gear-tooth damage, the vibratory response from meshing gear-pair excitations is commonly monitored by accelerometers. In an earlier paper, strong evidence was presented suggesting that, in the case of tooth bending-fatigue damage, the principal source of detectable damage is whole-tooth plastic deformation; i.e. yielding, rather than changes in tooth stiffness caused by tooth-root cracks. Such plastic deformations are geometric deviation contributions to the "static-transmission-error" (STE) vibratory excitation caused by meshing gear pairs. The STE contributions caused by two likely occurring forms of such plastic deformations on a single tooth are derived, and displayed in the time domain as a function of involute "roll distance." Example calculations are provided for transverse contact ratios of Qt=1.4 and 1.8, for spur gears and for helical-gear axial contact ratios ranging from Qa=1.2 to Qa=3.6. Low-pass- and band-pass-filtered versions of these same STE contributions also are computed and displayed in the time domain. Several calculations, consisting of superposition of the computed STE tooth-meshing fundamental harmonic contribution and the band-pass STE contribution caused by a plastically deformed tooth, exhibit the amplitude and frequency or phase modulation character commonly observed in accelerometer-response waveforms caused by damaged teeth. General formulas are provided that enable computation of these STE vibratory-excitation contributions for any form of plastic deformation on any number of teeth for spur and helical gears with any contact ratios.

  20. Fast, large-scale hologram calculation in wavelet domain

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi

    2018-04-01

    We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.

  1. Mesh versus bathtub - effects of flood models on exposure analysis in Switzerland

    NASA Astrophysics Data System (ADS)

    Röthlisberger, Veronika; Zischg, Andreas; Keiler, Margreth

    2016-04-01

    In Switzerland, mainly two types of maps that indicate potential flood zones are available for flood exposure analyses: 1) Aquaprotect, a nationwide overview provided by the Federal Office for the Environment, and 2) communal flood hazard maps available from the 26 cantons. The model used to produce Aquaprotect can be described as a bathtub approach or linear superposition method with three main parameters, namely the horizontal and vertical distance of a point to water features and the size of the river sub-basin. Whereas the determination of flood zones in Aquaprotect is based on a uniform, nationwide model, the communal flood hazard maps are less homogeneous, as they have been elaborated either at communal or cantonal levels. Yet their basic content (i.e. indication of potential flood zones for three recurrence periods, with differentiation of at least three inundation depths) is described in national directives, and the vast majority of communal flood hazard maps are based on 2D inundation simulations using meshes. Apart from the methodical differences between Aquaprotect and the communal flood hazard maps (and among different communal flood hazard maps), all of these maps include a layer with a similar recurrence period (i.e. Aquaprotect 250 years, flood hazard maps 300 years) beyond the intended protection level of installed structural systems. In our study, we compare the resulting exposure by overlaying the two types of flood maps with a complete, harmonized, and nationwide dataset of building polygons. We assess the differences in exposure at the national level, and also consider differences among the 26 cantons and the six biogeographical regions, respectively. While the nationwide exposure rates for both types of flood maps are similar, the differences within certain cantons and biogeographical regions are remarkable. We conclude that flood maps based on bathtub models are appropriate for assessments at the national level, while maps based on 2D simulations are preferable at sub-national levels.
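    To make the bathtub idea concrete, the following hedged sketch classifies a building as exposed from the three Aquaprotect-style parameters named above; the thresholds and their scaling with sub-basin size are hypothetical placeholders, not the values used in Aquaprotect.

```python
# Hedged bathtub-style exposure test using the three parameters named above;
# the thresholds and their scaling with sub-basin size are hypothetical.
def bathtub_exposed(horiz_dist_m, vert_dist_m, subbasin_km2,
                    base_horiz_m=500.0, base_vert_m=5.0):
    # assumption: larger sub-basins flood wider and deeper corridors
    scale = 1.0 + subbasin_km2 / 1000.0
    return horiz_dist_m <= base_horiz_m * scale and vert_dist_m <= base_vert_m * scale

# building 300 m from, and 3 m above, a river draining an 800 km2 sub-basin
print(bathtub_exposed(300.0, 3.0, 800.0))   # True under these assumed thresholds
```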

  2. A method to generate conformal finite-element meshes from 3D measurements of microstructurally small fatigue-crack propagation [3D Meshes of Microstructurally Small Crack Growth]

    DOE PAGES

    Spear, Ashley D.; Hochhalter, Jacob D.; Cerrone, Albert R.; ...

    2016-04-27

    In an effort to reproduce computationally the observed evolution of microstructurally small fatigue cracks (MSFCs), a method is presented for generating conformal, finite-element (FE), volume meshes from 3D measurements of MSFC propagation. The resulting volume meshes contain traction-free surfaces that conform to incrementally measured 3D crack shapes. Grain morphologies measured using near-field high-energy X-ray diffraction microscopy are also represented within the FE volume meshes. Proof-of-concept simulations are performed to demonstrate the utility of the mesh-generation method. The proof-of-concept simulations employ a crystal-plasticity constitutive model and are performed using the conformal FE meshes corresponding to successive crack-growth increments. Although the simulations for each crack increment are currently independent of one another, they need not be, and transfer of material-state information among successive crack-increment meshes is discussed. The mesh-generation method was developed using post-mortem measurements, yet it is general enough that it can be applied to in-situ measurements of 3D MSFC propagation.

  3. A method to generate conformal finite-element meshes from 3D measurements of microstructurally small fatigue-crack propagation [3D Meshes of Microstructurally Small Crack Growth]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spear, Ashley D.; Hochhalter, Jacob D.; Cerrone, Albert R.

    In an effort to reproduce computationally the observed evolution of microstructurally small fatigue cracks (MSFCs), a method is presented for generating conformal, finite-element (FE), volume meshes from 3D measurements of MSFC propagation. The resulting volume meshes contain traction-free surfaces that conform to incrementally measured 3D crack shapes. Grain morphologies measured using near-field high-energy X-ray diffraction microscopy are also represented within the FE volume meshes. Proof-of-concept simulations are performed to demonstrate the utility of the mesh-generation method. The proof-of-concept simulations employ a crystal-plasticity constitutive model and are performed using the conformal FE meshes corresponding to successive crack-growth increments. Although the simulations for each crack increment are currently independent of one another, they need not be, and transfer of material-state information among successive crack-increment meshes is discussed. The mesh-generation method was developed using post-mortem measurements, yet it is general enough that it can be applied to in-situ measurements of 3D MSFC propagation.

  4. Classification of ligand molecules in PDB with graph match-based structural superposition.

    PubMed

    Shionyu-Mitsuyama, Clara; Hijikata, Atsushi; Tsuji, Toshiyuki; Shirai, Tsuyoshi

    2016-12-01

    The fast heuristic graph match algorithm for small molecules, COMPLIG, was improved by adding a structural superposition process to verify the atom-atom matching. The modified method was used to classify the small molecule ligands in the Protein Data Bank (PDB) by their three-dimensional structures, and 16,660 types of ligands in the PDB were classified into 7561 clusters. In contrast, a classification by a previous method (without structure superposition) generated 3371 clusters from the same ligand set. The characteristic feature of the current classification system is the increased number of singleton clusters, which contain only one ligand molecule per cluster. Inspection of the singletons present in the current classification system but not in the previous one implied that the major factors for their isolation were differences in chirality, cyclic conformation, separation of substructures, and bond length. Comparisons between the current and previous classification systems revealed that the superposition-based classification was effective in clustering functionally related ligands, such as drugs targeted to specific biological processes, owing to the strictness of the atom-atom matching.
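    COMPLIG's superposition step is not reproduced here, but the verification it performs is in the spirit of a standard least-squares (Kabsch) superposition that scores a proposed atom-atom matching by its post-fit RMSD; the sketch below shows that generic calculation on made-up coordinates.

```python
# Generic Kabsch superposition scoring a proposed atom-atom matching by RMSD
# (standard algorithm; not COMPLIG itself).
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD after optimally rotating matched point set P onto Q (both n x 3)."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    H = Pc.T @ Qc                                   # covariance of matched atoms
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return np.sqrt(np.mean(np.sum((Pc @ R.T - Qc) ** 2, axis=1)))

# a 4-atom fragment and a rotated/translated copy, rows already matched
P = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0], [0.0, 1.5, 0.2]])
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
Q = P @ Rz.T + 2.0
print(round(kabsch_rmsd(P, Q), 6))                  # ~0 for a correct matching
```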

  5. Optical threshold secret sharing scheme based on basic vector operations and coherence superposition

    NASA Astrophysics Data System (ADS)

    Deng, Xiaopeng; Wen, Wei; Mi, Xianwu; Long, Xuewen

    2015-04-01

    We propose, to our knowledge for the first time, a simple optical algorithm for secret image sharing with the (2,n) threshold scheme based on basic vector operations and coherence superposition. The secret image to be shared is first divided into n shadow images by use of basic vector operations. In the reconstruction stage, the secret image can be retrieved by recording the intensity of the coherence superposition of any two shadow images. Compared with published encryption techniques, which focus narrowly on information encryption, the proposed method can realize information encryption as well as secret sharing, which further ensures the safety and integrity of the secret information and prevents power from being kept centralized and abused. The feasibility and effectiveness of the proposed method are demonstrated by numerical results.

  6. Minimizing finite-volume discretization errors on polyhedral meshes

    NASA Astrophysics Data System (ADS)

    Mouly, Quentin; Evrard, Fabien; van Wachem, Berend; Denner, Fabian

    2017-11-01

    Tetrahedral meshes are widely used in CFD to simulate flows in and around complex geometries, as automatic generation tools now allow tetrahedral meshes to represent arbitrary domains in a relatively accessible manner. Polyhedral meshes, however, are an increasingly popular alternative. While tetrahedra have at most four neighbours, the higher number of neighbours per polyhedral cell leads to a more accurate evaluation of gradients, essential for the numerical resolution of PDEs. The use of polyhedral meshes, nonetheless, introduces discretization errors for finite-volume methods: skewness and non-orthogonality, which occur with all sorts of unstructured meshes, as well as errors due to non-planar faces, which are specific to polygonal faces with more than three vertices. Indeed, polyhedral mesh generation algorithms cannot, in general, guarantee to produce meshes free of non-planar faces. The presented work focuses on the quantification and optimization of discretization errors on polyhedral meshes in the context of finite-volume methods. A quasi-Newton method is employed to optimize the relevant mesh quality measures. Various meshes are optimized, and CFD results of cases with known solutions are presented to assess the improvements the optimization approach can provide.
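    As a hedged illustration of two of the error sources named above, the sketch below evaluates non-orthogonality (angle between the face normal and the owner-to-neighbour cell-centre vector) and non-planarity (maximum vertex distance from the least-squares face plane) for one polygonal face; the formulas are generic finite-volume quality measures, not the paper's optimizer.

```python
# Generic quality measures for one polygonal face of a finite-volume mesh:
# non-orthogonality and non-planarity (not the paper's optimizer).
import numpy as np

def face_quality(face_vertices, owner_centre, neighbour_centre):
    V = np.asarray(face_vertices, float)
    centroid = V.mean(axis=0)
    # normal of the least-squares plane = singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(V - centroid)
    normal = Vt[-1]
    non_planarity = np.max(np.abs((V - centroid) @ normal))
    d = np.asarray(neighbour_centre, float) - np.asarray(owner_centre, float)
    cos_angle = abs(normal @ d) / np.linalg.norm(d)
    non_orthogonality_deg = np.degrees(np.arccos(np.clip(cos_angle, 0.0, 1.0)))
    return non_orthogonality_deg, non_planarity

# slightly warped quadrilateral face between two cell centres
face = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.05], [1.0, 1.0, 0.0], [0.0, 1.0, -0.05]]
print(face_quality(face, owner_centre=[0.5, 0.5, -0.5], neighbour_centre=[0.6, 0.5, 0.5]))
```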

  7. Objective identification of residue ranges for the superposition of protein structures

    PubMed Central

    2011-01-01

    Background The automation of objectively selecting amino acid residue ranges for structure superpositions is important for meaningful and consistent protein structure analyses. So far there is no widely-used standard for choosing these residue ranges for experimentally determined protein structures, where the manual selection of residue ranges or the use of suboptimal criteria remain commonplace. Results We present an automated and objective method for finding amino acid residue ranges for the superposition and analysis of protein structures, in particular for structure bundles resulting from NMR structure calculations. The method is implemented in an algorithm, CYRANGE, that yields, without protein-specific parameter adjustment, appropriate residue ranges in most commonly occurring situations, including low-precision structure bundles, multi-domain proteins, symmetric multimers, and protein complexes. Residue ranges are chosen to comprise as many residues of a protein domain as possible, up to the point where including further residues would lead to a steep rise in the RMSD value. Residue ranges are determined by first clustering residues into domains based on the distance variance matrix, and then refining for each domain the initial choice of residues by excluding residues one by one until the relative decrease of the RMSD value becomes insignificant. A penalty for the opening of gaps favours contiguous residue ranges in order to obtain a result that is as simple as possible, but not simpler. Results are given for a set of 37 proteins and compared with those of commonly used protein structure validation packages. We also provide residue ranges for 6351 NMR structures in the Protein Data Bank. Conclusions The CYRANGE method is capable of automatically determining residue ranges for the superposition of protein structure bundles for a large variety of protein structures. The method correctly identifies ordered regions. Global structure superpositions based on the CYRANGE residue ranges allow a clear presentation of the structure, and unnecessary small gaps within the selected ranges are absent. In the majority of cases, the residue ranges from CYRANGE contain fewer gaps and cover considerably larger parts of the sequence than those from other methods without significantly increasing the RMSD values. CYRANGE thus provides an objective and automatic method for standardizing the choice of residue ranges for the superposition of protein structures. PMID:21592348

  8. Overset meshing coupled with hybridizable discontinuous Galerkin finite elements

    DOE PAGES

    Kauffman, Justin A.; Sheldon, Jason P.; Miller, Scott T.

    2017-03-01

    We introduce the use of hybridizable discontinuous Galerkin (HDG) finite element methods on overlapping (overset) meshes. Overset mesh methods are advantageous for solving problems on complex geometrical domains. We also combine geometric flexibility of overset methods with the advantages of HDG methods: arbitrarily high-order accuracy, reduced size of the global discrete problem, and the ability to solve elliptic, parabolic, and/or hyperbolic problems with a unified form of discretization. This approach to developing the ‘overset HDG’ method is to couple the global solution from one mesh to the local solution on the overset mesh. We present numerical examples for steady convection–diffusion and static elasticity problems. The examples demonstrate optimal order convergence in all primal fields for an arbitrary amount of overlap of the underlying meshes.

  9. Method of modifying a volume mesh using sheet insertion

    DOEpatents

    Borden, Michael J [Albuquerque, NM; Shepherd, Jason F [Albuquerque, NM

    2006-08-29

    A method and machine-readable medium provide a technique to modify a hexahedral finite element volume mesh using dual generation and sheet insertion. After generating a dual of a volume stack (mesh), a predetermined algorithm may be followed to modify (refine) the volume mesh of hexahedral elements. The predetermined algorithm may include the steps of locating a sheet of hexahedral mesh elements, determining a plurality of hexahedral elements within the sheet to refine, shrinking the plurality of elements, and inserting a new sheet of hexahedral elements adjacently to modify the volume mesh. Additionally, another predetermined algorithm using mesh cutting may be followed to modify a volume mesh.

  10. A mesh regeneration method using quadrilateral and triangular elements for compressible flows

    NASA Technical Reports Server (NTRS)

    Vemaganti, G. R.; Thornton, E. A.

    1989-01-01

    An adaptive remeshing method using both triangular and quadrilateral elements suitable for high-speed viscous flows is presented. For inviscid flows, the method generates completely unstructured meshes. For viscous flows, structured meshes are generated for boundary layers, and unstructured meshes are generated for inviscid flow regions. Examples of inviscid and viscous adaptations for high-speed flows are presented.

  11. Investigation on the cavitation effect of underwater shock near different boundaries

    NASA Astrophysics Data System (ADS)

    Xiao, Wei; Wei, Hai-peng; Feng, Liang

    2017-08-01

    When the shock wave of an underwater explosion propagates to the surfaces of different boundaries, it gets reflected. Then, a negative pressure area is formed by the superposition of the incident wave and the reflected wave. Cavitation occurs when the value of the negative pressure falls below the vapor pressure of water. An improved numerical model based on the spectral element method is applied to investigate the cavitation effect of underwater shock near different boundaries, mainly including the features of the cavitation effect near different boundaries and the influence of different parameters on the cavitation effect. In the implementation of the improved numerical model, the bilinear equation of state is used to deal with the fluid field subjected to cavitation, the field separation technique is employed to avoid the distortion of the incident wave propagating through the mesh, and the second-order doubly asymptotic approximation is applied to simulate the non-reflecting boundary. The main results are as follows. As the peak pressure and decay constant of the shock wave increase, the range of the cavitation domain increases, and the duration of cavitation increases. As the depth of water increases, the influence of cavitation on the dynamic response of the spherical shell decreases.

  12. A weak Galerkin least-squares finite element method for div-curl systems

    NASA Astrophysics Data System (ADS)

    Li, Jichun; Ye, Xiu; Zhang, Shangyou

    2018-06-01

    In this paper, we introduce a weak Galerkin least-squares method for solving the div-curl problem. This finite element method leads to a symmetric positive definite system and has the flexibility to work with general meshes, such as hybrid meshes, polytopal meshes and meshes with hanging nodes. Error estimates of the finite element solution are derived. The numerical examples demonstrate the robustness and flexibility of the proposed method.

  13. Connectivity-based, all-hexahedral mesh generation method and apparatus

    DOEpatents

    Tautges, T.J.; Mitchell, S.A.; Blacker, T.D.; Murdoch, P.

    1998-06-16

    The present invention is a computer-based method and apparatus for constructing all-hexahedral finite element meshes for finite element analysis. The present invention begins with a three-dimensional geometry and an all-quadrilateral surface mesh, then constructs hexahedral element connectivity from the outer boundary inward, and then resolves invalid connectivity. The result of the present invention is a complete representation of hex mesh connectivity only; actual mesh node locations are determined later. The basic method of the present invention comprises the step of forming hexahedral elements by making crossings of entities referred to as "whisker chords." This step, combined with a seaming operation in space, is shown to be sufficient for meshing simple block problems. Entities that appear when meshing more complex geometries, namely blind chords, merged sheets, and self-intersecting chords, are described. A method for detecting invalid connectivity in space, based on repeated edges, is also described, along with its application to various cases of invalid connectivity introduced and resolved by the method. 79 figs.

  14. Connectivity-based, all-hexahedral mesh generation method and apparatus

    DOEpatents

    Tautges, Timothy James; Mitchell, Scott A.; Blacker, Ted D.; Murdoch, Peter

    1998-01-01

    The present invention is a computer-based method and apparatus for constructing all-hexahedral finite element meshes for finite element analysis. The present invention begins with a three-dimensional geometry and an all-quadrilateral surface mesh, then constructs hexahedral element connectivity from the outer boundary inward, and then resolves invalid connectivity. The result of the present invention is a complete representation of hex mesh connectivity only; actual mesh node locations are determined later. The basic method of the present invention comprises the step of forming hexahedral elements by making crossings of entities referred to as "whisker chords." This step, combined with a seaming operation in space, is shown to be sufficient for meshing simple block problems. Entities that appear when meshing more complex geometries, namely blind chords, merged sheets, and self-intersecting chords, are described. A method for detecting invalid connectivity in space, based on repeated edges, is also described, along with its application to various cases of invalid connectivity introduced and resolved by the method.

  15. Numerical research on the lateral global buckling characteristics of a high temperature and pressure pipeline with two initial imperfections

    PubMed Central

    Liu, Wenbin; Liu, Aimin

    2018-01-01

    With the exploitation of offshore oil and gas gradually moving to deep water, higher temperature and pressure differences are applied to the pipeline system, making the global buckling of the pipeline more serious. For unburied deep-water pipelines, lateral buckling is the major buckling form. Initial imperfections widely exist in pipeline systems due to manufacturing defects or the influence of an uneven seabed, and the distribution and geometry features of initial imperfections are random. They can be divided into two kinds based on shape: single-arch imperfections and double-arch imperfections. This paper analyzed the global buckling process of a pipeline with two initial imperfections using a numerical simulation method and revealed how the ratio of the imperfections' spacing to the imperfection wavelength, and the combination of imperfections, affect the buckling process. The results show that a pipeline with two initial imperfections may suffer a superposition of global buckling. The growth ratios of buckling displacement, axial force and bending moment in the superposition zone are several times larger than those of a pipeline without buckling superposition. The ratio of the imperfections' spacing to the imperfection wavelength decides whether a pipeline suffers buckling superposition. The potential failure point of a pipeline exhibiting buckling superposition is the same as that of a pipeline without buckling superposition, but its failure risk is much higher. The shape and direction of two nearby imperfections also affect the failure risk of a pipeline exhibiting global buckling superposition: the failure risk of a pipeline with two double-arch imperfections is higher than that of a pipeline with two single-arch imperfections. PMID:29554123

  16. Adaptive moving mesh methods for simulating one-dimensional groundwater problems with sharp moving fronts

    USGS Publications Warehouse

    Huang, W.; Zheng, Lingyun; Zhan, X.

    2002-01-01

    Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost, when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley and Sons, Ltd.
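    The moving mesh PDE itself is not reproduced here; the hedged 1D sketch below shows the underlying equidistribution principle, placing nodes so that each cell carries an equal share of the integral of a monitor function, with an arc-length-type monitor assumed for a sharp front.

```python
# Static 1D equidistribution sketch: place nodes so each cell holds an equal
# share of the integral of a monitor function M(x) (arc-length monitor assumed).
import numpy as np

def equidistribute(monitor, a, b, n_cells, n_fine=2001):
    xf = np.linspace(a, b, n_fine)
    M = monitor(xf)
    # normalized cumulative integral of the monitor function (trapezoidal rule)
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (M[1:] + M[:-1]) * np.diff(xf))))
    cum /= cum[-1]
    # invert it: node i sits where the cumulative integral reaches i / n_cells
    return np.interp(np.linspace(0.0, 1.0, n_cells + 1), cum, xf)

du = lambda x: 50.0 / np.cosh(50.0 * (x - 0.7)) ** 2    # steep front at x = 0.7
mesh = equidistribute(lambda x: np.sqrt(1.0 + du(x) ** 2), 0.0, 1.0, 20)
print(np.round(mesh, 3))                                # nodes cluster near x = 0.7
```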

  17. A propagation method with adaptive mesh grid based on wave characteristics for wave optics simulation

    NASA Astrophysics Data System (ADS)

    Tang, Qiuyan; Wang, Jing; Lv, Pin; Sun, Quan

    2015-10-01

    The propagation simulation method and the choice of mesh grid are both very important for obtaining correct propagation results in wave optics simulation. A new angular spectrum propagation method with an alterable mesh grid, based on the traditional angular spectrum method and the direct FFT method, is introduced. With this method, the sampling space after propagation is no longer limited by the propagation method but is freely alterable. However, the choice of mesh grid on the target board directly influences the validity of the simulation results. An adaptive mesh-choosing method based on wave characteristics is therefore proposed together with the introduced propagation method, so that appropriate mesh grids on the target board can be calculated to give satisfying results. For a complex initial wave field or propagation through inhomogeneous media, the mesh grid can likewise be calculated and set rationally according to the above method. Finally, comparison with theoretical results shows that the simulation results of the proposed method coincide with theory. Comparison with the traditional angular spectrum method and the direct FFT method shows that the proposed method is able to adapt to a wider range of Fresnel number conditions; that is, it can simulate propagation efficiently and correctly for propagation distances from almost zero to infinity. It can therefore provide better support for wave propagation applications such as atmospheric optics and laser propagation.
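    The alterable-grid variant proposed by the authors is not reproduced here; as a point of reference, the sketch below implements the standard fixed-grid angular spectrum propagation on which it builds, with aperture size, wavelength and sampling chosen arbitrarily.

```python
# Standard fixed-grid angular spectrum propagation (reference point only).
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate the sampled complex field u0 (n x n, pitch dx) over distance z."""
    n = u0.shape[0]
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    kz_sq = k**2 - (2.0 * np.pi * FX) ** 2 - (2.0 * np.pi * FY) ** 2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.exp(1j * kz * z) * (kz_sq > 0)     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# 1 mm circular aperture in a plane wave, propagated 10 cm at 633 nm
n, dx = 512, 10e-6
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = ((X**2 + Y**2) < (0.5e-3) ** 2).astype(complex)
u1 = angular_spectrum(u0, 633e-9, dx, 0.10)
print(float(np.abs(u1).max()))
```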

  18. Merge measuring mesh for complex surface parts

    NASA Astrophysics Data System (ADS)

    Ye, Jianhua; Gao, Chenghui; Zeng, Shoujin; Xu, Mingsan

    2018-04-01

    Because most parts self-occlude and the scanner range is limited, it is difficult to scan an entire part in one pass. To model the part, multiple measured meshes need to be merged. In this paper, a new merge method is presented. First, a grid voxelization method is used to eliminate most of the non-overlapping regions, and an overlap-triangle retrieval method based on the mesh topology is proposed to improve efficiency. Then, to remove overlap triangles with large deviations, deletion by overlap distance is discussed. After that, the paper puts forward a new method of merging meshes by registering and combining mesh boundary points. Experimental analysis shows that the suggested methods are effective.

  19. Computing Normal Shock-Isotropic Turbulence Interaction With Tetrahedral Meshes and the Space-Time CESE Method

    NASA Astrophysics Data System (ADS)

    Venkatachari, Balaji Shankar; Chang, Chau-Lyan

    2016-11-01

    The focus of this study is scale-resolving simulations of the canonical normal shock- isotropic turbulence interaction using unstructured tetrahedral meshes and the space-time conservation element solution element (CESE) method. Despite decades of development in unstructured mesh methods and its potential benefits of ease of mesh generation around complex geometries and mesh adaptation, direct numerical or large-eddy simulations of turbulent flows are predominantly carried out using structured hexahedral meshes. This is due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for unstructured meshes that can resolve multiple physical scales and flow discontinuities simultaneously. The CESE method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to accurately simulate turbulent flows using tetrahedral meshes. As part of the study, various regimes of the shock-turbulence interaction (wrinkled and broken shock regimes) will be investigated along with a study on how adaptive refinement of tetrahedral meshes benefits this problem. The research funding for this paper has been provided by Revolutionary Computational Aerosciences (RCA) subproject under the NASA Transformative Aeronautics Concepts Program (TACP).

  20. A Simplified Mesh Deformation Method Using Commercial Structural Analysis Software

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Chang, Chau-Lyan; Samareh, Jamshid

    2004-01-01

    Mesh deformation in response to redefined or moving aerodynamic surface geometries is a frequently encountered task in many applications. Most existing methods are either mathematically too complex or computationally too expensive for use in practical design and optimization. We propose a simplified mesh deformation method based on linear elastic finite element analyses that can be easily implemented by using commercially available structural analysis software. Using a prescribed displacement at the mesh boundaries, a simple structural analysis is constructed based on a spatially varying Young's modulus to move the entire mesh in accordance with the surface geometry redefinitions. A variety of surface movements, such as translation, rotation, or incremental surface reshaping that often takes place in an optimization procedure, may be handled by the present method. We describe the numerical formulation and implementation using the NASTRAN software in this paper. The use of commercial software bypasses tedious reimplementation and takes advantage of the computational efficiency offered by the vendor. A two-dimensional airfoil mesh and a three-dimensional aircraft mesh were used as test cases to demonstrate the effectiveness of the proposed method. Euler and Navier-Stokes calculations were performed for the deformed two-dimensional meshes.
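    As a hedged one-dimensional analogue of this elasticity-based deformation (the actual method uses NASTRAN on 2D/3D meshes), the sketch below assembles a chain of bar elements whose assumed Young's modulus grows towards the moving boundary, so that near-boundary cells translate almost rigidly while the distortion is absorbed farther away.

```python
# 1D analogue of elasticity-based mesh deformation with an assumed spatial
# stiffness profile (the actual method solves 2D/3D elasticity in NASTRAN).
import numpy as np

nodes = np.linspace(0.0, 1.0, 11)        # x = 0: moving surface, x = 1: fixed far field
lengths = np.diff(nodes)
mid = 0.5 * (nodes[:-1] + nodes[1:])
E = 1.0 / (mid + 0.05)                   # assumption: stiffer near the moving surface
k = E / lengths                          # bar element stiffnesses

n = nodes.size
K = np.zeros((n, n))
for e, ke in enumerate(k):               # assemble the global stiffness matrix
    K[e:e + 2, e:e + 2] += ke * np.array([[1.0, -1.0], [-1.0, 1.0]])

u = np.zeros(n)
u[0], u[-1] = 0.05, 0.0                  # prescribed surface motion, clamped far end
free = np.arange(1, n - 1)
rhs = -K[np.ix_(free, [0, n - 1])] @ u[[0, n - 1]]
u[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
print(np.round(nodes + u, 4))            # deformed mesh node coordinates
```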

  1. Transient Response of Shells of Revolution by Direct Integration and Modal Superposition Methods

    NASA Technical Reports Server (NTRS)

    Stephens, W. B.; Adelman, H. M.

    1974-01-01

    The results of an analytical effort to obtain and evaluate transient response data for a cylindrical and a conical shell by use of two different approaches, direct integration and modal superposition, are described. The inclusion of nonlinear terms is more important than the inclusion of secondary linear effects (transverse shear deformation and rotary inertia), although there are thin-shell structures where these secondary effects are important. The advantages of the direct integration approach are that geometric nonlinear and secondary effects are easy to include and high-frequency response may be calculated; in comparison to the modal superposition technique, the computer storage requirements are smaller. The advantages of the modal superposition approach are that the solution is independent of the previous time history and that, once the modal data are obtained, the response for repeated cases may be efficiently computed. Also, any admissible set of initial conditions can be applied.
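    The shell-of-revolution code is not reproduced here; the hedged sketch below illustrates the modal superposition procedure on a 2-DOF spring-mass system with assumed mass and stiffness matrices: solve the generalized eigenproblem, project the initial conditions onto the modes, and superpose the analytical modal responses.

```python
# Modal superposition for an assumed undamped 2-DOF spring-mass system.
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 1.0])                                  # assumed mass matrix
K = np.array([[300.0, -100.0], [-100.0, 100.0]])         # assumed stiffness matrix
w2, Phi = eigh(K, M)                                     # K phi = w^2 M phi, M-orthonormal modes
w = np.sqrt(w2)

x0 = np.array([0.01, 0.0])                               # initial displacement, zero velocity
q0 = Phi.T @ M @ x0                                      # modal initial displacements

t = np.linspace(0.0, 1.0, 5)
q_t = q0[:, None] * np.cos(w[:, None] * t)               # undamped free modal responses
x_t = Phi @ q_t                                          # superpose modes: x(t) = Phi q(t)
print(np.round(x_t, 5))                                  # each column is x at one time
```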

  2. METHOD OF PREPARING A CERAMIC FUEL ELEMENT

    DOEpatents

    Ross, W.T.; Bloomster, C.H.; Bardsley, R.E.

    1963-09-01

    A method is described for preparing a fuel element from -325 mesh PuO2 and -20 mesh UO2. The steps include screening -325 mesh UO2 from the -20 mesh UO2, mixing PuO2 with the -325 mesh UO2, blending this mixture with sufficient -20 mesh UO2 to obtain the desired composition, introducing the blend into a metal tube, repeating the procedure until the tube is full, and vibrating the tube to compact the powder. (AEC)

  3. A Linear-Elasticity Solver for Higher-Order Space-Time Mesh Deformation

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo T.; Murman, Scott M.

    2018-01-01

    A linear-elasticity approach is presented for the generation of meshes appropriate for a higher-order space-time discontinuous finite-element method. The equations of linear-elasticity are discretized using a higher-order, spatially-continuous, finite-element method. Given an initial finite-element mesh, and a specified boundary displacement, we solve for the mesh displacements to obtain a higher-order curvilinear mesh. Alternatively, for moving-domain problems we use the linear-elasticity approach to solve for a temporally discontinuous mesh velocity on each time-slab and recover a continuous mesh deformation by integrating the velocity. The applicability of this methodology is presented for several benchmark test cases.

  4. A coupled ALE-AMR method for shock hydrodynamics

    DOE PAGES

    Waltz, J.; Bakosi, J.

    2018-03-05

    We present a numerical method combining adaptive mesh refinement (AMR) with arbitrary Lagrangian-Eulerian (ALE) mesh motion for the simulation of shock hydrodynamics on unstructured grids. The primary goal of the coupled method is to use AMR to reduce numerical error in ALE simulations at reduced computational expense relative to uniform fine mesh calculations, in the same manner that AMR has been used in Eulerian simulations. We also identify deficiencies with ALE methods that AMR is able to mitigate, and discuss the unique coupling challenges. The coupled method is demonstrated using three-dimensional unstructured meshes of up to O(10^7) tetrahedral cells. Convergence of ALE-AMR solutions towards both uniform fine mesh ALE results and analytic solutions is demonstrated. Speed-ups of 5-10× for a given level of error are observed relative to uniform fine mesh calculations.

  5. A coupled ALE-AMR method for shock hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waltz, J.; Bakosi, J.

    We present a numerical method combining adaptive mesh refinement (AMR) with arbitrary Lagrangian-Eulerian (ALE) mesh motion for the simulation of shock hydrodynamics on unstructured grids. The primary goal of the coupled method is to use AMR to reduce numerical error in ALE simulations at reduced computational expense relative to uniform fine mesh calculations, in the same manner that AMR has been used in Eulerian simulations. We also identify deficiencies with ALE methods that AMR is able to mitigate, and discuss the unique coupling challenges. The coupled method is demonstrated using three-dimensional unstructured meshes of up to O(10^7) tetrahedral cells. Convergence of ALE-AMR solutions towards both uniform fine mesh ALE results and analytic solutions is demonstrated. Speed-ups of 5-10× for a given level of error are observed relative to uniform fine mesh calculations.

  6. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters

    NASA Astrophysics Data System (ADS)

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-01

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for each electron, positron and photon component, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on 18F, 99mTc, 131I and 177Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, corresponding to the comparison of 80 absorbed dose (AD) points per phantom between Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results with SCC were still good but worse: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for 99mTc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computation complexity.
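    The γ criterion used as the figure of merit is a standard comparison metric; the hedged sketch below shows a brute-force 1D gamma-index evaluation with global dose normalization on synthetic profiles, not the paper's evaluation code.

```python
# Brute-force 1D gamma index with global dose normalization (synthetic data).
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dist_tol_mm=3.0):
    """gamma <= 1 means the evaluated dose passes the (dose_tol, dist_tol) test."""
    d_norm = dose_tol * d_ref.max()                    # global dose criterion
    gam = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dist_tol_mm) ** 2
        dose2 = ((d_eval - dr) / d_norm) ** 2
        gam[i] = np.sqrt(np.min(dist2 + dose2))
    return gam

x = np.arange(0.0, 50.0, 1.0)                          # positions in mm
ref = np.exp(-((x - 25.0) / 10.0) ** 2)                # reference dose profile
ev = 1.02 * np.exp(-((x - 25.5) / 10.0) ** 2)          # slightly scaled and shifted
g = gamma_index_1d(x, ref, x, ev)
print(f"gamma(3%, 3 mm) pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")
```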

  7. Implementation and validation of collapsed cone superposition for radiopharmaceutical dosimetry of photon emitters.

    PubMed

    Sanchez-Garcia, Manuel; Gardin, Isabelle; Lebtahi, Rachida; Dieudonné, Arnaud

    2015-10-21

    Two collapsed cone (CC) superposition algorithms have been implemented for radiopharmaceutical dosimetry of photon emitters. The straight CC (SCC) superposition method uses a water energy deposition kernel (EDKw) for each electron, positron and photon component, while the primary and scatter CC (PSCC) superposition method uses different EDKw for primary and once-scattered photons. PSCC was implemented only for photons originating from the nucleus, precluding its application to positron emitters. EDKw are linearly scaled by radiological distance, taking into account tissue density heterogeneities. The implementation was tested on 100, 300 and 600 keV mono-energetic photons and on (18)F, (99m)Tc, (131)I and (177)Lu. The kernels were generated using the Monte Carlo codes MCNP and EGSnrc. The validation was performed on 6 phantoms representing interfaces between soft tissue, lung and bone. The figures of merit were the γ (3%, 3 mm) and γ (5%, 5 mm) criteria, corresponding to the comparison of 80 absorbed dose (AD) points per phantom between Monte Carlo simulations and the CC algorithms. PSCC gave better results than SCC for the lowest photon energy (100 keV). For the 3 isotopes computed with PSCC, the percentage of AD points satisfying the γ (5%, 5 mm) criterion was always over 99%. Results with SCC were still good but worse: at least 97% of AD values satisfied the γ (5%, 5 mm) criterion, except for a value of 57% for (99m)Tc with the lung/bone interface. The CC superposition method for radiopharmaceutical dosimetry is a good alternative to Monte Carlo simulations while reducing computation complexity.

  8. Method of and apparatus for modeling interactions

    DOEpatents

    Budge, Kent G.

    2004-01-13

    A method and apparatus for modeling interactions can accurately model tribological and other properties and accommodate topological disruptions. Two portions of a problem space are represented, a first with a Lagrangian mesh and a second with an ALE mesh. The ALE and Lagrangian meshes are constructed so that each node on the surface of the Lagrangian mesh is in a known correspondence with adjacent nodes in the ALE mesh. The interaction can be predicted for a time interval. Material flow within the ALE mesh can accurately model complex interactions such as bifurcation. After prediction, nodes in the ALE mesh in correspondence with nodes on the surface of the Lagrangian mesh can be mapped so that they are once again adjacent to their corresponding Lagrangian mesh nodes. The ALE mesh can then be smoothed to reduce mesh distortion that might reduce the accuracy or efficiency of subsequent prediction steps. The process, from prediction through mapping and smoothing, can be repeated until a terminal condition is reached.

  9. An efficient predictor-corrector-based dynamic mesh method for multi-block structured grid with extremely large deformation and its applications

    NASA Astrophysics Data System (ADS)

    Guo, Tongqing; Chen, Hao; Lu, Zhiliang

    2018-05-01

    Aiming at extremely large deformations, a novel predictor-corrector-based dynamic mesh method for multi-block structured grids is proposed. In this work, dynamic mesh generation is completed in three steps. First, some typical dynamic positions are selected and high-quality multi-block grids with the same topology are generated at those positions. Then, the Lagrange interpolation method is adopted to predict the dynamic mesh at any dynamic position. Finally, a rapid elastic deformation technique is used to correct the small deviation between the interpolated geometric configuration and the actual instantaneous one. Compared with traditional methods, the results demonstrate that the present method shows stronger deformation capability and higher dynamic mesh quality.
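    The prediction step can be illustrated with a hedged sketch: node coordinates of topologically identical meshes, sampled at a few positions of a motion parameter, are interpolated with Lagrange polynomials to predict the mesh at an intermediate position (the positions and coordinates below are made up, and the elastic correction step is omitted).

```python
# Prediction step only: Lagrange interpolation of node coordinates across a
# few sampled dynamic positions (positions and meshes are made up).
import numpy as np
from scipy.interpolate import lagrange

alphas = np.array([0.0, 0.5, 1.0])       # sampled dynamic positions
# node coordinates of topologically identical meshes at each sampled position
meshes = np.array([
    [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]],
    [[0.0, 0.1], [1.0, 0.2], [1.1, 1.0]],
    [[0.0, 0.3], [1.0, 0.5], [1.3, 1.1]],
])

def predict_mesh(alpha):
    flat = meshes.reshape(len(alphas), -1)
    # one Lagrange polynomial per nodal coordinate, evaluated at the new position
    coords = [lagrange(alphas, flat[:, j])(alpha) for j in range(flat.shape[1])]
    return np.array(coords).reshape(meshes.shape[1:])

print(np.round(predict_mesh(0.75), 4))   # predicted mesh between the samples
```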

  10. Mesh-matrix analysis method for electromagnetic launchers

    NASA Technical Reports Server (NTRS)

    Elliott, David G.

    1989-01-01

    The mesh-matrix method is a procedure for calculating the current distribution in the conductors of electromagnetic launchers with coil or flat-plate geometry. Once the current distribution is known the launcher performance can be calculated. The method divides the conductors into parallel current paths, or meshes, and finds the current in each mesh by matrix inversion. The author presents procedures for writing equations for the current and voltage relations for a few meshes to serve as a pattern for writing the computer code. An available subroutine package provides routines for field and flux coefficients and equation solution.
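    A hedged toy example of the mesh-matrix idea: write the loop (mesh-current) equations of a small resistive network as [R][I] = [V] and solve by matrix inversion; the launcher-specific field and flux coefficient routines mentioned above are not reproduced.

```python
# Two-loop resistive network solved by mesh(-current) analysis: [R][I] = [V].
import numpy as np

R = np.array([[6.0, -2.0],               # loop self-resistances on the diagonal,
              [-2.0, 5.0]])              # shared resistance (negative) off-diagonal
V = np.array([10.0, 0.0])                # a single 10 V source drives loop 1
I = np.linalg.solve(R, V)                # mesh currents in amperes
print(np.round(I, 4))
```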

  11. A comparative study on different methods of automatic mesh generation of human femurs.

    PubMed

    Viceconti, M; Bellingeri, L; Cristofolini, L; Toni, A

    1998-01-01

    The aim of this study was to evaluate comparatively five methods for automating mesh generation (AMG) when used to mesh a human femur. The five AMG methods considered were: mapped mesh, which provides hexahedral elements through a direct mapping of the element onto the geometry; tetra mesh, which generates tetrahedral elements from a solid model of the object geometry; voxel mesh, which builds cubic 8-node elements directly from CT images; and hexa mesh, which automatically generates hexahedral elements from a surface definition of the femur geometry. The various methods were tested against two reference models: a simplified geometric model and a proximal femur model. The first model was useful to assess the inherent accuracy of the meshes created by the AMG methods, since an analytical solution was available for the elastic problem of the simplified geometric model. The femur model was used to test the AMG methods in a more realistic condition. The femoral geometry was derived from a reference model (the "standardized femur") and the finite element analysis predictions were compared to experimental measurements. All methods were evaluated in terms of the human and computer effort needed to carry out the complete analysis, and in terms of accuracy. The comparison demonstrated that each tested method deserves attention and may be the best for specific situations. The mapped AMG method requires a significant human effort but is very accurate and allows tight control of the mesh structure. The tetra AMG method requires a solid model of the object to be analysed but is widely available and accurate. The hexa AMG method requires a significant computer effort but can also be used on polygonal models and is very accurate. The voxel AMG method requires a huge number of elements to reach an accuracy comparable to that of the other methods, but it does not require any pre-processing of the CT dataset to extract the geometry and in some cases may be the only viable solution.

  12. A Moving Mesh Finite Element Algorithm for Singular Problems in Two and Three Space Dimensions

    NASA Astrophysics Data System (ADS)

    Li, Ruo; Tang, Tao; Zhang, Pingwen

    2002-04-01

    A framework for adaptive meshes based on the Hamilton-Schoen-Yau theory was proposed by Dvinsky. In a recent work (2001, J. Comput. Phys. 170, 562-588), we extended Dvinsky's method to provide an efficient moving mesh algorithm which compared favorably with the previously proposed schemes in terms of simplicity and reliability. In this work, we will further extend the moving mesh methods based on harmonic maps to deal with mesh adaptation in three space dimensions. In obtaining the variational mesh, we will solve an optimization problem with some appropriate constraints, which is in contrast to the traditional method of solving the Euler-Lagrange equation directly. The key idea of this approach is to update the interior and boundary grids simultaneously, rather than considering them separately. Application of the proposed moving mesh scheme is illustrated with some two- and three-dimensional problems with large solution gradients. The numerical experiments show that our methods can accurately resolve detail features of singular problems in 3D.

  13. A moving mesh finite difference method for equilibrium radiation diffusion equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaobo, E-mail: xwindyb@126.com; Huang, Weizhang, E-mail: whuang@ku.edu; Qiu, Jianxian, E-mail: jxqiu@xmu.edu.cn

    2015-10-01

    An efficient moving mesh finite difference method is developed for the numerical solution of equilibrium radiation diffusion equations in two dimensions. The method is based on the moving mesh partial differential equation approach and moves the mesh continuously in time using a system of meshing partial differential equations. The mesh adaptation is controlled through a Hessian-based monitor function and the so-called equidistribution and alignment principles. Several challenging issues in the numerical solution are addressed. Particularly, the radiation diffusion coefficient depends on the energy density highly nonlinearly. This nonlinearity is treated using a predictor–corrector and lagged diffusion strategy. Moreover, the nonnegativity of the energy density is maintained using a cutoff method which has been known in literature to retain the accuracy and convergence order of finite difference approximation for parabolic equations. Numerical examples with multi-material, multiple spot concentration situations are presented. Numerical results show that the method works well for radiation diffusion equations and can produce numerical solutions of good accuracy. It is also shown that a two-level mesh movement strategy can significantly improve the efficiency of the computation.

  14. Reconstruction of transient vibration and sound radiation of an impacted plate using time domain plane wave superposition method

    NASA Astrophysics Data System (ADS)

    Geng, Lin; Zhang, Xiao-Zheng; Bi, Chuan-Xing

    2015-05-01

    The time domain plane wave superposition method is extended to reconstruct the transient pressure field radiated by an impacted plate and the normal acceleration of the plate. In the extended method, the pressure measured on the hologram plane is expressed as a superposition of time convolutions between the time-wavenumber normal acceleration spectrum on a virtual source plane and the time domain propagation kernel relating the pressure on the hologram plane to the normal acceleration spectrum on the virtual source plane. By performing an inverse operation, the normal acceleration spectrum on the virtual source plane can be obtained by an iterative solving process, and then taken as the input to reconstruct the whole pressure field and the normal acceleration of the plate. An experiment on a clamped rectangular steel plate impacted by a steel ball is presented. The experimental results demonstrate that the extended method is effective in visualizing the transient vibration and sound radiation of an impacted plate in both the time and space domains, thus providing important information for an overall understanding of the vibration and sound radiation of the plate.

  15. Adaptive Shape Functions and Internal Mesh Adaptation for Modelling Progressive Failure in Adhesively Bonded Joints

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott; Gries, Thomas; Waas, Anthony M.; Pineda, Evan J.

    2014-01-01

    Enhanced finite elements are elements with an embedded analytical solution that can capture detailed local fields, enabling more efficient, mesh independent finite element analysis. The shape functions are determined based on the analytical model rather than prescribed. This method was applied to adhesively bonded joints to model joint behavior with one element through the thickness. This study demonstrates two methods of maintaining the fidelity of such elements during adhesive non-linearity and cracking without increasing the mesh needed for an accurate solution. The first method uses adaptive shape functions, where the shape functions are recalculated at each load step based on the softening of the adhesive. The second method is internal mesh adaption, where cracking of the adhesive within an element is captured by further discretizing the element internally to represent the partially cracked geometry. By keeping mesh adaptations within an element, a finer mesh can be used during the analysis without affecting the global finite element model mesh. Examples are shown which highlight when each method is most effective in reducing the number of elements needed to capture adhesive nonlinearity and cracking. These methods are validated against analogous finite element models utilizing cohesive zone elements.

  16. Determination of the optimal mesh parameters for Iguassu centrifuge flow and separation calculations

    NASA Astrophysics Data System (ADS)

    Romanihin, S. M.; Tronin, I. V.

    2016-09-01

    We present the method and results of the determination of optimal computational mesh parameters for axisymmetric modeling of flow and separation in the Iguassu gas centrifuge. The aim of this work was to determine mesh parameters which provide relatively low computational cost without loss of accuracy. We use a direct search optimization algorithm to calculate the optimal mesh parameters. The obtained parameters were tested by calculating the optimal working regime of the Iguassu GC. The separative power calculated using the optimal mesh parameters differs by less than 0.5% from the result obtained on the detailed mesh. The presented method can be used to determine optimal mesh parameters of the Iguassu GC for different rotor speeds.

  17. 2D Automatic body-fitted structured mesh generation using advancing extraction method

    USDA-ARS?s Scientific Manuscript database

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluids Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like...

  18. 2D automatic body-fitted structured mesh generation using advancing extraction method

    USDA-ARS?s Scientific Manuscript database

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluids Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like...

  19. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general threedimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.

  20. In-memory integration of existing software components for parallel adaptive unstructured mesh workflows

    DOE PAGES

    Smith, Cameron W.; Granzow, Brian; Diamond, Gerrett; ...

    2017-01-01

    Unstructured mesh methods, like finite elements and finite volumes, support the effective analysis of complex physical behaviors modeled by partial differential equations over general threedimensional domains. The most reliable and efficient methods apply adaptive procedures with a-posteriori error estimators that indicate where and how the mesh is to be modified. Although adaptive meshes can have two to three orders of magnitude fewer elements than a more uniform mesh for the same level of accuracy, there are many complex simulations where the meshes required are so large that they can only be solved on massively parallel systems.

  1. Array-based Hierarchical Mesh Generation in Parallel

    DOE PAGES

    Ray, Navamita; Grindeanu, Iulian; Zhao, Xinglin; ...

    2015-11-03

    In this paper, we describe an array-based hierarchical mesh generation capability through uniform refinement of unstructured meshes for efficient solution of PDEs using finite element methods and multigrid solvers. A multi-degree, multi-dimensional and multi-level framework is designed to generate the nested hierarchies from an initial mesh and can be used for a number of purposes, such as multi-level methods and generating large meshes. The capability is developed under the parallel mesh framework “Mesh Oriented dAtaBase”, a.k.a. MOAB. We describe the underlying data structures and algorithms to generate such hierarchies and present numerical results for computational efficiency and mesh quality. In conclusion, we also present results to demonstrate the applicability of the developed capability to a multigrid finite-element solver.
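    MOAB's array-based data structures are not reproduced here; the hedged sketch below shows one sweep of the basic operation behind such hierarchies, uniform refinement of a triangle mesh in which each triangle is split into four by its edge midpoints, with midpoints shared across edges.

```python
# One uniform refinement sweep for a triangle mesh: split each triangle into
# four using shared edge midpoints (plain Python/NumPy, not MOAB).
import numpy as np

def refine_uniform(vertices, triangles):
    verts = [tuple(v) for v in vertices]
    midpoint_index = {}
    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoint_index:            # create each edge midpoint only once
            va, vb = np.array(verts[a]), np.array(verts[b])
            verts.append(tuple(0.5 * (va + vb)))
            midpoint_index[key] = len(verts) - 1
        return midpoint_index[key]
    tris = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(verts), np.array(tris)

verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
tris = [(0, 1, 2)]
for level in range(3):                            # three nested refinement levels
    verts, tris = refine_uniform(verts, tris)
print(len(verts), "vertices,", len(tris), "triangles")   # 45 vertices, 64 triangles
```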

  2. A Tissue Relevance and Meshing Method for Computing Patient-Specific Anatomical Models in Endoscopic Sinus Surgery Simulation

    NASA Astrophysics Data System (ADS)

    Audette, M. A.; Hertel, I.; Burgert, O.; Strauss, G.

    This paper presents on-going work on a method for determining which subvolumes of a patient-specific tissue map, extracted from CT data of the head, are relevant to simulating endoscopic sinus surgery of that individual, and for decomposing these relevant tissues into triangles and tetrahedra whose mesh size is well controlled. The overall goal is to limit the complexity of the real-time biomechanical interaction while ensuring the clinical relevance of the simulation. Relevant tissues are determined as the union of the pathology present in the patient, of critical tissues deemed to be near the intended surgical path or pathology, and of bone and soft tissue near the intended path, pathology or critical tissues. The processing of tissues, prior to meshing, is based on the Fast Marching method applied under various guises, in a conditional manner that is related to tissue classes. The meshing is based on an adaptation of a meshing method of ours, which combines the Marching Tetrahedra method and the discrete Simplex mesh surface model to produce a topologically faithful surface mesh with well controlled edge and face size as a first stage, and Almost-regular Tetrahedralization of the same prescribed mesh size as a last stage.

  3. An Angular Method with Position Control for Block Mesh Squareness Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, J.; Stillman, D.

    We optimize a target function de ned by angular properties with a position control term for a basic stencil with a block-structured mesh, to improve element squareness in 2D and 3D. Comparison with the condition number method shows that besides a similar mesh quality regarding orthogonality can be achieved as the former does, the new method converges faster and provides a more uniform global mesh spacing in our numerical tests.

  4. A flexible nonlinear diffusion acceleration method for the S N transport equations discretized with discontinuous finite elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schunert, Sebastian; Wang, Yaqi; Gleicher, Frederick

    This paper presents a flexible nonlinear diffusion acceleration (NDA) method that discretizes both the S N transport equation and the diffusion equation using the discontinuous finite element method (DFEM). The method is flexible in that the diffusion equation can be discretized on a coarser mesh with the only restriction that it is nested within the transport mesh and the FEM shape function orders of the two equations can be different. The consistency of the transport and diffusion solutions at convergence is defined by using a projection operator mapping the transport into the diffusion FEM space. The diffusion weak form ismore » based on the modified incomplete interior penalty (MIP) diffusion DFEM discretization that is extended by volumetric drift, interior face, and boundary closure terms. In contrast to commonly used coarse mesh finite difference (CMFD) methods, the presented NDA method uses a full FEM discretized diffusion equation for acceleration. Suitable projection and prolongation operators arise naturally from the FEM framework. Via Fourier analysis and numerical experiments for a one-group, fixed source problem the following properties of the NDA method are established for structured quadrilateral meshes: (1) the presented method is unconditionally stable and effective in the presence of mild material heterogeneities if the same mesh and identical shape functions either of the bilinear or biquadratic type are used, (2) the NDA method remains unconditionally stable in the presence of strong heterogeneities, (3) the NDA method with bilinear elements extends the range of effectiveness and stability by a factor of two when compared to CMFD if a coarser diffusion mesh is selected. In addition, the method is tested for solving the C5G7 multigroup, eigenvalue problem using coarse and fine mesh acceleration. Finally, while NDA does not offer an advantage over CMFD for fine mesh acceleration, it reduces the iteration count required for convergence by almost a factor of two in the case of coarse mesh acceleration.« less

  5. A flexible nonlinear diffusion acceleration method for the S N transport equations discretized with discontinuous finite elements

    DOE PAGES

    Schunert, Sebastian; Wang, Yaqi; Gleicher, Frederick; ...

    2017-02-21

    This paper presents a flexible nonlinear diffusion acceleration (NDA) method that discretizes both the S N transport equation and the diffusion equation using the discontinuous finite element method (DFEM). The method is flexible in that the diffusion equation can be discretized on a coarser mesh with the only restriction that it is nested within the transport mesh and the FEM shape function orders of the two equations can be different. The consistency of the transport and diffusion solutions at convergence is defined by using a projection operator mapping the transport into the diffusion FEM space. The diffusion weak form ismore » based on the modified incomplete interior penalty (MIP) diffusion DFEM discretization that is extended by volumetric drift, interior face, and boundary closure terms. In contrast to commonly used coarse mesh finite difference (CMFD) methods, the presented NDA method uses a full FEM discretized diffusion equation for acceleration. Suitable projection and prolongation operators arise naturally from the FEM framework. Via Fourier analysis and numerical experiments for a one-group, fixed source problem the following properties of the NDA method are established for structured quadrilateral meshes: (1) the presented method is unconditionally stable and effective in the presence of mild material heterogeneities if the same mesh and identical shape functions either of the bilinear or biquadratic type are used, (2) the NDA method remains unconditionally stable in the presence of strong heterogeneities, (3) the NDA method with bilinear elements extends the range of effectiveness and stability by a factor of two when compared to CMFD if a coarser diffusion mesh is selected. In addition, the method is tested for solving the C5G7 multigroup, eigenvalue problem using coarse and fine mesh acceleration. Finally, while NDA does not offer an advantage over CMFD for fine mesh acceleration, it reduces the iteration count required for convergence by almost a factor of two in the case of coarse mesh acceleration.« less

  6. Computational performance of Free Mesh Method applied to continuum mechanics problems

    PubMed Central

    YAGAWA, Genki

    2011-01-01

    The free mesh method (FMM) is a kind of the meshless methods intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, or a node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm. The aim of the present paper is to review some unique numerical solutions of fluid and solid mechanics by employing FMM as well as the Enriched Free Mesh Method (EFMM), which is a new version of FMM, including compressible flow and sounding mechanism in air-reed instruments as applications to fluid mechanics, and automatic remeshing for slow crack growth, dynamic behavior of solid as well as large-scale Eigen-frequency of engine block as applications to solid mechanics. PMID:21558753

  7. Free Mesh Method: fundamental conception, algorithms and accuracy study

    PubMed Central

    YAGAWA, Genki

    2011-01-01

    The finite element method (FEM) has been commonly employed in a variety of fields as a computer simulation method to solve such problems as solid, fluid, electro-magnetic phenomena and so on. However, creation of a quality mesh for the problem domain is a prerequisite when using FEM, which becomes a major part of the cost of a simulation. It is natural that the concept of meshless method has evolved. The free mesh method (FMM) is among the typical meshless methods intended for particle-like finite element analysis of problems that are difficult to handle using global mesh generation, especially on parallel processors. FMM is an efficient node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm for the finite element calculations. In this paper, FMM and its variation are reviewed focusing on their fundamental conception, algorithms and accuracy. PMID:21558752

  8. A tuned mesh-generation strategy for image representation based on data-dependent triangulation.

    PubMed

    Li, Ping; Adams, Michael D

    2013-05-01

    A mesh-generation framework for image representation based on data-dependent triangulation is proposed. The proposed framework is a modified version of the frameworks of Rippa and Garland and Heckbert that facilitates the development of more effective mesh-generation methods. As the proposed framework has several free parameters, the effects of different choices of these parameters on mesh quality are studied, leading to the recommendation of a particular set of choices for these parameters. A mesh-generation method is then introduced that employs the proposed framework with these best parameter choices. This method is demonstrated to produce meshes of higher quality (both in terms of squared error and subjectively) than those generated by several competing approaches, at a relatively modest computational and memory cost.

  9. LBMD : a layer-based mesh data structure tailored for generic API infrastructures.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebeida, Mohamed S.; Knupp, Patrick Michael

    2010-11-01

    A new mesh data structure is introduced for the purpose of mesh processing in Application Programming Interface (API) infrastructures. This data structure utilizes a reduced mesh representation to increase its ability to handle significantly larger meshes compared to full mesh representation. In spite of the reduced representation, each mesh entity (vertex, edge, face, and region) is represented using a unique handle, with no extra storage cost, which is a crucial requirement in most API libraries. The concept of mesh layers makes the data structure more flexible for mesh generation and mesh modification operations. This flexibility can have a favorable impactmore » in solver based queries of finite volume and multigrid methods. The capabilities of LBMD make it even more attractive for parallel implementations using Message Passing Interface (MPI) or Graphics Processing Units (GPUs). The data structure is associated with a new classification method to relate mesh entities to their corresponding geometrical entities. The classification technique stores the related information at the node level without introducing any ambiguities. Several examples are presented to illustrate the strength of this new data structure.« less

  10. Adsorption and kinetics study of manganesse (II) in waste water using vertical column method by sugar cane bagasse

    NASA Astrophysics Data System (ADS)

    Zaini, H.; Abubakar, S.; Rihayat, T.; Suryani, S.

    2018-03-01

    Removal of heavy metal content in wastewater has been largely done by various methods. One effective and efficient method is the adsorption method. This study aims to reduce manganese (II) content in wastewater based on column adsorption method using absorbent material from bagasse. The fixed variable consisted of 50 g adsorbent, 10 liter adsorbate volume, flow rate of 7 liters / min. Independent variable of particle size with variation 10 – 30 mesh and contact time with variation 0 - 240 min and respon variable concentration of adsorbate (ppm), pH and conductivity. The results showed that the adsorption process of manganese metal is influenced by particle size and contact time. The adsorption kinetics takes place according to pseudo-second order kinetics with an equilibrium adsorption capacity (qe: mg / g) for 10 mesh adsorbent particles: 0.8947; 20 mesh adsorbent particles: 0.4332 and 30 mesh adsorbent particles: 1.0161, respectively. Highest removal efficience for 10 mesh adsorbent particles: 49.22% on contact time 60 min; 20 mesh adsorbent particles: 35,25% on contact time 180 min and particle 30 mesh adsorbent particles: 51,95% on contact time 150 min.

  11. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)

    2011-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.

  12. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor)

    2010-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.

  13. An automatic generation of non-uniform mesh for CFD analyses of image-based multiscale human airway models

    NASA Astrophysics Data System (ADS)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2014-11-01

    The authors have developed a method to automatically generate non-uniform CFD mesh for image-based human airway models. The sizes of generated tetrahedral elements vary in both radial and longitudinal directions to account for boundary layer and multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for surface and volume meshes, respectively. Both programs can specify element sizes by means of background mesh. The size of an arbitrary element in the domain is a function of wall distance, element size on the wall, and element size at the center of airway lumen. The element sizes on the wall are computed based on local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice longer (170 min). The proposed method generates CFD meshes with fine elements near the wall and smooth variation of element size in longitudinal direction, which are required, e.g., for simulations with high flow rate. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.

  14. Toward quantum superposition of living organisms

    NASA Astrophysics Data System (ADS)

    Romero-Isart, Oriol; Juan, Mathieu L.; Quidant, Romain; Cirac, J. Ignacio

    2010-03-01

    The most striking feature of quantum mechanics is the existence of superposition states, where an object appears to be in different situations at the same time. The existence of such states has been previously tested with small objects, such as atoms, ions, electrons and photons (Zoller et al 2005 Eur. Phys. J. D 36 203-28), and even with molecules (Arndt et al 1999 Nature 401 680-2). More recently, it has been shown that it is possible to create superpositions of collections of photons (Deléglise et al 2008 Nature 455 510-14), atoms (Hammerer et al 2008 arXiv:0807.3358) or Cooper pairs (Friedman et al 2000 Nature 406 43-6). Very recent progress in optomechanical systems may soon allow us to create superpositions of even larger objects, such as micro-sized mirrors or cantilevers (Marshall et al 2003 Phys. Rev. Lett. 91 130401; Kippenberg and Vahala 2008 Science 321 1172-6 Marquardt and Girvin 2009 Physics 2 40; Favero and Karrai 2009 Nature Photon. 3 201-5), and thus to test quantum mechanical phenomena at larger scales. Here we propose a method to cool down and create quantum superpositions of the motion of sub-wavelength, arbitrarily shaped dielectric objects trapped inside a high-finesse cavity at a very low pressure. Our method is ideally suited for the smallest living organisms, such as viruses, which survive under low-vacuum pressures (Rothschild and Mancinelli 2001 Nature 406 1092-101) and optically behave as dielectric objects (Ashkin and Dziedzic 1987 Science 235 1517-20). This opens up the possibility of testing the quantum nature of living organisms by creating quantum superposition states in very much the same spirit as the original Schrödinger's cat 'gedanken' paradigm (Schrödinger 1935 Naturwissenschaften 23 807-12, 823-8, 844-9). We anticipate that our paper will be a starting point for experimentally addressing fundamental questions, such as the role of life and consciousness in quantum mechanics.

  15. Method of generating a surface mesh

    DOEpatents

    Shepherd, Jason F [Albuquerque, NM; Benzley, Steven [Provo, UT; Grover, Benjamin T [Tracy, CA

    2008-03-04

    A method and machine-readable medium provide a technique to generate and modify a quadrilateral finite element surface mesh using dual creation and modification. After generating a dual of a surface (mesh), a predetermined algorithm may be followed to generate and modify a surface mesh of quadrilateral elements. The predetermined algorithm may include the steps of generating two-dimensional cell regions in dual space, determining existing nodes in primal space, generating new nodes in the dual space, and connecting nodes to form the quadrilateral elements (faces) for the generated and modifiable surface mesh.

  16. A spring system method for a mesh generation problem

    NASA Astrophysics Data System (ADS)

    Romanov, A.

    2018-04-01

    A new direct method for the 2d-mesh generation for a simply-connected domain using a spring system is observed. The method can be used with other methods to modify a mesh for growing solid problems. Advantages and disadvantages of the method are shown. Different types of boundary conditions are explored. The results of modelling for different target domains are given. Some applications for composite materials are studied.

  17. Superposition-Based Analysis of First-Order Probabilistic Timed Automata

    NASA Astrophysics Data System (ADS)

    Fietzke, Arnaud; Hermanns, Holger; Weidenbach, Christoph

    This paper discusses the analysis of first-order probabilistic timed automata (FPTA) by a combination of hierarchic first-order superposition-based theorem proving and probabilistic model checking. We develop the overall semantics of FPTAs and prove soundness and completeness of our method for reachability properties. Basically, we decompose FPTAs into their time plus first-order logic aspects on the one hand, and their probabilistic aspects on the other hand. Then we exploit the time plus first-order behavior by hierarchic superposition over linear arithmetic. The result of this analysis is the basis for the construction of a reachability equivalent (to the original FPTA) probabilistic timed automaton to which probabilistic model checking is finally applied. The hierarchic superposition calculus required for the analysis is sound and complete on the first-order formulas generated from FPTAs. It even works well in practice. We illustrate the potential behind it with a real-life DHCP protocol example, which we analyze by means of tool chain support.

  18. Comparison of three different methods for effective introduction of platelet-rich plasma on PLGA woven mesh.

    PubMed

    Lee, Ji-Hye; Nam, Jinwoo; Kim, Hee Joong; Yoo, Jeong Joon

    2015-03-11

    For successful tissue regeneration, effective cell delivery to defect site is very important. Various types of polymer biomaterials have been developed and applied for effective cell delivery. PLGA (poly lactic-co-glycolic acid), a synthetic polymer, is a commercially available and FDA approved material. Platelet-rich plasma (PRP) is an autologous growth factor cocktail containing various growth factors including PDGF, TGFβ-1 and BMPs, and has shown positive effects on cell behaviors. We hypothesized that PRP pretreatment on PLGA mesh using different methods would cause different patterns of platelet adhesion and stages which would modulate cell adhesion and proliferation on the PLGA mesh. In this study, we pretreated PRP on PLGA using three different methods including simple dripping (SD), dynamic oscillation (DO) and centrifugation (CE), then observed the amount of adhered platelets and their activation stage distribution. The highest amount of platelets was observed on CE mesh and calcium treated CE mesh. Moreover, calcium addition after PRP coating triggered dramatic activation of platelets which showed large and flat morphologies of platelets with rich fibrin networks. Human chondrocytes (hCs) and human bone marrow stromal cells (hBMSCs) were next cultured on PRP-pretreated PLGA meshes using different preparation methods. CE mesh showed a significant increase in the initial cell adhesion of hCs and proliferation of hBMSCs compared with SD and DO meshes. The results demonstrated that the centrifugation method can be considered as a promising coating method to introduce PRP on PLGA polymeric material which could improve cell-material interaction using a simple method.

  19. 50 CFR 648.91 - Monkfish regulated mesh areas and restrictions on gear and methods of fishing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... restrictions on gear and methods of fishing. 648.91 Section 648.91 Wildlife and Fisheries FISHERY CONSERVATION... § 648.91 Monkfish regulated mesh areas and restrictions on gear and methods of fishing. All vessels fishing for, possessing or landing monkfish must comply with the following minimum mesh size, gear, and...

  20. 50 CFR 648.91 - Monkfish regulated mesh areas and restrictions on gear and methods of fishing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... restrictions on gear and methods of fishing. 648.91 Section 648.91 Wildlife and Fisheries FISHERY CONSERVATION... § 648.91 Monkfish regulated mesh areas and restrictions on gear and methods of fishing. All vessels fishing for, possessing or landing monkfish must comply with the following minimum mesh size, gear, and...

  1. Dynamic Mesh Adaptation for Front Evolution Using Discontinuous Galerkin Based Weighted Condition Number Mesh Relaxation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2016-06-21

    A new mesh smoothing method designed to cluster mesh cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function being computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered elds, such as amore » volume fraction or index function, is provided. Results show that the low-order level set works equally well for the weight function as the actual level set. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Dynamic cases for moving interfaces are presented to demonstrate the method's potential usefulness to arbitrary Lagrangian Eulerian (ALE) methods.« less

  2. Superposition and detection of two helical beams for optical orbital angular momentum communication

    NASA Astrophysics Data System (ADS)

    Liu, Yi-Dong; Gao, Chunqing; Gao, Mingwei; Qi, Xiaoqing; Weber, Horst

    2008-07-01

    A loop-like system with a Dove prism is used to generate a collinear superposition of two helical beams with different azimuthal quantum numbers in this manuscript. After the generation of the helical beams distributed on the circle centered at the optical axis by using a binary amplitude grating, the diffractive field is separated into two polarized ones with the same distribution. Rotated by the Dove prism in the loop-like system in counter directions and combined together, the two fields will generate the collinear superposition of two helical beams in certain direction. The experiment shows consistency with the theoretical analysis. This method has potential applications in optical communication by using orbital angular momentum of laser beams (optical vortices).

  3. Solving Modal Equations of Motion with Initial Conditions Using MSC/NASTRAN DMAP. Part 2; Coupled Versus Uncoupled Integration

    NASA Technical Reports Server (NTRS)

    Barnett, Alan R.; Ibrahim, Omar M.; Abdallah, Ayman A.; Sullivan, Timothy L.

    1993-01-01

    By utilizing MSC/NASTRAN DMAP (Direct Matrix Abstraction Program) in an existing NASA Lewis Research Center coupled loads methodology, solving modal equations of motion with initial conditions is possible using either coupled (Newmark-Beta) or uncoupled (exact mode superposition) integration available within module TRD1. Both the coupled and newly developed exact mode superposition methods have been used to perform transient analyses of various space systems. However, experience has shown that in most cases, significant time savings are realized when the equations of motion are integrated using the uncoupled solver instead of the coupled solver. Through the results of a real-world engineering analysis, advantages of using the exact mode superposition methodology are illustrated.

  4. Applications of Space-Filling-Curves to Cartesian Methods for CFD

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Berger, Marsha J.; Murman, Scott M.

    2003-01-01

    The proposed paper presents a variety novel uses of Space-Filling-Curves (SFCs) for Cartesian mesh methods in 0. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, most are applicable on general body-fitted meshes -both structured and unstructured. We demonstrate the use of single O(N log N) SFC-based reordering to produce single-pass (O(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The intermesh interpolation operator has many practical applications including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 512 CPUs even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on-average, within 10% of ideal even with only around 50,000 cells in each subdomain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with O(max(M,N)) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for finite-difference-based gradient design methods.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyrya, Vitaliy; Mourad, Hashem Mohamed

    We present a family of C1-continuous high-order Virtual Element Methods for Poisson-Kirchho plate bending problem. The convergence of the methods is tested on a variety of meshes including rectangular, quadrilateral, and meshes obtained by edge removal (i.e. highly irregular meshes). The convergence rates are presented for all of these tests.

  6. SUPERPOSITION OF POLYTROPES IN THE INNER HELIOSHEATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livadiotis, G., E-mail: glivadiotis@swri.edu

    2016-03-15

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density–temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log–log scale is now generalized to a concave-downward parabola that is able to describe themore » observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ∼ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.« less

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubrovsky, V. G.; Topovsky, A. V.

    New exact solutions, nonstationary and stationary, of Veselov-Novikov (VN) equation in the forms of simple nonlinear and linear superpositions of arbitrary number N of exact special solutions u{sup (n)}, n= 1, Horizontal-Ellipsis , N are constructed via Zakharov and Manakov {partial_derivative}-dressing method. Simple nonlinear superpositions are represented up to a constant by the sums of solutions u{sup (n)} and calculated by {partial_derivative}-dressing on nonzero energy level of the first auxiliary linear problem, i.e., 2D stationary Schroedinger equation. It is remarkable that in the zero energy limit simple nonlinear superpositions convert to linear ones in the form of the sums ofmore » special solutions u{sup (n)}. It is shown that the sums u=u{sup (k{sub 1})}+...+u{sup (k{sub m})}, 1 Less-Than-Or-Slanted-Equal-To k{sub 1} < k{sub 2} < Horizontal-Ellipsis < k{sub m} Less-Than-Or-Slanted-Equal-To N of arbitrary subsets of these solutions are also exact solutions of VN equation. The presented exact solutions include as superpositions of special line solitons and also superpositions of plane wave type singular periodic solutions. By construction these exact solutions represent also new exact transparent potentials of 2D stationary Schroedinger equation and can serve as model potentials for electrons in planar structures of modern electronics.« less

  8. Superposition of Polytropes in the Inner Heliosheath

    NASA Astrophysics Data System (ADS)

    Livadiotis, G.

    2016-03-01

    This paper presents a possible generalization of the equation of state and Bernoulli's integral when a superposition of polytropic processes applies in space and astrophysical plasmas. The theory of polytropic thermodynamic processes for a fixed polytropic index is extended for a superposition of polytropic indices. In general, the superposition may be described by any distribution of polytropic indices, but emphasis is placed on a Gaussian distribution. The polytropic density-temperature relation has been used in numerous analyses of space plasma data. This linear relation on a log-log scale is now generalized to a concave-downward parabola that is able to describe the observations better. The model of the Gaussian superposition of polytropes is successfully applied in the proton plasma of the inner heliosheath. The estimated mean polytropic index is near zero, indicating the dominance of isobaric thermodynamic processes in the sheath, similar to other previously published analyses. By computing Bernoulli's integral and applying its conservation along the equator of the inner heliosheath, the magnetic field in the inner heliosheath is estimated, B ˜ 2.29 ± 0.16 μG. The constructed normalized histogram of the values of the magnetic field is similar to that derived from a different method that uses the concept of large-scale quantization, bringing incredible insights to this novel theory.

  9. On sufficient statistics of least-squares superposition of vector sets.

    PubMed

    Konagurthu, Arun S; Kasarapu, Parthan; Allison, Lloyd; Collier, James H; Lesk, Arthur M

    2015-06-01

    The problem of superposition of two corresponding vector sets by minimizing their sum-of-squares error under orthogonal transformation is a fundamental task in many areas of science, notably structural molecular biology. This problem can be solved exactly using an algorithm whose time complexity grows linearly with the number of correspondences. This efficient solution has facilitated the widespread use of the superposition task, particularly in studies involving macromolecular structures. This article formally derives a set of sufficient statistics for the least-squares superposition problem. These statistics are additive. This permits a highly efficient (constant time) computation of superpositions (and sufficient statistics) of vector sets that are composed from its constituent vector sets under addition or deletion operation, where the sufficient statistics of the constituent sets are already known (that is, the constituent vector sets have been previously superposed). This results in a drastic improvement in the run time of the methods that commonly superpose vector sets under addition or deletion operations, where previously these operations were carried out ab initio (ignoring the sufficient statistics). We experimentally demonstrate the improvement our work offers in the context of protein structural alignment programs that assemble a reliable structural alignment from well-fitting (substructural) fragment pairs. A C++ library for this task is available online under an open-source license.

  10. Speckle noise reduction in digital holography by slightly rotating the object

    NASA Astrophysics Data System (ADS)

    Herrera-Ramirez, Jorge; Hincapie-Zuluaga, Diego Andrés; Garcia-Sucerquia, Jorge

    2016-12-01

    This work shows the realization of speckle reduction in the numerical reconstruction of digitally recorded holograms by the superposition of multiple slightly rotated digital holographic images of the object. The superposition of T uncorrelated holographic images reduces the contrast of the speckle noise of the image following the expected 1/√{T} law. The effect of the method on the borders of the resulting image is evaluated by quantifying the utilization of the dynamic range or the contrast between the white and black areas of a regular die. Experimental results validate the feasibility of the proposed method.

  11. Mesh Deformation Based on Fully Stressed Design: The Method and Two-Dimensional Examples

    NASA Technical Reports Server (NTRS)

    Hsu, Su-Yuen; Chang, Chau-Lyan

    2007-01-01

    Mesh deformation in response to redefined boundary geometry is a frequently encountered task in shape optimization and analysis of fluid-structure interaction. We propose a simple and concise method for deforming meshes defined with three-node triangular or four-node tetrahedral elements. The mesh deformation method is suitable for large boundary movement. The approach requires two consecutive linear elastic finite-element analyses of an isotropic continuum using a prescribed displacement at the mesh boundaries. The first analysis is performed with homogeneous elastic property and the second with inhomogeneous elastic property. The fully stressed design is employed with a vanishing Poisson s ratio and a proposed form of equivalent strain (modified Tresca equivalent strain) to calculate, from the strain result of the first analysis, the element-specific Young s modulus for the second analysis. The theoretical aspect of the proposed method, its convenient numerical implementation using a typical linear elastic finite-element code in conjunction with very minor extra coding for data processing, and results for examples of large deformation of two-dimensional meshes are presented in this paper. KEY WORDS: Mesh deformation, shape optimization, fluid-structure interaction, fully stressed design, finite-element analysis, linear elasticity, strain failure, equivalent strain, Tresca failure criterion

  12. Lattice Cleaving: A Multimaterial Tetrahedral Meshing Algorithm with Guarantees

    PubMed Central

    Bronson, Jonathan; Levine, Joshua A.; Whitaker, Ross

    2014-01-01

    We introduce a new algorithm for generating tetrahedral meshes that conform to physical boundaries in volumetric domains consisting of multiple materials. The proposed method allows for an arbitrary number of materials, produces high-quality tetrahedral meshes with upper and lower bounds on dihedral angles, and guarantees geometric fidelity. Moreover, the method is combinatoric so its implementation enables rapid mesh construction. These meshes are structured in a way that also allows grading, to reduce element counts in regions of homogeneity. Additionally, we provide proofs showing that both element quality and geometric fidelity are bounded using this approach. PMID:24356365

  13. Simulating Soft Shadows with Graphics Hardware,

    DTIC Science & Technology

    1997-01-15

    This radiance texture is analogous to the mesh of radiosity values computed in a radiosity algorithm. Unlike a radiosity algorithm, however, our...discretely. Several researchers have explored continuous visibility methods for soft shadow computation and radiosity mesh generation. With this approach...times of several seconds [9]. Most radiosity methods discretize each surface into a mesh of elements and then use discrete methods such as ray

  14. A constrained Delaunay discretization method for adaptively meshing highly discontinuous geological media

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo

    2017-12-01

    A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and smooth-quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulted mesh: the mesh is adaptive not only along fractures but also in space. The quality of elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.

  15. Dynamic mesh adaptation for front evolution using discontinuous Galerkin based weighted condition number relaxation

    DOE PAGES

    Greene, Patrick T.; Schofield, Samuel P.; Nourgaliev, Robert

    2017-01-27

    A new mesh smoothing method designed to cluster cells near a dynamically evolving interface is presented. The method is based on weighted condition number mesh relaxation with the weight function computed from a level set representation of the interface. The weight function is expressed as a Taylor series based discontinuous Galerkin projection, which makes the computation of the derivatives of the weight function needed during the condition number optimization process a trivial matter. For cases when a level set is not available, a fast method for generating a low-order level set from discrete cell-centered fields, such as a volume fractionmore » or index function, is provided. Results show that the low-order level set works equally well as the actual level set for mesh smoothing. Meshes generated for a number of interface geometries are presented, including cases with multiple level sets. Lastly, dynamic cases with moving interfaces show the new method is capable of maintaining a desired resolution near the interface with an acceptable number of relaxation iterations per time step, which demonstrates the method's potential to be used as a mesh relaxer for arbitrary Lagrangian Eulerian (ALE) methods.« less

  16. Effect of boundary representation on viscous, separated flows in a discontinuous-Galerkin Navier-Stokes solver

    NASA Astrophysics Data System (ADS)

    Nelson, Daniel A.; Jacobs, Gustaaf B.; Kopriva, David A.

    2016-08-01

    The effect of curved-boundary representation on the physics of the separated flow over a NACA 65(1)-412 airfoil is thoroughly investigated. A method is presented to approximate curved boundaries with a high-order discontinuous-Galerkin spectral element method for the solution of the Navier-Stokes equations. Multiblock quadrilateral element meshes are constructed with the grid generation software GridPro. The boundary of a NACA 65(1)-412 airfoil, defined by a cubic natural spline, is piecewise-approximated by isoparametric polynomial interpolants that represent the edges of boundary-fitted elements. Direct numerical simulation of the airfoil is performed on a coarse mesh and fine mesh with polynomial orders ranging from four to twelve. The accuracy of the curve fitting is investigated by comparing the flows computed on curved-sided meshes with those given by straight-sided meshes. Straight-sided meshes yield irregular wakes, whereas curved-sided meshes produce a regular Karman street wake. Straight-sided meshes also produce lower lift and higher viscous drag as compared with curved-sided meshes. When the mesh is refined by reducing the sizes of the elements, the lift decrease and viscous drag increase are less pronounced. The differences in the aerodynamic performance between the straight-sided meshes and the curved-sided meshes are concluded to be the result of artificial surface roughness introduced by the piecewise-linear boundary approximation provided by the straight-sided meshes.

  17. Unstructured and adaptive mesh generation for high Reynolds number viscous flows

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1991-01-01

    A method for generating and adaptively refining a highly stretched unstructured mesh suitable for the computation of high-Reynolds-number viscous flows about arbitrary two-dimensional geometries was developed. The method is based on the Delaunay triangulation of a predetermined set of points and employs a local mapping in order to achieve the high stretching rates required in the boundary-layer and wake regions. The initial mesh-point distribution is determined in a geometry-adaptive manner which clusters points in regions of high curvature and sharp corners. Adaptive mesh refinement is achieved by adding new points in regions of large flow gradients, and locally retriangulating; thus, obviating the need for global mesh regeneration. Initial and adapted meshes about complex multi-element airfoil geometries are shown and compressible flow solutions are computed on these meshes.

  18. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE PAGES

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas; ...

    2018-01-30

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on anmore » embedded domain with a structured mesh, and then, the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·109 unknowns.« less

  19. Scalable hierarchical PDE sampler for generating spatially correlated random fields using nonmatching meshes: Scalable hierarchical PDE sampler using nonmatching meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Zulian, Patrick; Benson, Thomas

    This work describes a domain embedding technique between two nonmatching meshes used for generating realizations of spatially correlated random fields with applications to large-scale sampling-based uncertainty quantification. The goal is to apply the multilevel Monte Carlo (MLMC) method for the quantification of output uncertainties of PDEs with random input coefficients on general and unstructured computational domains. We propose a highly scalable, hierarchical sampling method to generate realizations of a Gaussian random field on a given unstructured mesh by solving a reaction–diffusion PDE with a stochastic right-hand side. The stochastic PDE is discretized using the mixed finite element method on anmore » embedded domain with a structured mesh, and then, the solution is projected onto the unstructured mesh. This work describes implementation details on how to efficiently transfer data from the structured and unstructured meshes at coarse levels, assuming that this can be done efficiently on the finest level. We investigate the efficiency and parallel scalability of the technique for the scalable generation of Gaussian random fields in three dimensions. An application of the MLMC method is presented for quantifying uncertainties of subsurface flow problems. Here, we demonstrate the scalability of the sampling method with nonmatching mesh embedding, coupled with a parallel forward model problem solver, for large-scale 3D MLMC simulations with up to 1.9·109 unknowns.« less

  20. A multi-dimensional high-order DG-ALE method based on gas-kinetic theory with application to oscillating bodies

    NASA Astrophysics Data System (ADS)

    Ren, Xiaodong; Xu, Kun; Shyy, Wei

    2016-07-01

    This paper presents a multi-dimensional high-order discontinuous Galerkin (DG) method in an arbitrary Lagrangian-Eulerian (ALE) formulation to simulate flows over variable domains with moving and deforming meshes. It is an extension of the gas-kinetic DG method proposed by the authors for static domains (X. Ren et al., 2015 [22]). A moving mesh gas kinetic DG method is proposed for both inviscid and viscous flow computations. A flux integration method across a translating and deforming cell interface has been constructed. Differently from the previous ALE-type gas kinetic method with piecewise constant mesh velocity at each cell interface within each time step, the mesh velocity variation inside a cell and the mesh moving and rotating at a cell interface have been accounted for in the finite element framework. As a result, the current scheme is applicable for any kind of mesh movement, such as translation, rotation, and deformation. The accuracy and robustness of the scheme have been improved significantly in the oscillating airfoil calculations. All computations are conducted in a physical domain rather than in a reference domain, and the basis functions move with the grid movement. Therefore, the numerical scheme can preserve the uniform flow automatically, and satisfy the geometric conservation law (GCL). The numerical accuracy can be maintained even for a largely moving and deforming mesh. Several test cases are presented to demonstrate the performance of the gas-kinetic DG-ALE method.

  1. Contact stresses in gear teeth: A new method of analysis

    NASA Technical Reports Server (NTRS)

    Somprakit, Paisan; Huston, Ronald L.; Oswald, Fred B.

    1991-01-01

    A new, innovative procedure called point load superposition for determining the contact stresses in mating gear teeth. It is believed that this procedure will greatly extend both the range of applicability and the accuracy of gear contact stress analysis. Point load superposition is based upon fundamental solutions from the theory of elasticity. It is an iterative numerical procedure which has distinct advantages over the classical Hertz method, the finite element method, and over existing applications with the boundary element method. Specifically, friction and sliding effects, which are either excluded from or difficult to study with the classical methods, are routinely handled with the new procedure. Presented here are the basic theory and the algorithms. Several examples are given. Results are consistent with those of the classical theories. Applications to spur gears are discussed.

  2. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.

  3. Mesh refinement strategy for optimal control problems

    NASA Astrophysics Data System (ADS)

    Paiva, L. T.; Fontes, F. A. C. C.

    2013-10-01

    Direct methods are becoming the most used technique to solve nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way, involves adaptive mesh refinement. In this case, the mesh nodes have non equidistant spacing which allow a non uniform nodes collocation. In the method presented in this paper, a time mesh refinement strategy based on the local error is developed. After computing a solution in a coarse mesh, the local error is evaluated, which gives information about the subintervals of time domain where refinement is needed. This procedure is repeated until the local error reaches a user-specified threshold. The technique is applied to solve the car-like vehicle problem aiming minimum consumption. The approach developed in this paper leads to results with greater accuracy and yet with lower overall computational time as compared to using a time meshes having equidistant spacing.

  4. Robust moving mesh algorithms for hybrid stretched meshes: Application to moving boundaries problems

    NASA Astrophysics Data System (ADS)

    Landry, Jonathan; Soulaïmani, Azzeddine; Luke, Edward; Ben Haj Ali, Amine

    2016-12-01

    A robust Mesh-Mover Algorithm (MMA) approach is designed to adapt meshes of moving boundaries problems. A new methodology is developed from the best combination of well-known algorithms in order to preserve the quality of initial meshes. In most situations, MMAs distribute mesh deformation while preserving a good mesh quality. However, invalid meshes are generated when the motion is complex and/or involves multiple bodies. After studying a few MMA limitations, we propose the following approach: use the Inverse Distance Weighting (IDW) function to produce the displacement field, then apply the Geometric Element Transformation Method (GETMe) smoothing algorithms to improve the resulting mesh quality, and use an untangler to revert negative elements. The proposed approach has been proven efficient to adapt meshes for various realistic aerodynamic motions: a symmetric wing that has suffered large tip bending and twisting and the high-lift components of a swept wing that has moved to different flight stages. Finally, the fluid flow problem has been solved on meshes that have moved and they have produced results close to experimental ones. However, for situations where moving boundaries are too close to each other, more improvements need to be made or other approaches should be taken, such as an overset grid method.

  5. The Role of Chronic Mesh Infection in Delayed-Onset Vaginal Mesh Complications or Recurrent Urinary Tract Infections: Results From Explanted Mesh Cultures.

    PubMed

    Mellano, Erin M; Nakamura, Leah Y; Choi, Judy M; Kang, Diana C; Grisales, Tamara; Raz, Shlomo; Rodriguez, Larissa V

    2016-01-01

    Vaginal mesh complications necessitating excision are increasingly prevalent. We aim to study whether subclinical chronically infected mesh contributes to the development of delayed-onset mesh complications or recurrent urinary tract infections (UTIs). Women undergoing mesh removal from August 2013 through May 2014 were identified by surgical code for vaginal mesh removal. Only women undergoing removal of anti-incontinence mesh were included. Exclusion criteria included any women undergoing simultaneous prolapse mesh removal. We abstracted preoperative and postoperative information from the medical record and compared mesh culture results from patients with and without mesh extrusion, de novo recurrent UTIs, and delayed-onset pain. One hundred seven women with only anti-incontinence mesh removed were included in the analysis. Onset of complications after mesh placement was within the first 6 months in 70 (65%) of 107 and delayed (≥6 months) in 37 (35%) of 107. A positive culture from the explanted mesh was obtained from 82 (77%) of 107 patients, and 40 (37%) of 107 were positive with potential pathogens. There were no significant differences in culture results when comparing patients with delayed-onset versus immediate pain, extrusion with no extrusion, and de novo recurrent UTIs with no infections. In this large cohort of patients with mesh removed for a diverse array of complications, cultures of the explanted vaginal mesh demonstrate frequent low-density bacterial colonization. We found no differences in culture results from women with delayed-onset pain versus acute pain, vaginal mesh extrusions versus no extrusions, or recurrent UTIs using standard culture methods. Chronic prosthetic infections in other areas of medicine are associated with bacterial biofilms, which are resistant to typical culture techniques. Further studies using culture-independent methods are needed to investigate the potential role of chronic bacterial infections in delayed vaginal mesh complications.

  6. Applications of Space-Filling-Curves to Cartesian Methods for CFD

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Murman, S. M.; Berger, M. J.

    2003-01-01

    This paper presents a variety of novel uses of space-filling-curves (SFCs) for Cartesian mesh methods in CFD. While these techniques will be demonstrated using non-body-fitted Cartesian meshes, many are applicable on general body-fitted meshes, both structured and unstructured. We demonstrate the use of a single Θ(N log N) SFC-based reordering to produce single-pass (Θ(N)) algorithms for mesh partitioning, multigrid coarsening, and inter-mesh interpolation. The inter-mesh interpolation operator has many practical applications, including warm starts on modified geometry, or as an inter-grid transfer operator on remeshed regions in moving-body simulations. Exploiting the compact construction of these operators, we further show that these algorithms are highly amenable to parallelization. Examples using the SFC-based mesh partitioner show nearly linear speedup to 640 CPUs, even when using multigrid as a smoother. Partition statistics are presented showing that the SFC partitions are, on average, within 15% of ideal even with only around 50,000 cells in each sub-domain. The inter-mesh interpolation operator also has linear asymptotic complexity and can be used to map a solution with N unknowns to another mesh with M unknowns with Θ(M + N) operations. This capability is demonstrated both on moving-body simulations and in mapping solutions to perturbed meshes for control surface deflection or finite-difference-based gradient design methods.
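
    For readers unfamiliar with SFC-based partitioning, the following Python sketch orders 2-D cell centroids along a Morton (Z-order) curve and cuts the ordering into equal contiguous chunks; the helper names, the 2-D setting, and the equal-size splitting rule are simplifying assumptions, not the partitioner described in the paper.

        import numpy as np

        def morton2d(ix, iy, bits=16):
            """Interleave the bits of integer cell coordinates (ix, iy) into a Z-order key."""
            key = 0
            for b in range(bits):
                key |= ((ix >> b) & 1) << (2 * b)
                key |= ((iy >> b) & 1) << (2 * b + 1)
            return key

        def partition_by_sfc(centroids, n_parts, cells_per_axis=1024):
            """Order cells along the Morton curve, then cut the 1-D ordering
            into contiguous chunks of nearly equal size."""
            lo, hi = centroids.min(axis=0), centroids.max(axis=0)
            scaled = ((centroids - lo) / (hi - lo + 1e-30) * (cells_per_axis - 1)).astype(int)
            keys = np.array([morton2d(x, y) for x, y in scaled])
            order = np.argsort(keys)                      # Θ(N log N) reordering
            parts = np.empty(len(centroids), dtype=int)
            for p, chunk in enumerate(np.array_split(order, n_parts)):
                parts[chunk] = p                          # single pass over the ordering
            return parts

        centroids = np.random.rand(1000, 2)
        print(np.bincount(partition_by_sfc(centroids, n_parts=8)))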

  7. Design of an essentially non-oscillatory reconstruction procedure on finite-element type meshes

    NASA Technical Reports Server (NTRS)

    Abgrall, R.

    1991-01-01

    An essentially non-oscillatory reconstruction for functions defined on finite-element type meshes was designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes, and the reconstruction of a function from its averages in the control volumes surrounding the nodes of the mesh. Concerning the first problem, we have studied the behavior of the highest-degree coefficients of the Lagrange interpolation of functions that may admit discontinuities along locally regular curves. This enables us to choose the best stencil for the interpolation. The choice of the smallest possible number of stencils is also addressed. Concerning the reconstruction problem, because of the very nature of the mesh, the only method that may work is the so-called reconstruction-via-deconvolution method. As we show, it is well suited only for regular meshes, but we also show how to overcome this difficulty. The global method has the expected order of accuracy but is conservative only up to a high-order quadrature formula. Some numerical examples are given which demonstrate the efficiency of the method.

  8. Moving Particles Through a Finite Element Mesh

    PubMed Central

    Peskin, Adele P.; Hardin, Gary R.

    1998-01-01

    We present a new numerical technique for modeling the flow around multiple objects moving in a fluid. The method tracks the dynamic interaction between each particle and the fluid. The movements of the fluid and the object are directly coupled. A background mesh is designed to fit the geometry of the overall domain. The mesh is designed independently of the presence of the particles except in terms of how fine it must be to track particles of a given size. Each particle is represented by a geometric figure that describes its boundary. This figure overlies the mesh. Nodes are added to the mesh where the particle boundaries intersect the background mesh, increasing the number of nodes contained in each element whose boundary is intersected. These additional nodes are then used to describe and track the particle in the numerical scheme. Appropriate element shape functions are defined to approximate the solution on the elements with extra nodes. The particles are moved through the mesh by moving only the overlying nodes defining the particles. The regular finite element grid remains unchanged. In this method, the mesh does not distort as the particles move. Instead, only the placement of particle-defining nodes changes as the particles move. Element shape functions are updated as the nodes move through the elements. This method is especially suited for models of moderate numbers of moderate-size particles, where the details of the fluid-particle coupling are important. Both the complications of creating finite element meshes around appreciable numbers of particles, and extensive remeshing upon movement of the particles are simplified in this method. PMID:28009377

  9. New Software Developments for Quality Mesh Generation and Optimization from Biomedical Imaging Data

    PubMed Central

    Yu, Zeyun; Wang, Jun; Gao, Zhanheng; Xu, Ming; Hoshijima, Masahiko

    2013-01-01

    In this paper we present a new software toolkit for generating and optimizing surface and volumetric meshes from three-dimensional (3D) biomedical imaging data, targeted at image-based finite element analysis of some biomedical activities in a single material domain. Our toolkit includes a series of geometric processing algorithms including surface re-meshing and quality-guaranteed tetrahedral mesh generation and optimization. All methods described have been encapsulated into a user-friendly graphical interface for easy manipulation and informative visualization of biomedical images and mesh models. Numerous examples are presented to demonstrate the effectiveness and efficiency of the described methods and toolkit. PMID:24252469

  10. Mesh quality oriented 3D geometric vascular modeling based on parallel transport frame.

    PubMed

    Guo, Jixiang; Li, Shun; Chui, Yim Pan; Qin, Jing; Heng, Pheng Ann

    2013-08-01

    While a number of methods have been proposed to reconstruct geometrically and topologically accurate 3D vascular models from medical images, little attention has been paid to constantly maintaining high mesh quality of these models during the reconstruction procedure, which is essential for many subsequent applications such as simulation-based surgical training and planning. We propose a set of methods to bridge this gap based on the parallel transport frame. An improved bifurcation modeling method and two novel trifurcation modeling methods are developed based on 3D Bézier curve segments in order to ensure continuous surface transitions at furcations. In addition, a frame blending scheme is implemented to solve the twisting problem caused by frame mismatch of two successive furcations. A curvature-based adaptive sampling scheme, combined with a mesh quality guided frame tilting algorithm, is developed to construct an evenly distributed, non-concave and self-intersection-free surface mesh for vessels with distinct radii and high curvature. Extensive experiments demonstrate that our methodology can generate vascular models with better mesh quality than previous methods in terms of surface mesh quality criteria. Copyright © 2013 Elsevier Ltd. All rights reserved.
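
    To illustrate the basic idea of a parallel transport frame, the Python sketch below propagates an initial normal along a polyline centerline by rotating it with the minimal rotation that maps each tangent onto the next (Rodrigues' formula); the seeding of the first normal and the helix test curve are illustrative assumptions, not the vessel-modeling pipeline of the paper.

        import numpy as np

        def parallel_transport_frames(points):
            """Propagate an initial normal along a polyline by rotating it with the
            minimal rotation mapping each tangent onto the next (parallel transport)."""
            tangents = np.diff(points, axis=0)
            tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
            # pick any vector not parallel to the first tangent to seed the frame
            seed = np.array([0.0, 0.0, 1.0])
            if abs(np.dot(seed, tangents[0])) > 0.9:
                seed = np.array([0.0, 1.0, 0.0])
            normal = np.cross(tangents[0], seed)
            normal /= np.linalg.norm(normal)
            frames = [(tangents[0], normal, np.cross(tangents[0], normal))]
            for t_prev, t_next in zip(tangents[:-1], tangents[1:]):
                axis = np.cross(t_prev, t_next)
                s, c = np.linalg.norm(axis), np.dot(t_prev, t_next)
                if s > 1e-12:                  # Rodrigues rotation of the normal by the bend angle
                    axis /= s
                    normal = (normal * c + np.cross(axis, normal) * s
                              + axis * np.dot(axis, normal) * (1.0 - c))
                frames.append((t_next, normal, np.cross(t_next, normal)))
            return frames

        helix = np.array([[np.cos(t), np.sin(t), 0.1 * t] for t in np.linspace(0.0, 6.28, 50)])
        for t, n, b in parallel_transport_frames(helix)[:3]:
            print(np.round(t, 3), np.round(n, 3), np.round(b, 3))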

  11. A novel method of the image processing on irregular triangular meshes

    NASA Astrophysics Data System (ADS)

    Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta

    2018-04-01

    The paper describes a novel image processing method based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and a least-mean-square linear approximation is used for the basic interpolation within each triangle. Triangular numbers are used to simplify the use of local (barycentric) coordinates for further analysis: each triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. plain "for" or "while" loops for access to the pixels. Moreover, the proposed representation allows a discrete cosine transform of the simple "rectangular" symmetric form to be used without additional pixel reordering (as is required for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of applying the method are presented. It is shown that the advantage of the proposed method is the combination of the flexibility of image-adaptive irregular meshes with simple pixel indexing in local triangular coordinates and the use of common forms of discrete transforms on triangular meshes. The described method is proposed for image compression, pattern recognition, image quality improvement, and image search and indexing. It may also be used as part of video coding (intra-frame or inter-frame coding, motion detection).
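
    The pixel indexing described above rests on barycentric coordinates. The Python sketch below shows the basic operation of visiting the pixels of one triangle through their barycentric coordinates, computed with the standard dot-product formulation; the function names and the simple bounding-box loop are illustrative assumptions, not the paper's triangular-number indexing scheme.

        import numpy as np

        def barycentric(p, a, b, c):
            """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
            v0, v1, v2 = b - a, c - a, p - a
            d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
            d20, d21 = v2 @ v0, v2 @ v1
            denom = d00 * d11 - d01 * d01
            v = (d11 * d20 - d01 * d21) / denom
            w = (d00 * d21 - d01 * d20) / denom
            return 1.0 - v - w, v, w

        def pixels_in_triangle(a, b, c):
            """Loop over the bounding box and keep pixels whose barycentric
            coordinates are all non-negative (i.e. inside the triangle)."""
            xs = range(int(min(a[0], b[0], c[0])), int(max(a[0], b[0], c[0])) + 1)
            ys = range(int(min(a[1], b[1], c[1])), int(max(a[1], b[1], c[1])) + 1)
            inside = []
            for x in xs:
                for y in ys:
                    u, v, w = barycentric(np.array([x, y], float), a, b, c)
                    if min(u, v, w) >= 0.0:
                        inside.append((x, y, (u, v, w)))
            return inside

        tri = [np.array([0.0, 0.0]), np.array([8.0, 0.0]), np.array([0.0, 8.0])]
        print(len(pixels_in_triangle(*tri)))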

  12. Simultaneous Classification of Oranges and Apples Using Grover's and Ventura's Algorithms in a Two-Qubit System

    NASA Astrophysics Data System (ADS)

    Singh, Manu Pratap; Radhey, Kishori; Kumar, Sandeep

    2017-08-01

    In the present paper, simultaneous classification of Oranges and Apples has been carried out using both Grover's iterative algorithm (Grover 1996) and Ventura's model (Ventura and Martinez, Inf. Sci. 124, 273-296, 2000), taking different superpositions: a two-pattern start state containing both Orange and Apple, a one-pattern start state containing Apple as the search state, and another one-pattern start state containing Orange as the search state. It is shown that the exclusion superposition is the most suitable two-pattern search state for simultaneous classification of the patterns associated with Apples and Oranges, and that the phase-invariant superposition is the best choice for the respective search states based on one-pattern start states, in both Grover's and Ventura's methods of pattern classification.

  13. Third-order accurate conservative method on unstructured meshes for gasdynamic simulations

    NASA Astrophysics Data System (ADS)

    Shirobokov, D. A.

    2017-04-01

    A third-order accurate finite-volume method on unstructured meshes is proposed for solving viscous gasdynamic problems. The method is described as applied to the advection equation. The accuracy of the method is verified by computing the evolution of a vortex on meshes of various degrees of detail with variously shaped cells. Additionally, unsteady flows around a cylinder and a symmetric airfoil are computed. The numerical results are presented in the form of plots and tables.

  14. An adaptive mesh refinement-multiphase lattice Boltzmann flux solver for simulation of complex binary fluid flows

    NASA Astrophysics Data System (ADS)

    Yuan, H. Z.; Wang, Y.; Shu, C.

    2017-12-01

    This paper presents an adaptive mesh refinement-multiphase lattice Boltzmann flux solver (AMR-MLBFS) for effective simulation of complex binary fluid flows at large density ratios. In this method, an AMR algorithm is proposed by introducing a simple indicator on the root block for grid refinement and two possible statuses for each block. Unlike available block-structured AMR methods, which refine their mesh by spawning or removing four child blocks simultaneously, the present method is able to refine its mesh locally by spawning or removing one to four child blocks independently when the refinement indicator is triggered. As a result, the AMR mesh used in this work can be more focused on the flow region near the phase interface and its size is further reduced. In each block of mesh, the recently proposed MLBFS is applied for the solution of the flow field and the level-set method is used for capturing the fluid interface. As compared with existing AMR-lattice Boltzmann models, the present method avoids both spatial and temporal interpolations of density distribution functions so that converged solutions on different AMR meshes and uniform grids can be obtained. The proposed method has been successfully validated by simulating a static bubble immersed in another fluid, a falling droplet, instabilities of two-layered fluids, a bubble rising in a box, and a droplet splashing on a thin film with large density ratios and high Reynolds numbers. Good agreement with the theoretical solution, the uniform-grid result, and/or the published data has been achieved. Numerical results also show its effectiveness in saving computational time and virtual memory as compared with computations on uniform meshes.

  15. Reference Computational Meshing Strategy for Computational Fluid Dynamics Simulation of Departure from Nucleate Boiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pointer, William David

    The objective of this effort is to establish a strategy and process for generation of suitable computational mesh for computational fluid dynamics simulations of departure from nucleate boiling (DNB) in a 5 by 5 fuel rod assembly held in place by PWR mixing vane spacer grids. This mesh generation process will support ongoing efforts to develop, demonstrate and validate advanced multi-phase computational fluid dynamics methods that enable more robust identification of dryout conditions and DNB occurrence. Building upon prior efforts and experience, multiple computational meshes were developed using the native mesh generation capabilities of the commercial CFD code STAR-CCM+. These meshes were used to simulate two test cases from the Westinghouse 5 by 5 rod bundle facility. The sensitivity of predicted quantities of interest to the mesh resolution was then established using two evaluation methods, the Grid Convergence Index (GCI) method and the Least Squares method. This evaluation suggests that the Least Squares method can reliably establish the uncertainty associated with local parameters such as vector velocity components at a point in the domain or surface-averaged quantities such as outlet velocity magnitude. However, neither method is suitable for characterization of uncertainty in global extrema such as peak fuel surface temperature, primarily because such parameters are not necessarily associated with a fixed point in space. This shortcoming is significant because the current-generation algorithm for identification of DNB event conditions relies on identification of such global extrema. Ongoing efforts to identify DNB based on local surface conditions will address this challenge.
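
    For reference, the Grid Convergence Index evaluation mentioned above can be summarized in a few lines of Python: the observed order of accuracy and a Roache-style GCI are computed from solutions on three systematically refined meshes. The numerical values and the safety factor of 1.25 are illustrative assumptions, not data from the study.

        import math

        def grid_convergence_index(f_fine, f_med, f_coarse, r=2.0, safety=1.25):
            """Roache-style GCI estimate from three systematically refined meshes
            with a constant refinement ratio r (a textbook sketch, not the report's code)."""
            # observed order of accuracy from the three solutions
            p = math.log(abs((f_coarse - f_med) / (f_med - f_fine))) / math.log(r)
            rel_err = abs((f_med - f_fine) / f_fine)        # relative change fine vs. medium
            gci_fine = safety * rel_err / (r**p - 1.0)      # uncertainty band on the fine mesh
            return p, gci_fine

        p, gci = grid_convergence_index(f_fine=1.002, f_med=1.010, f_coarse=1.040)
        print(f"observed order p = {p:.2f}, GCI_fine = {100 * gci:.2f} %")

    The quantity fed into such a formula must be tied to a fixed location or a well-defined integral, which is exactly why global extrema such as peak fuel surface temperature are problematic.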

  16. An eFTD-VP framework for efficiently generating patient-specific anatomically detailed facial soft tissue FE mesh for craniomaxillofacial surgery simulation

    PubMed Central

    Zhang, Xiaoyan; Kim, Daeseung; Shen, Shunyao; Yuan, Peng; Liu, Siting; Tang, Zhen; Zhang, Guangming; Zhou, Xiaobo; Gateno, Jaime

    2017-01-01

    Accurate surgical planning and prediction of craniomaxillofacial surgery outcome requires simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state of the art of model generation is not appropriate for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. Conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for patients depends on that of the template model and cannot be adjusted to conduct mesh density sensitivity analyses. In this study, we propose a new framework of patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving the accuracy of anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parametrization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by a thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed by using volumetric parameterization. The user can control the resolution of the hexahedron mesh to best reflect clinicians' needs. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE meshes showed high surface matching accuracy, element quality, and internal structure matching accuracy. They can be directly and effectively used for clinical simulation of facial soft tissue change. PMID:29027022

  17. An eFTD-VP framework for efficiently generating patient-specific anatomically detailed facial soft tissue FE mesh for craniomaxillofacial surgery simulation.

    PubMed

    Zhang, Xiaoyan; Kim, Daeseung; Shen, Shunyao; Yuan, Peng; Liu, Siting; Tang, Zhen; Zhang, Guangming; Zhou, Xiaobo; Gateno, Jaime; Liebschner, Michael A K; Xia, James J

    2018-04-01

    Accurate surgical planning and prediction of craniomaxillofacial surgery outcome requires simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state of the art of model generation is not appropriate for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. Conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for patients depends on that of the template model and cannot be adjusted to conduct mesh density sensitivity analyses. In this study, we propose a new framework of patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving the accuracy of anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parametrization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by a thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed by using volumetric parameterization. The user can control the resolution of the hexahedron mesh to best reflect clinicians' needs. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE meshes showed high surface matching accuracy, element quality, and internal structure matching accuracy. They can be directly and effectively used for clinical simulation of facial soft tissue change.

  18. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem for the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.
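
    The metric description of element size used above has a simple computational core: edge lengths are measured in the Riemannian metric rather than in Euclidean space, and a mesh is considered well adapted when every edge has near-unit metric length. The Python sketch below evaluates this metric length; the diagonal example metric and target sizes are illustrative assumptions, not the paper's optimization loop.

        import numpy as np

        def metric_edge_length(x1, x2, M):
            """Length of edge (x1, x2) measured in the Riemannian metric M:
            L_M = sqrt(e^T M e), so an anisotropic M stretches or shrinks directions."""
            e = np.asarray(x2) - np.asarray(x1)
            return float(np.sqrt(e @ M @ e))

        # a metric requesting size 0.1 along x and size 1.0 along y (M = diag(1/h_i^2))
        M = np.diag([1.0 / 0.1**2, 1.0 / 1.0**2])
        print(metric_edge_length([0.0, 0.0], [0.1, 0.0], M))   # ~1.0: edge matches the target size
        print(metric_edge_length([0.0, 0.0], [0.0, 0.1], M))   # ~0.1: edge could be ~10x longer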

  19. Unstructured mesh methods for CFD

    NASA Technical Reports Server (NTRS)

    Peraire, J.; Morgan, K.; Peiro, J.

    1990-01-01

    Mesh generation methods for Computational Fluid Dynamics (CFD) are outlined. Geometric modeling is discussed. An advancing front method is described. Flow past a two-engine Falcon aeroplane is studied. An algorithm and associated data structure called the alternating digital tree, which efficiently solves the geometric searching problem, is described. The computation of an initial approximation to the steady-state solution of a given problem is described. Mesh generation for transient flows is described.

  20. Analysis and computation of a least-squares method for consistent mesh tying

    DOE PAGES

    Day, David; Bochev, Pavel

    2007-07-10

    In the finite element method, a standard approach to mesh tying is to apply Lagrange multipliers. If the interface is curved, however, discretization generally leads to adjoining surfaces that do not coincide spatially. Straightforward Lagrange multiplier methods lead to discrete formulations failing a first-order patch test [T.A. Laursen, M.W. Heinstein, Consistent mesh-tying methods for topologically distinct discretized surfaces in non-linear solid mechanics, Internat. J. Numer. Methods Eng. 57 (2003) 1197–1242]. This paper presents a theoretical and computational study of a least-squares method for mesh tying [P. Bochev, D.M. Day, A least-squares method for consistent mesh tying, Internat. J. Numer. Anal. Modeling 4 (2007) 342–352], applied to the partial differential equation -∇²φ + αφ = f. We prove optimal convergence rates for domains represented as overlapping subdomains and show that the least-squares method passes a patch test of the order of the finite element space by construction. To apply the method to subdomain configurations with gaps and overlaps, we use interface perturbations to eliminate the gaps. Finally, theoretical error estimates are illustrated by numerical experiments.

  1. Cell Adhesion Minimization by a Novel Mesh Culture Method Mechanically Directs Trophoblast Differentiation and Self-Assembly Organization of Human Pluripotent Stem Cells.

    PubMed

    Okeyo, Kennedy Omondi; Kurosawa, Osamu; Yamazaki, Satoshi; Oana, Hidehiro; Kotera, Hidetoshi; Nakauchi, Hiromitsu; Washizu, Masao

    2015-10-01

    Mechanical methods for inducing differentiation and directing lineage specification will be instrumental in the application of pluripotent stem cells. Here, we demonstrate that minimization of cell-substrate adhesion can initiate and direct the differentiation of human pluripotent stem cells (hiPSCs) into cyst-forming trophoblast lineage cells (TLCs) without stimulation with cytokines or small molecules. To precisely control cell-substrate adhesion area, we developed a novel culture method where cells are cultured on microstructured mesh sheets suspended in a culture medium such that cells on mesh are completely out of contact with the culture dish. We used microfabricated mesh sheets that consisted of open meshes (100∼200 μm in pitch) with narrow mesh strands (3-5 μm in width) to provide support for initial cell attachment and growth. We demonstrate that minimization of cell adhesion area achieved by this culture method can trigger a sequence of morphogenetic transformations that begin with individual hiPSCs attached on the mesh strands proliferating to form cell sheets by self-assembly organization and ultimately differentiating after 10-15 days of mesh culture to generate spherical cysts that secreted human chorionic gonadotropin (hCG) hormone and expressed caudal-related homeobox 2 factor (CDX2), a specific marker of trophoblast lineage. Thus, this study demonstrates a simple and direct mechanical approach to induce trophoblast differentiation and generate cysts for application in the study of early human embryogenesis and drug development and screening.

  2. Application of the discrete generalized multigroup method to ultra-fine energy mesh in infinite medium calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gibson, N. A.; Forget, B.

    2012-07-01

    The Discrete Generalized Multigroup (DGM) method uses discrete Legendre orthogonal polynomials to expand the energy dependence of the multigroup neutron transport equation. This allows a solution on a fine energy mesh to be approximated for a cost comparable to a solution on a coarse energy mesh. The DGM method is applied to an ultra-fine energy mesh (14,767 groups) to avoid using self-shielding methodologies without introducing the cost usually associated with such energy discretization. Results show DGM to converge to the reference ultra-fine solution after a small number of recondensation steps for multiple infinite medium compositions. (authors)
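
    The DGM expansion described above can be illustrated with a small Python sketch: orthonormal discrete polynomials are built on the fine-group indices of one coarse group (here via a QR factorization of a Vandermonde matrix, a stand-in that spans the same space as discrete Legendre polynomials), the fine-group flux is projected onto a few low-order moments, and the truncated reconstruction is compared with the original. The toy flux shape and the expansion orders are illustrative assumptions, not the 14,767-group problem of the paper.

        import numpy as np

        def discrete_orthogonal_polys(n_points, order):
            """Orthonormal discrete polynomials on n_points grid indices (via QR of a
            Vandermonde matrix), used here in place of discrete Legendre polynomials."""
            x = np.arange(n_points, dtype=float)
            V = np.vander(x, order + 1, increasing=True)   # columns 1, x, x^2, ...
            Q, _ = np.linalg.qr(V)                         # orthonormal in the discrete inner product
            return Q                                       # shape (n_points, order + 1)

        def expand_and_truncate(fine_group_flux, order):
            """Project the fine-group flux of one coarse group onto low-order moments
            and reconstruct it; higher order recovers more fine-group structure."""
            P = discrete_orthogonal_polys(len(fine_group_flux), order)
            moments = P.T @ fine_group_flux
            return P @ moments

        flux = np.exp(-0.05 * np.arange(40)) * (1.0 + 0.2 * np.sin(np.arange(40)))
        for order in (0, 3, 7):
            err = np.linalg.norm(flux - expand_and_truncate(flux, order)) / np.linalg.norm(flux)
            print(f"order {order}: relative reconstruction error {err:.3f}")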

  3. Features of the photometry of the superposition of coherent vector electromagnetic waves

    NASA Astrophysics Data System (ADS)

    Sakhnovskyj, Mykhajlo Yu.; Tymochko, Bogdan M.; Rudeichuk, Volodymyr M.

    2018-01-01

    In this paper we propose a general approach to calculating the intensity and polarization fields formed by the superposition of arbitrary coherent vector beams at points of a given reference plane. We also propose a method for measuring the photometric parameters of the field formed in the neighborhood of an arbitrary point of the analysis plane: the irradiance in the vicinity of the given point is minimized (a zero-amplitude method) by superimposing on it a reference wave with controlled intensity, polarization state, phase, and angle of incidence.

  4. Impact of chemical plant start-up emissions on ambient ozone concentration

    NASA Astrophysics Data System (ADS)

    Ge, Sijie; Wang, Sujing; Xu, Qiang; Ho, Thomas

    2017-09-01

    Flare emissions, especially start-up flare emissions, during chemical plant operations generate large amounts of ozone precursors that may cause highly localized and transient ground-level ozone increments. Such an adverse ozone impact could be aggravated by the synergies of multiple plant start-ups in an industrial zone. In this paper, a systematic study on ozone increment superposition due to chemical plant start-up emissions has been performed. It employs dynamic flaring profiles of two olefin plants' start-ups to investigate the superposition of the regional 1-hr ozone increment. It also summarizes the superposition trend by manipulating the starting time (00:00-10:00) of plant start-up operations and the plant distance (4-32 km). The study indicates that the ozone increment induced by simultaneous start-up emissions from multiple chemical plants generally does not follow the linear superposition of the ozone increments induced by individual plant start-ups. Meanwhile, the trend of such nonlinear superposition related to the temporal (starting time and operating hours of plant start-ups) and spatial (plant distance) factors is also disclosed. This paper couples dynamic simulations of chemical plant start-up operations with air-quality modeling and statistical methods to examine the regional ozone impact. It could be helpful as technical decision support for cost-effective air-quality and industrial flare emission controls.

  5. 3D level set methods for evolving fronts on tetrahedral meshes with adaptive mesh refinement

    DOE PAGES

    Morgan, Nathaniel Ray; Waltz, Jacob I.

    2017-03-02

    The level set method is commonly used to model dynamically evolving fronts and interfaces. In this work, we present new methods for evolving fronts with a specified velocity field or in the surface normal direction on 3D unstructured tetrahedral meshes with adaptive mesh refinement (AMR). The level set field is located at the nodes of the tetrahedral cells and is evolved using new upwind discretizations of Hamilton–Jacobi equations combined with a Runge–Kutta method for temporal integration. The level set field is periodically reinitialized to a signed distance function using an iterative approach with a new upwind gradient. We discuss the details of these level set and reinitialization methods. Results from a range of numerical test problems are presented.
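
    As a simplified picture of the reinitialization step, the Python sketch below pseudo-time steps a 1-D level set field toward a signed distance function using a Godunov upwind gradient; the structured 1-D grid, the smoothed sign function, and the step counts are illustrative assumptions, not the paper's node-centered tetrahedral scheme.

        import numpy as np

        def reinitialize_1d(phi0, dx, n_steps=200, dtau=None):
            """Drive a 1-D level set field toward a signed distance function by
            pseudo-time stepping  phi_tau = sign(phi0) * (1 - |grad phi|)  with a
            Godunov upwind gradient."""
            phi = phi0.copy()
            sgn = phi0 / np.sqrt(phi0**2 + dx**2)          # smoothed sign function
            dtau = dtau or 0.5 * dx
            for _ in range(n_steps):
                dm = np.diff(phi, prepend=phi[:1]) / dx    # backward differences
                dp = np.diff(phi, append=phi[-1:]) / dx    # forward differences
                # Godunov selection of the upwind gradient magnitude
                gp = np.sqrt(np.maximum(np.maximum(dm, 0.0)**2, np.minimum(dp, 0.0)**2))
                gm = np.sqrt(np.maximum(np.minimum(dm, 0.0)**2, np.maximum(dp, 0.0)**2))
                grad = np.where(sgn > 0.0, gp, gm)
                phi -= dtau * sgn * (grad - 1.0)
            return phi

        x = np.linspace(-1.0, 1.0, 201)
        phi0 = x**3 - 0.2 * x                               # same zero set, wrong slopes
        phi = reinitialize_1d(phi0, dx=x[1] - x[0])
        print(np.round(np.abs(np.diff(phi)).mean() / (x[1] - x[0]), 3))  # close to 1, i.e. |grad phi| ~ 1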

  6. Protein–DNA Interactions: The Story so Far and a New Method for Prediction

    DOE PAGES

    Jones, Susan; Thornton, Janet M.

    2003-01-01

    This review describes methods for the prediction of DNA-binding function, and specifically summarizes a new method using 3D structural templates. The new method features the helix-turn-helix (HTH) motif that is found in approximately one-third of DNA-binding protein families. A library of 3D structural templates of HTH motifs was derived from proteins in the PDB. Templates were scanned against complete protein structures and the optimal superposition of a template on a structure was calculated. Significance thresholds in terms of a minimum root mean squared deviation (rmsd) of an optimal superposition, and a minimum motif accessible surface area (ASA), have been calculated. In this way, it is possible to scan the template library against proteins of unknown function to make predictions about DNA-binding functionality.
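
    The rmsd of an optimal superposition referred to above is commonly computed with the Kabsch algorithm; the Python sketch below centers two coordinate sets, finds the optimal rotation by SVD, and returns the residual rmsd. The random test motif and noise level are illustrative assumptions, not the template library of the review.

        import numpy as np

        def rmsd_after_superposition(P, Q):
            """Least-squares superposition of coordinate set P onto Q (Kabsch algorithm),
            returning the residual RMSD used as a template-match score."""
            Pc = P - P.mean(axis=0)
            Qc = Q - Q.mean(axis=0)
            U, S, Vt = np.linalg.svd(Pc.T @ Qc)
            d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid an improper rotation (reflection)
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            diff = Pc @ R.T - Qc
            return np.sqrt((diff**2).sum() / len(P))

        motif    = np.random.rand(20, 3)
        rotation = np.linalg.qr(np.random.randn(3, 3))[0]
        rotation *= np.linalg.det(rotation)                  # force a proper rotation (det = +1)
        target   = motif @ rotation.T + np.array([5.0, -2.0, 1.0]) + 0.05 * np.random.randn(20, 3)
        print(f"rmsd = {rmsd_after_superposition(motif, target):.3f}  # on the order of the added noise")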

  7. Anisotropic adaptive mesh generation in two dimensions for CFD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borouchaki, H.; Castro-Diaz, M.J.; George, P.L.

    This paper describes the extension of the classical Delaunay method to the case where anisotropic meshes are required, such as in CFD when the modelled physics is strongly directional. The way in which such a mesh generation method can be incorporated into an adaptive CFD loop, as well as the case of multicriteria adaptation, is discussed. Several concrete application examples are provided to illustrate the capabilities of the proposed method.

  8. 2D automatic body-fitted structured mesh generation using advancing extraction method

    NASA Astrophysics Data System (ADS)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries that have a hierarchical tree-like topography with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain, in convex polygon shape at each level, can be extracted in an advancing scheme. In this paper, several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, as well as the implementation of the method.

  9. Improving Unstructured Mesh Partitions for Multiple Criteria Using Mesh Adjacencies

    DOE PAGES

    Smith, Cameron W.; Rasquin, Michel; Ibanez, Dan; ...

    2018-02-13

    The scalability of unstructured mesh based applications depends on partitioning methods that quickly balance the computational work while reducing communication costs. Zhou et al. [SIAM J. Sci. Comput., 32 (2010), pp. 3201–3227; J. Supercomput., 59 (2012), pp. 1218–1228] demonstrated the combination of (hyper)graph methods with vertex and element partition improvement for PHASTA CFD scaling to hundreds of thousands of processes. Our work generalizes partition improvement to support balancing combinations of all the mesh entity dimensions (vertices, edges, faces, regions) in partitions with imbalances exceeding 70%. Improvement results are then presented for multiple entity dimensions on up to one million processes on meshes with over 12 billion tetrahedral elements.

  10. Improving Unstructured Mesh Partitions for Multiple Criteria Using Mesh Adjacencies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Cameron W.; Rasquin, Michel; Ibanez, Dan

    The scalability of unstructured mesh based applications depends on partitioning methods that quickly balance the computational work while reducing communication costs. Zhou et al. [SIAM J. Sci. Comput., 32 (2010), pp. 3201–3227; J. Supercomput., 59 (2012), pp. 1218–1228] demonstrated the combination of (hyper)graph methods with vertex and element partition improvement for PHASTA CFD scaling to hundreds of thousands of processes. Our work generalizes partition improvement to support balancing combinations of all the mesh entity dimensions (vertices, edges, faces, regions) in partitions with imbalances exceeding 70%. Improvement results are then presented for multiple entity dimensions on up to one million processes on meshes with over 12 billion tetrahedral elements.

  11. New software developments for quality mesh generation and optimization from biomedical imaging data.

    PubMed

    Yu, Zeyun; Wang, Jun; Gao, Zhanheng; Xu, Ming; Hoshijima, Masahiko

    2014-01-01

    In this paper we present a new software toolkit for generating and optimizing surface and volumetric meshes from three-dimensional (3D) biomedical imaging data, targeted at image-based finite element analysis of some biomedical activities in a single material domain. Our toolkit includes a series of geometric processing algorithms including surface re-meshing and quality-guaranteed tetrahedral mesh generation and optimization. All methods described have been encapsulated into a user-friendly graphical interface for easy manipulation and informative visualization of biomedical images and mesh models. Numerous examples are presented to demonstrate the effectiveness and efficiency of the described methods and toolkit. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  12. [Skeleton extractions and applications].

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quadros, William Roshan

    2010-05-01

    This paper focuses on the extraction of skeletons of CAD models and its applications in finite element (FE) mesh generation. The term 'skeleton of a CAD model' can be visualized as analogous to the 'skeleton of a human body'. The skeletal representations covered in this paper include the medial axis transform (MAT), Voronoi diagram (VD), chordal axis transform (CAT), mid surface, digital skeletons, and disconnected skeletons. In the literature, the properties of a skeleton have been utilized in developing various algorithms for extracting skeletons. Three main approaches include: (1) the bisection method, where the skeleton lies equidistant from at least two points on the boundary; (2) the grassfire propagation method, in which the skeleton exists where the opposing fronts meet; and (3) the duality method, where the skeleton is a dual of the object. In the last decade, the author has applied different skeletal representations in all-quad meshing, hex meshing, mid-surface meshing, mesh size function generation, defeaturing, and decomposition. A brief discussion of related work from other researchers in the areas of tri meshing, tet meshing, and anisotropic meshing is also included. This paper concludes by summarizing the strengths and weaknesses of the skeleton-based approaches in solving various geometry-centered problems in FE mesh generation. The skeletons have proved to be a great shape abstraction tool in analyzing the geometric complexity of CAD models as they are symmetric, simpler (reduced dimension), and provide local thickness information. However, skeletons generally require some cleanup, and the stability and sensitivity of the skeletons should be controlled during extraction. Also, selecting a suitable application-specific skeleton and a computationally efficient method of extraction is critical.

  13. Assigning categorical information to Japanese medical terms using MeSH and MEDLINE.

    PubMed

    Onogi, Yuzo

    2007-01-01

    This paper reports on the assignment of MeSH (Medical Subject Headings) categories to Japanese terms in an English-Japanese dictionary using the titles and abstracts of articles indexed in MEDLINE. In a previous study, 30,000 of 80,000 terms in the dictionary were mapped to MeSH terms by normalized comparison. It was reasoned that if the remaining dictionary terms appeared in MEDLINE-indexed articles that are indexed using MeSH terms, then relevancies between the dictionary terms and MeSH terms could be calculated, and thus MeSH categories assigned. This study compares two approaches for calculating the weight matrix: one is the TF*IDF method, and the other uses the inner product of two weight matrices. About 20,000 additional dictionary terms were identified in MEDLINE-indexed articles published between 2000 and 2004. The precision and recall of these algorithms were evaluated separately for MeSH terms and non-MeSH terms. Unfortunately, the precision and recall of the algorithms were not good, but this method will help with manual assignment of MeSH categories to dictionary terms.
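
    For orientation, the TF*IDF weighting mentioned above can be expressed in a few lines of Python; the toy corpus of MeSH-like terms and the particular normalization (term frequency times log inverse document frequency) are illustrative assumptions rather than the paper's exact weight matrix.

        import math
        from collections import Counter

        def tf_idf(docs):
            """TF*IDF weights for every term in a small corpus: term frequency within a
            document times the log-inverse document frequency across the corpus."""
            n_docs = len(docs)
            doc_freq = Counter(term for doc in docs for term in set(doc))
            weights = []
            for doc in docs:
                tf = Counter(doc)
                weights.append({term: (count / len(doc)) * math.log(n_docs / doc_freq[term])
                                for term, count in tf.items()})
            return weights

        # toy "documents" of MeSH-like terms co-occurring with a dictionary term
        docs = [["neoplasms", "immunotherapy", "neoplasms"],
                ["immunotherapy", "t-lymphocytes"],
                ["neoplasms", "prognosis"]]
        for w in tf_idf(docs):
            print({k: round(v, 3) for k, v in w.items()})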

  14. Bioprosthetic Mesh in Abdominal Wall Reconstruction

    PubMed Central

    Baumann, Donald P.; Butler, Charles E.

    2012-01-01

    Mesh materials have undergone a considerable evolution over the last several decades. There has been enhancement of biomechanical properties, improvement in manufacturing processes, and development of antiadhesive laminate synthetic meshes. The evolution of bioprosthetic mesh materials has markedly changed our indications and methods for complex abdominal wall reconstruction. The authors review the optimal properties of bioprosthetic mesh materials, their evolution over time, and their indications for use. The techniques to optimize outcomes are described using bioprosthetic mesh for complex abdominal wall reconstruction. Bioprosthetic mesh materials clearly have certain advantages over other implantable mesh materials in select indications. Appropriate patient selection and surgical technique are critical to the successful use of bioprosthetic materials for abdominal wall repair. PMID:23372454

  15. Numerical analysis method for linear induction machines.

    NASA Technical Reports Server (NTRS)

    Elliott, D. G.

    1972-01-01

    A numerical analysis method has been developed for linear induction machines such as liquid metal MHD pumps and generators and linear motors. Arbitrary phase currents or voltages can be specified and the moving conductor can have arbitrary velocity and conductivity variations from point to point. The moving conductor is divided into a mesh and coefficients are calculated for the voltage induced at each mesh point by unit current at every other mesh point. Combining the coefficients with the mesh resistances yields a set of simultaneous equations which are solved for the unknown currents.

  16. Research Trend Visualization by MeSH Terms from PubMed.

    PubMed

    Yang, Heyoung; Lee, Hyuck Jai

    2018-05-30

    Motivation: PubMed is a primary source of biomedical information, comprising a search tool and the biomedical literature from MEDLINE, the US National Library of Medicine's premier bibliographic database, as well as life science journals and online books. Complementary tools to PubMed have been developed to help users search for literature and acquire knowledge. However, these tools are insufficient to overcome the difficulties users face due to the proliferation of biomedical literature. A new method is needed for searching the knowledge in the biomedical field. Methods: A new method is proposed in this study for visualizing recent research trends based on the documents retrieved for a search query given by the user. The Medical Subject Headings (MeSH) are used as the primary analytical element. MeSH terms are extracted from the literature and the correlations between them are calculated. A MeSH network, called MeSH Net, is generated as the final result based on the Pathfinder Network algorithm. Results: A case study for the verification of the proposed method was carried out on a research area defined by the search query (immunotherapy and cancer and "tumor microenvironment"). The MeSH Net generated by the method is in good agreement with the actual research activities in the research area (immunotherapy). Conclusion: A prototype application generating MeSH Nets was developed. The application, which could be used as a "guide map for travelers", allows users to quickly and easily acquire knowledge of research trends. The combination of PubMed and MeSH Net is expected to be an effective complementary system for researchers in the biomedical field experiencing difficulties with search and information analysis.

  17. Evaluation on Bending Properties of Biomaterial GUM Metal Meshed Plates for Bone Graft Applications

    NASA Astrophysics Data System (ADS)

    Suzuki, Hiromichi; He, Jianmei

    2017-11-01

    There are three bone graft methods for bone defects caused by diseases such as cancer and by accident injuries: autogenous bone grafts, allografts, and artificial bone grafts. In this study, meshed GUM Metal plates with lower elasticity, high strength and high biocompatibility are introduced to solve the over-stiffness and weight problems of currently used metal implants. Basic mesh shapes are designed and applied to GUM Metal plates using 3D CAD modeling tools. The bending properties of the prototype meshed GUM Metal plates are evaluated experimentally and analytically. Meshed plate specimens of 180°, 120° and 60° axis-symmetric types were fabricated for 3-point bending tests. The pseudo bending elastic moduli of the meshed plate specimens obtained from the 3-point bending tests range from 4.22 GPa to 16.07 GPa, within the elasticity range of natural cortical bone, 2.0 GPa to 30.0 GPa. The analytical approach is validated by comparing experimental and analytical results for the bending properties of the meshed plates.

  18. Methods to control ectomycorrhizal colonization: effectiveness of chemical and physical barriers.

    PubMed

    Teste, François P; Karst, Justine; Jones, Melanie D; Simard, Suzanne W; Durall, Daniel M

    2006-12-01

    We conducted greenhouse experiments using Douglas-fir (Pseudotsuga menziesii var. glauca) seedlings in which chemical methods (fungicides) were used to prevent ectomycorrhizal colonization of single seedlings, or physical methods (mesh barriers) were used to prevent formation of mycorrhizal connections between neighboring seedlings. These methods were chosen for their ease of application in the field. We applied the fungicides Topas (nonspecific) and Senator (ascomycete specific), separately and in combination, at different concentrations and application frequencies to seedlings grown in unsterilized forest soils. Additionally, we assessed the ability of hyphae to penetrate mesh barriers of various pore sizes (0.2, 1, 20, and 500 μm) to form mycorrhizas on roots of neighboring seedlings. Ectomycorrhizal colonization was reduced by approximately 55% with the application of Topas at 0.5 g l(-1). Meshes with pore sizes of 0.2 and 1 μm were effective in preventing the formation of mycorrhizas via hyphal growth across the mesh barriers. Hence, meshes in this range of pore sizes could also be used to prevent the formation of common mycorrhizal networks in the field. Depending on the ecological question of interest, Topas or the employment of mesh with pore sizes <1 μm is suitable for restricting mycorrhization in the field.

  19. High-Fidelity Geometric Modeling and Mesh Generation for Mechanics Characterization of Polycrystalline Materials

    DTIC Science & Technology

    2014-10-26

    From the parameterization results, we extract adaptive and anisotropic T-meshes for further T-spline surface construction. Finally, a gradient flow-based method [7, 12] is used to generate adaptive and anisotropic quadrilateral meshes, which can be used as the control mesh for high-order T-spline surface modeling.

  20. High-Fidelity Geometric Modeling and Mesh Generation for Mechanics Characterization of Polycrystalline Materials

    DTIC Science & Technology

    2015-01-07

    Adaptive and anisotropic quadrilateral meshes are generated, which can be used as the control mesh for high-order T-spline surface modeling. From the parameterization results, adaptive and anisotropic T-meshes are extracted for further T-spline surface construction, and a gradient flow-based method is developed to improve the T-mesh quality. The imaging data exhibit shade-off and halos; halos are bright or dark thin regions around the boundary of the sample, and these false edges around the object complicate segmentation.

  1. Adaptive Meshing Techniques for Viscous Flow Calculations on Mixed Element Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1997-01-01

    An adaptive refinement strategy based on hierarchical element subdivision is formulated and implemented for meshes containing arbitrary mixtures of tetrahedra, hexahedra, prisms and pyramids. Special attention is given to keeping memory overheads as low as possible. This procedure is coupled with an algebraic multigrid flow solver which operates on mixed-element meshes. Inviscid as well as viscous flows are computed on adaptively refined tetrahedral, hexahedral, and hybrid meshes. The efficiency of the method is demonstrated by generating an adapted hexahedral mesh containing 3 million vertices on a relatively inexpensive workstation.

  2. Numerical form-finding method for large mesh reflectors with elastic rim trusses

    NASA Astrophysics Data System (ADS)

    Yang, Dongwu; Zhang, Yiqun; Li, Peng; Du, Jingli

    2018-06-01

    Traditional methods for designing a mesh reflector usually treat the rim truss as rigid. Due to the large aperture, light weight and high accuracy requirements on spaceborne reflectors, the rim truss deformation is in fact not negligible. In order to design a cable net with asymmetric boundaries for the front and rear nets, a cable-net form-finding method is first introduced. The form-finding method is then embedded into an iterative approach for designing a mesh reflector that accounts for the elasticity of the supporting rim truss. By iterating the cable-net form-finding with boundary conditions updated for the rim truss deformation, a mesh reflector with a fairly uniform tension distribution in its equilibrium state can finally be designed. Applications to offset mesh reflectors with both circular and elliptical rim trusses are illustrated. The numerical results show the effectiveness of the proposed approach and that a circular rim truss is more stable than an elliptical rim truss.
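
    The abstract does not spell out its form-finding formulation; one standard cable-net form-finding technique is the force density method, sketched below in Python for a toy net with four fixed corner nodes and one loaded free node. The function name, the unit force densities, and the toy geometry are assumptions for illustration only, not the authors' method.

        import numpy as np

        def force_density_form_finding(edges, q, fixed_xyz, n_nodes, loads=None):
            """Equilibrium shape of a cable net by the force density method:
            solve (Cf^T Q Cf) X_free = P - Cf^T Q Cb X_fixed for the free node coordinates."""
            C = np.zeros((len(edges), n_nodes))
            for k, (i, j) in enumerate(edges):
                C[k, i], C[k, j] = 1.0, -1.0
            fixed = sorted(fixed_xyz)
            free = [n for n in range(n_nodes) if n not in fixed_xyz]
            Q = np.diag(q)
            Cf, Cb = C[:, free], C[:, fixed]
            Xb = np.array([fixed_xyz[n] for n in fixed], dtype=float)
            P = np.zeros((len(free), 3)) if loads is None else loads
            Xf = np.linalg.solve(Cf.T @ Q @ Cf, P - Cf.T @ Q @ Cb @ Xb)
            X = np.zeros((n_nodes, 3))
            X[fixed], X[free] = Xb, Xf
            return X

        # a 5-node toy net: four corner nodes fixed, one central free node pulled down by a load
        edges = [(4, 0), (4, 1), (4, 2), (4, 3)]
        fixed = {0: (0, 0, 0), 1: (2, 0, 0), 2: (2, 2, 0), 3: (0, 2, 0)}
        X = force_density_form_finding(edges, q=[1.0] * 4, fixed_xyz=fixed, n_nodes=5,
                                       loads=np.array([[0.0, 0.0, -1.0]]))
        print(np.round(X[4], 3))   # the central node settles at (1, 1, -0.25)

    Repeating such a solve with the fixed boundary coordinates updated from a rim-truss structural model reflects the kind of iterative loop described above.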

  3. A Novel Coarsening Method for Scalable and Efficient Mesh Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, A; Hysom, D; Gunney, B

    2010-12-02

    In this paper, we propose a novel mesh coarsening method called the brick coarsening method. The proposed method can be used in conjunction with any graph partitioner and scales to very large meshes. It reduces the problem space by decomposing the original mesh into fixed-size blocks of nodes called bricks, layered in a way similar to conventional brick laying, and then assigning each node of the original mesh to the appropriate brick. Our experiments indicate that the proposed method scales to very large meshes while allowing a simple RCB partitioner to produce higher-quality partitions with significantly fewer edge cuts. Our results further indicate that the proposed brick-coarsening method allows more complicated partitioners like PT-Scotch to scale to very large problem sizes while still maintaining good partitioning performance with a relatively good edge-cut metric. Graph partitioning is an important problem that has many scientific and engineering applications in such areas as VLSI design, scientific computing, and resource management. Given a graph G = (V,E), where V is the set of vertices and E is the set of edges, the (k-way) graph partitioning problem is to partition the vertices of the graph (V) into k disjoint groups such that each group contains a roughly equal number of vertices and the number of edges connecting vertices in different groups is minimized. Graph partitioning plays a key role in large-scale scientific computing, especially in mesh-based computations, as it is used as a tool to minimize the volume of communication and to ensure well-balanced load across computing nodes. The impact of graph partitioning on the reduction of communication can be easily seen, for example, in iterative methods for solving a sparse system of linear equations. Here, a graph partitioning technique is applied to the matrix, which is basically a graph in which each edge is a non-zero entry, to allocate groups of vertices to processors in such a way that much of the matrix-vector multiplication can be performed locally on each processor, and hence to minimize communication. Furthermore, a good graph partitioning scheme ensures an equal amount of computation on each processor. Graph partitioning is a well-known NP-complete problem, and thus the most commonly used graph partitioning algorithms employ some form of heuristics. These algorithms vary in terms of their complexity, partition generation time, and the quality of partitions, and they tend to trade off these factors. A significant challenge we are currently facing at the Lawrence Livermore National Laboratory is how to partition very large meshes on massive distributed-memory machines like IBM BlueGene/P, where scalability becomes a big issue. For example, we have found that ParMetis, a very popular graph partitioning tool, can only scale to 16K processors. An ideal graph partitioning method in such an environment should be fast and scale to very large meshes, while producing high-quality partitions. This is an extremely challenging task: to scale to that level, the partitioning algorithm should be simple and be able to produce partitions that minimize inter-processor communication and balance the load imposed on the processors. Our goals in this work are two-fold: (1) to develop a new scalable graph partitioning method with good load balancing and communication reduction capability, and (2) to study the performance of the proposed partitioning method on very large parallel machines using actual data sets and compare the performance to that of existing methods. The proposed method achieves the desired scalability by reducing the mesh size. For this, it coarsens an input mesh into a smaller mesh by coalescing the vertices and edges of the original mesh into a set of mega-vertices and mega-edges. A new coarsening method called the brick algorithm is developed in this research. In the brick algorithm, the zones in a given mesh are first grouped into fixed-size blocks called bricks. These bricks are then laid in a way similar to the conventional brick-laying technique, which reduces the number of neighboring blocks with which each block needs to communicate. The contributions of this research are as follows: (1) we have developed a novel method that scales to very large problem sizes while producing high-quality mesh partitions; (2) we measured the performance and scalability of the proposed method on a machine of massive size using a set of actual large complex data sets, where we have scaled to a mesh with 110 million zones, which, to the best of our knowledge, is the largest complex mesh to which a partitioning method has been successfully applied; and (3) we have shown that the proposed method can reduce the number of edge cuts by as much as 65%.

  4. Automatic Mesh Generation of Hybrid Mesh on Valves in Multiple Positions in Feedline Systems

    NASA Technical Reports Server (NTRS)

    Ross, Douglass H.; Ito, Yasushi; Dorothy, Fredric W.; Shih, Alan M.; Peugeot, John

    2010-01-01

    Fluid flow simulations through a valve often require evaluation of the valve in multiple opening positions. A mesh has to be generated for the valve in each position, and compounding the problem is the fact that the valve is typically part of a larger feedline system. In this paper, we propose to develop a system to create meshes for feedline systems with parametrically controlled valve openings. Herein we outline two approaches to generate the meshes for a valve in a feedline system at multiple positions. There are two issues that must be addressed. The first is the creation of the mesh on the valve for multiple positions. The second is the generation of the mesh for the total feedline system including the valve. For generation of the mesh on the valve, we describe the use of topology matching and mesh generation parameter transfer. For generation of the total feedline system, we describe two solutions that we have implemented. In both cases the valve is treated as a component in the feedline system. In the first method, the geometry of the valve in the feedline system is replaced with the valve at a different opening position. Geometry is created to connect the valve to the feedline system. Then the topology for the valve is created, and the portion of the topology for the valve is topology-matched to the standard valve in a different position. The mesh generation parameters are transferred and then the volume mesh for the whole feedline system is generated. The second method enables the user to generate the volume mesh on the valve in multiple open positions external to the feedline system, to insert it into the volume mesh of the feedline system, and to reduce the amount of computer time required for mesh generation, because only two small volume meshes connecting the valve to the feedline mesh need to be updated.

  5. Comparison of updated Lagrangian FEM with arbitrary Lagrangian Eulerian method for 3D thermo-mechanical extrusion of a tube profile

    NASA Astrophysics Data System (ADS)

    Kronsteiner, J.; Horwatitsch, D.; Zeman, K.

    2017-10-01

    Thermo-mechanical numerical modelling and simulation of extrusion processes faces several serious challenges. Large plastic deformations in combination with a strong coupling of thermal and mechanical effects lead to a high numerical demand for the solution as well as for the handling of mesh distortions. The two numerical methods presented in this paper also reflect two different ways to deal with mesh distortions. Lagrangian Finite Element Methods (FEM) tackle distorted elements by building a new mesh (re-meshing), whereas Arbitrary Lagrangian Eulerian (ALE) methods use an "advection" step to remap the solution from the distorted to the undistorted mesh. Another difference between conventional Lagrangian and ALE methods is the separate treatment of material and mesh in ALE, allowing the definition of individual velocity fields. In theory, an ALE formulation contains the Eulerian formulation, as well as the Lagrangian description of the material, as special cases. The investigations presented in this paper dealt with the direct extrusion of a tube profile using EN-AW 6082 aluminum alloy and a comparison of experimental results with Lagrangian and ALE results. The numerical simulations cover the billet upsetting and last until one third of the billet length is extruded. A good qualitative correlation between experimental and numerical results was found; however, major differences between the Lagrangian and ALE methods concerning thermo-mechanical coupling lead to deviations in the thermal results.

  6. Superposition of polarized waves at layered media: theoretical modeling and measurement

    NASA Astrophysics Data System (ADS)

    Finkele, Rolf; Wanielik, Gerd

    1997-12-01

    The detection of ice layers on road surfaces is a crucial requirement for a system designed to warn vehicle drivers of hazardous road conditions. In the millimeter wave regime at 76 GHz the dielectric constants of ice and conventional road surface materials (i.e. asphalt, concrete) are found to be very similar. Thus, if the layer of ice is very thin and follows the same roughness profile as the underlying road surface, it cannot be reliably detected using conventional algorithmic approaches. The method introduced in this paper extends and applies the theoretical work of Pancharatnam on the superposition of polarized waves. The projection of the Stokes vectors onto the Poincare sphere traces a circle as the thickness of the ice layer varies. The paper presents a method that utilizes the concept of wave superposition to detect this trace even if it is corrupted by stochastic variation due to rough surface scattering. Measurement results taken under real traffic conditions prove the validity of the proposed algorithms. Classification results are presented and discussed.

  7. Electromagnetic forward modelling for realistic Earth models using unstructured tetrahedral meshes and a meshfree approach

    NASA Astrophysics Data System (ADS)

    Farquharson, C.; Long, J.; Lu, X.; Lelievre, P. G.

    2017-12-01

    Real-life geology is complex, and so, even when allowing for the diffusive, low resolution nature of geophysical electromagnetic methods, we need Earth models that can accurately represent this complexity when modelling and inverting electromagnetic data. This is particularly the case for the scales, detail and conductivity contrasts involved in mineral and hydrocarbon exploration and development, but also for the larger scale of lithospheric studies. Unstructured tetrahedral meshes provide a flexible means of discretizing a general, arbitrary Earth model. This is important when wanting to integrate a geophysical Earth model with a geological Earth model parameterized in terms of surfaces. Finite-element and finite-volume methods can be derived for computing the electric and magnetic fields in a model parameterized using an unstructured tetrahedral mesh. A number of such variants have been proposed and have proven successful. However, the efficiency and accuracy of these methods can be affected by the "quality" of the tetrahedral discretization, that is, how many of the tetrahedral cells in the mesh are long, narrow and pointy. This is particularly the case if one wants to use an iterative technique to solve the resulting linear system of equations. One approach to deal with this issue is to develop sophisticated model and mesh building and manipulation capabilities in order to ensure that any mesh built from geological information is of sufficient quality for the electromagnetic modelling. Another approach is to investigate other methods of synthesizing the electromagnetic fields. One such example is a "meshfree" approach in which the electromagnetic fields are synthesized using a mesh that is distinct from the mesh used to parameterize the Earth model. There are then two meshes, one describing the Earth model and one used for the numerical mathematics of computing the fields. This means that there are no longer any quality requirements on the model mesh, which makes the process of building a geophysical Earth model from a geological model much simpler. In this presentation we will explore the issues that arise when working with realistic Earth models and when synthesizing geophysical electromagnetic data for them. We briefly consider meshfree methods as a possible means of alleviating some of these issues.

  8. The application of the mesh-free method in the numerical simulations of the higher-order continuum structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yuzhou, E-mail: yuzhousun@126.com; Chen, Gensheng; Li, Dongxia

    2016-06-08

    This paper studies the application of the mesh-free method to the numerical simulation of higher-order continuum structures. A high-order bending beam accounts for the effect of the third-order derivative of the deflection and can be viewed as a one-dimensional higher-order continuum structure. The moving least-squares method is used to construct shape functions with the high-order continuum property; the curvature and the third-order derivative of the deflection are directly interpolated from the nodal variables and the second- and third-order derivatives of the shape functions, and the mesh-free computational scheme is established for beams. The couple stress theory is introduced to describe the special constitutive response of a layered rock mass in which the bending effect of thin layers is considered. The strain and the curvature are directly interpolated from the nodal variables, and the mesh-free method is established for the layered rock mass. Good computational efficiency is achieved with the developed mesh-free method, and some key issues are discussed.
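    The core ingredient here is the moving least-squares (MLS) construction of shape functions whose higher derivatives remain meaningful. The sketch below is a minimal one-dimensional MLS evaluation, not the authors' scheme: the quadratic monomial basis, Gaussian weight, support radius and node layout are all illustrative assumptions.

```python
import numpy as np

def mls_value(x, nodes, values, support=0.5):
    """Evaluate a 1D moving least-squares fit of (nodes, values) at x."""
    # Quadratic monomial basis and a Gaussian weight are illustrative choices;
    # a higher basis order would be needed to recover third-order derivatives.
    def p(s):
        return np.array([1.0, s, s * s])
    w = np.exp(-((nodes - x) / support) ** 2)      # weight of each node near x
    P = np.vstack([p(s) for s in nodes])           # basis evaluated at the nodes
    A = P.T @ (w[:, None] * P)                     # weighted moment matrix
    b = P.T @ (w * values)
    coeff = np.linalg.solve(A, b)                  # local polynomial coefficients
    return p(x) @ coeff

# Toy usage: reconstruct a smooth deflection field from scattered nodal values.
nodes = np.linspace(0.0, 1.0, 11)
deflection = np.sin(np.pi * nodes)
print(mls_value(0.37, nodes, deflection))          # close to sin(pi * 0.37)
```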

  9. Application of closed-form solutions to a mesh point field in silicon solar cells

    NASA Technical Reports Server (NTRS)

    Lamorte, M. F.

    1985-01-01

    A computer simulation method is discussed that provides equivalent simulation accuracy but exhibits significantly lower CPU running time per bias point compared with other techniques. The new method is applied to a mesh point field, as is customary in numerical integration (NI) techniques. The assumption of a linear approximation for the dependent variable, which is typically used in the finite difference and finite element NI methods, is not required. Instead, the set of device transport equations is applied to, and the closed-form solutions are obtained for, each mesh point. The mesh point field is generated so that the coefficients in the set of transport equations exhibit small changes between adjacent mesh points. Application of this method to high-efficiency silicon solar cells is described, along with the treatment of Auger recombination, ambipolar effects, built-in and induced electric fields, bandgap narrowing, carrier confinement, and carrier diffusivities. Bandgap narrowing has been investigated using Fermi-Dirac statistics, and the results show that bandgap narrowing is more pronounced and temperature-dependent, in contrast to results based on Boltzmann statistics.

  10. Efficient Unstructured Cartesian/Immersed-Boundary Method with Local Mesh Refinement to Simulate Flows in Complex 3D Geometries

    NASA Astrophysics Data System (ADS)

    de Zelicourt, Diane; Ge, Liang; Sotiropoulos, Fotis; Yoganathan, Ajit

    2008-11-01

    Image-guided computational fluid dynamics has recently gained attention as a tool for predicting the outcome of different surgical scenarios. Cartesian immersed-boundary methods constitute an attractive option for tackling the complexity of real-life anatomies. However, when such methods are applied to the branching, multi-vessel configurations typically encountered in cardiovascular anatomies, the majority of the grid nodes of the background Cartesian mesh end up lying outside the computational domain, increasing the memory and computational overhead without enhancing the numerical resolution in the region of interest. To remedy this situation, the method presented here superimposes local mesh refinement onto an unstructured Cartesian grid formulation. A baseline unstructured Cartesian mesh is generated by eliminating from the grid structure all nodes that reside in the exterior of the flow domain, and is locally refined in the vicinity of the immersed boundary. The potential of the method is demonstrated by carrying out systematic mesh refinement studies for internal flow problems ranging in complexity from a 90 deg pipe bend to an actual, patient-specific anatomy reconstructed from magnetic resonance.
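    The two grid operations described above, discarding background nodes outside the flow domain and refining near the immersed boundary, can be illustrated with a toy mask. This is only a schematic sketch: the tubular "vessel" domain, band width and grid size are invented stand-ins for a patient-specific anatomy.

```python
import numpy as np

# Toy "vessel": points within a small radius of a curved centerline.
# Curve, radius, grid size and band width are invented placeholders.
n = 80
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
centerline = 0.5 + 0.2 * np.sin(2.0 * np.pi * X)     # y-position of the vessel axis
dist = np.abs(Y - centerline)                        # distance to the centerline
radius = 0.06

inside = dist < radius                    # nodes kept in the unstructured Cartesian mesh
near_wall = np.abs(dist - radius) < 0.02  # band around the immersed boundary to refine

print(f"kept {inside.sum()} of {n*n} background nodes "
      f"({100.0 * inside.mean():.1f}%), {near_wall.sum()} flagged for refinement")
```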

  11. Preparation and biocompatibility evaluation of polypropylene mesh coated with electrospinning membrane for pelvic defects repair.

    PubMed

    Lu, Yao; Fu, Shaoju; Zhou, Shuanglin; Chen, Ge; Zhu, Chaoting; Li, Nannan; Ma, Ying

    2018-05-01

    Composite meshes combining different materials can compensate for the drawbacks of single-component meshes. Coating a membrane layer onto the surface of a macroporous mesh is a common method for preparing composite medical meshes. Electrospinning and dipping methods were used to form two kinds of membrane-coated PP mesh (electro-mesh and dip-mesh), and several properties were measured using a subcutaneous implantation model in rats. The results revealed continuous tissue ingrowth only for the electro-mesh, with evidence of increasing strength (electro-mesh: 0 weeks - 13.1 ± 0.88 N, 2 weeks - 16.87 ± 1.39 N, 4 weeks - 22.04 ± 2.05 N) and thickness (electro-mesh: 0 weeks - 0.437 ± 0.023 mm, 2 weeks - 0.488 ± 0.025 mm, 4 weeks - 0.576 ± 0.028 mm). In contrast, no tissue ingrowth was observed for the dip-mesh in the first 2 weeks, at either the macroscopic or the microscopic level, as shown by the strength data (dip-mesh: 0 weeks - 13.36 ± 1.06 N, 2 weeks - 13.4 ± 1.33 N, 4 weeks - 18.61 ± 1.89 N) and thickness data (dip-mesh: 0 weeks - 0.439 ± 0.018 mm, 2 weeks - 0.439 ± 0.019 mm, 4 weeks - 0.502 ± 0.032 mm). The electro-mesh showed a larger decrease in surface area (10.74 ± 1.22%) than the dip-mesh (2.78 ± 0.52%). The adhesion level of the electro-mesh (medium adhesion) was also higher than that of the dip-mesh (mild adhesion). Although the two meshes differed in several properties, they were similar under histological observation, with both able to support ingrowth of fresh tissue. Considering the operative environment, the electro-mesh, with its rapid tissue ingrowth and medium adhesion, appears more suitable than the dip-mesh for repairing pelvic floor defects. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. High-fidelity meshes from tissue samples for diffusion MRI simulations.

    PubMed

    Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C

    2010-01-01

    This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally we assess the quality of the synthesized data from the mesh models by comparison with scanner data as well as synthetic data from simple geometric models and simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models although sensitivity to the mesh resolution is quite robust.
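    As a concrete illustration of the first step described above (building a triangulated surface from an image stack with marching cubes), the short sketch below uses scikit-image on a synthetic volume. The sphere-shaped volume, isolevel and voxel spacing are placeholders, not the tissue data or parameters used in the paper.

```python
import numpy as np
from skimage import measure

# Synthetic "image stack": a solid sphere standing in for a segmented
# confocal microscopy volume (purely illustrative data).
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(float)

# Extract a triangulated isosurface; in practice `spacing` would be the voxel size.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5,
                                                       spacing=(1.0, 1.0, 1.0))
print(verts.shape, faces.shape)    # mesh vertices and triangle indices
```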

  13. Automatic generation of endocardial surface meshes with 1-to-1 correspondence from cine-MR images

    NASA Astrophysics Data System (ADS)

    Su, Yi; Teo, S.-K.; Lim, C. W.; Zhong, L.; Tan, R. S.

    2015-03-01

    In this work, we develop an automatic method to generate a set of 4D 1-to-1 corresponding surface meshes of the left ventricle (LV) endocardial surface which are motion registered over the whole cardiac cycle. These 4D meshes have 1-to-1 point correspondence over the entire set and are suitable for advanced computational processing, such as shape analysis, motion analysis and finite element modelling. The inputs to the method are the set of 3D LV endocardial surface meshes of the different frames/phases of the cardiac cycle. Each of these meshes is reconstructed independently from border-delineated MR images, and they have no correspondence in terms of number of vertices/points or mesh connectivity. To generate point correspondence, the first frame of the LV mesh model is used as a template to be matched to the shape of the meshes in the subsequent phases. There are two stages in the mesh correspondence process: (1) a coarse matching phase, and (2) a fine matching phase. In the coarse matching phase, an initial rough matching between the template and the target is achieved using a radial basis function (RBF) morphing process. The feature points on the template and target meshes are automatically identified using a 16-segment nomenclature of the LV. In the fine matching phase, a progressive mesh projection process is used to conform the rough estimate to fit the exact shape of the target. In addition, an optimization-based smoothing process is used to achieve superior mesh quality and continuous point motion.
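    The coarse RBF morphing step can be sketched with SciPy's RBFInterpolator: fit a smooth displacement field from matched landmark points and apply it to every template vertex. This is a minimal sketch under invented data; the random landmark and vertex arrays merely stand in for the automatically identified 16-segment feature points and the LV template mesh.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Placeholder landmark sets: corresponding feature points identified on the
# template (frame 1) and target (later frame) meshes.
template_landmarks = rng.normal(size=(16, 3))
target_landmarks = template_landmarks + 0.05 * rng.normal(size=(16, 3))

# Fit a smooth displacement field from the landmark correspondences.
warp = RBFInterpolator(template_landmarks,
                       target_landmarks - template_landmarks,
                       kernel="thin_plate_spline")

# Apply the warp to every vertex of the template mesh (placeholder vertices).
template_vertices = rng.normal(size=(500, 3))
morphed_vertices = template_vertices + warp(template_vertices)
print(morphed_vertices.shape)
```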

  14. Root-cause analysis of the better performance of the coarse-mesh finite-difference method for CANDU-type reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, W.

    2012-07-01

    Recent assessment results indicate that the coarse-mesh finite-difference method (FDM) gives consistently smaller percent differences in channel powers than the fine-mesh FDM when compared to the reference MCNP solution for CANDU-type reactors. However, there is an impression that the fine-mesh FDM should always give more accurate results than the coarse-mesh FDM in theory. To answer the question if the better performance of the coarse-mesh FDM for CANDU-type reactors was just a coincidence (cancellation of errors) or caused by the use of heavy water or the use of lattice-homogenized cross sections for the cluster fuel geometry in the diffusion calculation, threemore » benchmark problems were set up with three different fuel lattices: CANDU, HWR and PWR. These benchmark problems were then used to analyze the root cause of the better performance of the coarse-mesh FDM for CANDU-type reactors. The analyses confirm that the better performance of the coarse-mesh FDM for CANDU-type reactors is mainly caused by the use of lattice-homogenized cross sections for the sub-meshes of the cluster fuel geometry in the diffusion calculation. Based on the analyses, it is recommended to use 2 x 2 coarse-mesh FDM to analyze CANDU-type reactors when lattice-homogenized cross sections are used in the core analysis. (authors)« less

  15. Meshable: searching PubMed abstracts by utilizing MeSH and MeSH-derived topical terms.

    PubMed

    Kim, Sun; Yeganova, Lana; Wilbur, W John

    2016-10-01

    Medical Subject Headings (MeSH(®)) is a controlled vocabulary for indexing and searching biomedical literature. MeSH terms and subheadings are organized in a hierarchical structure and are used to indicate the topics of an article. Biologists can use either MeSH terms as queries or the MeSH interface provided in PubMed(®) for searching PubMed abstracts. However, these are rarely used, and there is no convenient way to link standardized MeSH terms to user queries. Here, we introduce a web interface which allows users to enter queries to find MeSH terms closely related to the queries. Our method relies on co-occurrence of text words and MeSH terms to find keywords that are related to each MeSH term. A query is then matched with the keywords for MeSH terms, and candidate MeSH terms are ranked based on their relatedness to the query. The experimental results show that our method achieves the best performance among several term extraction approaches in terms of topic coherence. Moreover, the interface can be effectively used to find full names of abbreviations and to disambiguate user queries. Availability: https://www.ncbi.nlm.nih.gov/IRET/MESHABLE/ Contact: sun.kim@nih.gov Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
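    To make the ranking idea concrete, here is a toy sketch of matching a free-text query against per-term keyword sets and ordering MeSH terms by overlap. It is only an assumption-laden illustration: the keyword sets are invented, and the real system derives them from text-word/MeSH co-occurrence statistics and uses a more sophisticated relatedness score.

```python
# Toy ranking of MeSH terms by keyword overlap with a free-text query.
# The keyword lists below are invented placeholders.
mesh_keywords = {
    "Myocardial Infarction": {"heart", "attack", "infarct", "troponin"},
    "Stroke": {"brain", "ischemic", "hemorrhage", "clot"},
    "Diabetes Mellitus": {"insulin", "glucose", "glycemic"},
}

def rank_mesh_terms(query):
    tokens = set(query.lower().split())
    scored = [(len(tokens & kws), term) for term, kws in mesh_keywords.items()]
    return [term for score, term in sorted(scored, reverse=True) if score > 0]

print(rank_mesh_terms("elevated troponin after heart attack"))
# ['Myocardial Infarction']
```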

  16. Staggered Mesh Ewald: An extension of the Smooth Particle-Mesh Ewald method adding great versatility

    PubMed Central

    Cerutti, David S.; Duke, Robert E.; Darden, Thomas A.; Lybrand, Terry P.

    2009-01-01

    We draw on an old technique for improving the accuracy of mesh-based field calculations to extend the popular Smooth Particle Mesh Ewald (SPME) algorithm as the Staggered Mesh Ewald (StME) algorithm. StME improves the accuracy of computed forces by up to 1.2 orders of magnitude and also reduces the drift in system momentum inherent in the SPME method by averaging the results of two separate reciprocal space calculations. StME can use charge mesh spacings roughly 1.5× larger than SPME to obtain comparable levels of accuracy; the one mesh in an SPME calculation can therefore be replaced with two separate meshes, each less than one third of the original size. Coarsening the charge mesh can be balanced with reductions in the direct space cutoff to optimize performance: the efficiency of StME rivals or exceeds that of SPME calculations with similarly optimized parameters. StME may also offer advantages for parallel molecular dynamics simulations because it permits the use of coarser meshes without requiring higher orders of charge interpolation and also because the two reciprocal space calculations can be run independently if that is most suitable for the machine architecture. We are planning other improvements to the standard SPME algorithm, and anticipate that StME will work synergistically with all of them to dramatically improve the efficiency and parallel scaling of molecular simulations. PMID:20174456

  17. Iterative methods for elliptic finite element equations on general meshes

    NASA Technical Reports Server (NTRS)

    Nicolaides, R. A.; Choudhury, Shenaz

    1986-01-01

    Iterative methods for arbitrary mesh discretizations of elliptic partial differential equations are surveyed. The methods discussed are preconditioned conjugate gradients, algebraic multigrid, deflated conjugate gradients, element-by-element techniques, and domain decomposition. Computational results are included.
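    As a small, concrete example of the first family surveyed, the sketch below runs preconditioned conjugate gradients with SciPy on a 5-point Poisson matrix, which stands in (only for illustration) for a finite element system on a general mesh; the Jacobi preconditioner is the simplest possible choice, not one advocated by the survey.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# 2D Poisson matrix on an n x n grid (5-point stencil) as a stand-in for a
# finite element stiffness matrix on a general mesh.
n = 50
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()
b = np.ones(A.shape[0])

# Jacobi (diagonal) preconditioner: the simplest example of the idea.
inv_diag = 1.0 / A.diagonal()
M = LinearOperator(A.shape, matvec=lambda v: inv_diag * v)

x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # 0 means converged; residual norm
```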

  18. Systematic Evaluation of Wajima Superposition (Steady-State Concentration to Mean Residence Time) in the Estimation of Human Intravenous Pharmacokinetic Profile.

    PubMed

    Lombardo, Franco; Berellini, Giuliano; Labonte, Laura R; Liang, Guiqing; Kim, Sean

    2016-03-01

    We present a systematic evaluation of the Wajima superpositioning method for estimating the human intravenous (i.v.) pharmacokinetic (PK) profile based on a set of 54 marketed drugs with diverse structures and a range of physicochemical properties. We illustrate the use of an average of the "best methods" for the prediction of clearance (CL) and volume of distribution at steady state (VDss) as described in our earlier work (Lombardo F, Waters NJ, Argikar UA, et al. J Clin Pharmacol. 2013;53(2):178-191; Lombardo F, Waters NJ, Argikar UA, et al. J Clin Pharmacol. 2013;53(2):167-177). These methods provided much more accurate prediction of human PK parameters, yielding 88% and 70% of the predictions within 2-fold error for VDss and CL, respectively. The prediction of the human i.v. profile using Wajima superpositioning of rat, dog, and monkey time-concentration profiles was tested against the observed human i.v. PK using fold-error statistics. The results showed that 63% of the compounds yielded a geometric mean fold error below 2-fold, and an additional 19% yielded a geometric mean fold error between 2- and 3-fold, leaving only 18% of the compounds with a relatively poor prediction. Our results showed that good superposition was observed in every case, demonstrating the predictive value of the Wajima approach, and that poor prediction of the human i.v. profile was mainly due to a poorly predicted CL value, while the VDss prediction had a minor impact on the accuracy of the human i.v. profile prediction. Copyright © 2016. Published by Elsevier Inc.
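    A minimal numerical sketch of the Wajima-style superposition idea follows, assuming the usual dedimensionalization of concentration by Css = Dose/Vss and of time by MRT = Vss/CL; the one-compartment profiles, species parameters and predicted human CL/Vss are invented placeholders, and the evaluation in the paper is considerably more involved.

```python
import numpy as np

# Invented one-compartment profiles for two preclinical species.
# Wajima superposition: normalize concentration by Css = Dose/Vss and time by
# MRT = Vss/CL, average the dimensionless curves, then rescale with predicted
# human Vss and CL.
species = {
    #        Dose (mg/kg), CL (L/h/kg), Vss (L/kg)  -- placeholder values
    "rat":    (1.0, 2.0, 4.0),
    "monkey": (1.0, 1.0, 3.0),
}

tau = np.linspace(0.0, 5.0, 101)           # dimensionless time t / MRT
curves = []
for dose, cl, vss in species.values():
    css, mrt = dose / vss, vss / cl
    t = tau * mrt                           # back to real time for this species
    conc = (dose / vss) * np.exp(-cl / vss * t)
    curves.append(conc / css)               # dimensionless concentration
mean_curve = np.mean(curves, axis=0)

# Predicted human parameters (e.g. from the "best methods") -- placeholders.
dose_h, cl_h, vss_h = 1.0, 0.3, 2.5
css_h, mrt_h = dose_h / vss_h, vss_h / cl_h
human_time = tau * mrt_h
human_conc = mean_curve * css_h
print(human_time[:3], human_conc[:3])
```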

  19. A numerical fragment basis approach to SCF calculations.

    NASA Astrophysics Data System (ADS)

    Hinde, Robert J.

    1997-11-01

    The counterpoise method is often used to correct for basis set superposition error in calculations of the electronic structure of bimolecular systems. One drawback of this approach is the need to specify a "reference state" for the system; for reactive systems, the choice of an unambiguous reference state may be difficult. An example is the reaction F^- + HCl → HF + Cl^-. Two obvious reference states for this reaction are F^- + HCl and HF + Cl^-; however, different counterpoise-corrected interaction energies are obtained using these two reference states. We outline a method for performing SCF calculations which employs numerical basis functions; this method attempts to eliminate basis set superposition errors in an a priori fashion. We test the proposed method on two one-dimensional, three-center systems and discuss the possibility of extending our approach to include electron correlation effects.
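    For reference, the standard Boys-Bernardi counterpoise correction alluded to above can be written as follows, where the superscript denotes the basis set in which each energy is evaluated; this is the textbook form of the correction, not the numerical-basis scheme proposed by the author.

```latex
% Counterpoise-corrected interaction energy of a complex AB
% (superscript = basis set used, argument = fragment evaluated):
\Delta E_{\mathrm{int}}^{\mathrm{CP}} = E^{AB}(AB) - E^{AB}(A) - E^{AB}(B)
% The ambiguity for a reactive system such as F^- + HCl -> HF + Cl^- lies in
% choosing the fragments A and B that define this expression.
```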

  20. Calculation of Water Entry Problem for Free-falling Bodies Using a Developed Cartesian Cut Cell Mesh

    NASA Astrophysics Data System (ADS)

    Wenhua, Wang; Yanying, Wang

    2010-05-01

    This paper describes the development of a free surface capturing method on a Cartesian cut cell mesh for the water entry problem of free-falling bodies with body-fluid interaction. The incompressible Euler equations for a variable-density fluid system are taken as the governing equations, and the free surface is treated as a contact discontinuity by using a free surface capturing method. To deal conveniently with moving body boundaries, the Cartesian cut cell technique is adopted for generating the boundary-fitted mesh around the body edge by cutting solid regions out of a background Cartesian mesh. Based on this mesh system, the governing equations are discretized by the finite volume method, and at each cell edge the inviscid flux is evaluated by means of Roe's approximate Riemann solver. Furthermore, for unsteady calculation in the time domain, a time-accurate solution is achieved by a dual time-stepping technique with the artificial compressibility method. For the body-fluid interaction, the projection method for the momentum equations and the exact Riemann solution are applied in the calculation of the fluid pressure on the solid boundary. Finally, the method is validated by a test case of water entry of free-falling bodies.

  1. Free-Lagrange methods for compressible hydrodynamics in two space dimensions

    NASA Astrophysics Data System (ADS)

    Crowley, W. E.

    1985-03-01

    Since 1970 a research and development program in Free-Lagrange methods has been active at Livermore. The initial steps were taken with incompressible flows for simplicity. Since then the effort has been concentrated on compressible flows with shocks in two space dimensions and time. In general, the line integral method has been used to evaluate derivatives and the artificial viscosity method has been used to deal with shocks. Basically, two Free-Lagrange formulations for compressible flows in two space dimensions and time have been tested and both are described. In method one, all prognostic quantities were node centered and staggered in time, while the artificial viscosity was zone centered. One mesh reconnection philosophy was that the mesh should be optimized so that nearest neighbors were connected together; another was that vertex angles should tend toward equality. In method one, all mesh elements were triangles. In method two, both quadrilateral and triangular mesh elements are permitted. The mesh variables are staggered in space and time as suggested originally by Richtmyer and von Neumann. The mesh reconnection strategy is entirely different in method two: in contrast to the global strategy of nearest neighbors, a more local strategy reconnects in order to keep the integration time step above a user-chosen threshold, and an additional strategy reconnects in the vicinity of large relative fluid motions. Mesh reconnection consists of two parts: (1) the tools that permit nodes to be merged, quads to be split into triangles, etc.; and (2) the strategy that dictates how and when to use the tools. Both tools and strategies change with time in a continuing effort to expand the capabilities of the method. New ideas are continually being tried and evaluated.

  2. Superposition-free comparison and clustering of antibody binding sites: implications for the prediction of the nature of their antigen

    PubMed Central

    Di Rienzo, Lorenzo; Milanetti, Edoardo; Lepore, Rosalba; Olimpieri, Pier Paolo; Tramontano, Anna

    2017-01-01

    We describe here a superposition-free method for comparing the surfaces of antibody binding sites based on Zernike moments and show that they can be used to quickly compare and cluster sets of antibodies. The clusters provide information about the nature of the bound antigen which, when combined with a method for predicting the number of direct antibody-antigen contacts, allows the discrimination between protein-binding and non-protein-binding antibodies with an accuracy of 76%. This is of relevance in several aspects of antibody science, for example to select the framework to be used for a combinatorial antibody library. PMID:28338016

  3. An annular superposition integral for axisymmetric radiators.

    PubMed

    Kelly, James F; McGough, Robert J

    2007-02-01

    A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a "smooth piston" function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity.

  4. An adaptive moving mesh method for two-dimensional ideal magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Han, Jianqiang; Tang, Huazhong

    2007-01-01

    This paper presents an adaptive moving mesh algorithm for two-dimensional (2D) ideal magnetohydrodynamics (MHD) that utilizes a staggered constrained transport technique to keep the magnetic field divergence-free. The algorithm consists of two independent parts: MHD evolution and mesh-redistribution. The first part is a high-resolution, divergence-free, shock-capturing scheme on a fixed quadrangular mesh, while the second part is an iterative procedure. In each iteration, mesh points are first redistributed, and then a conservative-interpolation formula is used to calculate the remapped cell-averages of the mass, momentum, and total energy on the resulting new mesh; the magnetic potential is remapped to the new mesh in a non-conservative way and is reconstructed to give a divergence-free magnetic field on the new mesh. Several numerical examples are given to demonstrate that the proposed method can achieve high numerical accuracy, track and resolve strong shock waves in ideal MHD problems, and preserve divergence-free property of the magnetic field. Numerical examples include the smooth Alfvén wave problem, 2D and 2.5D shock tube problems, two rotor problems, the stringent blast problem, and the cloud-shock interaction problem.

  5. On the application of hybrid meshes in hydraulic machinery CFD simulations

    NASA Astrophysics Data System (ADS)

    Schlipf, M.; Tismer, A.; Riedelbauch, S.

    2016-11-01

    The application of two different hybrid mesh types for the simulation of a Francis runner in automated optimization processes without user input is investigated. These mesh types are first applied to simplified test cases, such as flow around NACA airfoils and rotating cascade flows as they occur in a turbomachine runner channel, to identify the particular mesh resolution effects at reduced complexity. The analysis includes the application of the different meshes to the geometries while keeping defined quality criteria and exploring the influence on the simulation results. All results are compared with reference values gained from simulations with blockstructured hexahedron meshes and the same numerical scheme, which avoids additional inaccuracies caused by further numerical and experimental measurement methods. The results show that a simulation with hybrid meshes, built up from a blockstructured domain with hexahedrons around the blade in combination with a tetrahedral far field in the channel, is sufficient to obtain results almost as accurate as those of the reference simulation. Furthermore, this method is robust enough for automated processes without user input and yields meshes of comparable size, distribution and quality for the similar geometries that occur in optimization processes.

  6. Numerical simulation of immiscible viscous fingering using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Adam, A.; Salinas, P.; Percival, J. R.; Pavlidis, D.; Pain, C.; Muggeridge, A. H.; Jackson, M.

    2015-12-01

    Displacement of one fluid by another in porous media occurs in various settings including hydrocarbon recovery, CO2 storage and water purification. When the invading fluid is of lower viscosity than the resident fluid, the displacement front is subject to a Saffman-Taylor instability and is unstable to transverse perturbations. These instabilities can grow, leading to fingering of the invading fluid. Numerical simulation of viscous fingering is challenging. The physics is controlled by a complex interplay of viscous and diffusive forces and it is necessary to ensure physical diffusion dominates numerical diffusion to obtain converged solutions. This typically requires the use of high mesh resolution and high order numerical methods. This is computationally expensive. We demonstrate here the use of a novel control volume - finite element (CVFE) method along with dynamic unstructured mesh adaptivity to simulate viscous fingering with higher accuracy and lower computational cost than conventional methods. Our CVFE method employs a discontinuous representation for both pressure and velocity, allowing the use of smaller control volumes (CVs). This yields higher resolution of the saturation field which is represented CV-wise. Moreover, dynamic mesh adaptivity allows high mesh resolution to be employed where it is required to resolve the fingers and lower resolution elsewhere. We use our results to re-examine the existing criteria that have been proposed to govern the onset of instability. Mesh adaptivity requires the mapping of data from one mesh to another. Conventional methods such as consistent interpolation do not readily generalise to discontinuous fields and are non-conservative. We further contribute a general framework for interpolation of CV fields by Galerkin projection. The method is conservative, higher order and yields improved results, particularly with higher order or discontinuous elements where existing approaches are often excessively diffusive.

  7. The transfer function method for gear system dynamics applied to conventional and minimum excitation gearing designs

    NASA Technical Reports Server (NTRS)

    Mark, W. D.

    1982-01-01

    A transfer function method for predicting the dynamic responses of gear systems with more than one gear mesh is developed and applied to the NASA Lewis four-square gear fatigue test apparatus. Methods for computing bearing-support force spectra and temporal histories of the total force transmitted by a gear mesh, the force transmitted by a single pair of teeth, and the maximum root stress in a single tooth are developed. Dynamic effects arising from other gear meshes in the system are included. A profile modification design method to minimize the vibration excitation arising from a pair of meshing gears is reviewed and extended. Families of tooth loading functions required for such designs are developed and examined for potential excitation of individual tooth vibrations. The profile modification design method is applied to a pair of test gears.

  8. Research on Finite Element Model Generating Method of General Gear Based on Parametric Modelling

    NASA Astrophysics Data System (ADS)

    Lei, Yulong; Yan, Bo; Fu, Yao; Chen, Wei; Hou, Liguo

    2017-06-01

    Aiming at the problems of low efficiency and poor mesh quality for gears in the current mainstream finite element software, a universal three-dimensional gear model is established and the rules of element and node arrangement are explored. In this paper, a parameterization-based finite element model generation method for general gears is proposed. A Visual Basic program is used to perform the finite element meshing, assign the material properties, and set the boundary/load conditions and other pre-processing steps. A dynamic meshing analysis of the gears is carried out with the method proposed in this paper and compared with calculated values to verify the correctness of the method. The method greatly reduces the workload of gear finite element pre-processing, improves the quality of the gear mesh, and provides a new approach to FEM pre-processing.

  9. Resource Theory of Superposition

    NASA Astrophysics Data System (ADS)

    Theurer, T.; Killoran, N.; Egloff, D.; Plenio, M. B.

    2017-12-01

    The superposition principle lies at the heart of many nonclassical properties of quantum mechanics. Motivated by this, we introduce a rigorous resource theory framework for the quantification of superposition of a finite number of linearly independent states. This theory is a generalization of resource theories of coherence. We determine the general structure of operations which do not create superposition, find a fundamental connection to unambiguous state discrimination, and propose several quantitative superposition measures. Using this theory, we show that trace-decreasing operations can be completed for free which, when specialized to the theory of coherence, resolves an outstanding open question and is used to address the free probabilistic transformation between pure states. Finally, we prove that linearly independent superposition is a necessary and sufficient condition for the faithful creation of entanglement in discrete settings, establishing a strong structural connection between our theory of superposition and entanglement theory.

  10. The impact of hydrophobic hernia mesh coating by omega fatty acid on atraumatic fibrin sealant fixation.

    PubMed

    Gruber-Blum, S; Brand, J; Keibl, C; Redl, H; Fortelny, R H; May, C; Petter-Puchner, A H

    2015-08-01

    Fibrin sealant (FS) is a safe and efficient fixation method in open intraperitoneal hernia repair. While favourable results have been achieved with hydrophilic meshes, hydrophobic (such as Omega fatty acid coated) meshes (OFM) have not been specifically assessed so far. Atrium C-qur lite(®) mesh was tested in rats in models of open onlay and intraperitoneal hernia repair. 44 meshes (2 × 2 cm) were implanted in 30 male Sprague-Dawley rats in open (n = 2 meshes per animal) and intraperitoneal technique (IPOM; n = 1 mesh per animal). Animals were randomised to four groups: onlay and IPOM sutured vs. sealed. Follow-up was 6 weeks, sutured groups serving as controls. Evaluation criteria were mesh dislocation, adhesions and foreign body reaction. FS provided a reliable fixation in onlay technique, whereas OFM meshes dislocated in the IPOM position when sealed only. FS mesh fixation was safe with OFM meshes in open onlay repair. Intraperitoneal placement of hydrophobic meshes requires additional fixation and cannot be achieved with FS alone.

  11. Mesh Generation via Local Bisection Refinement of Triangulated Grids

    DTIC Science & Technology

    2015-06-01

    This report provides a comprehensive implementation of an unstructured mesh generation method... their behaviour is critically linked to Maubach's method and the data structures N and T. The top-level mesh refinement algorithm is also presented

  12. On the Mixing of Single and Opposed Rows of Jets With a Confined Crossflow

    NASA Technical Reports Server (NTRS)

    Holdeman, James D.; Clisset, James R.; Moder, Jeffrey P.; Lear, William E.

    2006-01-01

    The primary objectives of this study were 1) to demonstrate that contour plots could be made using the data interface in the NASA GRC jet-in-crossflow (JIC) spreadsheet, and 2) to investigate the suitability of using superposition for the case of opposed rows of jets with their centerlines in-line. The current report is similar to NASA/TM-2005-213137 but the "basic" effects of a confined JIC that are shown in profile plots there are shown as contour plots in this report, and profile plots for opposed rows of aligned jets are presented here using both symmetry and superposition models. Although superposition was found to be suitable for most cases of opposed rows of jets with jet centerlines in-line, the calculation procedure in the JIC spreadsheet was not changed and it still uses the symmetry method for this case, as did all previous publications of the NASA empirical model.

  13. Bayesian segmentation of atrium wall using globally-optimal graph cuts on 3D meshes.

    PubMed

    Veni, Gopalkrishna; Fu, Zhisong; Awate, Suyash P; Whitaker, Ross T

    2013-01-01

    Efficient segmentation of the left atrium (LA) wall from delayed enhancement MRI is challenging due to inconsistent contrast, combined with noise, and high variation in atrial shape and size. We present a surface-detection method that is capable of extracting the atrial wall by computing an optimal a-posteriori estimate. This estimation is done on a set of nested meshes, constructed from an ensemble of segmented training images, and graph cuts on an associated multi-column, proper-ordered graph. The graph/mesh is a part of a template/model that has an associated set of learned intensity features. When this mesh is overlaid onto a test image, it produces a set of costs which lead to an optimal segmentation. The 3D mesh has an associated weighted, directed multi-column graph with edges that encode smoothness and inter-surface penalties. Unlike previous graph-cut methods that impose hard constraints on the surface properties, the proposed method follows from a Bayesian formulation resulting in soft penalties on spatial variation of the cuts through the mesh. The novelty of this method also lies in the construction of proper-ordered graphs on complex shapes for choosing among distinct classes of base shapes for automatic LA segmentation. We evaluate the proposed segmentation framework on simulated and clinical cardiac MRI.

  14. Driving ferromagnetic resonance frequency of FeCoB/PZN-PT multiferroic heterostructures to Ku-band via two-step climbing: composition gradient sputtering and magnetoelectric coupling

    PubMed Central

    Li, Shandong; Xue, Qian; Duh, Jenq-Gong; Du, Honglei; Xu, Jie; Wan, Yong; Li, Qiang; Lü, Yueguang

    2014-01-01

    RF/microwave soft magnetic films (SMFs) are key materials for miniaturization and multifunctionalization of monolithic microwave integrated circuits (MMICs) and their components, which demand that the SMFs should have higher self-bias ferromagnetic resonance frequency fFMR, and can be fabricated in an IC compatible process. However, self-biased metallic SMFs working at X-band or higher frequency were rarely reported, even though there are urgent demands. In this paper, we report an IC compatible process with two-step superposition to prepare SMFs, where the FeCoB SMFs were deposited on (011) lead zinc niobate–lead titanate substrates using a composition gradient sputtering method. As a result, a giant magnetic anisotropy field of 1498 Oe, 1–2 orders of magnitude larger than that by conventional magnetic annealing method, and an ultrahigh fFMR of up to 12.96 GHz reaching Ku-band, were obtained at zero magnetic bias field in the as-deposited films. These ultrahigh microwave performances can be attributed to the superposition of two effects: uniaxial stress induced by composition gradient and magnetoelectric coupling. This two-step superposition method paves a way for SMFs to surpass X-band by two-step or multi-step, where a variety of magnetic anisotropy field enhancing methods can be cumulated together to get higher ferromagnetic resonance frequency. PMID:25491374

  15. DeepMeSH: deep semantic representation for improving large-scale MeSH indexing

    PubMed Central

    Peng, Shengwen; You, Ronghui; Wang, Hongning; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2016-01-01

    Motivation: Medical Subject Headings (MeSH) indexing, which is to assign a set of MeSH main headings to citations, is crucial for many important tasks in biomedical text mining and information retrieval. Large-scale MeSH indexing has two challenging aspects: the citation side and MeSH side. For the citation side, all existing methods, including Medical Text Indexer (MTI) by National Library of Medicine and the state-of-the-art method, MeSHLabeler, deal with text by bag-of-words, which cannot capture semantic and context-dependent information well. Methods: We propose DeepMeSH that incorporates deep semantic information for large-scale MeSH indexing. It addresses the two challenges in both citation and MeSH sides. The citation side challenge is solved by a new deep semantic representation, D2V-TFIDF, which concatenates both sparse and dense semantic representations. The MeSH side challenge is solved by using the ‘learning to rank’ framework of MeSHLabeler, which integrates various types of evidence generated from the new semantic representation. Results: DeepMeSH achieved a Micro F-measure of 0.6323, 2% higher than 0.6218 of MeSHLabeler and 12% higher than 0.5637 of MTI, for BioASQ3 challenge data with 6000 citations. Availability and Implementation: The software is available upon request. Contact: zhusf@fudan.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307646
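    The citation-side idea of concatenating a sparse and a dense representation can be sketched as follows. This is only an illustration of the concatenation step: TruncatedSVD is used here as a stand-in for the doc2vec component of D2V-TFIDF, and the toy citations are invented.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

citations = [
    "myocardial infarction treated with aspirin",
    "insulin therapy in type 2 diabetes mellitus",
    "ischemic stroke thrombolysis outcomes",
    "glucose monitoring and glycemic control",
]

# Sparse representation: TF-IDF over the citation text.
tfidf = TfidfVectorizer()
sparse_part = tfidf.fit_transform(citations)

# Dense representation: TruncatedSVD here only stands in for the doc2vec
# embedding used by D2V-TFIDF in the paper.
dense_part = TruncatedSVD(n_components=2, random_state=0).fit_transform(sparse_part)

# Concatenate dense and (densified) sparse features per citation.
combined = np.hstack([dense_part, sparse_part.toarray()])
print(combined.shape)
```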

  16. Cycle/Cocycle Oblique Projections on Oriented Graphs

    NASA Astrophysics Data System (ADS)

    Polettini, Matteo

    2015-01-01

    It is well known that the edge vector space of an oriented graph can be decomposed in terms of cycles and cocycles (alias cuts, or bonds), and that a basis for the cycle and the cocycle spaces can be generated by adding and removing edges to an arbitrarily chosen spanning tree. In this paper, we show that the edge vector space can also be decomposed in terms of cycles and the generating edges of cocycles (called cochords), or of cocycles and the generating edges of cycles (called chords). From this observation follows a construction in terms of oblique complementary projection operators. We employ this algebraic construction to prove several properties of unweighted Kirchhoff-Symanzik matrices, encoding the mutual superposition between cycles and cocycles. In particular, we prove that dual matrices of planar graphs have the same spectrum (up to multiplicities). We briefly comment on how this construction provides a refined formalization of Kirchhoff's mesh analysis of electrical circuits, which has lately been applied to generic thermodynamic networks.
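    To make the chord/cycle relationship concrete, the small networkx sketch below picks a spanning tree, identifies the chords (edges outside the tree) and closes each chord into its fundamental cycle; the toy graph and the use of networkx are illustrative choices, not part of the paper.

```python
import networkx as nx

# Small unweighted graph: a square plus one diagonal, giving two independent cycles.
G = nx.cycle_graph(4)          # 0-1-2-3-0
G.add_edge(0, 2)               # a chord-to-be

T = nx.minimum_spanning_tree(G)                      # an arbitrary spanning tree
chords = [e for e in G.edges if not T.has_edge(*e)]  # generating edges of cycles

fundamental_cycles = []
for u, v in chords:
    path = nx.shortest_path(T, u, v)   # tree path closed by the chord
    fundamental_cycles.append(path)

print(chords)
print(fundamental_cycles)
```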

  17. Adaptive mesh refinement techniques for the immersed interface method applied to flow problems

    PubMed Central

    Li, Zhilin; Song, Peng

    2013-01-01

    In this paper, we develop an adaptive mesh refinement strategy of the Immersed Interface Method for flow problems with a moving interface. The work is built on the AMR method developed for two-dimensional elliptic interface problems in the paper [12] (CiCP, 12(2012), 515–527). The interface is captured by the zero level set of a Lipschitz continuous function φ(x, y, t). Our adaptive mesh refinement is built within a small band of |φ(x, y, t)| ≤ δ with finer Cartesian meshes. The AMR-IIM is validated for Stokes and Navier-Stokes equations with exact solutions, moving interfaces driven by the surface tension, and classical bubble deformation problems. A new simple area preserving strategy is also proposed in this paper for the level set method. PMID:23794763
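    The refinement criterion described above, a band |φ(x, y, t)| ≤ δ around the zero level set, is easy to illustrate; the circular level-set function, grid size and δ below are invented placeholders rather than values from the paper.

```python
import numpy as np

# Level set of a circular interface on a coarse Cartesian grid (toy example).
n, delta = 64, 0.08
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.hypot(X - 0.1, Y) - 0.5          # zero level set = the moving interface

# Cells inside the band |phi| <= delta would receive the finer Cartesian mesh.
refine = np.abs(phi) <= delta
print(f"{refine.sum()} of {n*n} cells flagged for refinement")
```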

  18. Diffuse interface simulation of bubble rising process: a comparison of adaptive mesh refinement and arbitrary lagrange-euler methods

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Cai, Jiejin; Li, Qiong; Yin, Huaqiang; Yang, Xingtuan

    2018-06-01

    Gas-liquid two-phase flow exists in several industrial processes and light-water reactors (LWRs). A diffuse-interface-based finite element method with two different mesh generation methods, namely the Adaptive Mesh Refinement (AMR) and the Arbitrary Lagrange Euler (ALE) methods, is used to model the shape and velocity changes of a rising bubble. Moreover, the calculation speed and mesh generation strategies of AMR and ALE are contrasted. The simulation results agree with Bhagat's experiments, indicating that both mesh generation methods can simulate the characteristics of the bubble accurately. We conclude that a small bubble rises in an elliptical shape with oscillation, whereas a larger bubble (7 mm < d < 11 mm) rises with a morphology between the elliptical and cap types and with a larger oscillation. When the bubble is large (d > 11 mm), it rises as a cap type, and the oscillation amplitude becomes smaller. Moreover, it takes longer to reach the stable shape, evolving from ellipsoid to spherical cap, as the bubble diameter increases. The results also show that for the smaller-diameter cases the ALE method uses fewer grid cells and has a faster calculation speed, whereas the AMR method can handle cases with large geometric deformation efficiently.

  19. A Domain-Decomposed Multilevel Method for Adaptively Refined Cartesian Grids with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Adomavicius, G.

    2000-01-01

    Preliminary verification and validation of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries is presented. The parallel, multilevel method makes use of a new on-the-fly parallel domain decomposition strategy based upon the use of space-filling curves, and automatically generates a sequence of coarse meshes for processing by the multigrid smoother. The coarse mesh generation algorithm produces grids which completely cover the computational domain at every level in the mesh hierarchy. A series of examples on realistically complex three-dimensional configurations demonstrate that this new coarsening algorithm reliably achieves mesh coarsening ratios in excess of 7 on adaptively refined meshes. Numerical investigations of the scheme's local truncation error demonstrate an achieved order of accuracy between 1.82 and 1.88. Convergence results for the multigrid scheme are presented for both subsonic and transonic test cases and demonstrate W-cycle multigrid convergence rates between 0.84 and 0.94. Preliminary parallel scalability tests on both simple wing and complex complete aircraft geometries show a computational speedup of 52 on 64 processors using the run-time mesh partitioner.
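    Space-filling-curve partitioning of the kind mentioned above can be sketched with a 2D Morton (Z-order) key: interleave the bits of each cell's indices, sort the cells along the curve, and hand each processor a contiguous chunk. The uniform 16x16 cell list, bit count and processor count below are illustrative assumptions, not details of the solver.

```python
import numpy as np

def morton2d(i, j, bits=10):
    """Interleave the bits of (i, j) to obtain a 2D Morton (Z-order) key."""
    key = 0
    for b in range(bits):
        key |= ((i >> b) & 1) << (2 * b) | ((j >> b) & 1) << (2 * b + 1)
    return key

# Cells of a small Cartesian mesh; a real adapted mesh would list leaf cells.
cells = [(i, j) for i in range(16) for j in range(16)]
keys = np.array([morton2d(i, j) for i, j in cells])

# Sort along the space-filling curve and hand out equal contiguous chunks.
order = np.argsort(keys)
n_proc = 4
partitions = np.array_split(order, n_proc)
print([len(p) for p in partitions])   # cells assigned to each processor
```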

  20. Quality Tetrahedral Mesh Smoothing via Boundary-Optimized Delaunay Triangulation

    PubMed Central

    Gao, Zhanheng; Yu, Zeyun; Holst, Michael

    2012-01-01

    Despite its great success in improving the quality of a tetrahedral mesh, the original optimal Delaunay triangulation (ODT) is designed to move only inner vertices and thus cannot handle input meshes containing “bad” triangles on boundaries. In the current work, we present an integrated approach called boundary-optimized Delaunay triangulation (B-ODT) to smooth (improve) a tetrahedral mesh. In our method, both inner and boundary vertices are repositioned by analytically minimizing the error between a paraboloid function and its piecewise linear interpolation over the neighborhood of each vertex. In addition to the guaranteed volume-preserving property, the proposed algorithm can be readily adapted to preserve sharp features in the original mesh. A number of experiments are included to demonstrate the performance of our method. PMID:23144522

  1. Coherent Leinard-Wiechert fields produced by FELs (free-electron laser). Technical report, 14 January 1981-13 January 1982

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elias, L.R.

    1981-12-01

    Results are presented of a three-dimensional numerical analysis of the radiation fields produced in a free-electron laser. The method used here to obtain the spatial and temporal behavior of the radiated fields is based on the coherent superposition of the exact Lienard-Wiechert fields produced by each electron in the beam. Interference effects are responsible for the narrow angular radiation patterns obtained and for the high degree of monochromaticity of the radiated fields.

  2. Reversible switch between underwater superaerophilicity and superaerophobicity on the superhydrophobic nanowire-haired mesh for controlling underwater bubble wettability

    NASA Astrophysics Data System (ADS)

    Shan, Chao; Yong, Jiale; Yang, Qing; Chen, Feng; Huo, Jinglan; Zhuang, Jian; Jiang, Zhuangde; Hou, Xun

    2018-04-01

    Controlling the underwater bubble wettability of a solid surface is of great research significance. In this letter, a simple method to achieve a reversible switch between underwater superaerophilicity and underwater superaerophobicity on a superhydrophobic nanowire-haired mesh, by alternately applying a vacuumizing treatment in water and drying in air, is reported. This reversible switch endows the as-prepared mesh with many functional applications in controlling bubble behavior on a solid substrate. The underwater superaerophilic mesh is able to absorb/capture bubbles in water, while the superaerophobic mesh has great anti-bubble ability. The reversible switch between underwater superaerophilicity and superaerophobicity can selectively allow bubbles to pass through the resultant mesh; that is, bubbles can pass through the underwater superaerophilic mesh while they are fully intercepted by the underwater superaerophobic mesh in a water medium. We believe these meshes will have important applications in removing or capturing underwater bubbles/gas.

  3. Moving mesh finite element simulation for phase-field modeling of brittle fracture and convergence of Newton's iteration

    NASA Astrophysics Data System (ADS)

    Zhang, Fei; Huang, Weizhang; Li, Xianping; Zhang, Shicheng

    2018-03-01

    A moving mesh finite element method is studied for the numerical solution of a phase-field model for brittle fracture. The moving mesh partial differential equation approach is employed to dynamically track crack propagation. Meanwhile, the decomposition of the strain tensor into tensile and compressive components is essential for the success of the phase-field modeling of brittle fracture but results in a non-smooth elastic energy and stronger nonlinearity in the governing equation. This makes the governing equation much more difficult to solve and, in particular, Newton's iteration often fails to converge. Three regularization methods are proposed to smooth out the decomposition of the strain tensor. Numerical examples of fracture propagation under quasi-static load demonstrate that all of the methods can effectively improve the convergence of Newton's iteration for relatively small values of the regularization parameter but without compromising the accuracy of the numerical solution. They also show that the moving mesh finite element method is able to adaptively concentrate the mesh elements around propagating cracks and handle multiple and complex crack systems.

  4. Auto-adaptive finite element meshes

    NASA Technical Reports Server (NTRS)

    Richter, Roland; Leyland, Penelope

    1995-01-01

    Accurate capturing of discontinuities within compressible flow computations is achieved by coupling a suitable solver with an automatic adaptive mesh algorithm for unstructured triangular meshes. The mesh adaptation procedures developed rely on non-hierarchical dynamical local refinement/derefinement techniques, which hence enable structural as well as geometrical optimization. The methods described are applied to a number of ICASE test cases that are particularly interesting for unsteady flow simulations.

  5. A 3D front tracking method on a CPU/GPU system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bo, Wurigen; Grove, John

    2011-01-21

    We describe the method to port a sequential 3D interface tracking code to a GPU with CUDA. The interface is represented as a triangular mesh. Interface geometry properties and point propagation are performed on a GPU. Interface mesh adaptation is performed on a CPU. The convergence of the method is assessed from the test problems with given velocity fields. Performance results show overall speedups from 11 to 14 for the test problems under mesh refinement. We also briefly describe our ongoing work to couple the interface tracking method with a hydro solver.

  6. A new class of accurate, mesh-free hydrodynamic simulation methods

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2015-06-01

    We present two new Lagrangian methods for hydrodynamics, in a systematic comparison with moving-mesh, smoothed particle hydrodynamics (SPH), and stationary (non-moving) grid methods. The new methods are designed to simultaneously capture advantages of both SPH and grid-based/adaptive mesh refinement (AMR) schemes. They are based on a kernel discretization of the volume coupled to a high-order matrix gradient estimator and a Riemann solver acting over the volume `overlap'. We implement and test a parallel, second-order version of the method with self-gravity and cosmological integration, in the code GIZMO:1 this maintains exact mass, energy and momentum conservation; exhibits superior angular momentum conservation compared to all other methods we study; does not require `artificial diffusion' terms; and allows the fluid elements to move with the flow, so resolution is automatically adaptive. We consider a large suite of test problems, and find that on all problems the new methods appear competitive with moving-mesh schemes, with some advantages (particularly in angular momentum conservation), at the cost of enhanced noise. The new methods have many advantages versus SPH: proper convergence, good capturing of fluid-mixing instabilities, dramatically reduced `particle noise' and numerical viscosity, more accurate sub-sonic flow evolution, and sharp shock-capturing. Advantages versus non-moving meshes include: automatic adaptivity, dramatically reduced advection errors and numerical overmixing, velocity-independent errors, accurate coupling to gravity, good angular momentum conservation and elimination of `grid alignment' effects. We can, for example, follow hundreds of orbits of gaseous discs, while AMR and SPH methods break down in a few orbits. However, fixed meshes minimize `grid noise'. These differences are important for a range of astrophysical problems.

  7. MeSHLabeler: improving the accuracy of large-scale MeSH indexing by integrating diverse evidence

    PubMed Central

    Liu, Ke; Peng, Shengwen; Wu, Junqiu; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2015-01-01

    Motivation: Medical Subject Headings (MeSHs) are used by National Library of Medicine (NLM) to index almost all citations in MEDLINE, which greatly facilitates the applications of biomedical information retrieval and text mining. To reduce the time and financial cost of manual annotation, NLM has developed a software package, Medical Text Indexer (MTI), for assisting MeSH annotation, which uses k-nearest neighbors (KNN), pattern matching and indexing rules. Other types of information, such as prediction by MeSH classifiers (trained separately), can also be used for automatic MeSH annotation. However, existing methods cannot effectively integrate multiple evidence for MeSH annotation. Methods: We propose a novel framework, MeSHLabeler, to integrate multiple evidence for accurate MeSH annotation by using ‘learning to rank’. Evidence includes numerous predictions from MeSH classifiers, KNN, pattern matching, MTI and the correlation between different MeSH terms, etc. Each MeSH classifier is trained independently, and thus prediction scores from different classifiers are incomparable. To address this issue, we have developed an effective score normalization procedure to improve the prediction accuracy. Results: MeSHLabeler won the first place in Task 2A of 2014 BioASQ challenge, achieving the Micro F-measure of 0.6248 for 9,040 citations provided by the BioASQ challenge. Note that this accuracy is around 9.15% higher than 0.5724, obtained by MTI. Availability and implementation: The software is available upon request. Contact: zhusf@fudan.edu.cn PMID:26072501

  8. A comparative study of an ABC and an artificial absorber for truncating finite element meshes

    NASA Technical Reports Server (NTRS)

    Oezdemir, T.; Volakis, John L.

    1993-01-01

    The type of mesh termination used in the context of finite element formulations plays a major role on the efficiency and accuracy of the field solution. The performance of an absorbing boundary condition (ABC) and an artificial absorber (a new concept) for terminating the finite element mesh was evaluated. This analysis is done in connection with the problem of scattering by a finite slot array in a thick ground plane. The two approximate mesh truncation schemes are compared with the exact finite element-boundary integral (FEM-BI) method in terms of accuracy and efficiency. It is demonstrated that both approximate truncation schemes yield reasonably accurate results even when the mesh is extended only 0.3 wavelengths away from the array aperture. However, the artificial absorber termination method leads to a substantially more efficient solution. Moreover, it is shown that the FEM-BI method remains quite competitive with the FEM-artificial absorber method when the FFT is used for computing the matrix-vector products in the iterative solution algorithm. These conclusions are indeed surprising and of major importance in electromagnetic simulations based on the finite element method.

  9. Meshes optimized for discrete exterior calculus (DEC).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mousley, Sarah C.; Deakin, Michael; Knupp, Patrick

    We study the optimization of an energy function used by the meshing community to measure and improve mesh quality. This energy is non-traditional because it is dependent on both the primal triangulation and its dual Voronoi (power) diagram. The energy is a measure of the mesh's quality for usage in Discrete Exterior Calculus (DEC), a method for numerically solving PDEs. In DEC, the PDE domain is triangulated and this mesh is used to obtain discrete approximations of the continuous operators in the PDE. The energy of a mesh gives an upper bound on the error of the discrete diagonal approximation of the Hodge star operator. In practice, one begins with an initial mesh and then makes adjustments to produce a mesh of lower energy. However, we have discovered several shortcomings in directly optimizing this energy, e.g. its non-convexity, and we show that the search for an optimized mesh may lead to mesh inversion (malformed triangles). We propose a new energy function to address some of these issues.

  10. Macroscopicity of quantum superpositions on a one-parameter unitary path in Hilbert space

    NASA Astrophysics Data System (ADS)

    Volkoff, T. J.; Whaley, K. B.

    2014-12-01

    We analyze quantum states formed as superpositions of an initial pure product state and its image under local unitary evolution, using two measurement-based measures of superposition size: one based on the optimal quantum binary distinguishability of the branches of the superposition and another based on the ratio of the maximal quantum Fisher information of the superposition to that of its branches, i.e., the relative metrological usefulness of the superposition. A general formula for the effective sizes of these states according to the branch-distinguishability measure is obtained and applied to superposition states of N quantum harmonic oscillators composed of Gaussian branches. Considering optimal distinguishability of pure states on a time-evolution path leads naturally to a notion of distinguishability time that generalizes the well-known orthogonalization times of Mandelstam and Tamm and Margolus and Levitin. We further show that the distinguishability time provides a compact operational expression for the superposition size measure based on the relative quantum Fisher information. By restricting the maximization procedure in the definition of this measure to an appropriate algebra of observables, we show that the superposition size of, e.g., NOON states and hierarchical cat states, can scale linearly with the number of elementary particles comprising the superposition state, implying precision scaling inversely with the total number of photons when these states are employed as probes in quantum parameter estimation of a 1-local Hamiltonian in this algebra.

  11. Improved ALE mesh velocities for complex flows

    DOE PAGES

    Bakosi, Jozsef; Waltz, Jacob I.; Morgan, Nathaniel Ray

    2017-05-31

    A key choice in the development of arbitrary Lagrangian-Eulerian solution algorithms is how to move the computational mesh. The most common approaches are smoothing and relaxation techniques, or computing a mesh velocity field that produces smooth mesh displacements. We present a method in which the mesh velocity is specified by the irrotational component of the fluid velocity as computed from a Helmholtz decomposition, and excess compression of mesh cells is treated through a noniterative, local spring-force model. This approach allows distinct and separate control over rotational and translational modes. In conclusion, the utility of the new mesh motion algorithm is demonstrated on a number of 3D test problems, including problems that involve both shocks and significant amounts of vorticity.
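
    A minimal sketch of the central step, extracting the irrotational component of the flow for use as a mesh velocity, is given below using a spectral Helmholtz projection of a periodic 2D field; the periodic FFT setting and the test field are simplifying assumptions, not the ALE scheme described in the paper.

```python
import numpy as np

def irrotational_component(u, v, dx):
    """Spectral Helmholtz projection: keep the curl-free part of a periodic 2D velocity field."""
    n = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                                  # avoid division by zero at the mean mode
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = kx * uh + ky * vh                       # the i-factors cancel in the projection
    ui_h, vi_h = kx * div_h / k2, ky * div_h / k2
    ui_h[0, 0], vi_h[0, 0] = uh[0, 0], vh[0, 0]     # keep the mean (translation) mode
    return np.real(np.fft.ifft2(ui_h)), np.real(np.fft.ifft2(vi_h))

# A purely rotational (divergence-free) test field should yield a (nearly) zero mesh velocity.
n, L = 64, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = -np.sin(X) * np.cos(Y), np.cos(X) * np.sin(Y)
ui, vi = irrotational_component(u, v, L / n)
print(np.max(np.abs(ui)), np.max(np.abs(vi)))       # ~0
```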

  12. Superhydrophobic hierarchical structure carbon mesh films for oil/water separation application

    NASA Astrophysics Data System (ADS)

    Lu, Zhaoxia; Huang, Xing; Wang, Lisheng

    2017-08-01

    In this study, we showed that a superhydrophobic mesh with self-cleaning ability could be readily prepared by a facile spray-coating method on stainless steel mesh. Poly(methyl methacrylate) was employed to provide stable bonding strength between the carbon nanotubes and the steel mesh surface. The effect of the opening size of these steel meshes on surface wetting has been investigated. The dynamics of liquid droplets was investigated as well. The as-prepared meshes exhibited both superhydrophobicity and superoleophilicity and could effectively separate water from the oil and water mixture. The present study contributes to the development of oil and water separation materials for marine industrial application.

  13. Automatic partitioning of unstructured meshes for the parallel solution of problems in computational mechanics

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Lesoinne, Michel

    1993-01-01

    Most of the recently proposed computational methods for solving partial differential equations on multiprocessor architectures stem from the 'divide and conquer' paradigm and involve some form of domain decomposition. For those methods which also require grids of points or patches of elements, it is often necessary to explicitly partition the underlying mesh, especially when working with local memory parallel processors. In this paper, a family of cost-effective algorithms for the automatic partitioning of arbitrary two- and three-dimensional finite element and finite difference meshes is presented and discussed in view of a domain decomposed solution procedure and parallel processing. The influence of the algorithmic aspects of a solution method (implicit/explicit computations), and the architectural specifics of a multiprocessor (SIMD/MIMD, startup/transmission time), on the design of a mesh partitioning algorithm are discussed. The impact of the partitioning strategy on load balancing, operation count, operator conditioning, rate of convergence and processor mapping is also addressed. Finally, the proposed mesh decomposition algorithms are demonstrated with realistic examples of finite element, finite volume, and finite difference meshes associated with the parallel solution of solid and fluid mechanics problems on the iPSC/2 and iPSC/860 multiprocessors.
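
    As a concrete, much simpler, example of geometric mesh partitioning in the spirit discussed above, the sketch below implements recursive coordinate bisection on element centroids; this is a classical strategy offered for illustration, not one of the specific algorithms proposed in the paper.

```python
import numpy as np

def recursive_coordinate_bisection(centroids, n_parts):
    """Recursively split element centroids along the longest coordinate axis into n_parts pieces."""
    parts = np.zeros(len(centroids), dtype=int)

    def split(idx, first_part, count):
        if count == 1:
            parts[idx] = first_part
            return
        pts = centroids[idx]
        axis = np.argmax(pts.max(axis=0) - pts.min(axis=0))   # axis of longest extent
        order = idx[np.argsort(pts[:, axis])]
        left = count // 2
        cut = len(order) * left // count                       # balance element counts
        split(order[:cut], first_part, left)
        split(order[cut:], first_part + left, count - left)

    split(np.arange(len(centroids)), 0, n_parts)
    return parts

# 2D element centroids partitioned into 4 subdomains of (nearly) equal size.
rng = np.random.default_rng(0)
c = rng.random((1000, 2))
p = recursive_coordinate_bisection(c, 4)
print(np.bincount(p))   # roughly [250, 250, 250, 250]
```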

  14. Design and simulation of origami structures with smooth folds

    PubMed Central

    Peraza Hernandez, E. A.; Lagoudas, D. C.

    2017-01-01

    Origami has enabled new approaches to the fabrication and functionality of multiple structures. Current methods for origami design are restricted to the idealization of folds as creases of zeroth-order geometric continuity. Such an idealization is not proper for origami structures of non-negligible fold thickness or maximum curvature at the folds restricted by material limitations. For such structures, folds are not properly represented as creases but rather as bent regions of higher-order geometric continuity. Such fold regions of arbitrary order of continuity are termed as smooth folds. This paper presents a method for solving the following origami design problem: given a goal shape represented as a polygonal mesh (termed as the goal mesh), find the geometry of a single planar sheet, its pattern of smooth folds, and the history of folding motion allowing the sheet to approximate the goal mesh. The parametrization of the planar sheet and the constraints that allow for a valid pattern of smooth folds are presented. The method is tested against various goal meshes having diverse geometries. The results show that every determined sheet approximates its corresponding goal mesh in a known folded configuration having fold angles obtained from the geometry of the goal mesh. PMID:28484322

  15. Biomechanics Simulations Using Cubic Hermite Meshes with Extraordinary Nodes for Isogeometric Cardiac Modeling

    PubMed Central

    Gonzales, Matthew J.; Sturgeon, Gregory; Segars, W. Paul; McCulloch, Andrew D.

    2016-01-01

    Cubic Hermite hexahedral finite element meshes have some well-known advantages over linear tetrahedral finite element meshes in biomechanical and anatomic modeling using isogeometric analysis. These include faster convergence rates as well as the ability to easily model rule-based anatomic features such as cardiac fiber directions. However, it is not possible to create closed complex objects with only regular nodes; these objects require the presence of extraordinary nodes (nodes with 3 or >= 5 adjacent elements in 2D) in the mesh. The presence of extraordinary nodes requires new constraints on the derivatives of adjacent elements to maintain continuity. We have developed a new method that uses an ensemble coordinate frame at the nodes and a local-to-global mapping to maintain continuity. In this paper, we make use of this mapping to create cubic Hermite models of the human ventricles and a four-chamber heart. We also extend the methods to the finite element equations to perform biomechanics simulations using these meshes. The new methods are validated using simple test models and applied to anatomically accurate ventricular meshes with valve annuli to simulate complete cardiac cycle simulations. PMID:27182096

  16. Design and simulation of origami structures with smooth folds.

    PubMed

    Peraza Hernandez, E A; Hartl, D J; Lagoudas, D C

    2017-04-01

    Origami has enabled new approaches to the fabrication and functionality of multiple structures. Current methods for origami design are restricted to the idealization of folds as creases of zeroth-order geometric continuity. Such an idealization is not proper for origami structures of non-negligible fold thickness or maximum curvature at the folds restricted by material limitations. For such structures, folds are not properly represented as creases but rather as bent regions of higher-order geometric continuity. Such fold regions of arbitrary order of continuity are termed as smooth folds. This paper presents a method for solving the following origami design problem: given a goal shape represented as a polygonal mesh (termed as the goal mesh), find the geometry of a single planar sheet, its pattern of smooth folds, and the history of folding motion allowing the sheet to approximate the goal mesh. The parametrization of the planar sheet and the constraints that allow for a valid pattern of smooth folds are presented. The method is tested against various goal meshes having diverse geometries. The results show that every determined sheet approximates its corresponding goal mesh in a known folded configuration having fold angles obtained from the geometry of the goal mesh.

  17. Triode carbon nanotube field emission display using barrier rib structure and manufacturing method thereof

    DOEpatents

    Han, In-taek; Kim, Jong-min

    2003-01-01

    A triode carbon nanotube field emission display (FED) using a barrier rib structure and a manufacturing method thereof are provided. In a triode carbon nanotube FED employing barrier ribs, barrier ribs are formed on cathode lines by a screen printing method, a mesh structure is mounted on the barrier ribs, and a spacer is inserted between the barrier ribs through slots of the mesh structure, thereby stably fixing the mesh structure and the spacer within a FED panel due to support by the barrier ribs.

  18. Changes in pelvic organ prolapse mesh mechanical properties following implantation in rats.

    PubMed

    Ulrich, Daniela; Edwards, Sharon L; Alexander, David L J; Rosamilia, Anna; Werkmeister, Jerome A; Gargett, Caroline E; Letouzey, Vincent

    2016-02-01

    Pelvic organ prolapse (POP) is a multifactorial disease that manifests as the herniation of the pelvic organs into the vagina. Surgical methods for prolapse repair involve the use of a synthetic polypropylene mesh. The use of this mesh has led to significantly higher anatomical success rates compared with native tissue repairs, and therefore, despite recent warnings by the Food and Drug Administration regarding the use of vaginal mesh, the number of POP mesh surgeries has increased over the last few years. However, mesh implantation is associated with higher postsurgery complications, including pain and erosion, with higher consecutive rates of reoperation when placed vaginally. Little is known on how the mechanical properties of the implanted mesh itself change in vivo. It is assumed that the mechanical properties of these meshes remain unchanged, with any differences in mechanical properties of the formed mesh-tissue complex attributed to the attached tissue alone. It is likely that any changes in mesh mechanical properties that do occur in vivo will have an impact on the biomechanical properties of the formed mesh-tissue complex. The objective of the study was to assess changes in the multiaxial mechanical properties of synthetic clinical prolapse meshes implanted abdominally for up to 90 days, using a rat model. Another objective of the study was to assess the biomechanical properties of the formed mesh-tissue complex following implantation. Three nondegradable polypropylene clinical synthetic mesh types for prolapse repair (Gynemesh PS, Polyform Lite, and Restorelle) and a partially degradable polypropylene/polyglecaprone mesh (UltraPro) were mechanically assessed before and after implantation (n = 5/ mesh type) in Sprague Dawley rats for 30 (Gynemesh PS, Polyform Lite, and Restorelle) and 90 (UltraPro and Polyform Lite) days. Stiffness and permanent extension following cyclic loading, and breaking load, of the preimplanted mesh types, explanted mesh-tissue complexes, and explanted meshes were assessed using a multi-axial (ball-burst) method. The 4 clinical meshes varied from each other in weight, thickness, porosity, and pore size and showed significant differences in stiffness and breaking load before implantation. Following 30 days of implantation, the mechanical properties of some mesh types altered, with significant decreases in mesh stiffness and breaking load, and increased permanent extension. After 90 days these changes were more obvious, with significant decreases in stiffness and breaking load and increased permanent extension. Similar biomechanical properties of formed mesh-tissue complexes were observed for mesh types of different preimplant stiffness and structure after 90 days implantation. This is the first study to report on intrinsic changes in the mechanical properties of implanted meshes and how these changes have an impact on the estimated tissue contribution of the formed mesh-tissue complex. Decreased mesh stiffness, strength, and increased permanent extension following 90 days of implantation increase the biomechanical contribution of the attached tissue of the formed mesh-tissue complex more than previously thought. This needs to be considered when using meshes for prolapse repair. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  19. Robust laser-structured asymmetrical PTFE mesh for underwater directional transportation and continuous collection of gas bubbles

    NASA Astrophysics Data System (ADS)

    Yin, Kai; Yang, Shuai; Dong, Xinran; Chu, Dongkai; Duan, Ji-An; He, Jun

    2018-06-01

    We report a simple, efficient method to fabricate micro/nanoscale hierarchical structures on one side of polytetrafluoroethylene mesh surfaces, using one-step femtosecond laser direct writing technology. The laser-treated surface exhibits superhydrophobicity in air and superaerophilicity in water, resulting in the mesh possessing the hydrophobic/superhydrophobic asymmetrical property. Bubbles can pass through the mesh from the untreated side to the laser-treated side but cannot pass through the mesh in the opposite direction. The asymmetrical mesh can therefore be designed for the directional transportation and continuous collection of gas bubbles in aqueous environments. Furthermore, the asymmetrical mesh shows excellent stability during corrosion and abrasion tests. These findings may provide an efficient route for fabricating a durable asymmetrical mesh for the directional and continuous transport of gas bubbles.

  20. Medical Subject Headings (MeSH) for indexing and retrieving open-source healthcare data.

    PubMed

    Marc, David T; Khairat, Saif S

    2014-01-01

    The US federal government initiated the Open Government Directive where federal agencies are required to publish high value datasets so that they are available to the public. Data.gov and the community site Healthdata.gov were initiated to disperse such datasets. However, data searches and retrieval for these sites are keyword driven and severely limited in performance. The purpose of this paper is to address the issue of extracting relevant open-source data by proposing a method of adopting the MeSH framework for indexing and data retrieval. A pilot study was conducted to compare the performance of traditional keywords to MeSH terms for retrieving relevant open-source datasets related to "mortality". The MeSH framework resulted in greater sensitivity with comparable specificity to the keyword search. MeSH showed promise as a method for indexing and retrieving data, yet future research should conduct a larger scale evaluation of the performance of the MeSH framework for retrieving relevant open-source healthcare datasets.
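
    A minimal sketch of the evaluation described above, computing sensitivity and specificity of a retrieval run against known relevance labels, is shown below; the dataset identifiers and counts are hypothetical.

```python
def retrieval_metrics(retrieved, relevant, corpus_size):
    """Sensitivity (recall) and specificity of a dataset retrieval run against relevance labels."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)
    fn = len(relevant - retrieved)
    fp = len(retrieved - relevant)
    tn = corpus_size - tp - fn - fp
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Hypothetical comparison: MeSH-based vs. keyword-based retrieval of "mortality" datasets.
relevant = {"d1", "d2", "d3", "d4", "d5"}
mesh_hits = {"d1", "d2", "d3", "d4", "d9"}
keyword_hits = {"d1", "d2", "d8", "d9", "d10", "d11"}
print(retrieval_metrics(mesh_hits, relevant, corpus_size=100))
print(retrieval_metrics(keyword_hits, relevant, corpus_size=100))
```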

  1. Semi-regular remeshing based trust region spherical geometry image for 3D deformed mesh used MLWNN

    NASA Astrophysics Data System (ADS)

    Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Ben Amar, Chokri

    2017-03-01

    Triangular surfaces are now widely used for modeling three-dimensional objects. Since these models have very high resolution and the mesh geometry is often very dense, it is necessary to remesh the object to reduce its complexity, and the mesh quality (connectivity regularity) must be improved. In this paper, we review the main semi-regular remeshing methods of the state of the art, given that semi-regular remeshing is mainly relevant for wavelet-based compression. We then present our remeshing method based on trust-region spherical geometry images, which provides a good 3D mesh compression scheme used to deform 3D meshes based on a Multi-library Wavelet Neural Network (MLWNN) structure. Experimental results show that the progressive remeshing algorithm is capable of obtaining more compact representations and semi-regular objects, and yields efficient compression capabilities with a minimal set of features, giving a good 3D deformation scheme.

  2. Communication: Two measures of isochronal superposition

    NASA Astrophysics Data System (ADS)

    Roed, Lisa Anita; Gundermann, Ditte; Dyre, Jeppe C.; Niss, Kristine

    2013-09-01

    A liquid obeys isochronal superposition if its dynamics is invariant along the isochrones in the thermodynamic phase diagram (the curves of constant relaxation time). This paper introduces two quantitative measures of isochronal superposition. The measures are used to test the following six liquids for isochronal superposition: 1,2,6 hexanetriol, glycerol, polyphenyl ether, diethyl phthalate, tetramethyl tetraphenyl trisiloxane, and dibutyl phthalate. The latter four van der Waals liquids obey isochronal superposition to a higher degree than the two hydrogen-bonded liquids. This is a prediction of the isomorph theory, and it confirms findings by other groups.

  3. Communication: Two measures of isochronal superposition.

    PubMed

    Roed, Lisa Anita; Gundermann, Ditte; Dyre, Jeppe C; Niss, Kristine

    2013-09-14

    A liquid obeys isochronal superposition if its dynamics is invariant along the isochrones in the thermodynamic phase diagram (the curves of constant relaxation time). This paper introduces two quantitative measures of isochronal superposition. The measures are used to test the following six liquids for isochronal superposition: 1,2,6 hexanetriol, glycerol, polyphenyl ether, diethyl phthalate, tetramethyl tetraphenyl trisiloxane, and dibutyl phthalate. The latter four van der Waals liquids obey isochronal superposition to a higher degree than the two hydrogen-bonded liquids. This is a prediction of the isomorph theory, and it confirms findings by other groups.

  4. 3D Reconstruction of human bones based on dictionary learning.

    PubMed

    Zhang, Binkai; Wang, Xiang; Liang, Xiao; Zheng, Jinjin

    2017-11-01

    An effective method for reconstructing a 3D model of human bones from computed tomography (CT) image data based on dictionary learning is proposed. In this study, the dictionary comprises the vertices of triangular meshes, and the sparse coefficient matrix indicates the connectivity information. For better reconstruction performance, we proposed a balance coefficient between the approximation and regularisation terms and a method for optimisation. Moreover, we applied a local updating strategy and a mesh-optimisation method to update the dictionary and the sparse matrix, respectively. The two updating steps are iterated alternately until the objective function converges. Thus, a reconstructed mesh could be obtained with high accuracy and regularisation. The experimental results show that the proposed method has the potential to obtain high precision and high-quality triangular meshes for rapid prototyping, medical diagnosis, and tissue engineering. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.

  5. The transmission of stress to grafted bone inside a titanium mesh cage used in anterior column reconstruction after total spondylectomy: a finite-element analysis.

    PubMed

    Akamaru, Tomoyuki; Kawahara, Norio; Sakamoto, Jiro; Yoshida, Akira; Murakami, Hideki; Hato, Taizo; Awamori, Serina; Oda, Juhachi; Tomita, Katsuro

    2005-12-15

    A finite-element study of posterior alone or anterior/posterior combined instrumentation following total spondylectomy and replacement with a titanium mesh cage used as an anterior strut. To compare the effect of posterior instrumentation versus anterior/posterior instrumentation on transmission of the stress to grafted bone inside a titanium mesh cage following total spondylectomy. The most recent reconstruction techniques following total spondylectomy for malignant spinal tumor include a titanium mesh cage filled with autologous bone as an anterior strut. The need for additional anterior instrumentation with posterior pedicle screws and rods is controversial. Transmission of the mechanical stress to grafted bone inside a titanium mesh cage is important for fusion and remodeling. To our knowledge, there are no published reports comparing the load-sharing properties of the different reconstruction methods following total spondylectomy. A 3-dimensional finite-element model of the reconstructed spine (T10-L4) following total spondylectomy at T12 was constructed. A Harms titanium mesh cage (DePuy Spine, Raynham, MA) was positioned as an anterior replacement, and 3 types of the reconstruction methods were compared: (1) multilevel posterior instrumentation (MPI) (i.e., posterior pedicle screws and rods at T10-L2 without anterior instrumentation); (2) MPI with anterior instrumentation (MPAI) (i.e., MPAI [Kaneda SR; DePuy Spine] at T11-L1); and (3) short posterior and anterior instrumentation (SPAI) (i.e., posterior pedicle screws and rods with anterior instrumentation at T11-L1). The mechanical energy stress distribution exerted inside the titanium mesh cage was evaluated and compared by finite-element analysis for the 3 different reconstruction methods. Simulated forces were applied to give axial compression, flexion, extension, and lateral bending. In flexion mode, the energy stress distribution in MPI was higher than 3.0 x 10 MPa in 73.0% of the total volume inside the titanium mesh cage, while 38.0% in MPAI, and 43.3% in SPAI. In axial compression and extension modes, there were no remarkable differences for each reconstruction method. In left-bending mode, there was little stress energy in the cancellous bone inside the titanium mesh cage in MPAI and SPAI. This experiment shows that from the viewpoint of stress shielding, the reconstruction method, using additional anterior instrumentation with posterior pedicle screws (MPAI and SPAI), stress shields the cancellous bone inside the titanium mesh cage to a higher degree than does the system using posterior pedicle screw fixation alone (MPI). Thus, a reconstruction method with no anterior fixation should be better at allowing stress for remodeling of the bone graft inside the titanium mesh cage.

  6. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement.

    PubMed

    Garcia-Cantero, Juan J; Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis

    2017-01-01

    Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes.

  7. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2006-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
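
    In generic discrete-adjoint notation, the idea of replacing per-design-variable linearizations of the mesh movement with a single adjoint solve can be sketched as follows; the symbols are generic stand-ins and the exact formulation in the paper may differ.

```latex
% Generic discrete-adjoint elimination of mesh sensitivities (assumed notation, not the paper's):
\begin{aligned}
&\text{minimize } f(\mathbf{D},\mathbf{Q},\mathbf{X}) \quad \text{s.t.} \quad
  \mathbf{R}(\mathbf{D},\mathbf{Q},\mathbf{X}) = \mathbf{0}, \qquad
  \mathbf{X} = \mathbf{M}\bigl(\mathbf{X}_s(\mathbf{D})\bigr)\ \ \text{(mesh movement)},\\[2pt]
&\text{flow adjoint: } \Bigl(\tfrac{\partial \mathbf{R}}{\partial \mathbf{Q}}\Bigr)^{\!T}\boldsymbol{\Lambda}_Q
  = -\Bigl(\tfrac{\partial f}{\partial \mathbf{Q}}\Bigr)^{\!T}, \qquad
  \text{mesh adjoint: } \boldsymbol{\Lambda}_X
  = \Bigl(\tfrac{\partial f}{\partial \mathbf{X}}\Bigr)^{\!T}
  + \Bigl(\tfrac{\partial \mathbf{R}}{\partial \mathbf{X}}\Bigr)^{\!T}\boldsymbol{\Lambda}_Q,\\[2pt]
&\frac{df}{d\mathbf{D}} = \frac{\partial f}{\partial \mathbf{D}}
  + \boldsymbol{\Lambda}_Q^{T}\,\frac{\partial \mathbf{R}}{\partial \mathbf{D}}
  + \underbrace{\boldsymbol{\Lambda}_X^{T}\,\frac{\partial \mathbf{M}}{\partial \mathbf{X}_s}}_{\text{one transposed mesh-movement solve}}
    \frac{\partial \mathbf{X}_s}{\partial \mathbf{D}} .
\end{aligned}
```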

  8. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2005-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.

  9. Improved scatter correction using adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Sun, M.; Star-Lack, J. M.

    2010-11-01

    Accurate scatter correction is required to produce high-quality reconstructions of x-ray cone-beam computed tomography (CBCT) scans. This paper describes new scatter kernel superposition (SKS) algorithms for deconvolving scatter from projection data. The algorithms are designed to improve upon the conventional approach whose accuracy is limited by the use of symmetric kernels that characterize the scatter properties of uniform slabs. To model scatter transport in more realistic objects, nonstationary kernels, whose shapes adapt to local thickness variations in the projection data, are proposed. Two methods are introduced: (1) adaptive scatter kernel superposition (ASKS) requiring spatial domain convolutions and (2) fast adaptive scatter kernel superposition (fASKS) where, through a linearity approximation, convolution is efficiently performed in Fourier space. The conventional SKS algorithm, ASKS, and fASKS, were tested with Monte Carlo simulations and with phantom data acquired on a table-top CBCT system matching the Varian On-Board Imager (OBI). All three models accounted for scatter point-spread broadening due to object thickening, object edge effects, detector scatter properties and an anti-scatter grid. Hounsfield unit (HU) errors in reconstructions of a large pelvis phantom with a measured maximum scatter-to-primary ratio over 200% were reduced from -90 ± 58 HU (mean ± standard deviation) with no scatter correction to 53 ± 82 HU with SKS, to 19 ± 25 HU with fASKS and to 13 ± 21 HU with ASKS. HU accuracies and measured contrast were similarly improved in reconstructions of a body-sized elliptical Catphan phantom. The results show that the adaptive SKS methods offer significant advantages over the conventional scatter deconvolution technique.
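
    The generic scatter-kernel-superposition idea, estimating scatter as the convolution of a primary-dependent amplitude term with a spatially invariant kernel and removing it iteratively, can be sketched as below; the Gaussian kernel, scalar amplitude and iteration count are illustrative assumptions, not the ASKS/fASKS algorithms.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_kernel(size, sigma):
    """Symmetric 2D Gaussian scatter kernel (a stand-in for measured slab-derived kernels)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax, indexing="ij")
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def correct_scatter(measured, kernel, amplitude=0.3, n_iter=5):
    """Iteratively deconvolve scatter: S = amplitude * (P_est * kernel), P_est = measured - S."""
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = amplitude * fftconvolve(primary, kernel, mode="same")
        primary = measured - scatter
    return np.clip(primary, 0.0, None)

# Synthetic projection: primary signal plus kernel-blurred scatter, then corrected.
rng = np.random.default_rng(1)
primary_true = rng.random((128, 128))
kern = gaussian_kernel(31, sigma=8.0)
measured = primary_true + 0.3 * fftconvolve(primary_true, kern, mode="same")
recovered = correct_scatter(measured, kern)
print(np.max(np.abs(recovered - primary_true)))   # small residual after a few iterations
```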

  10. Effective size of certain macroscopic quantum superpositions.

    PubMed

    Dür, Wolfgang; Simon, Christoph; Cirac, J Ignacio

    2002-11-18

    Several experiments and experimental proposals for the production of macroscopic superpositions naturally lead to states of the general form |φ1⟩^⊗N + |φ2⟩^⊗N, where the number of subsystems N is very large, but the states of the individual subsystems have large overlap, |⟨φ1|φ2⟩|^2 = 1 - ε^2. We propose two different methods for assigning an effective particle number to such states, using ideal Greenberger-Horne-Zeilinger states of the form |0⟩^⊗n + |1⟩^⊗n as a standard of comparison. The two methods are based on decoherence and on a distillation protocol, respectively. Both lead to an effective size n of the order of Nε^2.

  11. RigFit: a new approach to superimposing ligand molecules.

    PubMed

    Lemmen, C; Hiller, C; Lengauer, T

    1998-09-01

    If structural knowledge of a receptor under consideration is lacking, drug design approaches focus on similarity or dissimilarity analysis of putative ligands. In this context the mutual ligand superposition is of utmost importance. Methods that are rapid enough to facilitate interactive usage, that allow the processing of sets of conformers and that enable database screening are of special interest here. The ability to superpose molecular fragments instead of entire molecules has proven to be helpful too. The RIGFIT approach meets these requirements and has several additional advantages. In three distinct test applications, we evaluated how closely we can approximate the observed relative orientation for a set of known crystal structures, we employed RIGFIT as a fragment placement procedure, and we performed a fragment-based database screening. The run time of RIGFIT can be traded off against its accuracy. To be competitive in accuracy with another state-of-the-art alignment tool, with which we compare our method explicitly, computing times of about 6 s per superposition on a common-day workstation are required. If longer run times can be afforded, the accuracy increases significantly. RIGFIT is part of the flexible superposition software FLEXS which can be accessed on the WWW [http://cartan.gmd.de/FlexS].

  12. Advanced 3D mesh manipulation in stereolithographic files and post-print processing for the manufacturing of patient-specific vascular flow phantoms

    NASA Astrophysics Data System (ADS)

    O'Hara, Ryan P.; Chand, Arpita; Vidiyala, Sowmya; Arechavala, Stacie M.; Mitsouras, Dimitrios; Rudin, Stephen; Ionita, Ciprian N.

    2016-03-01

    Complex vascular anatomies can cause the failure of image-guided endovascular procedures. 3D printed patient-specific vascular phantoms provide clinicians and medical device companies the ability to preemptively plan surgical treatments, test the likelihood of device success, and determine potential operative setbacks. This research aims to present advanced mesh manipulation techniques of stereolithographic (STL) files segmented from medical imaging and post-print surface optimization to match physiological vascular flow resistance. For phantom design, we developed three mesh manipulation techniques. The first method allows outlet 3D mesh manipulations to merge superfluous vessels into a single junction, decreasing the number of flow outlets and making it feasible to include smaller vessels. Next we introduced Boolean operations to eliminate the need to manually merge mesh layers and eliminate errors of mesh self-intersections that previously occurred. Finally we optimize support addition to preserve the patient anatomical geometry. For post-print surface optimization, we investigated various solutions and methods to remove support material and smooth the inner vessel surface. Solutions of chloroform, alcohol and sodium hydroxide were used to process various phantoms and hydraulic resistance was measured and compared with values reported in literature. The newly mesh manipulation methods decrease the phantom design time by 30 - 80% and allow for rapid development of accurate vascular models. We have created 3D printed vascular models with vessel diameters less than 0.5 mm. The methods presented in this work could lead to shorter design time for patient specific phantoms and better physiological simulations.

  13. Advanced 3D Mesh Manipulation in Stereolithographic Files and Post-Print Processing for the Manufacturing of Patient-Specific Vascular Flow Phantoms.

    PubMed

    O'Hara, Ryan P; Chand, Arpita; Vidiyala, Sowmya; Arechavala, Stacie M; Mitsouras, Dimitrios; Rudin, Stephen; Ionita, Ciprian N

    2016-02-27

    Complex vascular anatomies can cause the failure of image-guided endovascular procedures. 3D printed patient-specific vascular phantoms provide clinicians and medical device companies the ability to preemptively plan surgical treatments, test the likelihood of device success, and determine potential operative setbacks. This research aims to present advanced mesh manipulation techniques of stereolithographic (STL) files segmented from medical imaging and post-print surface optimization to match physiological vascular flow resistance. For phantom design, we developed three mesh manipulation techniques. The first method allows outlet 3D mesh manipulations to merge superfluous vessels into a single junction, decreasing the number of flow outlets and making it feasible to include smaller vessels. Next we introduced Boolean operations to eliminate the need to manually merge mesh layers and eliminate errors of mesh self-intersections that previously occurred. Finally we optimize support addition to preserve the patient anatomical geometry. For post-print surface optimization, we investigated various solutions and methods to remove support material and smooth the inner vessel surface. Solutions of chloroform, alcohol and sodium hydroxide were used to process various phantoms and hydraulic resistance was measured and compared with values reported in literature. The newly mesh manipulation methods decrease the phantom design time by 30 - 80% and allow for rapid development of accurate vascular models. We have created 3D printed vascular models with vessel diameters less than 0.5 mm. The methods presented in this work could lead to shorter design time for patient specific phantoms and better physiological simulations.

  14. Advanced 3D Mesh Manipulation in Stereolithographic Files and Post-Print Processing for the Manufacturing of Patient-Specific Vascular Flow Phantoms

    PubMed Central

    O’Hara, Ryan P.; Chand, Arpita; Vidiyala, Sowmya; Arechavala, Stacie M.; Mitsouras, Dimitrios; Rudin, Stephen; Ionita, Ciprian N.

    2017-01-01

    Complex vascular anatomies can cause the failure of image-guided endovascular procedures. 3D printed patient-specific vascular phantoms provide clinicians and medical device companies the ability to preemptively plan surgical treatments, test the likelihood of device success, and determine potential operative setbacks. This research aims to present advanced mesh manipulation techniques of stereolithographic (STL) files segmented from medical imaging and post-print surface optimization to match physiological vascular flow resistance. For phantom design, we developed three mesh manipulation techniques. The first method allows outlet 3D mesh manipulations to merge superfluous vessels into a single junction, decreasing the number of flow outlets and making it feasible to include smaller vessels. Next we introduced Boolean operations to eliminate the need to manually merge mesh layers and eliminate errors of mesh self-intersections that previously occurred. Finally we optimize support addition to preserve the patient anatomical geometry. For post-print surface optimization, we investigated various solutions and methods to remove support material and smooth the inner vessel surface. Solutions of chloroform, alcohol and sodium hydroxide were used to process various phantoms and hydraulic resistance was measured and compared with values reported in literature. The newly mesh manipulation methods decrease the phantom design time by 30 – 80% and allow for rapid development of accurate vascular models. We have created 3D printed vascular models with vessel diameters less than 0.5 mm. The methods presented in this work could lead to shorter design time for patient specific phantoms and better physiological simulations. PMID:28649165

  15. A third-order moving mesh cell-centered scheme for one-dimensional elastic-plastic flows

    NASA Astrophysics Data System (ADS)

    Cheng, Jun-Bo; Huang, Weizhang; Jiang, Song; Tian, Baolin

    2017-11-01

    A third-order moving mesh cell-centered scheme without the remapping of physical variables is developed for the numerical solution of one-dimensional elastic-plastic flows with the Mie-Grüneisen equation of state, the Wilkins constitutive model, and the von Mises yielding criterion. The scheme combines the Lagrangian method with the MMPDE moving mesh method and adaptively moves the mesh to better resolve shock and other types of waves while preventing the mesh from crossing and tangling. It can be viewed as a direct arbitrarily Lagrangian-Eulerian method but can also be degenerated to a purely Lagrangian scheme. It treats the relative velocity of the fluid with respect to the mesh as constant in time between time steps, which allows high-order approximation of free boundaries. A time dependent scaling is used in the monitor function to avoid possible sudden movement of the mesh points due to the creation or diminishing of shock and rarefaction waves or the steepening of those waves. A two-rarefaction Riemann solver with elastic waves is employed to compute the Godunov values of the density, pressure, velocity, and deviatoric stress at cell interfaces. Numerical results are presented for three examples. The third-order convergence of the scheme and its ability to concentrate mesh points around shock and elastic rarefaction waves are demonstrated. The obtained numerical results are in good agreement with those in literature. The new scheme is also shown to be more accurate in resolving shock and rarefaction waves than an existing third-order cell-centered Lagrangian scheme.

  16. Hybrid discrete ordinates and characteristics method for solving the linear Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Yi, Ce

    With the ability of computer hardware and software increasing rapidly, deterministic methods to solve the linear Boltzmann equation (LBE) have attracted some attention for computational applications in both the nuclear engineering and medical physics fields. Among various deterministic methods, the discrete ordinates method (SN) and the method of characteristics (MOC) are two of the most widely used methods. The SN method is the traditional approach to solve the LBE for its stability and efficiency. While the MOC has some advantages in treating complicated geometries. However, in 3-D problems requiring a dense discretization grid in phase space (i.e., a large number of spatial meshes, directions, or energy groups), both methods could suffer from the need for large amounts of memory and computation time. In our study, we developed a new hybrid algorithm by combing the two methods into one code, TITAN. The hybrid approach is specifically designed for application to problems containing low scattering regions. A new serial 3-D time-independent transport code has been developed. Under the hybrid approach, the preferred method can be applied in different regions (blocks) within the same problem model. Since the characteristics method is numerically more efficient in low scattering media, the hybrid approach uses a block-oriented characteristics solver in low scattering regions, and a block-oriented SN solver in the remainder of the physical model. In the TITAN code, a physical problem model is divided into a number of coarse meshes (blocks) in Cartesian geometry. Either the characteristics solver or the SN solver can be chosen to solve the LBE within a coarse mesh. A coarse mesh can be filled with fine meshes or characteristic rays depending on the solver assigned to the coarse mesh. Furthermore, with its object-oriented programming paradigm and layered code structure, TITAN allows different individual spatial meshing schemes and angular quadrature sets for each coarse mesh. Two quadrature types (level-symmetric and Legendre-Chebyshev quadrature) along with the ordinate splitting techniques (rectangular splitting and PN-TN splitting) are implemented. In the S N solver, we apply a memory-efficient 'front-line' style paradigm to handle the fine mesh interface fluxes. In the characteristics solver, we have developed a novel 'backward' ray-tracing approach, in which a bi-linear interpolation procedure is used on the incoming boundaries of a coarse mesh. A CPU-efficient scattering kernel is shared in both solvers within the source iteration scheme. Angular and spatial projection techniques are developed to transfer the angular fluxes on the interfaces of coarse meshes with different discretization grids. The performance of the hybrid algorithm is tested in a number of benchmark problems in both nuclear engineering and medical physics fields. Among them are the Kobayashi benchmark problems and a computational tomography (CT) device model. We also developed an extra sweep procedure with the fictitious quadrature technique to calculate angular fluxes along directions of interest. The technique is applied in a single photon emission computed tomography (SPECT) phantom model to simulate the SPECT projection images. The accuracy and efficiency of the TITAN code are demonstrated in these benchmarks along with its scalability. A modified version of the characteristics solver is integrated in the PENTRAN code and tested within the parallel engine of PENTRAN. 
The limitations on the hybrid algorithm are also studied.

  17. Scatter correction for cone-beam computed tomography using self-adaptive scatter kernel superposition

    NASA Astrophysics Data System (ADS)

    Xie, Shi-Peng; Luo, Li-Min

    2012-06-01

    The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies. However, this method differs in that a scatter detecting blocker (SDB) was used between the X-ray source and the tested object to model the self-adaptive scatter kernel. This study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on the SKS. The quality of image can be improved by removing the scatter distribution. The results show that the method can effectively reduce the scatter artifacts, and increase the image quality. Our approach increases the image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique can be significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single scan acquisition.

  18. Multivariate Time Series Decomposition into Oscillation Components.

    PubMed

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-08-01

    Many time series are considered to be a superposition of several oscillation components. We have proposed a method for decomposing univariate time series into oscillation components and estimating their phases (Matsuda & Komaki, 2017 ). In this study, we extend that method to multivariate time series. We assume that several oscillators underlie the given multivariate time series and that each variable corresponds to a superposition of the projections of the oscillators. Thus, the oscillators superpose on each variable with amplitude and phase modulation. Based on this idea, we develop gaussian linear state-space models and use them to decompose the given multivariate time series. The model parameters are estimated from data using the empirical Bayes method, and the number of oscillators is determined using the Akaike information criterion. Therefore, the proposed method extracts underlying oscillators in a data-driven manner and enables investigation of phase dynamics in a given multivariate time series. Numerical results show the effectiveness of the proposed method. From monthly mean north-south sunspot number data, the proposed method reveals an interesting phase relationship.
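
    To make the modeling assumption concrete, the sketch below builds the Gaussian linear state-space form in which each oscillator is a damped 2D rotation and the observed variables are amplitude- and phase-modulated superpositions of the oscillator states; the parameter values are illustrative, and the empirical Bayes estimation and AIC order selection steps are omitted.

```python
import numpy as np

def oscillator_transition(freqs, decays, dt):
    """Block-diagonal transition matrix: each oscillator is a damped 2D rotation."""
    K = len(freqs)
    F = np.zeros((2 * K, 2 * K))
    for k, (f, a) in enumerate(zip(freqs, decays)):
        theta = 2.0 * np.pi * f * dt
        F[2 * k:2 * k + 2, 2 * k:2 * k + 2] = a * np.array(
            [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]])
    return F

# Two oscillators observed through three variables with different amplitudes and phases.
dt, T = 0.01, 2000
F = oscillator_transition(freqs=[2.0, 5.0], decays=[0.999, 0.995], dt=dt)
H = np.array([[1.0, 0.0, 0.5, 0.2],      # each row mixes the (cos, sin) components
              [0.3, 0.7, 0.0, 0.0],      # of both oscillators into one observed series
              [0.0, 0.4, 0.8, -0.1]])
rng = np.random.default_rng(2)
x = np.array([1.0, 0.0, 1.0, 0.0])
ys = []
for _ in range(T):
    x = F @ x + 0.01 * rng.standard_normal(4)          # state noise
    ys.append(H @ x + 0.05 * rng.standard_normal(3))   # observation noise
ys = np.array(ys)                                      # shape (T, 3): series to be decomposed
print(ys.shape)
```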

  19. Dual Formulations of Mixed Finite Element Methods with Applications

    PubMed Central

    Gillette, Andrew; Bajaj, Chandrajit

    2011-01-01

    Mixed finite element methods solve a PDE using two or more variables. The theory of Discrete Exterior Calculus explains why the degrees of freedom associated to the different variables should be stored on both primal and dual domain meshes with a discrete Hodge star used to transfer information between the meshes. We show through analysis and examples that the choice of discrete Hodge star is essential to the numerical stability of the method. Additionally, we define interpolation functions and discrete Hodge stars on dual meshes which can be used to create previously unconsidered mixed methods. Examples from magnetostatics and Darcy flow are examined in detail. PMID:21984841
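
    For reference, one common diagonal (circumcentric) choice of discrete Hodge star, of the kind whose selection the abstract argues is critical, pairs each primal k-simplex with its dual cell through a ratio of measures; this is a standard DEC definition, not necessarily one of the specific operators compared in the paper:

```latex
% Circumcentric diagonal Hodge star in DEC (standard definition, given here for illustration):
(\star_k)_{ii} \;=\; \frac{\lvert \star\sigma^k_i \rvert}{\lvert \sigma^k_i \rvert},
\qquad \sigma^k_i \ \text{a primal } k\text{-simplex}, \quad
\star\sigma^k_i \ \text{its dual } (n-k)\text{-cell}.
```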

  20. Introducing a distributed unstructured mesh into gyrokinetic particle-in-cell code, XGC

    NASA Astrophysics Data System (ADS)

    Yoon, Eisung; Shephard, Mark; Seol, E. Seegyoung; Kalyanaraman, Kaushik

    2017-10-01

    XGC has shown good scalability for large leadership supercomputers. The current production version uses a copy of the entire unstructured finite element mesh on every MPI rank. Although an obvious scalability issue if the mesh sizes are to be dramatically increased, the current approach is also not optimal with respect to data locality of particles and mesh information. To address these issues we have initiated the development of a distributed mesh PIC method. This approach directly addresses the base scalability issue with respect to mesh size and, through the use of a mesh entity centric view of the particle mesh relationship, provides opportunities to address data locality needs of many core and GPU supported heterogeneous systems. The parallel mesh PIC capabilities are being built on the Parallel Unstructured Mesh Infrastructure (PUMI). The presentation will first overview the form of mesh distribution used and indicate the structures and functions used to support the mesh, the particles and their interaction. Attention will then focus on the node-level optimizations being carried out to ensure performant operation of all PIC operations on the distributed mesh. Partnership for Edge Physics Simulation (EPSI) Grant No. DE-SC0008449 and Center for Extended Magnetohydrodynamic Modeling (CEMM) Grant No. DE-SC0006618.

  1. Merging for Particle-Mesh Complex Particle Kinetic Modeling of the Multiple Plasma Beams

    NASA Technical Reports Server (NTRS)

    Lipatov, Alexander S.

    2011-01-01

    We suggest a merging procedure for the Particle-Mesh Complex Particle Kinetic (PMCPK) method in case of inter-penetrating flow (multiple plasma beams). We examine the standard particle-in-cell (PIC) and the PMCPK methods in the case of particle acceleration by shock surfing for a wide range of the control numerical parameters. The plasma dynamics is described by a hybrid (particle-ion-fluid-electron) model. Note that one may need a mesh if modeling with the computation of an electromagnetic field. Our calculations use specified, time-independent electromagnetic fields for the shock, rather than self-consistently generated fields. While a particle-mesh method is a well-verified approach, the CPK method seems to be a good approach for multiscale modeling that includes multiple regions with various particle/fluid plasma behavior. However, the CPK method is still in need of a verification for studying the basic plasma phenomena: particle heating and acceleration by collisionless shocks, magnetic field reconnection, beam dynamics, etc.

  2. A finite element method with overlapping meshes for free-boundary axisymmetric plasma equilibria in realistic geometries

    NASA Astrophysics Data System (ADS)

    Heumann, Holger; Rapetti, Francesca

    2017-04-01

    Existing finite element implementations for the computation of free-boundary axisymmetric plasma equilibria approximate the unknown poloidal flux function by standard lowest order continuous finite elements with discontinuous gradients. As a consequence, the location of critical points of the poloidal flux, that are of paramount importance in tokamak engineering, is constrained to nodes of the mesh leading to undesired jumps in transient problems. Moreover, recent numerical results for the self-consistent coupling of equilibrium with resistive diffusion and transport suggest the necessity of higher regularity when approximating the flux map. In this work we propose a mortar element method that employs two overlapping meshes. One mesh with Cartesian quadrilaterals covers the vacuum chamber domain accessible by the plasma and one mesh with triangles discretizes the region outside. The two meshes overlap in a narrow region. This approach gives the flexibility to achieve easily and at low cost higher order regularity for the approximation of the flux function in the domain covered by the plasma, while preserving accurate meshing of the geometric details outside this region. The continuity of the numerical solution in the region of overlap is weakly enforced by a mortar-like mapping.
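
    The weak enforcement of continuity in the overlap region can be written generically as a mortar condition; the form below, with psi_1 and psi_2 the flux approximations on the two meshes and M_h a discrete multiplier space on the overlap, is the standard abstract statement rather than the precise operator used in the paper:

```latex
% Generic mortar (weak continuity) condition over the overlap region (assumed notation):
\int_{\Omega_{\mathrm{ov}}} \bigl(\psi_1 - \psi_2\bigr)\,\mu \;\mathrm{d}\Omega \;=\; 0
\qquad \text{for all } \mu \in M_h .
```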

  3. Adaptive unstructured triangular mesh generation and flow solvers for the Navier-Stokes equations at high Reynolds number

    NASA Technical Reports Server (NTRS)

    Ashford, Gregory A.; Powell, Kenneth G.

    1995-01-01

    A method for generating high quality unstructured triangular grids for high Reynolds number Navier-Stokes calculations about complex geometries is described. Careful attention is paid in the mesh generation process to resolving efficiently the disparate length scales which arise in these flows. First the surface mesh is constructed in a way which ensures that the geometry is faithfully represented. The volume mesh generation then proceeds in two phases thus allowing the viscous and inviscid regions of the flow to be meshed optimally. A solution-adaptive remeshing procedure which allows the mesh to adapt itself to flow features is also described. The procedure for tracking wakes and refinement criteria appropriate for shock detection are described. Although at present it has only been implemented in two dimensions, the grid generation process has been designed with the extension to three dimensions in mind. An implicit, higher-order, upwind method is also presented for computing compressible turbulent flows on these meshes. Two recently developed one-equation turbulence models have been implemented to simulate the effects of the fluid turbulence. Results for flow about a RAE 2822 airfoil and a Douglas three-element airfoil are presented which clearly show the improved resolution obtainable.

  4. Enriching Triangle Mesh Animations with Physically Based Simulation.

    PubMed

    Li, Yijing; Xu, Hongyi; Barbic, Jernej

    2017-10-01

    We present a system to combine arbitrary triangle mesh animations with physically based Finite Element Method (FEM) simulation, enabling control over the combination both in space and time. The input is a triangle mesh animation obtained using any method, such as keyframed animation, character rigging, 3D scanning, or geometric shape modeling. The input may be non-physical, crude or even incomplete. The user provides weights, specified using a minimal user interface, for how much physically based simulation should be allowed to modify the animation in any region of the model, and in time. Our system then computes a physically-based animation that is constrained to the input animation to the amount prescribed by these weights. This permits smoothly turning physics on and off over space and time, making it possible for the output to strictly follow the input, to evolve purely based on physically based simulation, and anything in between. Achieving such results requires a careful combination of several system components. We propose and analyze these components, including proper automatic creation of simulation meshes (even for non-manifold and self-colliding undeformed triangle meshes), converting triangle mesh animations into animations of the simulation mesh, and resolving collisions and self-collisions while following the input.

  5. Optimization-based mesh correction with volume and convexity constraints

    DOE PAGES

    D'Elia, Marta; Ridzal, Denis; Peterson, Kara J.; ...

    2016-02-24

    In this study, we consider the problem of finding a mesh such that 1) it is the closest, with respect to a suitable metric, to a given source mesh having the same connectivity, and 2) the volumes of its cells match a set of prescribed positive values that are not necessarily equal to the cell volumes in the source mesh. This volume correction problem arises in important simulation contexts, such as satisfying a discrete geometric conservation law and solving transport equations by incremental remapping or similar semi-Lagrangian transport schemes. In this paper we formulate volume correction as a constrained optimization problem in which the distance to the source mesh defines an optimization objective, while the prescribed cell volumes, mesh validity and/or cell convexity specify the constraints. We solve this problem numerically using a sequential quadratic programming (SQP) method whose performance scales with the mesh size. To achieve scalable performance we develop a specialized multigrid-based preconditioner for optimality systems that arise in the application of the SQP method to the volume correction problem. Numerical examples illustrate the importance of volume correction, and showcase the accuracy, robustness and scalability of our approach.
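
    As an illustration of the constrained-optimization formulation above, the sketch below sets up a minimal 1D analogue in which node positions move as little as possible from a source mesh while the cell "volumes" (segment lengths) match prescribed values. SciPy's general-purpose SLSQP solver is a stand-in for the specialized SQP/multigrid solver described in the abstract; the mesh and target volumes are illustrative only.

    ```python
    # Minimal 1D analogue of optimization-based volume correction (SLSQP stand-in).
    import numpy as np
    from scipy.optimize import minimize

    x_src = np.linspace(0.0, 1.0, 6)                      # source mesh nodes (5 cells)
    v_target = np.array([0.15, 0.25, 0.20, 0.20, 0.20])   # prescribed cell volumes

    def objective(x):
        # distance to the source mesh (the optimization objective)
        return np.sum((x - x_src) ** 2)

    def volume_residual(x):
        # equality constraint: cell lengths must equal the prescribed volumes
        return np.diff(x) - v_target

    res = minimize(objective, x_src, method="SLSQP",
                   constraints=[{"type": "eq", "fun": volume_residual}])
    print(res.x, np.diff(res.x))
    ```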

  6. A New Approach to Parallel Dynamic Partitioning for Adaptive Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Heber, Gerd; Biswas, Rupak; Gao, Guang R.

    1999-01-01

    Classical mesh partitioning algorithms were designed for rather static situations, and their straightforward application in a dynamical framework may lead to unsatisfactory results, e.g., excessive data migration among processors. Furthermore, special attention should be paid to their amenability to parallelization. In this paper, a novel parallel method for the dynamic partitioning of adaptive unstructured meshes is described. It is based on a linear representation of the mesh using self-avoiding walks.

  7. 50 CFR 648.91 - Monkfish regulated mesh areas and restrictions on gear and methods of fishing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...-inch (25.4-cm) square or 12-inch (30.5-cm) diamond mesh throughout the codend for at least 45... gillnets used by a vessel fishing under a monkfish DAS is 10-inch (25.4-cm) diamond mesh, unless otherwise...

  8. 50 CFR 648.91 - Monkfish regulated mesh areas and restrictions on gear and methods of fishing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...-inch (25.4-cm) square or 12-inch (30.5-cm) diamond mesh throughout the codend for at least 45... gillnets used by a vessel fishing under a monkfish DAS is 10-inch (25.4-cm) diamond mesh, unless otherwise...

  9. Meshing of a Spiral Bevel Gearset with 3D Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Bibel, George D.; Handschuh, Robert

    1996-01-01

    Recent advances in spiral bevel gear geometry and finite element technology make it practical to conduct a structural analysis and analytically roll the gearset through mesh. With the advent of user-specific programming linked to 3D solid modelers and mesh generators, model generation has become greatly automated. Contact algorithms available in general-purpose finite element codes eliminate the need for the use and alignment of gap elements. Once the gearset is placed in mesh, user subroutines attached to the FE code easily roll the gearset through mesh. The method is described in detail. Preliminary results for a gearset segment showing the progression of the contact line load are given as the gears roll through mesh.

  10. dc3dm: Software to efficiently form and apply a 3D DDM operator for a nonuniformly discretized rectangular planar fault

    NASA Astrophysics Data System (ADS)

    Bradley, A. M.

    2013-12-01

    My poster will describe dc3dm, a free open source software (FOSS) package that efficiently forms and applies the linear operator relating slip and traction components on a nonuniformly discretized rectangular planar fault in a homogeneous elastic (HE) half space. This linear operator implements what is called the displacement discontinuity method (DDM). The key properties of dc3dm are: 1. The mesh can be nonuniform. 2. Work and memory scale roughly linearly in the number of elements (rather than quadratically). 3. The order of accuracy of my method on a nonuniform mesh is the same as that of the standard method on a uniform mesh. Property 2 is achieved using my FOSS package hmmvp [AGU 2012]. A nonuniform mesh (property 1) is natural for some problems. For example, in a rate-state friction simulation, nucleation length, and so required element size, scales reciprocally with effective normal stress. Property 3 assures that if a nonuniform mesh is more efficient than a uniform mesh (in the sense of accuracy per element) at one level of mesh refinement, it will remain so at all further mesh refinements. I use the routine DC3D of Y. Okada, which calculates the stress tensor at a receiver resulting from a rectangular uniform dislocation source in an HE half space. On a uniform mesh, straightforward application of this Green's function (GF) yields a DDM I refer to as DDMu. On a nonuniform mesh, this same procedure leads to artifacts that degrade the order of accuracy of the DDM. I have developed a method I call IGA that implements the DDM using this GF for a nonuniformly discretized mesh having certain properties. Importantly, IGA's order of accuracy on a nonuniform mesh is the same as DDMu's on a uniform one. Boundary conditions can be periodic in the surface-parallel direction (in both directions if the GF is for a whole space), velocity on any side, and free surface. The mesh must have the following main property: each uniquely sized element must tile each element larger than itself. A mesh generated by a family of quadtrees has this property. Using multiple quadtrees that collectively cover the domain enables the elements to have a small aspect ratio. Mathematically, IGA works as follows. Let Mn be the nonuniform mesh. Define Mu to be the uniform mesh that is composed of the smallest element in Mn. Every element e in Mu has associated subelements in Mn that tile e. First, a linear operator Inu mapping data on Mn to Mu implements smooth (C^1) interpolation; I use cubic (Clough-Tocher) interpolation over a triangulation induced by Mn. Second, a linear operator Gu implements DDMu on Mu. Third, a linear operator Aun maps data on Mu to Mn. These three linear operators implement exact IGA (EIGA): Gn = Aun Gu Inu. Computationally, there are several more details. EIGA has the undesirable property that calculating one entry of Gn for receiver ri requires calculating multiple entries of Gu, no matter how far away from ri the smallest element is. Approximate IGA (AIGA) solves this problem by restricting EIGA to a neighborhood around each receiver. Associated with each neighborhood is a minimum element size s^i that indexes a family of operators Gu^i. The order of accuracy of AIGA is the same as that of EIGA and DDMu if each neighborhood is kept constant in spatial extent as the mesh is refined.
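
    The composition Gn = Aun Gu Inu described above can be illustrated with a toy 1D analogue, sketched below under heavy simplification: linear interpolation stands in for the Clough-Tocher scheme, a generic decaying kernel stands in for Okada's DC3D Green's function, and nearest-node sampling stands in for the map back to the nonuniform mesh. None of these placeholders are part of dc3dm itself.

    ```python
    # Toy 1D illustration of the exact-IGA composition Gn = Aun @ Gu @ Inu.
    import numpy as np

    xn = np.array([0.0, 0.25, 0.5, 0.625, 0.75, 0.875, 1.0])   # nonuniform nodes
    xu = np.linspace(0.0, 1.0, 9)                               # uniform nodes (h = 0.125)

    # Inu: linear interpolation from nonuniform to uniform nodes (placeholder for C^1 interpolation)
    Inu = np.zeros((len(xu), len(xn)))
    for i, x in enumerate(xu):
        j = np.searchsorted(xn, x)
        if j == 0:
            Inu[i, 0] = 1.0
        else:
            j = min(j, len(xn) - 1)
            w = (x - xn[j - 1]) / (xn[j] - xn[j - 1])
            Inu[i, j - 1], Inu[i, j] = 1.0 - w, w

    # Gu: placeholder dense operator on the uniform mesh (smooth decaying kernel)
    Gu = 1.0 / (np.abs(xu[:, None] - xu[None, :]) + 0.125)

    # Aun: map uniform-node values back to nonuniform nodes by nearest sampling
    Aun = np.zeros((len(xn), len(xu)))
    for i, x in enumerate(xn):
        Aun[i, np.argmin(np.abs(xu - x))] = 1.0

    Gn = Aun @ Gu @ Inu     # composed operator acting on nonuniform-mesh data
    print(Gn.shape)
    ```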

  11. Load Balancing Unstructured Adaptive Grids for CFD Problems

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Oliker, Leonid

    1996-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. A dynamic load balancing method is presented that balances the workload across all processors with a global view. After each parallel tetrahedral mesh adaption, the method first determines if the new mesh is sufficiently unbalanced to warrant a repartitioning. If so, the adapted mesh is repartitioned, with new partitions assigned to processors so that the redistribution cost is minimized. The new partitions are accepted only if the remapping cost is compensated by the improved load balance. Results indicate that this strategy is effective for large-scale scientific computations on distributed-memory multiprocessors.
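
    A schematic sketch of the accept/reject logic described above, assuming that load imbalance, balance gain and remapping cost can be reduced to scalar estimates; the thresholds and numbers are illustrative only.

    ```python
    # Schematic repartitioning decision: repartition only when unbalanced enough,
    # and accept the new partition only if the remapping cost is compensated.
    def should_repartition(loads, imbalance_tol=1.10):
        """Repartition only if the adapted mesh is sufficiently unbalanced."""
        imbalance = max(loads) / (sum(loads) / len(loads))
        return imbalance > imbalance_tol

    def accept_new_partition(balance_gain, remap_cost):
        """Accept the remapping only if the improved balance pays for the data movement."""
        return balance_gain > remap_cost

    loads = [120.0, 95.0, 80.0, 105.0]          # per-processor workload estimates
    if should_repartition(loads):
        # ...compute a new partition, then estimate balance_gain and remap_cost...
        print(accept_new_partition(balance_gain=18.0, remap_cost=12.0))
    ```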

  12. Reliability optimization design of the gear modification coefficient based on the meshing stiffness

    NASA Astrophysics Data System (ADS)

    Wang, Qianqian; Wang, Hui

    2018-04-01

    Since the time-varying meshing stiffness of a gear system is the key factor affecting gear vibration, it is important to design the meshing stiffness to reduce vibration. Based on the effect of the gear modification coefficient on the meshing stiffness, and considering the random parameters, a reliability optimization design of the gear modification is investigated. The dimension reduction and point estimation method is used to estimate the moments of the limit state function, and the reliability is obtained by the fourth-moment method. The comparison of the dynamic amplitude results before and after optimization indicates that the approach is useful for reducing vibration and noise and improving reliability.

  13. Sirolimus-coated, poly(L-lactic acid)-modified polypropylene mesh with minimal intra-peritoneal adhesion formation in a rat model.

    PubMed

    Lu, S; Hu, W; Zhang, Z; Ji, Z; Zhang, T

    2018-05-18

    This study evaluated the manufacturing method and anti-adhesion properties of a new composite mesh in the rat model, which was made from sirolimus (SRL) grafts on a poly(L-lactic acid) (PLLA)-modified polypropylene (PP) hernia mesh. PLLA was first grafted onto argon-plasma-treated native PP mesh through catalysis of stannous chloride. SRL was grafted onto the surface of PP-PLLA meshes using catalysis of 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide hydrochloride (EDC) and 4-dimethylaminopyridine (DMAP) in a CH2Cl2 solvent. Sprague-Dawley female rats received either SRL-coated meshes, PP-PLLA meshes, or native PP meshes to repair abdominal wall defects. At different intervals, rats were euthanized by a lethal dose of chloral hydrate and adhesion area and tenacity were evaluated. Sections of the mesh with adjacent tissues were assessed histologically. Attenuated total reflection Fourier transformed infrared (ATR-FTIR) spectroscopy indicated the existence of a C=O group absorption peak (1724.1 cm⁻¹), and scanning electron microscope morphological analysis indicated that the surface of the PP mesh was covered with SRL. Compared to the native PP meshes and PP-PLLA meshes, SRL-coated meshes demonstrated the greatest ability to decrease the formation of adhesions (P < 0.05) and inflammation. The SRL-coated composite mesh showed minimal formation of intra-abdominal adhesions in a rat model of abdominal wall defect repair.

  14. The finite cell method for polygonal meshes: poly-FCM

    NASA Astrophysics Data System (ADS)

    Duczek, Sascha; Gabbert, Ulrich

    2016-10-01

    In the current article, we extend the two-dimensional version of the finite cell method (FCM), which has so far only been used for structured quadrilateral meshes, to unstructured polygonal discretizations. Therefore, the adaptive quadtree-based numerical integration technique is reformulated and the notion of generalized barycentric coordinates is introduced. We show that the resulting polygonal (poly-)FCM approach retains the optimal rates of convergence if and only if the geometry of the structure is adequately resolved. The main advantage of the proposed method is that it inherits the ability of polygonal finite elements for local mesh refinement and for the construction of transition elements (e.g. conforming quadtree meshes without hanging nodes). These properties along with the performance of the poly-FCM are illustrated by means of several benchmark problems for both static and dynamic cases.

  15. Approximate static condensation algorithm for solving multi-material diffusion problems on meshes non-aligned with material interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kikinzon, Evgeny; Kuznetsov, Yuri; Lipnikov, Konstantin

    In this study, we describe a new algorithm for solving the multi-material diffusion problem when material interfaces are not aligned with the mesh. In this case, interface reconstruction methods are used to construct an approximate representation of the interfaces between materials. They produce so-called multi-material cells, in which materials are represented by material polygons that contain only one material. The reconstructed interface is not continuous between cells. Finally, we suggest a new method for solving multi-material diffusion problems on such meshes and compare its performance with known homogenization methods.

  16. Approximate static condensation algorithm for solving multi-material diffusion problems on meshes non-aligned with material interfaces

    DOE PAGES

    Kikinzon, Evgeny; Kuznetsov, Yuri; Lipnikov, Konstantin; ...

    2017-07-08

    In this study, we describe a new algorithm for solving the multi-material diffusion problem when material interfaces are not aligned with the mesh. In this case, interface reconstruction methods are used to construct an approximate representation of the interfaces between materials. They produce so-called multi-material cells, in which materials are represented by material polygons that contain only one material. The reconstructed interface is not continuous between cells. Finally, we suggest a new method for solving multi-material diffusion problems on such meshes and compare its performance with known homogenization methods.

  17. Parallel performance optimizations on unstructured mesh-based simulations

    DOE PAGES

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; ...

    2015-06-01

    This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
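
    The cache-oriented reordering idea can be illustrated with SciPy's reverse Cuthill-McKee ordering applied to a small cell-adjacency graph; this is a generic stand-in, not the predictive ordering developed by the authors, and the graph is illustrative.

    ```python
    # Reorder unstructured-mesh cells for better locality using reverse Cuthill-McKee.
    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import reverse_cuthill_mckee

    # symmetric cell-adjacency graph of a tiny unstructured mesh
    rows = [0, 1, 1, 2, 2, 3, 3, 4, 0, 4]
    cols = [1, 0, 2, 1, 3, 2, 4, 3, 4, 0]
    adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(5, 5))

    perm = reverse_cuthill_mckee(adj, symmetric_mode=True)
    print(perm)   # new cell ordering; apply it to cell-centered arrays before sweeping
    ```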

  18. Multigrid techniques for unstructured meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, D. J.

    1995-01-01

    An overview of current multigrid techniques for unstructured meshes is given. The basic principles of the multigrid approach are first outlined. Application of these principles to unstructured mesh problems is then described, illustrating various different approaches, and giving examples of practical applications. Advanced multigrid topics, such as the use of algebraic multigrid methods, and the combination of multigrid techniques with adaptive meshing strategies are dealt with in subsequent sections. These represent current areas of research, and the unresolved issues are discussed. The presentation is organized in an educational manner, for readers familiar with computational fluid dynamics, wishing to learn more about current unstructured mesh techniques.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    OWEN,STEVEN J.

    A method for decomposing a volume with a prescribed quadrilateral surface mesh into a hexahedral-dominated mesh is proposed. With this method, known as Hex-Morphing (H-Morph), an initial tetrahedral mesh is provided. Tetrahedra are transformed and combined starting from the boundary and working towards the interior of the volume. The quadrilateral faces of the hexahedra are treated as internal surfaces, which can be recovered using constrained triangulation techniques. Implementation details of the edge and face recovery process are included. Examples and performance of the H-Morph algorithm are also presented.

  20. Arbitrary-level hanging nodes for adaptive hp-FEM approximations in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavel Kus; Pavel Solin; David Andrs

    2014-11-01

    In this paper we discuss constrained approximation with arbitrary-level hanging nodes in adaptive higher-order finite element methods (hp-FEM) for three-dimensional problems. This technique enables using highly irregular meshes, and it greatly simplifies the design of adaptive algorithms as it prevents refinements from propagating recursively through the finite element mesh. The technique makes it possible to design efficient adaptive algorithms for purely hexahedral meshes. We present a detailed mathematical description of the method and illustrate it with numerical examples.

  1. Space-time VMS computation of wind-turbine rotor and tower aerodynamics

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; McIntyre, Spenser; Kostov, Nikolay; Kolesar, Ryan; Habluetzel, Casey

    2014-01-01

    We present the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. We compare the results from computations with and without tower, and we also compare using NURBS and linear finite element basis functions in temporal representation of the mesh motion.

  2. Space-Time VMS Computation of Wind-Turbine Rotor and Tower Aerodynamics

    NASA Astrophysics Data System (ADS)

    McIntyre, Spenser W.

    This thesis is on the space-time variational multiscale (ST-VMS) computation of wind-turbine rotor and tower aerodynamics. The rotor geometry is that of the NREL 5MW offshore baseline wind turbine. We compute with a given wind speed and a specified rotor speed. The computation is challenging because of the large Reynolds numbers and rotating turbulent flows, and computing the correct torque requires an accurate and meticulous numerical approach. The presence of the tower increases the computational challenge because of the fast, rotational relative motion between the rotor and tower. The ST-VMS method is the residual-based VMS version of the Deforming-Spatial-Domain/Stabilized ST (DSD/SST) method, and is also called "DSD/SST-VMST" method (i.e., the version with the VMS turbulence model). In calculating the stabilization parameters embedded in the method, we are using a new element length definition for the diffusion-dominated limit. The DSD/SST method, which was introduced as a general-purpose moving-mesh method for computation of flows with moving interfaces, requires a mesh update method. Mesh update typically consists of moving the mesh for as long as possible and remeshing as needed. In the computations reported here, NURBS basis functions are used for the temporal representation of the rotor motion, enabling us to represent the circular paths associated with that motion exactly and specify a constant angular velocity corresponding to the invariant speeds along those paths. In addition, temporal NURBS basis functions are used in representation of the motion and deformation of the volume meshes computed and also in remeshing. We name this "ST/NURBS Mesh Update Method (STNMUM)." The STNMUM increases computational efficiency in terms of computer time and storage, and computational flexibility in terms of being able to change the time-step size of the computation. We use layers of thin elements near the blade surfaces, which undergo rigid-body motion with the rotor. We compare the results from computations with and without tower, and we also compare using NURBS and linear finite element basis functions in temporal representation of the mesh motion.

  3. Improved protein model quality assessments by changing the target function.

    PubMed

    Uziela, Karolis; Menéndez Hurtado, David; Shu, Nanjiang; Wallner, Björn; Elofsson, Arne

    2018-06-01

    Protein modeling quality is an important part of protein structure prediction. We have for more than a decade developed a set of methods for this problem. We have used various types of description of the protein and different machine learning methodologies. However, common to all these methods has been the target function used for training. The target function in ProQ describes the local quality of a residue in a protein model. In all versions of ProQ the target function has been the S-score. However, other quality estimation functions also exist, which can be divided into superposition- and contact-based methods. The superposition-based methods, such as S-score, are based on a rigid body superposition of a protein model and the native structure, while the contact-based methods compare the local environment of each residue. Here, we examine the effects of retraining our latest predictor, ProQ3D, using identical inputs but different target functions. We find that the contact-based methods are easier to predict and that predictors trained on these measures provide some advantages when it comes to identifying the best model. One possible reason for this is that contact-based methods are better at estimating the quality of multi-domain targets. However, training on the S-score gives the best correlation with the GDT_TS score, which is commonly used in CASP to score the global model quality. To take advantage of both of these features we provide an updated version of ProQ3D that predicts local and global model quality estimates based on different quality estimates. © 2018 Wiley Periodicals, Inc.

  4. Finite-element 3D simulation tools for high-current relativistic electron beams

    NASA Astrophysics Data System (ADS)

    Humphries, Stanley; Ekdahl, Carl

    2002-08-01

    The DARHT second-axis injector is a challenge for computer simulations. Electrons are subject to strong beam-generated forces. The fields are fully three-dimensional and accurate calculations at surfaces are critical. We describe methods applied in OmniTrak, a 3D finite-element code suite that can address DARHT and the full range of charged-particle devices. The system handles mesh generation, electrostatics, magnetostatics and self-consistent particle orbits. The MetaMesh program generates meshes of conformal hexahedrons to fit any user geometry. The code has the unique ability to create structured conformal meshes with cubic logic. Organized meshes offer advantages in speed and memory utilization in the orbit and field solutions. OmniTrak is a versatile charged-particle code that handles 3D electric and magnetic field solutions on independent meshes. The program can update both 3D field solutions from the calculated beam space-charge and current-density. We shall describe numerical methods for orbit tracking on a hexahedron mesh. Topics include: 1) identification of elements along the particle trajectory, 2) fast searches and adaptive field calculations, 3) interpolation methods to terminate orbits on material surfaces, 4) automatic particle generation on multiple emission surfaces to model space-charge-limited emission and field emission, 5) flexible Child law algorithms, 6) implementation of the dual potential model for 3D magnetostatics, and 7) assignment of charge and current from model particle orbits for self-consistent fields.

  5. An annular superposition integral for axisymmetric radiators

    PubMed Central

    Kelly, James F.; McGough, Robert J.

    2007-01-01

    A fast integral expression for computing the nearfield pressure is derived for axisymmetric radiators. This method replaces the sum of contributions from concentric annuli with an exact double integral that converges much faster than methods that evaluate the Rayleigh-Sommerfeld integral or the generalized King integral. Expressions are derived for plane circular pistons using both continuous wave and pulsed excitations. Several commonly used apodization schemes for the surface velocity distribution are considered, including polynomial functions and a “smooth piston” function. The effect of different apodization functions on the spectral content of the wave field is explored. Quantitative error and time comparisons between the new method, the Rayleigh-Sommerfeld integral, and the generalized King integral are discussed. At all error levels considered, the annular superposition method achieves a speed-up of at least a factor of 4 relative to the point-source method and a factor of 3 relative to the generalized King integral without increasing the computational complexity. PMID:17348500

  6. Floating shock fitting via Lagrangian adaptive meshes

    NASA Technical Reports Server (NTRS)

    Vanrosendale, John

    1995-01-01

    In recent work we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe-scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM), is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence.

  7. Unstructured Polyhedral Mesh Thermal Radiation Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmer, T.S.; Zika, M.R.; Madsen, N.K.

    2000-07-27

    Unstructured mesh particle transport and diffusion methods are gaining wider acceptance as mesh generation, scientific visualization and linear solvers improve. This paper describes an algorithm that is currently being used in the KULL code at Lawrence Livermore National Laboratory to solve the radiative transfer equations. The algorithm employs a point-centered diffusion discretization on arbitrary polyhedral meshes in 3D. We present the results of a few test problems to illustrate the capabilities of the radiation diffusion module.

  8. Management of Complex Abdominal Wall Defects Associated with Penetrating Abdominal Trauma

    DTIC Science & Technology

    2014-05-09

    recruitment): a new method of wound closure. Ann Plast Surg 2005;55:660–4. 8 Ramirez OM, Ruas E, Dellon AL. ‘Components separation’ method for closure of...patients with open abdomens closed by either permanent mesh, vicryl mesh or a modification of Ramirez ’ original method of components separation. These

  9. DeepMeSH: deep semantic representation for improving large-scale MeSH indexing.

    PubMed

    Peng, Shengwen; You, Ronghui; Wang, Hongning; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2016-06-15

    Medical Subject Headings (MeSH) indexing, which is to assign a set of MeSH main headings to citations, is crucial for many important tasks in biomedical text mining and information retrieval. Large-scale MeSH indexing has two challenging aspects: the citation side and MeSH side. For the citation side, all existing methods, including Medical Text Indexer (MTI) by National Library of Medicine and the state-of-the-art method, MeSHLabeler, deal with text by bag-of-words, which cannot capture semantic and context-dependent information well. We propose DeepMeSH that incorporates deep semantic information for large-scale MeSH indexing. It addresses the two challenges in both citation and MeSH sides. The citation side challenge is solved by a new deep semantic representation, D2V-TFIDF, which concatenates both sparse and dense semantic representations. The MeSH side challenge is solved by using the 'learning to rank' framework of MeSHLabeler, which integrates various types of evidence generated from the new semantic representation. DeepMeSH achieved a Micro F-measure of 0.6323, 2% higher than 0.6218 of MeSHLabeler and 12% higher than 0.5637 of MTI, for BioASQ3 challenge data with 6000 citations. The software is available upon request (zhusf@fudan.edu.cn). Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
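
    A hedged sketch of the concatenation idea behind D2V-TFIDF: a sparse TF-IDF vector is joined with a dense document embedding. The dense_embedding() helper below is a hypothetical placeholder for a trained doc2vec model, and the citation texts are illustrative.

    ```python
    # Concatenate sparse TF-IDF features with a dense document embedding.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    citations = [
        "mesh generation for finite element analysis of gears",
        "deep semantic indexing of biomedical citations with MeSH headings",
    ]

    tfidf = TfidfVectorizer().fit_transform(citations).toarray()

    def dense_embedding(text, dim=8):
        # hypothetical placeholder: a real system would query a trained doc2vec model
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.normal(size=dim)

    dense = np.vstack([dense_embedding(t) for t in citations])
    d2v_tfidf = np.hstack([tfidf, dense])   # concatenated sparse + dense representation
    print(d2v_tfidf.shape)
    ```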

  10. A Space-Time Conservation Element and Solution Element Method for Solving the Two- and Three-Dimensional Unsteady Euler Equations Using Quadrilateral and Hexahedral Meshes

    NASA Technical Reports Server (NTRS)

    Zhang, Zeng-Chan; Yu, S. T. John; Chang, Sin-Chung; Jorgenson, Philip (Technical Monitor)

    2001-01-01

    In this paper, we report a version of the Space-Time Conservation Element and Solution Element (CE/SE) Method in which the 2D and 3D unsteady Euler equations are simulated using structured or unstructured quadrilateral and hexahedral meshes, respectively. In the present method, mesh values of flow variables and their spatial derivatives are treated as independent unknowns to be solved for. At each mesh point, the value of a flow variable is obtained by imposing a flux conservation condition. On the other hand, the spatial derivatives are evaluated using a finite-difference/weighted-average procedure. Note that the present extension retains many key advantages of the original CE/SE method which uses triangular and tetrahedral meshes, respectively, for its 2D and 3D applications. These advantages include efficient parallel computing, ease of implementing non-reflecting boundary conditions, high-fidelity resolution of shocks and waves, and a genuinely multidimensional formulation without using a dimensional-splitting approach. In particular, because Riemann solvers, the cornerstones of the Godunov-type upwind schemes, are not needed to capture shocks, the computational logic of the present method is considerably simpler. To demonstrate the capability of the present method, numerical results are presented for several benchmark problems including oblique shock reflection, supersonic flow over a wedge, and a 3D detonation flow.

  11. Transport of phase space densities through tetrahedral meshes using discrete flow mapping

    NASA Astrophysics Data System (ADS)

    Bajars, Janis; Chappell, David J.; Søndergaard, Niels; Tanner, Gregor

    2017-01-01

    Discrete flow mapping was recently introduced as an efficient ray-based method for determining wave energy distributions in complex built-up structures. Wave energy densities are transported along ray trajectories through polygonal mesh elements using a finite dimensional approximation of a ray transfer operator. In this way the method can be viewed as a smoothed ray tracing method defined over meshed surfaces. Many applications require the resolution of wave energy distributions in three-dimensional domains, such as in room acoustics, underwater acoustics and for electromagnetic cavity problems. In this work we extend discrete flow mapping to three-dimensional domains by propagating wave energy densities through tetrahedral meshes. The geometric simplicity of the tetrahedral mesh elements is utilised to efficiently compute the ray transfer operator using a mixture of analytic and spectrally accurate numerical integration. The important issue of how to choose a suitable basis approximation in phase space whilst maintaining a reasonable computational cost is addressed via low order local approximations on tetrahedral faces in the position coordinate and high order orthogonal polynomial expansions in momentum space.

  12. GPU surface extraction using the closest point embedding

    NASA Astrophysics Data System (ADS)

    Kim, Mark; Hansen, Charles

    2015-01-01

    Isosurface extraction is a fundamental technique used for both surface reconstruction and mesh generation. One method to extract well-formed isosurfaces is a particle system; unfortunately, particle systems can be slow. In this paper, we introduce an enhanced parallel particle system that uses the closest point embedding as the surface representation to speed up the particle system for isosurface extraction. The closest point embedding is used in the Closest Point Method (CPM), a technique that uses a standard three dimensional numerical PDE solver on two dimensional embedded surfaces. To fully take advantage of the closest point embedding, it is coupled with a Barnes-Hut tree code on the GPU. This new technique produces well-formed, conformal unstructured triangular and tetrahedral meshes from labeled multi-material volume datasets. Further, this new parallel implementation of the particle system is faster than any known methods for conformal multi-material mesh extraction. The resulting speed-ups gained in this implementation can reduce the time from labeled data to mesh from hours to minutes and benefits users, such as bioengineers, who employ triangular and tetrahedral meshes.
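
    A minimal sketch of a closest point embedding for an analytic surface (a sphere), the representation the abstract builds on; a real pipeline would instead compute closest points to a labeled volume or triangulated surface, typically on the GPU.

    ```python
    # Closest point embedding of a sphere sampled on a Cartesian grid.
    import numpy as np

    radius, n = 1.0, 33
    axis = np.linspace(-1.5, 1.5, n)
    X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
    pts = np.stack([X, Y, Z], axis=-1)                    # grid points
    r = np.linalg.norm(pts, axis=-1)
    r = np.where(r == 0.0, 1e-12, r)                      # avoid division by zero at the origin
    cp = pts * (radius / r)[..., None]                    # closest point on the sphere
    dist = np.abs(r - radius)                             # unsigned distance to the surface
    print(cp.shape, dist.min())
    ```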

  13. Automated quadrilateral surface discretization method and apparatus usable to generate mesh in a finite element analysis system

    DOEpatents

    Blacker, Teddy D.

    1994-01-01

    An automatic quadrilateral surface discretization method and apparatus is provided for automatically discretizing a geometric region without decomposing the region. The automated quadrilateral surface discretization method and apparatus automatically generates a mesh of all quadrilateral elements which is particularly useful in finite element analysis. The generated mesh of all quadrilateral elements is boundary sensitive, orientation insensitive and has few irregular nodes on the boundary. A permanent boundary of the geometric region is input and rows are iteratively layered toward the interior of the geometric region. Also, an exterior permanent boundary and an interior permanent boundary for a geometric region may be input and the rows are iteratively layered inward from the exterior boundary in a first counter clockwise direction while the rows are iteratively layered from the interior permanent boundary toward the exterior of the region in a second clockwise direction. As a result, a high quality mesh for an arbitrary geometry may be generated with a technique that is robust and fast for complex geometric regions and extreme mesh gradations.

  14. Characterization of the mechanism of drug-drug interactions from PubMed using MeSH terms.

    PubMed

    Lu, Yin; Figler, Bryan; Huang, Hong; Tu, Yi-Cheng; Wang, Ju; Cheng, Feng

    2017-01-01

    Identifying drug-drug interaction (DDI) is an important topic for the development of safe pharmaceutical drugs and for the optimization of multidrug regimens for complex diseases such as cancer and HIV. There have been about 150,000 publications on DDIs in PubMed, which is a great resource for DDI studies. In this paper, we introduced an automatic computational method for the systematic analysis of the mechanism of DDIs using MeSH (Medical Subject Headings) terms from PubMed literature. MeSH term is a controlled vocabulary thesaurus developed by the National Library of Medicine for indexing and annotating articles. Our method can effectively identify DDI-relevant MeSH terms such as drugs, proteins and phenomena with high accuracy. The connections among these MeSH terms were investigated by using co-occurrence heatmaps and social network analysis. Our approach can be used to visualize relationships of DDI terms, which has the potential to help users better understand DDIs. As the volume of PubMed records increases, our method for automatic analysis of DDIs from the PubMed database will become more accurate.
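
    The co-occurrence analysis described above can be sketched as follows, using illustrative term lists rather than MeSH annotations actually mined from PubMed.

    ```python
    # Build a MeSH-term co-occurrence count from per-article term lists.
    from collections import Counter
    from itertools import combinations

    articles = [
        ["Drug Interactions", "Cytochrome P-450 CYP3A", "Warfarin"],
        ["Drug Interactions", "Warfarin", "Hemorrhage"],
        ["Cytochrome P-450 CYP3A", "Drug Interactions"],
    ]

    cooc = Counter()
    for terms in articles:
        for a, b in combinations(sorted(set(terms)), 2):
            cooc[(a, b)] += 1          # count each unordered term pair once per article

    for (a, b), count in cooc.most_common(3):
        print(f"{a} -- {b}: {count}")
    ```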

  15. Extending a CAD-Based Cartesian Mesh Generator for the Lattice Boltzmann Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cantrell, J Nathan; Inclan, Eric J; Joshi, Abhijit S

    2012-01-01

    This paper describes the development of a custom preprocessor for the PaRAllel Thermal Hydraulics simulations using Advanced Mesoscopic methods (PRATHAM) code based on an open-source mesh generator, CartGen [1]. PRATHAM is a three-dimensional (3D) lattice Boltzmann method (LBM) based parallel flow simulation software currently under development at the Oak Ridge National Laboratory. The LBM algorithm in PRATHAM requires a uniform, coordinate system-aligned, non-body-fitted structured mesh for its computational domain. CartGen [1], which is a GNU-licensed open source code, already comes with some of the above needed functionalities. However, it needs to be further extended to fully support the LBM-specific preprocessing requirements. Therefore, CartGen is being modified to (i) be compiler independent while converting a neutral-format STL (Stereolithography) CAD geometry to a uniform structured Cartesian mesh, (ii) provide a mechanism for PRATHAM to import the mesh and identify the fluid/solid domains, and (iii) provide a mechanism to visually identify and tag the domain boundaries on which to apply different boundary conditions.

  16. Adjoint Sensitivity Computations for an Embedded-Boundary Cartesian Mesh Method and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2006-01-01

    Cartesian-mesh methods are perhaps the most promising approach for addressing the issues of flow solution automation for aerodynamic design problems. In these methods, the discretization of the wetted surface is decoupled from that of the volume mesh. This not only enables fast and robust mesh generation for geometry of arbitrary complexity, but also facilitates access to geometry modeling and manipulation using parametric Computer-Aided Design (CAD) tools. Our goal is to combine the automation capabilities of Cartesian methods with an efficient computation of design sensitivities. We address this issue using the adjoint method, where the computational cost of the design sensitivities, or objective function gradients, is essentially independent of the number of design variables. In previous work, we presented an accurate and efficient algorithm for the solution of the adjoint Euler equations discretized on Cartesian meshes with embedded, cut-cell boundaries. Novel aspects of the algorithm included the computation of surface shape sensitivities for triangulations based on parametric-CAD models and the linearization of the coupling between the surface triangulation and the cut-cells. The objective of the present work is to extend our adjoint formulation to problems involving general shape changes. Central to this development is the computation of volume-mesh sensitivities to obtain a reliable approximation of the objective function gradient. Motivated by the success of mesh-perturbation schemes commonly used in body-fitted unstructured formulations, we propose an approach based on a local linearization of a mesh-perturbation scheme similar to the spring analogy. This approach circumvents most of the difficulties that arise due to non-smooth changes in the cut-cell layer as the boundary shape evolves and provides a consistent approximation to the exact gradient of the discretized objective function. A detailed gradient accuracy study is presented to verify our approach. Thereafter, we focus on a shape optimization problem for an Apollo-like reentry capsule. The optimization seeks to enhance the lift-to-drag ratio of the capsule by modifying the shape of its heat-shield in conjunction with a center-of-gravity (c.g.) offset. This multipoint and multi-objective optimization problem is used to demonstrate the overall effectiveness of the Cartesian adjoint method for addressing the issues of complex aerodynamic design. This abstract presents only a brief outline of the numerical method and results; full details will be given in the final paper.

  17. Homogeneous partial differential equations for superpositions of indeterminate functions of several variables

    NASA Astrophysics Data System (ADS)

    Asai, Kazuto

    2009-02-01

    We determine essentially all partial differential equations satisfied by superpositions of tree type and of a further special type. These equations represent necessary and sufficient conditions for an analytic function to be locally expressible as an analytic superposition of the type indicated. The representability of a real analytic function by a superposition of this type is independent of whether that superposition involves real-analytic functions or C^ρ-functions, where the constant ρ is determined by the structure of the superposition. We also prove that the function u defined by u^n = xu^a + yu^b + zu^c + 1 is generally non-representable in any real (resp. complex) domain as f(g(x,y), h(y,z)) with twice differentiable f and differentiable g, h (resp. analytic f, g, h).

  18. Analysis of ground-motion simulation big data

    NASA Astrophysics Data System (ADS)

    Maeda, T.; Fujiwara, H.

    2016-12-01

    We developed a parallel distributed processing system which applies big data analysis to large-scale ground-motion simulation data. The system uses ground-motion index values and earthquake scenario parameters as input. We used peak ground velocity and velocity response spectra as the ground-motion indices. The ground-motion index values are calculated from our simulation data. We used simulated long-period ground-motion waveforms at about 80,000 meshes, calculated by a three-dimensional finite difference method based on 369 earthquake scenarios of a great earthquake in the Nankai Trough. These scenarios were constructed by considering the uncertainty of source model parameters such as source area, rupture starting point, asperity location, rupture velocity, fmax and slip function. We used these parameters as the earthquake scenario parameters. The system first carries out the clustering of the earthquake scenarios in each mesh by the k-means method. The number of clusters is determined in advance using hierarchical clustering by Ward's method. The scenario clustering results are converted to a 1-D feature vector whose dimension is the number of scenario combinations. If two scenarios belong to the same cluster, the corresponding component of the feature vector is 1; otherwise it is 0. The feature vector thus describes the 'response' of a mesh to the assumed earthquake scenario group. Next, the system performs the clustering of the meshes by the k-means method using the feature vector of each mesh obtained previously. Here the number of clusters is given arbitrarily. The clustering of scenarios and meshes is performed by parallel distributed processing with Hadoop and Spark, respectively. In this study, we divided the meshes into 20 clusters. The meshes in each cluster are geometrically concentrated. Thus this system can extract regions, in which the meshes have a similar 'response', as clusters. For each cluster, it is possible to determine particular scenario parameters which characterize the cluster. In other words, by utilizing this system, we can objectively obtain critical scenario parameters of the ground-motion simulation for each evaluation point. This research was supported by CREST, JST.
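
    A condensed sketch of the two-stage clustering described above, using synthetic ground-motion indices: k-means over scenarios at each mesh, a binary scenario-pair feature vector per mesh, then k-means over meshes. The cluster counts and data are placeholders, and no Hadoop/Spark parallelism is shown.

    ```python
    # Two-stage clustering: scenarios per mesh, then meshes by their scenario "response".
    import numpy as np
    from itertools import combinations
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_mesh, n_scen = 50, 20
    pgv = rng.lognormal(mean=0.0, sigma=0.5, size=(n_mesh, n_scen))   # toy ground-motion index

    def scenario_feature_vector(values, n_clusters=3):
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(values.reshape(-1, 1))
        # component is 1 if the two scenarios fall in the same cluster at this mesh
        return np.array([1.0 if labels[i] == labels[j] else 0.0
                         for i, j in combinations(range(len(values)), 2)])

    features = np.vstack([scenario_feature_vector(pgv[m]) for m in range(n_mesh)])
    mesh_labels = KMeans(n_clusters=5, n_init=10).fit_predict(features)
    print(mesh_labels[:10])
    ```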

  19. Adaptive radial basis function mesh deformation using data reduction

    NASA Astrophysics Data System (ADS)

    Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.

    2016-09-01

    Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction, results in an efficient method as shown in literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time and 2) how to use/automate the explicit boundary correction, while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method, which ensures the set of control points always represents the geometry/displacement up to a certain (user-specified) criteria, by keeping track of the boundary error throughout the simulation and re-selecting when needed. Opposed to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient, for the cases considered. Secondly, the analysis of a single high aspect ratio cell is used to formulate an equation for the correction radius needed, depending on the characteristics of the correction function used, maximum aspect ratio, minimum first cell height and boundary error. Based on the analysis two new radial basis correction functions are derived and proposed. This proposed automated procedure is verified while varying the correction function, Reynolds number (and thus first cell height and aspect ratio) and boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement for both the CPU as well as the memory formulation with a 2D oscillating and translating airfoil with oscillating flap, a 3D flexible locally deforming tube and deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBF's), but the parallel efficiency reduces due to the limited bandwidth available between CPU and memory. In terms of parallel efficiency/scaling the different studied methods perform similarly, with the greedy algorithm being the bottleneck. In terms of absolute computational work the adaptive methods are better for the cases studied due to their more efficient selection of the control points. By automating most of the RBF mesh deformation, a robust, efficient and almost user-independent mesh deformation method is presented.
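
    A compact sketch of the greedy (data reduction) control-point selection the adaptive method builds on: points are added where the boundary error is largest until a tolerance is met. The Gaussian kernel, tolerance and 1D "boundary" are illustrative assumptions, not the paper's settings.

    ```python
    # Greedy control-point selection for RBF boundary interpolation.
    import numpy as np

    def rbf(r, eps=5.0):
        return np.exp(-(eps * r) ** 2)          # Gaussian kernel (one common choice)

    x_bnd = np.linspace(0.0, 1.0, 200)          # boundary points
    d_bnd = np.sin(2 * np.pi * x_bnd)           # prescribed boundary displacement
    tol = 1e-3

    selected = [0, len(x_bnd) - 1]              # start from the two end points
    for _ in range(40):                         # cap on the number of control points
        xc = x_bnd[selected]
        A = rbf(np.abs(xc[:, None] - xc[None, :]))
        w = np.linalg.solve(A, d_bnd[selected])
        err = np.abs(rbf(np.abs(x_bnd[:, None] - xc[None, :])) @ w - d_bnd)
        worst = int(np.argmax(err))
        if err[worst] < tol or worst in selected:
            break
        selected.append(worst)                  # add the worst-approximated boundary point

    print(len(selected), "control points, max boundary error", err.max())
    ```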

  20. A mesh generation and machine learning framework for Drosophila gene expression pattern image analysis

    PubMed Central

    2013-01-01

    Background Multicellular organisms consist of cells of many different types that are established during development. Each type of cell is characterized by the unique combination of expressed gene products as a result of spatiotemporal gene regulation. Currently, a fundamental challenge in regulatory biology is to elucidate the gene expression controls that generate the complex body plans during development. Recent advances in high-throughput biotechnologies have generated spatiotemporal expression patterns for thousands of genes in the model organism fruit fly Drosophila melanogaster. Existing qualitative methods enhanced by a quantitative analysis based on computational tools we present in this paper would provide promising ways for addressing key scientific questions. Results We develop a set of computational methods and open source tools for identifying co-expressed embryonic domains and the associated genes simultaneously. To map the expression patterns of many genes into the same coordinate space and account for the embryonic shape variations, we develop a mesh generation method to deform a meshed generic ellipse to each individual embryo. We then develop a co-clustering formulation to cluster the genes and the mesh elements, thereby identifying co-expressed embryonic domains and the associated genes simultaneously. Experimental results indicate that the gene and mesh co-clusters can be correlated to key developmental events during the stages of embryogenesis we study. The open source software tool has been made available at http://compbio.cs.odu.edu/fly/. Conclusions Our mesh generation and machine learning methods and tools improve upon the flexibility, ease-of-use and accuracy of existing methods. PMID:24373308
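
    As an illustration of co-clustering a gene-by-mesh-element expression matrix, the sketch below uses scikit-learn's SpectralCoclustering on synthetic data; this is a generic stand-in, not the co-clustering formulation developed in the paper.

    ```python
    # Co-cluster genes and mesh elements of a synthetic expression matrix.
    import numpy as np
    from sklearn.cluster import SpectralCoclustering

    rng = np.random.default_rng(1)
    expr = rng.random((60, 400)) * 0.1          # 60 genes x 400 mesh elements
    expr[:20, :150] += 1.0                      # planted co-expressed domain 1
    expr[20:40, 150:300] += 1.0                 # planted co-expressed domain 2

    model = SpectralCoclustering(n_clusters=3, random_state=0).fit(expr)
    print(model.row_labels_[:10], model.column_labels_[:10])
    ```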

  1. Calculation of steady and unsteady transonic flow using a Cartesian mesh and gridless boundary conditions with application to aeroelasticity

    NASA Astrophysics Data System (ADS)

    Kirshman, David

    A numerical method for the solution of inviscid compressible flow using an array of embedded Cartesian meshes in conjunction with gridless surface boundary conditions is developed. The gridless boundary treatment is implemented by means of a least squares fitting of the conserved flux variables using a cloud of nodes in the vicinity of the surface geometry. The method allows for accurate treatment of the surface boundary conditions using a grid resolution an order of magnitude coarser than required of typical Cartesian approaches. Additionally, the method does not suffer from issues associated with thin body geometry or extremely fine cut cells near the body. Unlike some methods that consider a gridless (or "meshless") treatment throughout the entire domain, multi-grid acceleration can be effectively incorporated and issues associated with global conservation are alleviated. The "gridless" surface boundary condition provides for efficient and simple problem set up since definition of the body geometry is generated independently from the field mesh, and automatically incorporated into the field discretization of the domain. The applicability of the method is first demonstrated for steady flow of single and multi-element airfoil configurations. Using this method, comparisons with traditional body-fitted grid simulations reveal that steady flow solutions can be obtained accurately with minimal effort associated with grid generation. The method is then extended to unsteady flow predictions. In this application, flow field simulations for the prescribed oscillation of an airfoil indicate excellent agreement with experimental data. Furthermore, it is shown that the phase lag associated with shock oscillation is accurately predicted without the need for a deformable mesh. Lastly, the method is applied to the prediction of transonic flutter using a two-dimensional wing model, in which comparisons with moving mesh simulations yield nearly identical results. As a result, applicability of the method to transient and vibrating fluid-structure interaction problems is established in which the requirement for a deformable mesh is eliminated.
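
    The gridless boundary idea, fitting field values over a cloud of nearby nodes by least squares and evaluating the fit at the surface, can be sketched as follows; the cloud, field values and surface point are synthetic, and a linear basis stands in for whatever reconstruction the solver actually uses.

    ```python
    # Local least-squares fit of a field over a cloud of nodes near a surface point.
    import numpy as np

    rng = np.random.default_rng(2)
    cloud = rng.uniform(-0.1, 0.1, size=(12, 2))          # nodes near the surface point
    u = 1.0 + 3.0 * cloud[:, 0] - 2.0 * cloud[:, 1]       # sampled flux variable
    u += 0.01 * rng.normal(size=len(u))                   # small perturbation

    # least-squares fit of u ~ a + b*x + c*y over the cloud
    basis = np.column_stack([np.ones(len(cloud)), cloud[:, 0], cloud[:, 1]])
    coeffs, *_ = np.linalg.lstsq(basis, u, rcond=None)

    x_surf = np.array([0.0, 0.0])                         # point on the body surface
    u_surf = coeffs @ np.array([1.0, x_surf[0], x_surf[1]])
    print(coeffs, u_surf)
    ```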

  2. A software platform for continuum modeling of ion channels based on unstructured mesh

    NASA Astrophysics Data System (ADS)

    Tu, B.; Bai, S. Y.; Chen, M. X.; Xie, Y.; Zhang, L. B.; Lu, B. Z.

    2014-01-01

    Most traditional continuum molecular modeling adopted finite difference or finite volume methods which were based on a structured mesh (grid). Unstructured meshes were only occasionally used, but an increased number of applications emerge in molecular simulations. To facilitate the continuum modeling of biomolecular systems based on unstructured meshes, we are developing a software platform with tools which are particularly beneficial to those approaches. This work describes the software system specifically for the simulation of a typical, complex molecular procedure: ion transport through a three-dimensional channel system that consists of a protein and a membrane. The platform contains three parts: a meshing tool chain for ion channel systems, a parallel finite element solver for the Poisson-Nernst-Planck equations describing the electrodiffusion process of ion transport, and a visualization program for continuum molecular modeling. The meshing tool chain in the platform, which consists of a set of mesh generation tools, is able to generate high-quality surface and volume meshes for ion channel systems. The parallel finite element solver in our platform is based on the parallel adaptive finite element package PHG which was developed by one of the authors [1]. As a featured component of the platform, a new visualization program, VCMM, has specifically been developed for continuum molecular modeling with an emphasis on providing useful facilities for unstructured mesh-based methods and for their output analysis and visualization. VCMM provides a graphic user interface and consists of three modules: a molecular module, a meshing module and a numerical module. A demonstration of the platform is provided with a study of two real proteins, the connexin 26 and hemolysin ion channels.

  3. The numerical simulation study of hemodynamics of the new dense-mesh stent

    NASA Astrophysics Data System (ADS)

    Ma, Jiali; Yuan, Zhishan; Yu, Xuebao; Feng, Zhaowei; Miao, Weidong; Xu, Xueli; Li, Juntao

    2017-09-01

    The treatment of aortic aneurysms with the new dense-mesh stent is based on the principle of hemodynamic changes, but the mechanism is not yet fully understood. This paper analyzes and calculates the hemodynamics before and after implantation of the new dense-mesh stent by numerical simulation. The results show that the dense-mesh stent changes the blood flow in the aortic aneurysm: blood velocity, pressure and shear forces decrease significantly while blood supply to the branch vessels is preserved, which clarifies the hemodynamic mechanism of the new dense-mesh stent in the treatment of aortic aneurysms. This is of significance for the development of new dense-mesh stents for treating aortic aneurysms.

  4. A methodology to find the elementary landscape decomposition of combinatorial optimization problems.

    PubMed

    Chicano, Francisco; Whitley, L Darrell; Alba, Enrique

    2011-01-01

    A small number of combinatorial optimization problems have search spaces that correspond to elementary landscapes, where the objective function f is an eigenfunction of the Laplacian that describes the neighborhood structure of the search space. Many problems are not elementary; however, the objective function of a combinatorial optimization problem can always be expressed as a superposition of multiple elementary landscapes if the underlying neighborhood used is symmetric. This paper presents theoretical results that provide the foundation for algebraic methods that can be used to decompose the objective function of an arbitrary combinatorial optimization problem into a sum of subfunctions, where each subfunction is an elementary landscape. Many steps of this process can be automated, and indeed a software tool could be developed that assists the researcher in finding a landscape decomposition. This methodology is then used to show that the subset sum problem is a superposition of two elementary landscapes, and to show that the quadratic assignment problem is a superposition of three elementary landscapes.
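
    For reference, a compact statement of the property used above, written in my own notation rather than the paper's equations: with a d-regular symmetric neighborhood N, adjacency matrix A and graph Laplacian Δ = A - dI, a landscape f is elementary when f minus its mean is an eigenfunction of Δ, and an arbitrary objective decomposes into a constant plus elementary components.

    ```latex
    % Elementary landscape (Grover's wave equation, stated pointwise on the right):
    \Delta\,(f - \bar f) = -\lambda\,(f - \bar f)
    \quad\Longleftrightarrow\quad
    \frac{1}{d}\sum_{y \in N(x)} f(y) = f(x) + \frac{\lambda}{d}\bigl(\bar f - f(x)\bigr).
    % Superposition: an objective over a symmetric neighborhood splits into
    f = \bar f + \sum_{i} f_i, \qquad \Delta f_i = -\lambda_i f_i .
    ```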

  5. Adaptive Mesh Refinement for Microelectronic Device Design

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Lou, John; Norton, Charles

    1999-01-01

    Finite element and finite volume methods are used in a variety of design simulations when it is necessary to compute fields throughout regions that contain varying materials or geometry. Convergence of the simulation can be assessed by uniformly increasing the mesh density until an observable quantity stabilizes. Depending on the electrical size of the problem, uniform refinement of the mesh may be computationally infeasible due to memory limitations. Similarly, depending on the geometric complexity of the object being modeled, uniform refinement can be inefficient since regions that do not need refinement add to the computational expense. In either case, convergence to the correct (measured) solution is not guaranteed. Adaptive mesh refinement methods attempt to selectively refine the region of the mesh that is estimated to contain proportionally higher solution errors. The refinement may be obtained by decreasing the element size (h-refinement), by increasing the order of the element (p-refinement) or by a combination of the two (h-p refinement). A successful adaptive strategy refines the mesh to produce an accurate solution measured against the correct fields without undue computational expense. This is accomplished by the use of a) reliable a posteriori error estimates, b) hierarchal elements, and c) automatic adaptive mesh generation. Adaptive methods are also useful when problems with multi-scale field variations are encountered. These occur in active electronic devices that have thin doped layers and also when mixed physics is used in the calculation. The mesh needs to be fine at and near the thin layer to capture rapid field or charge variations, but can coarsen away from these layers where field variations smoothen and charge densities are uniform. This poster will present an adaptive mesh refinement package that runs on parallel computers and is applied to specific microelectronic device simulations. Passive sensors that operate in the infrared portion of the spectrum as well as active device simulations that model charge transport and Maxwell's equations will be presented.
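
    A schematic refinement loop matching the strategy outlined above (solve, estimate per-element errors a posteriori, refine the worst elements, stop when the observable stabilizes); all callables are placeholders for solver-specific pieces, and the fraction refined per pass is an arbitrary choice.

    ```python
    # Schematic error-driven h-refinement loop with placeholder solver callables.
    def adaptive_solve(mesh, solve, estimate_errors, refine,
                       tol=1e-3, frac=0.2, max_iter=10):
        """Solve, estimate, refine until the monitored observable stabilizes."""
        previous, solution = None, None
        for _ in range(max_iter):
            solution, observable = solve(mesh)              # full field solve
            if previous is not None and abs(observable - previous) < tol:
                break                                       # observable has stabilized
            previous = observable
            errors = estimate_errors(mesh, solution)        # a posteriori error estimates
            ranked = sorted(range(len(errors)), key=errors.__getitem__, reverse=True)
            n_refine = max(1, int(frac * len(errors)))
            mesh = refine(mesh, ranked[:n_refine])          # h-refine the worst elements
        return mesh, solution
    ```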

  6. Tangle-Free Mesh Motion for Ablation Simulations

    NASA Technical Reports Server (NTRS)

    Droba, Justin

    2016-01-01

    Problems involving mesh motion (which should not be mistaken for moving mesh methods, a class of adaptive mesh redistribution techniques) are of critical importance in numerical simulations of the thermal response of melting and ablative materials. Ablation is the process by which material vaporizes or otherwise erodes due to strong heating. Accurate modeling of such materials is of the utmost importance in the design of passive thermal protection systems ("heatshields") for spacecraft, the layer of the vehicle that ensures survival of crew and craft during re-entry. In an explicit mesh motion approach, a complete thermal solve is first performed. Afterwards, the thermal response is used to determine surface recession rates. These values are then used to generate boundary conditions for an a posteriori correction designed to update the location of the mesh nodes. Most often, linear elastic or biharmonic equations are used to model this material response, traditionally in a finite element framework so that complex geometries can be simulated. A simple scheme for moving the boundary nodes involves receding along the surface normals. However, for all but the simplest problem geometries, evolution in time following such a scheme will eventually bring the mesh to intersect and "tangle" with itself, inducing failure. This presentation demonstrates a comprehensive and sophisticated scheme that analyzes the local geometry of each node, with help from user-provided clues, to eliminate the tangle and enable simulations on a wide class of difficult problem geometries. The method developed is demonstrated for linear elastic equations but is general enough that it may be adapted to other modeling equations. The presentation will explicate the inner workings of the tangle-free mesh motion algorithm for both two- and three-dimensional meshes. It will show abstract examples of the method's success, including a verification problem that demonstrates its accuracy and correctness. The focus of the presentation will be on the algorithm; specifics on how the techniques may be used in spacecraft design will not be discussed.
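
    The failure mode motivating this work, receding along surface normals until the boundary crosses itself, is easy to reproduce in two dimensions. The sketch below recedes a thin closed polygon along its vertex normals and flags the first step at which two non-adjacent edges intersect; the geometry, recession rate and tangle test are illustrative stand-ins, and the presentation's geometry-aware correction is not reproduced.

    ```python
    # Naive 2-D recession along vertex normals, plus a simple "tangle" check
    # (do any two non-adjacent boundary edges properly intersect?).
    import numpy as np

    def recede(poly, dr):
        """poly: (N, 2) vertices of a closed CCW polygon; move inward by dr."""
        nxt, prv = np.roll(poly, -1, axis=0), np.roll(poly, 1, axis=0)
        tang = nxt - prv
        normals = np.stack([tang[:, 1], -tang[:, 0]], axis=1)   # outward for CCW
        normals /= np.linalg.norm(normals, axis=1, keepdims=True)
        return poly - dr * normals

    def cross2(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def segments_cross(p, q, r, s):
        return (cross2(p, q, r) * cross2(p, q, s) < 0 and
                cross2(r, s, p) * cross2(r, s, q) < 0)

    def is_tangled(poly):
        n = len(poly)
        segs = [(poly[i], poly[(i + 1) % n]) for i in range(n)]
        return any(segments_cross(*segs[i], *segs[j])
                   for i in range(n) for j in range(i + 2, n)
                   if not (i == 0 and j == n - 1))

    theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
    poly = np.stack([np.cos(theta), 0.2 * np.sin(theta)], axis=1)  # thin ellipse
    for step in range(12):
        poly = recede(poly, 0.03)
        if is_tangled(poly):
            print("mesh tangles at recession step", step)
            break
    ```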

  7. Orbital Reconstruction: Patient-Specific Orbital Floor Reconstruction Using a Mirroring Technique and a Customized Titanium Mesh.

    PubMed

    Tarsitano, Achille; Badiali, Giovanni; Pizzigallo, Angelo; Marchetti, Claudio

    2016-10-01

    Enophthalmos is a severe complication of primary reconstruction of orbital floor fractures. The goal of secondary reconstruction procedures is to restore symmetrical globe positions to recover function and aesthetics. The authors propose a new method of orbital floor reconstruction using a mirroring technique and a customized titanium mesh, printed using a direct metal laser-sintering method. This reconstructive protocol involves 4 steps: mirroring of the healthy orbit at the affected site, virtual design of a patient-specific orbital floor mesh, CAM procedures for direct laser-sintering of the customized titanium mesh, and surgical insertion of the device. Using a computed tomography data set, the normal, uninjured side of the craniofacial skeleton was reflected onto the contralateral injured side, and a reconstructive orbital floor mesh was designed virtually on the mirrored orbital bone surface. The solid-to-layer files of the mesh were then manufactured using direct metal laser sintering, which resolves the shaping and bending biases inherent in the indirect method. An intraoperative navigation system ensured accuracy of the entire procedure. Clinical outcomes were assessed using 3dMD photogrammetry and computed tomography data in 7 treated patients. The technique described here appears to be a viable method to correct complex orbital floor defects needing delayed reconstruction. This study represents the first step in the development of a wider experimental protocol for orbital floor reconstruction using computer-assisted design-computer-assisted manufacturing technology.
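
    The geometric core of the mirroring step is a reflection of the healthy-side surface points across the midsagittal plane. The sketch below reflects a small, made-up point set across an assumed plane through the origin with normal along x; the actual workflow operates on segmented CT surfaces inside CAD software and is not reproduced here.

    ```python
    # Mirroring sketch: reflect surface points of the healthy orbit across a
    # midsagittal plane (point + unit normal) to obtain a template for the
    # injured side.  Coordinates and the plane are hypothetical placeholders.
    import numpy as np

    def mirror(points, plane_point, plane_normal):
        n = np.asarray(plane_normal, dtype=float)
        n /= np.linalg.norm(n)
        d = (points - plane_point) @ n           # signed distance to the plane
        return points - 2.0 * d[:, None] * n     # reflect across the plane

    # Hypothetical healthy-side surface points (mm) and a plane x = 0.
    healthy = np.array([[18.0, 35.0, 12.0],
                        [22.0, 33.5, 10.5],
                        [25.0, 31.0,  9.0]])
    template = mirror(healthy, plane_point=np.zeros(3), plane_normal=[1.0, 0.0, 0.0])
    print(template)        # x-coordinates change sign; y and z are preserved
    ```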

  8. An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform

    NASA Astrophysics Data System (ADS)

    Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra

    2011-06-01

    The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression are carried out independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. The compression method used is decomposed into two stages: geometric encoding and topologic encoding. The proposed approach consists of inserting a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: the wavelet coefficients and a coarse mesh. Then, geometric encoding is performed on these two components. The resulting coarse mesh is marked using a robust mesh watermarking scheme. Inserting the signature into the coarse mesh yields high robustness to several attacks. Finally, topologic encoding is applied to the marked coarse mesh to obtain the compressed mesh. Combining compression and watermarking makes it possible to detect the presence of the signature after compression of the marked mesh. In addition, it allows protected 3D meshes to be transferred at minimum size. The experiments and evaluations show that the proposed approach gives efficient results in terms of compression gain, invisibility and robustness of the signature against many attacks.

  9. Direct Discrete Method for Neutronic Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vosoughi, Naser; Akbar Salehi, Ali; Shahriari, Majid

    The objective of this paper is to introduce a new direct method for neutronic calculations. This method, named the Direct Discrete Method, is simpler than solving the neutron transport equation and is more closely tied to the physical meaning of the problem. It is based on the physics of the problem: the desired geometry is meshed, a balance equation is written for each mesh interval, and the coupling between adjacent intervals yields the final series of discrete equations directly, without deriving the neutron transport differential equation or passing through it as an intermediate step. Neutron discrete equations were produced for a cylindrical geometry with two boundary conditions in one energy group. The correctness of the results was verified against MCNP-4B calculations. (authors)
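
    To give a flavor of writing a balance equation per mesh interval, the sketch below assembles a one-group neutron balance cell by cell on a 1-D slab, using standard diffusion-theory coupling coefficients as a stand-in. The paper's formulation is for cylindrical geometry and is built directly from the physics rather than from diffusion theory, so this is only an illustration of the bookkeeping.

    ```python
    # Illustrative one-group neutron balance written cell by cell on a 1-D slab:
    #   leakage_left + leakage_right + absorption = source   (per cell)
    # Coupling coefficients follow ordinary diffusion theory (an assumption).
    import numpy as np

    N, L = 50, 10.0               # cells, slab width (cm)
    h = L / N
    D, sig_a, S = 1.0, 0.1, 1.0   # diffusion coeff., absorption, uniform source

    A = np.zeros((N, N))
    b = np.full(N, S * h)
    for i in range(N):
        for j in (i - 1, i + 1):
            if 0 <= j < N:
                A[i, i] += D / h        # leakage to the interior neighbour
                A[i, j] -= D / h
            else:
                A[i, i] += 2 * D / h    # zero-flux boundary closure (half cell)
        A[i, i] += sig_a * h            # absorption in the cell
    phi = np.linalg.solve(A, b)
    print("peak flux at mid-slab:", phi[N // 2])
    ```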

  10. Characterizing mesh size distributions (MSDs) in thermosetting materials using a high-pressure system.

    PubMed

    Larché, J-F; Seynaeve, J-M; Voyard, G; Bussière, P-O; Gardette, J-L

    2011-04-21

    The thermoporosimetry method was adapted to determine the mesh size distribution of an acrylate thermoset clearcoat. This goal was achieved by increasing the solvent transfer rate through higher pressure and temperature. A comparison of the results obtained using this approach with those obtained by DMA (dynamic mechanical analysis) underlined the accuracy of thermoporosimetry in characterizing the macromolecular architecture of thermosets. The thermoporosimetry method was also used to analyze the effects of photoaging on cross-linking, which result from the photodegradation of the acrylate thermoset. It was found that the formation of a three-dimensional network followed by densification generates a modification of the average mesh size that leads to a dramatic decrease in the mesh sizes of the polymer.

  11. Trimming Line Design using New Development Method and One Step FEM

    NASA Astrophysics Data System (ADS)

    Chung, Wan-Jin; Park, Choon-Dal; Yang, Dong-yol

    2005-08-01

    In most automobile panel manufacturing, trimming is generally performed prior to flanging. Finding a feasible trimming line is crucial to obtaining an accurate edge profile after flanging. The section-based method develops the blank along section planes and finds the trimming line by generating a loop of end points. This method suffers from inaccurate results in regions with out-of-section motion. On the other hand, the simulation-based method can produce a more accurate trimming line through an iterative strategy. However, due to time limitations and the lack of information in the initial die design, it is still not widely accepted in the industry. In this study, a new, fast method to find a feasible trimming line is proposed. One-step FEM is used to analyze the flanging process, because the desired final shape after flanging can be defined and most strain paths in flanging are simple. When using one-step FEM, the main obstacle is the generation of the initial guess. A robust initial-guess generation method is developed to handle badly shaped meshes, very different mesh sizes and undercut parts. The new method develops the 3D triangular mesh propagationally from the final mesh onto the drawing tool surface. In addition, to remedy mesh distortion during development, an energy minimization technique is utilized. The trimming line is extracted from the outer boundary after the one-step FEM simulation. The method offers many benefits, since the trimming line can be obtained in the early design stage. The developed method has been successfully applied to complex industrial applications such as the flanging of fender and door outer panels.

  12. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
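
    The inexact solve referred to above amounts to a handful of symmetric Gauss-Seidel sweeps on the global error system. The sketch below applies such sweeps to a toy tridiagonal SPD matrix standing in for the hierarchical FEM error problem; the matrix and right-hand side are illustrative only.

    ```python
    # A few symmetric Gauss-Seidel (forward + backward) sweeps on a toy SPD
    # system, illustrating the kind of approximate solve of the global error
    # problem used in place of an exact solve.
    import numpy as np

    def sgs(A, b, x0, sweeps=3):
        x = x0.copy()
        n = len(b)
        for _ in range(sweeps):
            for i in range(n):                       # forward Gauss-Seidel sweep
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            for i in reversed(range(n)):             # backward sweep (symmetrized)
                x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        return x

    n = 100
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian stencil
    b = np.ones(n)
    x = sgs(A, b, np.zeros(n), sweeps=5)
    print("residual after 5 SGS sweeps:", np.linalg.norm(b - A @ x))
    ```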

  13. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the non-conforming interfaces between elements. A new technique is introduced to efficiently implement MEM in 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation is demonstrated by the reduction of the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks (NPB). In this paper, we present some interesting performance results of our OpenMP parallel implementation on different architectures such as the SGI Origin2000, SGI Altix, and Cray MTA-2.

  14. High-performance metal mesh/graphene hybrid films using prime-location and metal-doped graphene.

    PubMed

    Min, Jung-Hong; Jeong, Woo-Lim; Kwak, Hoe-Min; Lee, Dong-Seon

    2017-08-31

    We introduce high-performance metal mesh/graphene hybrid transparent conductive layers (TCLs) using prime-location and metal-doped graphene in near-ultraviolet light-emitting diodes (NUV LEDs). Despite the transparency and sheet resistance values being similar for hybrid TCLs, there were huge differences in the NUV LEDs' electrical and optical properties depending on the location of the graphene layer. We achieved better physical stability and current spreading when the graphene layer was located beneath the metal mesh, in direct contact with the p-GaN layer. We further improved the contact properties by adding a very thin Au mesh between the thick Ag mesh and the graphene layer to produce a dual-layered metal mesh. The Au mesh effectively doped the graphene layer to create a p-type electrode. Using Raman spectra, work function variations, and the transfer length method (TLM), we verified the effect of doping the graphene layer after depositing a very thin metal layer on the graphene layers. From our results, we suggest that the nature of the contact is an important criterion for improving the electrical and optical performance of hybrid TCLs, and the method of doping graphene layers provides new opportunities for solving contact issues in other semiconductor devices.

  15. Scrambled coherent superposition for enhanced optical fiber communication in the nonlinear transmission regime.

    PubMed

    Liu, Xiang; Chandrasekhar, S; Winzer, P J; Chraplyvy, A R; Tkach, R W; Zhu, B; Taunay, T F; Fishteyn, M; DiGiovanni, D J

    2012-08-13

    Coherent superposition of light waves has long been used in various fields of science, and recent advances in digital coherent detection and space-division multiplexing have enabled the coherent superposition of information-carrying optical signals to achieve better communication fidelity on amplified-spontaneous-noise limited communication links. However, fiber nonlinearity introduces highly correlated distortions on identical signals and diminishes the benefit of coherent superposition in the nonlinear transmission regime. Here we experimentally demonstrate that through coordinated scrambling of signal constellations at the transmitter, together with appropriate unscrambling at the receiver, the full benefit of coherent superposition is retained in the nonlinear transmission regime of a space-diversity fiber link based on an innovatively engineered multi-core fiber. This scrambled coherent superposition may provide the flexibility of trading communication capacity for performance in future optical fiber networks, and may open new possibilities in high-performance and secure optical communications.

  16. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement

    PubMed Central

    Garcia-Cantero, Juan J.; Brito, Juan P.; Mata, Susana; Bayona, Sofia; Pastor, Luis

    2017-01-01

    Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells’ overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma’s morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed in the graphic card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes. PMID:28690511

  17. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    NASA Astrophysics Data System (ADS)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution. Any existing modelling technique can be included into our framework of mesh decoupling and adaptive sampling to accelerate large-scale 3-D EM inversions.
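
    The second improvement, random and dynamic down-sampling, can be sketched as a loop that redraws a random subset of soundings at every iteration and ties the subset size to the current regularization parameter. The size rule and parameter names below are assumptions for illustration; the paper determines the subset size with its own adaptive algorithm.

    ```python
    # Random, dynamic down-sampling of soundings: a fresh random subset each
    # inversion iteration, growing as the regularization parameter beta is cooled.
    import numpy as np

    rng = np.random.default_rng(42)
    n_soundings = 10_000

    def subset_size(beta, n_total, n_min=200):
        # Illustrative rule: use more soundings as beta (regularization) decreases.
        return min(n_total, int(n_min + n_total / (1.0 + 10.0 * beta)))

    beta = 1.0
    for iteration in range(6):
        k = subset_size(beta, n_soundings)
        subset = rng.choice(n_soundings, size=k, replace=False)  # reselected each iteration
        # ... assemble sensitivities and data misfit only for `subset` here ...
        print(f"iter {iteration}: beta = {beta:.3f}, using {k} of {n_soundings} soundings")
        beta *= 0.5                                              # cool the regularization
    ```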

  18. Validation of GPU-accelerated superposition-convolution dose computations for the Small Animal Radiation Research Platform.

    PubMed

    Cho, Nathan; Tsiamas, Panagiotis; Velarde, Esteban; Tryggestad, Erik; Jacques, Robert; Berbeco, Ross; McNutt, Todd; Kazanzides, Peter; Wong, John

    2018-05-01

    The Small Animal Radiation Research Platform (SARRP) has been developed for conformal microirradiation with on-board cone beam CT (CBCT) guidance. The graphics processing unit (GPU)-accelerated Superposition-Convolution (SC) method for dose computation has been integrated into the treatment planning system (TPS) for SARRP. This paper describes the validation of the SC method for the kilovoltage energy by comparing with EBT2 film measurements and Monte Carlo (MC) simulations. MC data were simulated by the EGSnrc code with 3 × 10⁸–1.5 × 10⁹ histories, while 21 photon energy bins were used to model the 220 kVp x-rays in the SC method. Various types of phantoms including plastic water, cork, graphite, and aluminum were used to encompass the range of densities of mouse organs. For the comparison, percentage depth dose (PDD) of SC, MC, and film measurements were analyzed. Cross-beam (x,y) dosimetric profiles of SC and film measurements are also presented. Correction factors (CFz) to convert SC to MC dose-to-medium are derived from the SC and MC simulations in homogeneous phantoms of aluminum and graphite to improve the estimation. The SC method produces dose values that are within 5% of film measurements and MC simulations in the flat regions of the profile. The dose is less accurate at the edges, due to factors such as geometric uncertainties of film placement and difference in dose calculation grids. The GPU-accelerated Superposition-Convolution dose computation method was successfully validated with EBT2 film measurements and MC calculations. The SC method offers much faster computation speed than MC and provides calculations of both dose-to-water in medium and dose-to-medium in medium. © 2018 American Association of Physicists in Medicine.
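
    The correction factors mentioned here are depth-dependent ratios CF(z) = D_MC(z) / D_SC(z) derived in homogeneous phantoms and then applied to SC calculations. The sketch below shows only that bookkeeping with made-up dose curves; it is not the SARRP treatment planning code and uses no measured data.

    ```python
    # Depth-dependent correction factors mapping superposition-convolution (SC)
    # dose to Monte Carlo (MC) dose-to-medium:  CF(z) = D_MC(z) / D_SC(z).
    # All numbers are illustrative placeholders, not measured or simulated data.
    import numpy as np

    z = np.linspace(0.0, 3.0, 7)                 # depth (cm)
    d_sc_phantom = np.exp(-0.25 * z)             # SC depth dose (made up)
    d_mc_phantom = np.exp(-0.27 * z) * 1.03      # MC depth dose (made up)
    cf = d_mc_phantom / d_sc_phantom             # correction factor per depth

    d_sc_case = 2.0 * np.exp(-0.25 * z)          # a new SC calculation (made up)
    d_corrected = d_sc_case * np.interp(z, z, cf)
    print(np.round(cf, 4))
    print(np.round(d_corrected, 4))
    ```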

  19. The integration of a mesh reflector to a 15-foot box truss structure. Task 3: Box truss analysis and technology development

    NASA Technical Reports Server (NTRS)

    Bachtell, E. E.; Thiemet, W. F.; Morosow, G.

    1987-01-01

    To demonstrate the design and integration of a reflective mesh surface to a deployable truss structure, a mesh reflector was installed on a 15 foot box truss cube. The specific features demonstrated include: (1) sewing seams in reflective mesh; (2) mesh stretching to desired preload; (3) installation of surface tie cords; (4) installation of reflective surface on truss; (5) setting of reflective surface; (6) verification of surface shape/accuracy; (7) storage and deployment; (8) repeatability of reflector surface; and (9) comparison of surface with predicted shape using analytical methods developed under a previous task.

  20. Wind Farm LES Simulations Using an Overset Methodology

    NASA Astrophysics Data System (ADS)

    Ananthan, Shreyas; Yellapantula, Shashank

    2017-11-01

    Accurate simulation of wind farm wakes under realistic atmospheric inflow conditions and complex terrain requires modeling a wide range of length and time scales. The computational domain can span several kilometers while requiring mesh resolutions of O(10⁻⁶) to adequately resolve the boundary layer on the blade surface. Overset mesh methodology offers an attractive option to address the disparate range of length scales; it allows embedding body-conforming meshes around turbine geometries within nested wake-capturing meshes of varying resolutions necessary to accurately model the inflow turbulence and the resulting wake structures. Dynamic overset hole-cutting algorithms permit relative mesh motion that allows this nested mesh structure to track unsteady inflow direction changes, turbine control changes (yaw and pitch), and wake propagation. An LES model with overset mesh for localized mesh refinement is used to analyze wind farm wakes and performance and compared with local mesh refinements using non-conformal (hanging node) unstructured meshes. Turbine structures will be modeled using both actuator line approaches and fully resolved structures to test the efficacy of overset methods for wind farm applications. Exascale Computing Project (ECP), Project Number: 17-SC-20-SC, a collaborative effort of two DOE organizations - the Office of Science and the National Nuclear Security Administration.

  1. Implicit solvers for unstructured meshes

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.; Mavriplis, Dimitri J.

    1991-01-01

    Implicit methods for unstructured mesh computations are developed and tested. The approximate system which arises from the Newton linearization of the nonlinear evolution operator is solved by using the preconditioned generalized minimum residual (GMRES) technique. The following preconditioners are investigated: incomplete LU factorization (ILU), block diagonal factorization, and symmetric successive over-relaxation (SSOR). The preconditioners have been optimized to have good vectorization properties. The various methods are compared over a wide range of problems. Ordering of the unknowns, which affects the convergence of these sparse matrix iterative methods, is also investigated. Results are presented for inviscid and turbulent viscous calculations on single and multielement airfoil configurations using globally and adaptively generated meshes.
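
    A minimal SciPy analogue of this solver family is GMRES preconditioned by an incomplete LU factorization. The sketch below applies it to a 2-D Laplacian standing in for the Newton-linearized flow Jacobian; the drop tolerance and fill factor are arbitrary illustrative choices.

    ```python
    # GMRES with an ILU preconditioner on a toy sparse system; a 2-D Laplacian
    # stands in for the unstructured-mesh Newton-linearized Jacobian.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 50
    I = sp.identity(n)
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()        # 2-D Laplacian, SPD
    b = np.ones(A.shape[0])

    ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)  # incomplete LU factors
    M = spla.LinearOperator(A.shape, ilu.solve)         # preconditioner action

    x, info = spla.gmres(A, b, M=M)
    print("info:", info, " residual:", np.linalg.norm(b - A @ x))
    ```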

  2. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the boundary of problems soluble by traditional ALE methods, by focusing computational resources where they are required through dynamic adaption. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  3. A Hybrid Numerical Analysis Method for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Staroselsky, Alexander

    2001-01-01

    A new hybrid surface-integral-finite-element numerical scheme has been developed to model a three-dimensional crack propagating through a thin, multi-layered coating. The finite element method was used to model the physical state of the coating (far field), and the surface integral method was used to model the fatigue crack growth. The two formulations are coupled through the need to satisfy boundary conditions on the crack surface and the external boundary. The coupling is sufficiently weak that the surface integral mesh of the crack surface and the finite element mesh of the uncracked volume can be set up independently. Thus when modeling crack growth, the finite element mesh can remain fixed for the duration of the simulation as the crack mesh is advanced. This method was implemented to evaluate the feasibility of fabricating a structural health monitoring system for real-time detection of surface cracks propagating in engine components. In this work, the authors formulate the hybrid surface-integral-finite-element method and discuss the mechanical issues of implementing a structural health monitoring system in an aircraft engine environment.

  4. Bone Marrow–Derived Mesenchymal Stem Cells Enhance Bacterial Clearance and Preserve Bioprosthetic Integrity in a Model of Mesh Infection

    PubMed Central

    Criman, Erik T.; Kurata, Wendy E.; Matsumoto, Karen W.; Aubin, Harry T.; Campbell, Carmen E.

    2016-01-01

    Background: The reported incidence of mesh infection in contaminated operative fields is as high as 30% regardless of the material used. Recently, mesenchymal stem cells (MSCs) have been shown to possess favorable immunomodulatory properties and improve tissue incorporation when seeded onto bioprosthetics. The aim of this study was to evaluate whether seeding noncrosslinked bovine pericardium (Veritas Collagen Matrix) with allogeneic bone marrow–derived MSCs improves infection resistance in vivo after inoculation with Escherichia coli (E. coli). Methods: Rat bone marrow–derived MSCs at passage 3 were seeded onto bovine pericardium and cultured for 7 days before implantation. Additional rats (n = 24) were implanted subcutaneously with MSC-seeded or unseeded mesh and inoculated with 7 × 10⁵ colony-forming units of E. coli or saline before wound closure (group 1, unseeded mesh/saline; group 2, unseeded mesh/E. coli; group 3, MSC-seeded mesh/E. coli; 8 rats per group). Meshes were explanted at 4 weeks and underwent microbiologic and histologic analyses. Results: MSC-seeded meshes inoculated with E. coli demonstrated superior bacterial clearance and preservation of mesh integrity compared with E. coli–inoculated unseeded meshes (87.5% versus 0% clearance; p = 0.001). Complete mesh degradation concurrent with abscess formation was observed in 100% of rats in the unseeded/E. coli group, which is in contrast to 12.5% of rats in the MSC-seeded/E. coli group. Histologic evaluation determined that remodeling characteristics of E. coli–inoculated MSC-seeded meshes were similar to those of uninfected meshes 4 weeks after implantation. Conclusions: Augmenting a bioprosthetic material with stem cells seems to markedly enhance resistance to bacterial infection in vivo and preserve mesh integrity. PMID:27482490

  5. Three-dimensional local ALE-FEM method for fluid flow in domains containing moving boundaries/objects interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrington, David Bradley; Monayem, A. K. M.; Mazumder, H.

    2015-03-05

    A three-dimensional finite element method for the numerical simulation of fluid flow in domains containing moving rigid objects or boundaries is developed. The method falls into the general category of Arbitrary Lagrangian Eulerian methods; it is based on a fixed mesh that is locally adapted in the immediate vicinity of the moving interfaces and reverts to its original shape once the moving interfaces go past the elements. The moving interfaces are defined by separate sets of marker points, so that the global mesh is independent of interface movement and the possibility of mesh entanglement is eliminated. The result is a fully robust formulation capable of computing on domains of complex geometry containing moving boundaries or devices that may themselves have complex geometry, without danger of the mesh becoming unsuitable due to its continuous deformation, thus eliminating the need for repeated re-meshing and interpolation. Moreover, the boundary conditions on the interfaces are imposed exactly. This work is intended to support the internal combustion engine simulator KIVA developed at Los Alamos National Laboratory. The model's capabilities are illustrated through application to incompressible flows in different geometrical settings that show the robustness and flexibility of the technique to perform simulations involving moving boundaries in a three-dimensional domain.

  6. Design of an essentially non-oscillatory reconstruction procedure in finite-element type meshes

    NASA Technical Reports Server (NTRS)

    Abgrall, Remi

    1992-01-01

    An essentially non-oscillatory reconstruction for functions defined on finite-element-type meshes is designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes, and the reconstruction of a function from its averages in the control volumes surrounding the nodes of the mesh. Concerning the first problem, the behavior of the highest coefficients of two polynomial interpolations of a function that may admit discontinuities along locally regular curves is studied: the Lagrange interpolation, and an approximation such that the mean of the polynomial on any control volume is equal to that of the function to be approximated. This enables the best stencil for the approximation to be chosen. The choice of the smallest possible number of stencils is addressed. Concerning the reconstruction problem, two methods were studied: one based on an adaptation of the so-called reconstruction-via-deconvolution method to irregular meshes, and one that relies on the approximation of the mean as defined above. The first method is conservative up to a quadrature formula and the second one is exactly conservative. The two methods have the expected order of accuracy, but the second one is much less expensive than the first one. Some numerical examples are given which demonstrate the efficiency of the reconstruction.
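
    In one dimension, the stencil-selection idea that the paper generalizes can be stated in a few lines: grow the stencil one cell at a time toward the side whose (un)divided difference is smaller, so the interpolant avoids crossing a discontinuity. The sketch below is the classical uniform-grid version, not the paper's multivariate construction on finite-element-type meshes.

    ```python
    # Classical 1-D ENO stencil selection on a uniform grid: extend the stencil
    # toward the side with the smaller undivided difference (the smoother side).
    import numpy as np

    def eno_stencil(x, f, i, order):
        """Indices of an ENO stencil of `order` points containing point i."""
        left = right = i
        while right - left + 1 < order:
            can_l, can_r = left > 0, right + 1 < len(x)
            # Highest-order undivided difference over f[lo..hi] (uniform spacing).
            dd = lambda lo, hi: abs(np.diff(f[lo:hi + 1], n=hi - lo)[0])
            if can_r and (not can_l or dd(left, right + 1) < dd(left - 1, right)):
                right += 1                     # right extension is smoother
            else:
                left -= 1                      # left extension is smoother
        return np.arange(left, right + 1)

    x = np.linspace(0.0, 1.0, 21)
    f = np.where(x < 0.5, np.sin(x), 2.0 + np.sin(x))   # jump between x=0.45 and x=0.5
    print(eno_stencil(x, f, i=9, order=4))    # stays on the smooth side left of the jump
    print(eno_stencil(x, f, i=11, order=4))   # stays on the smooth side right of the jump
    ```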

  7. User Manual for the PROTEUS Mesh Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Micheal A.; Shemon, Emily R

    2016-09-19

    PROTEUS is built around a finite element representation of the geometry for visualization. In addition, the PROTEUS-SN solver was built to solve the even-parity transport equation on a finite element mesh provided as input. Similarly, PROTEUS-MOC and PROTEUS-NEMO were built to apply the method of characteristics on unstructured finite element meshes. Given the complexity of real-world problems, experience has shown that using a commercial mesh generator to create rather simple input geometries is overly complex and slow. As a consequence, significant effort has been put into creating multiple codes that assist in mesh generation and manipulation. There are three input means to create a mesh in PROTEUS: UFMESH, GRID, and NEMESH. At present, UFMESH is a simple way to generate two-dimensional Cartesian and hexagonal fuel assembly geometries. The UFMESH input allows for simple assembly mesh generation, while the GRID input allows the generation of Cartesian, hexagonal, and regular triangular structured grid geometry options. NEMESH is a way for the user to create their own mesh or convert another mesh file format into a PROTEUS input format. Given that one has an input mesh format acceptable to PROTEUS, we have constructed several tools which allow further mesh and geometry construction (i.e., mesh extrusion and merging). This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and the output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and MT_RadialLattice.x codes. The former allows conversion between most mesh types handled by PROTEUS, while the second allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that input specific to a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.

  8. The origin of non-classical effects in a one-dimensional superposition of coherent states

    NASA Technical Reports Server (NTRS)

    Buzek, V.; Knight, P. L.; Barranco, A. Vidiella

    1992-01-01

    We investigate the nature of the quantum fluctuations in a light field created by the superposition of coherent fields. We give a physical explanation (in terms of Wigner functions and phase-space interference) why the 1-D superposition of coherent states in the direction of the x-quadrature leads to the squeezing of fluctuations in the y-direction, and show that such a superposition can generate the squeezed vacuum and squeezed coherent states.

  9. Solving modal equations of motion with initial conditions using MSC/NASTRAN DMAP. Part 1: Implementing exact mode superposition

    NASA Technical Reports Server (NTRS)

    Abdallah, Ayman A.; Barnett, Alan R.; Ibrahim, Omar M.; Manella, Richard T.

    1993-01-01

    Within the MSC/NASTRAN DMAP (Direct Matrix Abstraction Program) module TRD1, solving physical (coupled) or modal (uncoupled) transient equations of motion is performed using the Newmark-Beta or mode superposition algorithms, respectively. For equations of motion with initial conditions, only the Newmark-Beta integration routine has been available in MSC/NASTRAN solution sequences for solving physical systems and in custom DMAP sequences or alters for solving modal systems. In some cases, one difficulty with using the Newmark-Beta method is that the process of selecting suitable integration time steps for obtaining acceptable results is lengthy. In addition, when very small step sizes are required, a large amount of time can be spent integrating the equations of motion. For certain aerospace applications, a significant time savings can be realized when the equations of motion are solved using an exact integration routine instead of the Newmark-Beta numerical algorithm. In order to solve modal equations of motion with initial conditions and take advantage of efficiencies gained when using uncoupled solution algorithms (like that within TRD1), an exact mode superposition method using MSC/NASTRAN DMAP has been developed and successfully implemented as an enhancement to an existing coupled loads methodology at the NASA Lewis Research Center.
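
    The idea of an exact (rather than Newmark-Beta) update can be illustrated on a single uncoupled modal equation: if the modal load is held constant over each step, the damped single-degree-of-freedom response has a closed-form solution that can be advanced step by step from the initial conditions. The sketch below does exactly that for one hypothetical mode; it is a generic illustration, not the MSC/NASTRAN DMAP implementation.

    ```python
    # Exact (piecewise-analytic) integration of one uncoupled modal equation
    #   q'' + 2*zeta*w*q' + w^2*q = p(t),  with initial conditions q0, v0,
    # assuming p(t) is held constant over each time step.
    import numpy as np

    def exact_step(q0, v0, p, w, zeta, dt):
        wd = w * np.sqrt(1.0 - zeta**2)          # damped natural frequency
        qs = p / w**2                            # static (particular) solution
        a = q0 - qs
        b = (v0 + zeta * w * a) / wd
        e = np.exp(-zeta * w * dt)
        c, s = np.cos(wd * dt), np.sin(wd * dt)
        q = qs + e * (a * c + b * s)
        v = e * (-zeta * w * (a * c + b * s) + wd * (-a * s + b * c))
        return q, v

    w, zeta, dt = 2 * np.pi * 5.0, 0.02, 0.002   # hypothetical 5 Hz mode, 2% damping
    q, v = 1.0e-3, 0.0                           # initial modal displacement, velocity
    for n in range(500):
        p = 0.0 if n * dt < 0.05 else 4.0        # step load applied at t = 50 ms
        q, v = exact_step(q, v, p, w, zeta, dt)
    print("modal response at t = 1 s:", q)
    ```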

  10. Mesh-free data transfer algorithms for partitioned multiphysics problems: Conservation, accuracy, and parallelism

    DOE PAGES

    Slattery, Stuart R.

    2015-12-02

    In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
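
    The spline-interpolation variant of mesh-free transfer can be illustrated with a compactly supported Wendland C2 radial basis function: solve for weights at the source points, then evaluate at the target points. The dense, serial sketch below uses random point clouds and an arbitrary support radius; the paper's implementation is sparse and parallel, which is not reproduced.

    ```python
    # Mesh-free transfer with a compactly supported Wendland C2 radial basis
    # function: fit weights at source points, evaluate at target points.
    import numpy as np

    def wendland_c2(r, support):
        s = np.clip(r / support, 0.0, 1.0)
        return (1.0 - s) ** 4 * (4.0 * s + 1.0)      # zero beyond the support radius

    def rbf_transfer(src_pts, src_vals, tgt_pts, support=0.3):
        dist = lambda A, B: np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
        Phi = wendland_c2(dist(src_pts, src_pts), support)   # interpolation matrix
        weights = np.linalg.solve(Phi, src_vals)
        return wendland_c2(dist(tgt_pts, src_pts), support) @ weights

    rng = np.random.default_rng(1)
    src = rng.random((200, 3))                        # source cloud (e.g. one physics mesh)
    tgt = rng.random((50, 3))                         # target cloud (e.g. the other mesh)
    f = lambda p: np.sin(2 * np.pi * p[:, 0]) * p[:, 1] + p[:, 2]
    approx = rbf_transfer(src, f(src), tgt)
    print("max transfer error:", np.abs(approx - f(tgt)).max())
    ```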

  11. Robust and efficient overset grid assembly for partitioned unstructured meshes

    NASA Astrophysics Data System (ADS)

    Roget, Beatrice; Sitaraman, Jayanarayanan

    2014-03-01

    This paper presents a method to perform efficient and automated Overset Grid Assembly (OGA) on a system of overlapping unstructured meshes in a parallel computing environment where all meshes are partitioned into multiple mesh-blocks and processed on multiple cores. The main task of the overset grid assembler is to identify, in parallel, among all points in the overlapping mesh system, at which points the flow solution should be computed (field points), interpolated (receptor points), or ignored (hole points). Point containment search or donor search, an algorithm to efficiently determine the cell that contains a given point, is the core procedure necessary for accomplishing this task. Donor search is particularly challenging for partitioned unstructured meshes because of the complex irregular boundaries that are often created during partitioning.

  12. Manufacturing and characterization of encapsulated microfibers with different molecular weight poly(ε-caprolactone) (PCL) resins using a melt electrospinning technique

    NASA Astrophysics Data System (ADS)

    Lee, Jason K.; Ko, Junghyuk; Jun, Martin B. G.; Lee, Patrick C.

    2016-02-01

    Encapsulated structures of poly(ε-caprolactone) microfibers were successfully fabricated through two distinct melt electrospinning methods: melt coaxial and melt-blending electrospinning methods. Both methods resulted in encapsulated microfibers, but the resultant microfibers had different morphologies. Melt coaxial electrospinning formed a dual, semi-concentric structure, whereas melt-blending electrospinning resulted in an islands-in-a-sea fiber structure (i.e. a multiple-core structure). The encapsulated microfibers were produced using a custom-designed melt coaxial electrospinning device and the microfibers were characterized using a scanning electron microscope. To analyze the properties of the melt blended encapsulated fibers and coaxial fibers, the microfiber mesh specimens were collected. The mechanical properties of each microfiber mesh were analyzed through a tensile test. The coaxial microfiber meshes were post processed with a femtosecond laser machine to create dog-bone shaped tensile test specimens, while the melt blended microfiber meshes were kept as-fabricated. The tensile experiments undertaken with coaxial microfiber specimens resulted in an increase in tensile strength compared to 10 k and 45 k monolayer specimens. However, melt blended microfiber meshes did not result in an increase in tensile strength. The melt blended microfiber mesh results indicate that by using greater amounts of 45 k PCL resin within the microstructure, the resulting fibers obtain a higher tensile strength.

  13. Parallel three-dimensional magnetotelluric inversion using adaptive finite-element method. Part I: theory and synthetic study

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.

    2015-07-01

    This paper presents a distributed magnetotelluric inversion scheme based on adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, meshes for the forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gain, EM fields for each frequency were calculated using independent meshes in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems based on the linearized model resolution matrix was developed. To make this algorithm suitable for large-scale problems, it was proposed to use a low-rank approximation of the linearized model resolution matrix. In order to fill the gap between initial and true model complexities and better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighborhoods of points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit dependency on the initial model guess. Additionally, it is demonstrated that the adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemisphere object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.

  14. Gear fatigue crack prognosis using embedded model, gear dynamic model and fracture mechanics

    NASA Astrophysics Data System (ADS)

    Li, C. James; Lee, Hyungdae

    2005-07-01

    This paper presents a model-based method that predicts the remaining useful life of a gear with a fatigue crack. The method consists of an embedded model to identify gear meshing stiffness from measured gear torsional vibration; an inverse method to estimate crack size from the estimated meshing stiffness; a gear dynamic model to simulate gear meshing dynamics and determine the dynamic load on the cracked tooth; and a fast crack propagation model to forecast the remaining useful life based on the estimated crack size and dynamic load. The fast crack propagation model was established to avoid repeated FEM calculations and facilitate field deployment of the proposed method. Experimental studies were conducted to validate and demonstrate the feasibility of the proposed method for prognosis of a cracked gear.
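
    A common way to forecast remaining useful life from an estimated crack size and dynamic load, and a plausible stand-in for the fast crack propagation model described above, is to integrate a Paris-type growth law until a critical crack size is reached. All constants, the geometry factor and the load below are placeholders; the paper's model is calibrated against FEM results, which this sketch does not reproduce.

    ```python
    # Remaining-useful-life sketch: integrate a Paris-type crack growth law
    #   da/dN = C * (dK)^m,   dK ~ dS * sqrt(pi * a) * Y(a),
    # from the estimated crack size to a critical size.  Illustrative constants.
    import numpy as np

    C, m = 3.0e-12, 3.0            # Paris constants (dK in MPa*sqrt(m), da in m)
    dS = 180.0                     # dynamic stress range on the tooth (MPa, assumed)
    Y = lambda a: 1.12             # geometry factor (assumed constant)
    a, a_crit = 0.2e-3, 2.0e-3     # current and critical crack depth (m)

    dN = 1000                      # integrate in blocks of 1000 load cycles
    cycles = 0
    while a < a_crit:
        dK = dS * np.sqrt(np.pi * a) * Y(a)      # stress intensity range
        a += C * dK**m * dN                      # crack growth over the block
        cycles += dN
    print(f"predicted remaining life: about {cycles} cycles")
    ```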

  15. Squeezing effects applied in nonclassical superposition states for quantum nanoelectronic circuits

    NASA Astrophysics Data System (ADS)

    Choi, Jeong Ryeol

    2017-06-01

    Quantum characteristics of a driven series RLC nanoelectronic circuit whose capacitance varies with time are studied using an invariant operator method together with a unitary transformation approach. In particular, squeezing effects and nonclassical properties of a superposition state composed of two displaced squeezed number states of equal amplitude, but 180° out of phase, are investigated in detail. We applied our developments to a solvable specific case obtained from a suitable choice of time-dependent parameters. The pattern of mechanical oscillation of the amount of charge stored in the capacitor, which is initially displaced, exhibits more or less distortion due to the influence of the time-varying parameters of the system. We have analyzed the squeezing effects of the system from diverse angles, and such effects are illustrated for better understanding. It has been confirmed that the degree of squeezing is not constant but varies with time depending on the specific situation. We have found that quantum interference occurs whenever the two components of the superposition meet together during the time evolution of the probability density. This outcome signifies the appearance of nonclassical features of the system. Nonclassicality of dynamical systems can be a potential resource necessary for realizing quantum information techniques. Indeed, such nonclassical features of superposition states are expected to play a key role in upcoming information science, which has attracted renewed attention recently.

  16. Edge gradients evaluation for 2D hybrid finite volume method model

    USDA-ARS?s Scientific Manuscript database

    In this study, a two-dimensional depth-integrated hydrodynamic model was developed using FVM on a hybrid unstructured collocated mesh system. To alleviate the negative effects of mesh irregularity and non-uniformity, a conservative evaluation method for edge gradients based on the second-order Tayl...

  17. Interpolation methods and the accuracy of lattice-Boltzmann mesh refinement

    DOE PAGES

    Guzik, Stephen M.; Weisgraber, Todd H.; Colella, Phillip; ...

    2013-12-10

    A lattice-Boltzmann model to solve the equivalent of the Navier-Stokes equations on adaptively refined grids is presented. A method for transferring information across interfaces between different grid resolutions was developed following established techniques for finite-volume representations. This new approach relies on a space-time interpolation and solving constrained least-squares problems to ensure conservation. The effectiveness of this method at maintaining the second order accuracy of lattice-Boltzmann is demonstrated through a series of benchmark simulations and detailed mesh refinement studies. These results exhibit smaller solution errors and improved convergence when compared with similar approaches relying only on spatial interpolation. Examples highlighting the mesh adaptivity of this method are also provided.

  18. An optimization-based approach for high-order accurate discretization of conservation laws with discontinuous solutions

    NASA Astrophysics Data System (ADS)

    Zahr, M. J.; Persson, P.-O.

    2018-07-01

    This work introduces a novel discontinuity-tracking framework for resolving discontinuous solutions of conservation laws with high-order numerical discretizations that support inter-element solution discontinuities, such as discontinuous Galerkin or finite volume methods. The proposed method aims to align inter-element boundaries with discontinuities in the solution by deforming the computational mesh. A discontinuity-aligned mesh ensures the discontinuity is represented through inter-element jumps while smooth basis functions interior to elements are only used to approximate smooth regions of the solution, thereby avoiding Gibbs' phenomena that create well-known stability issues. Therefore, very coarse high-order discretizations accurately resolve the piecewise smooth solution throughout the domain, provided the discontinuity is tracked. Central to the proposed discontinuity-tracking framework is a discrete PDE-constrained optimization formulation that simultaneously aligns the computational mesh with discontinuities in the solution and solves the discretized conservation law on this mesh. The optimization objective is taken as a combination of the deviation of the finite-dimensional solution from its element-wise average and a mesh distortion metric to simultaneously penalize Gibbs' phenomena and distorted meshes. It will be shown that our objective function satisfies two critical properties that are required for this discontinuity-tracking framework to be practical: (1) it possesses a local minimum at a discontinuity-aligned mesh and (2) it decreases monotonically to this minimum in a neighborhood of radius approximately h/2, whereas other popular discontinuity indicators fail to satisfy the latter. Another important contribution of this work is the observation that traditional reduced-space PDE-constrained optimization solvers that repeatedly solve the conservation law at various mesh configurations are not viable in this context, since severe overshoot and undershoot in the solution, i.e., Gibbs' phenomena, may make it impossible to solve the discrete conservation law on non-aligned meshes. Therefore, we advocate a gradient-based, full-space solver where the mesh and conservation law solution converge to their optimal values simultaneously and therefore never require the solution of the discrete conservation law on a non-aligned mesh. The merit of the proposed method is demonstrated on a number of one- and two-dimensional model problems including the L2 projection of discontinuous functions, Burgers' equation with a discontinuous source term, transonic flow through a nozzle, and supersonic flow around a bluff body. We demonstrate optimal O(h^(p+1)) convergence rates in the L1 norm for up to polynomial order p = 6 and show that accurate solutions can be obtained on extremely coarse meshes.

  19. Active control of the lifetime of excited resonance states by means of laser pulses.

    PubMed

    García-Vela, A

    2012-04-07

    Quantum control of the lifetime of a system in an excited resonance state is investigated theoretically by creating coherent superpositions of overlapping resonances. This control scheme exploits the quantum interference occurring between the overlapping resonances, which can be controlled by varying the width of the laser pulse that creates the superposition state. The scheme is applied to a realistic model of the Br(2)(B)-Ne predissociation decay dynamics through a three-dimensional wave packet method. It is shown that extensive control of the system lifetime is achievable, both enhancing and damping it remarkably. An experimental realization of the control scheme is suggested.

  20. Parallel Performance Optimizations on Unstructured Mesh-based Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas

    2015-01-01

    This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.

  1. Cart3D Simulations for the First AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2014-01-01

    Simulation results for the First AIAA Sonic Boom Prediction Workshop (LBW1) are presented using an inviscid, embedded-boundary Cartesian mesh method. The method employs adjoint-based error estimation and adaptive meshing to automatically determine resolution requirements of the computational domain. Results are presented for both mandatory and optional test cases. These include an axisymmetric body of revolution, a 69° delta wing model and a complete model of the Lockheed N+2 supersonic tri-jet with V-tail and flow-through nacelles. In addition to formal mesh refinement studies and examination of the adjoint-based error estimates, mesh convergence is assessed by presenting simulation results for meshes at several resolutions which are comparable in size to the unstructured grids distributed by the workshop organizers. Data provided include both the pressure signals required by the workshop and information on code performance in both memory and processing time. Various enhanced techniques offering improved simulation efficiency will be demonstrated and discussed.

  2. Runge-Kutta discontinuous Galerkin method using a new type of WENO limiters on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Zhong, Xinghui; Shu, Chi-Wang; Qiu, Jianxian

    2013-09-01

    In this paper we generalize a new type of limiters based on the weighted essentially non-oscillatory (WENO) finite volume methodology for the Runge-Kutta discontinuous Galerkin (RKDG) methods solving nonlinear hyperbolic conservation laws, which were recently developed in [32] for structured meshes, to two-dimensional unstructured triangular meshes. The key idea of such limiters is to use the entire polynomials of the DG solutions from the troubled cell and its immediate neighboring cells, and then apply the classical WENO procedure to form a convex combination of these polynomials based on smoothness indicators and nonlinear weights, with suitable adjustments to guarantee conservation. The main advantage of this new limiter is its simplicity in implementation, especially for the unstructured meshes considered in this paper, as only information from immediate neighbors is needed and the usage of complicated geometric information of the meshes is largely avoided. Numerical results for both scalar equations and Euler systems of compressible gas dynamics are provided to illustrate the good performance of this procedure.
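
    A one-dimensional sketch of the central idea above: candidate polynomials from a troubled cell and its neighbors are blended through smoothness indicators and nonlinear weights. The linear weights `gamma`, the `eps` regularization, and the derivative-based smoothness surrogate are assumptions, and the conservation correction used in the paper is omitted.

```python
import numpy as np

def weno_blend(p_cell, p_left, p_right, xs, gamma=(0.8, 0.1, 0.1), eps=1e-6):
    """Blend the troubled cell's polynomial with its neighbours' polynomials
    using WENO-style smoothness indicators and nonlinear weights (1-D sketch;
    the conservation correction of the cell average is omitted here)."""
    cands = [p_cell, p_left, p_right]
    # smoothness surrogate: quadrature sum of the squared first derivative
    betas = [np.sum(np.polyval(np.polyder(p), xs) ** 2) for p in cands]
    alphas = [g / (eps + b) ** 2 for g, b in zip(gamma, betas)]
    w = np.array(alphas) / np.sum(alphas)          # nonlinear weights, sum to 1
    return sum(wi * np.polyval(p, xs) for wi, p in zip(w, cands)), w

# usage: the central candidate is steep ("troubled"); the neighbours are gentle
xs = np.linspace(-0.5, 0.5, 5)
p_c = np.poly1d([8.0, 0.0, 1.0])
p_l = np.poly1d([0.2, 1.0])
p_r = np.poly1d([-0.1, 1.0])
vals, w = weno_blend(p_c, p_l, p_r, xs)
print("nonlinear weights:", np.round(w, 3))   # weight shifts to the smooth candidates
```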

  3. CONSTRUCTION OF SCALAR AND VECTOR FINITE ELEMENT FAMILIES ON POLYGONAL AND POLYHEDRAL MESHES

    PubMed Central

    GILLETTE, ANDREW; RAND, ALEXANDER; BAJAJ, CHANDRAJIT

    2016-01-01

    We combine theoretical results from polytope domain meshing, generalized barycentric coordinates, and finite element exterior calculus to construct scalar- and vector-valued basis functions for conforming finite element methods on generic convex polytope meshes in dimensions 2 and 3. Our construction recovers well-known bases for the lowest order Nédélec, Raviart-Thomas, and Brezzi-Douglas-Marini elements on simplicial meshes and generalizes the notion of Whitney forms to non-simplicial convex polygons and polyhedra. We show that our basis functions lie in the correct function space with regards to global continuity and that they reproduce the requisite polynomial differential forms described by finite element exterior calculus. We present a method to count the number of basis functions required to ensure these two key properties. PMID:28077939
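
    One ingredient named in the abstract, generalized barycentric coordinates, can be illustrated with the classical Wachspress construction on a convex polygon; this is a generic sketch, not the paper's scalar- or vector-valued element families.

```python
import numpy as np

def tri_area(a, b, c):
    """Signed area of triangle (a, b, c)."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))

def wachspress(verts, x):
    """Wachspress generalized barycentric coordinates of a point x inside a
    convex polygon with counter-clockwise vertices `verts` (n x 2)."""
    v = np.asarray(verts, dtype=float)
    n = len(v)
    w = np.empty(n)
    for i in range(n):
        prev, nxt = v[(i - 1) % n], v[(i + 1) % n]
        c_i = tri_area(prev, v[i], nxt)                   # vertex "corner" area
        prod = 1.0
        for j in range(n):
            if j in ((i - 1) % n, i):
                continue
            prod *= tri_area(x, v[j], v[(j + 1) % n])     # areas of x against the other edges
        w[i] = c_i * prod
    return w / w.sum()

# usage: the coordinates reproduce the point (linear precision) on a unit square
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
x = np.array([0.3, 0.6])
lam = wachspress(square, x)
print(lam, lam @ np.asarray(square, float))   # second output should equal x
```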

  4. CONSTRUCTION OF SCALAR AND VECTOR FINITE ELEMENT FAMILIES ON POLYGONAL AND POLYHEDRAL MESHES.

    PubMed

    Gillette, Andrew; Rand, Alexander; Bajaj, Chandrajit

    2016-10-01

    We combine theoretical results from polytope domain meshing, generalized barycentric coordinates, and finite element exterior calculus to construct scalar- and vector-valued basis functions for conforming finite element methods on generic convex polytope meshes in dimensions 2 and 3. Our construction recovers well-known bases for the lowest order Nédélec, Raviart-Thomas, and Brezzi-Douglas-Marini elements on simplicial meshes and generalizes the notion of Whitney forms to non-simplicial convex polygons and polyhedra. We show that our basis functions lie in the correct function space with regards to global continuity and that they reproduce the requisite polynomial differential forms described by finite element exterior calculus. We present a method to count the number of basis functions required to ensure these two key properties.

  5. Comparison of the fracture resistances of glass fiber mesh- and metal mesh-reinforced maxillary complete denture under dynamic fatigue loading

    PubMed Central

    2017-01-01

    PURPOSE The aim of this study was to investigate the effect of reinforcing materials on the fracture resistances of glass fiber mesh- and Cr–Co metal mesh-reinforced maxillary complete dentures under fatigue loading. MATERIALS AND METHODS Glass fiber mesh- and Cr–Co mesh-reinforced maxillary complete dentures were fabricated using silicone molds and acrylic resin. A control group was prepared with no reinforcement (n = 15 per group). After fatigue loading was applied using a chewing simulator, fracture resistance was measured by a universal testing machine. The fracture patterns were analyzed and the fractured surfaces were observed by scanning electron microscopy. RESULTS After cyclic loading, none of the dentures showed cracks or fractures. During fracture resistance testing, all unreinforced dentures experienced complete fracture. The mesh-reinforced dentures primarily showed posterior framework fracture. Deformation of the all-metal framework caused the metal mesh-reinforced denture to exhibit the highest fracture resistance, followed by the glass fiber mesh-reinforced denture (P<.05) and the control group (P<.05). The glass fiber mesh-reinforced denture primarily maintained its original shape with unbroken fibers. River line pattern of the control group, dimples and interdendritic fractures of the metal mesh group, and radial fracture lines of the glass fiber group were observed on the fractured surfaces. CONCLUSION The glass fiber mesh-reinforced denture exhibits a fracture resistance higher than that of the unreinforced denture, but lower than that of the metal mesh-reinforced denture because of the deformation of the metal mesh. The glass fiber mesh-reinforced denture maintains its shape even after fracture, indicating the possibility of easier repair. PMID:28243388

  6. Overcoming Sequence Misalignments with Weighted Structural Superposition

    PubMed Central

    Khazanov, Nickolay A.; Damm-Ganamet, Kelly L.; Quang, Daniel X.; Carlson, Heather A.

    2012-01-01

    An appropriate structural superposition identifies similarities and differences between homologous proteins that are not evident from sequence alignments alone. We have coupled our Gaussian-weighted RMSD (wRMSD) tool with a sequence aligner and seed extension (SE) algorithm to create a robust technique for overlaying structures and aligning sequences of homologous proteins (HwRMSD). HwRMSD overcomes errors in the initial sequence alignment that would normally propagate into a standard RMSD overlay. SE can generate a corrected sequence alignment from the improved structural superposition obtained by wRMSD. HwRMSD’s robust performance and its superiority over standard RMSD are demonstrated over a range of homologous proteins. Its better overlay results in corrected sequence alignments with good agreement to HOMSTRAD. Finally, HwRMSD is compared to established structural alignment methods: FATCAT, SSM, CE, and Dalilite. Most methods are comparable at placing residue pairs within 2 Å, but HwRMSD places many more residue pairs within 1 Å, providing a clear advantage. Such high accuracy is essential in drug design, where small distances can have a large impact on computational predictions. This level of accuracy is also needed to correct sequence alignments in an automated fashion, especially for omics-scale analysis. HwRMSD can align homologs with low sequence identity and large conformational differences, cases where both sequence-based and structural-based methods may fail. The HwRMSD pipeline overcomes the dependency of structural overlays on initial sequence pairing and removes the need to determine the best sequence-alignment method, substitution matrix, and gap parameters for each unique pair of homologs. PMID:22733542
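
    A minimal sketch of the Gaussian-weighted RMSD idea, assuming a plain weighted Kabsch superposition with iteratively updated Gaussian weights; the width parameter `c`, the fixed iteration count, and the synthetic coordinates are assumptions, and the HwRMSD coupling to sequence alignment and seed extension is not reproduced.

```python
import numpy as np

def weighted_kabsch(P, Q, w):
    """Optimal rotation/translation overlaying P onto Q with per-pair weights w."""
    w = w / w.sum()
    pc, qc = (w[:, None] * P).sum(0), (w[:, None] * Q).sum(0)   # weighted centroids
    H = (w[:, None] * (P - pc)).T @ (Q - qc)                    # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                      # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, qc - R @ pc

def gaussian_wrmsd_overlay(P, Q, c=2.0, iters=20):
    """Iteratively reweight pairs with a Gaussian of their current distance, so
    well-matching regions dominate the superposition (wRMSD-style sketch)."""
    w = np.ones(len(P))
    for _ in range(iters):
        R, t = weighted_kabsch(P, Q, w)
        d2 = np.sum((P @ R.T + t - Q) ** 2, axis=1)
        w = np.exp(-d2 / c)                                     # Gaussian weights
    return R, t, w

# usage: overlay two hypothetical 3-D coordinate sets of equal length
rng = np.random.default_rng(0)
Q = rng.normal(size=(50, 3))
P = Q.copy(); P[:5] += 3.0          # a few badly matching pairs get down-weighted
R, t, w = gaussian_wrmsd_overlay(P, Q)
print("smallest weights (the outlier pairs):", np.round(np.sort(w)[:5], 3))
```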

  7. Adaptive and dynamic meshing methods for numerical simulations

    NASA Astrophysics Data System (ADS)

    Acikgoz, Nazmiye

    For the numerical simulation of many problems of engineering interest, it is desirable to have an automated mesh adaption tool capable of producing high quality meshes with an affordably low number of mesh points. This is important especially for problems, which are characterized by anisotropic features of the solution and require mesh clustering in the direction of high gradients. Another significant issue in meshing emerges in the area of unsteady simulations with moving boundaries or interfaces, where the motion of the boundary has to be accommodated by deforming the computational grid. Similarly, there exist problems where current mesh needs to be adapted to get more accurate solutions because either the high gradient regions are initially predicted inaccurately or they change location throughout the simulation. To solve these problems, we propose three novel procedures. For this purpose, in the first part of this work, we present an optimization procedure for three-dimensional anisotropic tetrahedral grids based on metric-driven h-adaptation. The desired anisotropy in the grid is dictated by a metric that defines the size, shape, and orientation of the grid elements throughout the computational domain. Through the use of topological and geometrical operators, the mesh is iteratively adapted until the final mesh minimizes a given objective function. In this work, the objective function measures the distance between the metric of each simplex and a target metric, which can be either user-defined (a-priori) or the result of a-posteriori error analysis. During the adaptation process, one tries to decrease the metric-based objective function until the final mesh is compliant with the target within a given tolerance. However, in regions such as corners and complex face intersections, the compliance condition was found to be very difficult or sometimes impossible to satisfy. In order to address this issue, we propose an optimization process based on an ad-hoc application of the simulated annealing technique, which improves the likelihood of removing poor elements from the grid. Moreover, a local implementation of the simulated annealing is proposed to reduce the computational cost. Many challenging multi-physics and multi-field problems that are unsteady in nature are characterized by moving boundaries and/or interfaces. When the boundary displacements are large, which typically occurs when implicit time marching procedures are used, degenerate elements are easily formed in the grid such that frequent remeshing is required. To deal with this problem, in the second part of this work, we propose a new r-adaptation methodology. The new technique is valid for both simplicial (e.g., triangular, tet) and non-simplicial (e.g., quadrilateral, hex) deforming grids that undergo large imposed displacements at their boundaries. A two- or three-dimensional grid is deformed using a network of linear springs composed of edge springs and a set of virtual springs. The virtual springs are constructed in such a way as to oppose element collapsing. This is accomplished by confining each vertex to its ball through springs that are attached to the vertex and its projection on the ball entities. The resulting linear problem is solved using a preconditioned conjugate gradient method. The new method is compared with the classical spring analogy technique in two- and three-dimensional examples, highlighting the performance improvements achieved by the new method. 
Meshes are an important part of numerical simulations. Depending on the geometry and flow conditions, the most suitable mesh for each particular problem is different. Meshes are usually generated by either using a suitable software package or solving a PDE. In both cases, engineering intuition plays a significant role in deciding where clusterings should take place. In addition, for unsteady problems, the gradients vary for each time step, which requires frequent remeshing during simulations. Therefore, in order to minimize user intervention and prevent frequent remeshings, we conclude this work by defining a novel mesh adaptation technique that integrates metric based target mesh definitions with the ball-vertex mesh deformation method. In this new approach, the entire mesh is deformed based on either an a-priori or an a-posteriori error estimator. In other words, nodal points are repositioned upon application of a force field in order to comply with the target mesh or to get more accurate solutions. The method has been tested for two-dimensional problems of a-priori metric definitions as well as for oblique shock clusterings.
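
    For the mesh-deformation part of this abstract, here is a minimal sketch of the classical edge-spring analogy (the baseline the ball-vertex method is compared against), with the linear system solved by a conjugate-gradient solver; the inverse-edge-length stiffness and the tiny example mesh are assumptions, and the virtual (ball-vertex) springs are not implemented.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import cg

def spring_deform(nodes, edges, fixed, fixed_disp):
    """Classical edge-spring analogy: interior nodes move so that the spring
    forces balance, given imposed displacements on the fixed (boundary) nodes.
    Each edge gets stiffness 1/length, a common (assumed) choice."""
    n = len(nodes)
    K = lil_matrix((n, n))
    for i, j in edges:
        k = 1.0 / np.linalg.norm(nodes[i] - nodes[j])
        K[i, i] += k; K[j, j] += k
        K[i, j] -= k; K[j, i] -= k
    K = K.tocsr()
    disp = np.zeros_like(nodes)
    free = np.setdiff1d(np.arange(n), fixed)
    for dim in range(nodes.shape[1]):              # each coordinate separately
        rhs = -K[free][:, fixed] @ fixed_disp[:, dim]
        sol, _ = cg(K[free][:, free], rhs, atol=1e-12)
        disp[free, dim] = sol
        disp[fixed, dim] = fixed_disp[:, dim]
    return nodes + disp

# usage: unit square with a centre node; the top edge is shifted and the centre follows
nodes = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0.5, 0.5]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
fixed = np.array([0, 1, 2, 3])
fixed_disp = np.array([[0., 0.], [0., 0.], [0.2, 0.], [0.2, 0.]])
print(spring_deform(nodes, edges, fixed, fixed_disp))   # centre moves by ~0.1 in x
```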

  8. Evaluation of Class II treatment by cephalometric regional superpositions versus conventional measurements.

    PubMed

    Efstratiadis, Stella; Baumrind, Sheldon; Shofer, Frances; Jacobsson-Hunt, Ulla; Laster, Larry; Ghafari, Joseph

    2005-11-01

    The aims of this study were (1) to evaluate cephalometric changes in subjects with Class II Division 1 malocclusion who were treated with headgear (HG) or Fränkel function regulator (FR) and (2) to compare findings from regional superpositions of cephalometric structures with those from conventional cephalometric measurements. Cephalographs were taken at baseline, after 1 year, and after 2 years of 65 children enrolled in a prospective randomized clinical trial. The spatial location of the landmarks derived from regional superpositions was evaluated in a coordinate system oriented on natural head position. The superpositions included the best anatomic fit of the anterior cranial base, maxillary base, and mandibular structures. Both the HG and the FR were effective in correcting the distoclusion, and they generated enhanced differential growth between the jaws. Differences between cranial and maxillary superpositions regarding mandibular displacement (Point B, pogonion, gnathion, menton) were noted: the HG had a more horizontal vector on maxillary superposition that was also greater (.0001 < P < .05) than the horizontal displacement observed with the FR. This discrepancy appeared to be related to (1) the clockwise (backward) rotation of the palatal and mandibular planes observed with the HG; the palatal plane's rotation, which was transferred through the occlusion to the mandibular plane, was factored out on maxillary superposition; and (2) the interaction between the inclination of the maxillary incisors and the forward movement of the mandible during growth. Findings from superpositions agreed with conventional angular and linear measurements regarding the basic conclusions for the primary effects of HG and FR. However, the results suggest that inferences of mandibular displacement are more reliable from maxillary than cranial superposition when evaluating occlusal changes during treatment.

  9. A voxel-based finite element model for the prediction of bladder deformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai Xiangfei; Herk, Marcel van; Hulshof, Maarten C. C. M.

    2012-01-15

    Purpose: A finite element (FE) bladder model was previously developed to predict bladder deformation caused by bladder filling change. However, two factors prevent a wide application of FE models: (1) the labor required to construct a FE model with high quality mesh and (2) long computation time needed to construct the FE model and solve the FE equations. In this work, we address these issues by constructing a low-resolution voxel-based FE bladder model directly from the binary segmentation images and compare the accuracy and computational efficiency of the voxel-based model used to simulate bladder deformation with those of a classical FE model with a tetrahedral mesh. Methods: For ten healthy volunteers, a series of MRI scans of the pelvic region was recorded at regular intervals of 10 min over 1 h. For this series of scans, the bladder volume gradually increased while rectal volume remained constant. All pelvic structures were defined from a reference image for each volunteer, including bladder wall, small bowel, prostate (male), uterus (female), rectum, pelvic bone, spine, and the rest of the body. Four separate FE models were constructed from these structures: one with a tetrahedral mesh (used in previous study), one with a uniform hexahedral mesh, one with a nonuniform hexahedral mesh, and one with a low-resolution nonuniform hexahedral mesh. Appropriate material properties were assigned to all structures and uniform pressure was applied to the inner bladder wall to simulate bladder deformation from urine inflow. Performance of the hexahedral meshes was evaluated against the performance of the standard tetrahedral mesh by comparing the accuracy of bladder shape prediction and computational efficiency. Results: FE model with a hexahedral mesh can be quickly and automatically constructed. No substantial differences were observed between the simulation results of the tetrahedral mesh and hexahedral meshes (<1% difference in mean dice similarity coefficient to manual contours and <0.02 cm difference in mean standard deviation of residual errors). The average equation solving time (without manual intervention) for the first two types of hexahedral meshes increased to 2.3 h and 2.6 h compared to the 1.1 h needed for the tetrahedral mesh; however, the low-resolution nonuniform hexahedral mesh dramatically decreased the equation solving time to 3 min without reducing accuracy. Conclusions: Voxel-based mesh generation allows fast, automatic, and robust creation of finite element bladder models directly from binary segmentation images without user intervention. Even the low-resolution voxel-based hexahedral mesh yields comparable accuracy in bladder shape prediction and is more than 20 times faster than the tetrahedral mesh. This approach makes it more feasible and accessible to apply the FE method to model bladder deformation in adaptive radiotherapy.
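
    A minimal sketch of the voxel-to-hexahedral-mesh step described above, assuming a binary segmentation array and uniform voxel spacing; material assignment, nonuniform coarsening, and the FE solve itself are not shown, and the helper names and spacing values are hypothetical.

```python
import numpy as np

def voxels_to_hex_mesh(seg, spacing=(1.0, 1.0, 1.0)):
    """Build nodes and 8-node hexahedral connectivity for every voxel where
    seg[i, j, k] == 1.  Nodes shared between neighbouring voxels are reused."""
    seg = np.asarray(seg, dtype=bool)
    node_id = {}
    nodes, elems = [], []

    def nid(i, j, k):
        key = (i, j, k)
        if key not in node_id:
            node_id[key] = len(nodes)
            nodes.append((i * spacing[0], j * spacing[1], k * spacing[2]))
        return node_id[key]

    for i, j, k in zip(*np.nonzero(seg)):
        corners = [(i, j, k), (i + 1, j, k), (i + 1, j + 1, k), (i, j + 1, k),
                   (i, j, k + 1), (i + 1, j, k + 1), (i + 1, j + 1, k + 1), (i, j + 1, k + 1)]
        elems.append([nid(*c) for c in corners])
    return np.array(nodes), np.array(elems)

# usage: a tiny 3x3x3 "wall" shell (all voxels except the centre one)
seg = np.ones((3, 3, 3), dtype=int)
seg[1, 1, 1] = 0
nodes, elems = voxels_to_hex_mesh(seg, spacing=(2.5, 2.5, 2.5))
print(nodes.shape, elems.shape)   # 64 shared nodes, 26 hexahedral elements
```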

  10. Automated hexahedral mesh generation from biomedical image data: applications in limb prosthetics.

    PubMed

    Zachariah, S G; Sanders, J E; Turkiyyah, G M

    1996-06-01

    A general method to generate hexahedral meshes for finite element analysis of residual limbs and similar biomedical geometries is presented. The method utilizes skeleton-based subdivision of cross-sectional domains to produce simple subdomains in which structured meshes are easily generated. Application to a below-knee residual limb and external prosthetic socket is described. The residual limb was modeled as consisting of bones, soft tissue, and skin. The prosthetic socket model comprised a socket wall with an inner liner. The geometries of these structures were defined using axial cross-sectional contour data from X-ray computed tomography, optical scanning, and mechanical surface digitization. A tubular surface representation, using B-splines to define the directrix and generator, is shown to be convenient for definition of the structure geometries. Conversion of cross-sectional data to the compact tubular surface representation is direct, and the analytical representation simplifies geometric querying and numerical optimization within the mesh generation algorithms. The element meshes remain geometrically accurate since boundary nodes are constrained to lie on the tubular surfaces. Several element meshes of increasing mesh density were generated for two residual limbs and prosthetic sockets. Convergence testing demonstrated that approximately 19 elements are required along a circumference of the residual limb surface for a simple linear elastic model. A model with the fibula absent compared with the same geometry with the fibula present showed differences suggesting higher distal stresses in the absence of the fibula. Automated hexahedral mesh generation algorithms for sliced data represent an advancement in prosthetic stress analysis since they allow rapid modeling of any given residual limb and optimization of mesh parameters.

  11. Quality assessment of two- and three-dimensional unstructured meshes and validation of an upwind Euler flow solver

    NASA Technical Reports Server (NTRS)

    Woodard, Paul R.; Batina, John T.; Yang, Henry T. Y.

    1992-01-01

    Quality assessment procedures are described for two-dimensional unstructured meshes. The procedures include measurement of minimum angles, element aspect ratios, stretching, and element skewness. Meshes about the ONERA M6 wing and the Boeing 747 transport configuration are generated using an advancing front method grid generation package of programs. Solutions of Euler's equations for these meshes are obtained at low angle-of-attack, transonic conditions. Results for these cases, obtained as part of a validation study, demonstrate the accuracy of an implicit upwind Euler solution algorithm.
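
    A small Python sketch of the kind of element quality measures listed above (minimum angle, aspect ratio, skewness) for a single triangle; the specific aspect-ratio and skewness definitions are common textbook forms and may differ from those used in the paper.

```python
import numpy as np

def triangle_quality(a, b, c):
    """Return (minimum angle in degrees, aspect ratio, skewness) for one triangle.
    Aspect ratio = longest edge / its altitude; skewness compares the minimum
    angle with the 60-degree angle of an equilateral triangle."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    e = np.array([np.linalg.norm(b - c), np.linalg.norm(c - a), np.linalg.norm(a - b)])
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))
    # law of cosines for the three interior angles
    ang = np.degrees(np.arccos(np.clip([
        (e[1]**2 + e[2]**2 - e[0]**2) / (2 * e[1] * e[2]),
        (e[0]**2 + e[2]**2 - e[1]**2) / (2 * e[0] * e[2]),
        (e[0]**2 + e[1]**2 - e[2]**2) / (2 * e[0] * e[1])], -1.0, 1.0)))
    min_angle = ang.min()
    aspect = e.max() / (2.0 * area / e.max())      # longest edge over its altitude
    skewness = (60.0 - min_angle) / 60.0           # 0 = equilateral, -> 1 = degenerate
    return min_angle, aspect, skewness

# usage: an equilateral triangle versus a thin "sliver" triangle
print(triangle_quality((0, 0), (1, 0), (0.5, np.sqrt(3) / 2)))
print(triangle_quality((0, 0), (1, 0), (0.5, 0.05)))
```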

  12. Deposition and post-processing techniques for transparent conductive films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christoforo, Mark Greyson; Mehra, Saahil; Salleo, Alberto

    2017-07-04

    In one embodiment, a method is provided for fabrication of a semitransparent conductive mesh. A first solution having conductive nanowires suspended therein and a second solution having nanoparticles suspended therein are sprayed toward a substrate, the spraying forming a mist. The mist is processed, while on the substrate, to provide a semitransparent conductive material in the form of a mesh having the conductive nanowires and nanoparticles. The nanoparticles are configured and arranged to direct light passing through the mesh. Connections between the nanowires provide conductivity through the mesh.

  13. Direction-aware Slope Limiter for 3D Cubic Grids with Adaptive Mesh Refinement

    DOE PAGES

    Velechovsky, Jan; Francois, Marianne M.; Masser, Thomas

    2018-06-07

    In the context of finite volume methods for hyperbolic systems of conservation laws, slope limiters are an effective way to suppress creation of unphysical local extrema and/or oscillations near discontinuities. We investigate properties of these limiters as applied to piecewise linear reconstructions of conservative fluid quantities in three-dimensional simulations. In particular, we are interested in linear reconstructions on Cartesian adaptively refined meshes, where a reconstructed fluid quantity at a face center depends on more than a single gradient component of the quantity. We design a new slope limiter, which combines the robustness of a minmod limiter with the accuracy of a van Leer limiter. The limiter is called Direction-Aware Limiter (DAL), because the combination is based on a principal flow direction. In particular, DAL is useful in situations where the Barth–Jespersen limiter for general meshes fails to maintain global linear functions, such as on cubic computational meshes with stencils including only face-neighboring cells. Here, we verify the new slope limiter on a suite of standard hydrodynamic test problems on Cartesian adaptively refined meshes. Lastly, we demonstrate reduced mesh imprinting; for radially symmetric problems such as the Sedov blast wave or the Noh implosion test cases, the results with DAL show better preservation of radial symmetry compared to the other standard methods on Cartesian meshes.
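
    The minmod and van Leer limiter functions below are standard; the "direction-aware" blend is only a schematic illustration of the stated idea (lean on the more accurate limiter along the principal flow direction), not the DAL formula from the paper, and `axis_alignment` is a stand-in for information that would come from the local velocity field.

```python
import numpy as np

def minmod(a, b):
    """Classical minmod: the smaller-magnitude slope, zero at a sign change."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def van_leer(a, b):
    """Van Leer (harmonic-mean) limiter: less dissipative than minmod."""
    return np.where(a * b > 0.0, 2.0 * a * b / (a + b), 0.0)

def direction_aware_slope(grad_left, grad_right, axis_alignment):
    """Schematic blend: lean on van Leer along the principal flow direction
    (axis_alignment ~ 1) and on minmod across it (axis_alignment ~ 0)."""
    return (axis_alignment * van_leer(grad_left, grad_right)
            + (1.0 - axis_alignment) * minmod(grad_left, grad_right))

# usage: one-sided slope estimates of a cell, with the flow mostly along this axis
gl, gr = 1.0, 0.4
print("minmod  :", minmod(gl, gr))
print("van Leer:", van_leer(gl, gr))
print("blended :", direction_aware_slope(gl, gr, axis_alignment=0.9))
```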

  14. Covalent layer-by-layer grafting (LBLG) functionalized superhydrophobic stainless steel mesh for oil/water separation

    NASA Astrophysics Data System (ADS)

    Jiang, Bin; Zhang, Hongjie; Sun, Yongli; Zhang, Luhong; Xu, Lidong; Hao, Li; Yang, Huawei

    2017-06-01

    A superhydrophobic and superoleophilic stainless steel (SS) mesh for oil/water separation has been developed by using a novel, facile and inexpensive covalent layer-by-layer grafting (LBLG) method. A hierarchical micro/nanostructured surface was formed by grafting (3-aminopropyl) triethoxysilane (SCA), polyethylenimine (PEI) and trimesoyl chloride (TMC) onto the mesh in sequence, with SiO2 nanoparticles firmly anchored within the multilayers. The superhydrophobic character was achieved by self-assembly grafting of hydrophobic groups onto the surface. The as-prepared mesh exhibits excellent superhydrophobicity with a water contact angle of 159°. Moreover, with a low sliding angle of 4°, it shows the "lotus effect" for self-cleaning. As for application evaluation, the as-prepared mesh can be used for large-scale separation of oil/water mixtures with a relatively high separation efficiency after 30 times reuse (99.88% for n-octane/water mixture) and a high intrusion pressure (3.58 kPa). More importantly, the mesh exhibited excellent stability under vibration, long-term storage and saline corrosion conditions, and the compatible pH range was determined to be 1-13. In summary, this work provides a new method of modifying SS mesh by covalent LBLG, and makes it possible to introduce various functionalized groups onto the surface.

  15. Direction-aware Slope Limiter for 3D Cubic Grids with Adaptive Mesh Refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Velechovsky, Jan; Francois, Marianne M.; Masser, Thomas

    In the context of finite volume methods for hyperbolic systems of conservation laws, slope limiters are an effective way to suppress creation of unphysical local extrema and/or oscillations near discontinuities. We investigate properties of these limiters as applied to piecewise linear reconstructions of conservative fluid quantities in three-dimensional simulations. In particular, we are interested in linear reconstructions on Cartesian adaptively refined meshes, where a reconstructed fluid quantity at a face center depends on more than a single gradient component of the quantity. We design a new slope limiter, which combines the robustness of a minmod limiter with the accuracy of a van Leer limiter. The limiter is called Direction-Aware Limiter (DAL), because the combination is based on a principal flow direction. In particular, DAL is useful in situations where the Barth–Jespersen limiter for general meshes fails to maintain global linear functions, such as on cubic computational meshes with stencils including only face-neighboring cells. Here, we verify the new slope limiter on a suite of standard hydrodynamic test problems on Cartesian adaptively refined meshes. Lastly, we demonstrate reduced mesh imprinting; for radially symmetric problems such as the Sedov blast wave or the Noh implosion test cases, the results with DAL show better preservation of radial symmetry compared to the other standard methods on Cartesian meshes.

  16. Histological analysis of the repair of dural lesions with silicone mesh in rats subjected to experimental lesions

    PubMed Central

    da Rosa, Fernando William Figueiredo; Pohl, Pedro Henrique Isoldi; Mader, Ana Maria Amaral Antônio; de Paiva, Carla Peluso; dos Santos, Aline Amaro; Bianco, Bianca; Rodrigues, Luciano Miller Reis

    2015-01-01

    ABSTRACT Objective To evaluate inflammatory reaction, fibrosis and neovascularization in dural repairs in Wistar rats using four techniques: simple suture, bovine collagen membrane, silicon mesh and silicon mesh with suture. Methods Thirty Wistar rats were randomized in five groups: the first was the control group, submitted to dural tear only. The others underwent durotomy and simple suture, bovine collagen membrane, silicon mesh and silicon mesh with suture. Animals were euthanized and the spine was submitted to histological evaluation with a score system (ranging from zero to 3) for inflammation, neovascularization and fibrosis. Results Fibrosis was significantly different between simple suture and silicon mesh (p=0.005) and between simple suture and mesh with suture (p=0.015), showing that fibrosis is more intense when a foreign body is used in the repair. Bovine membrane was significantly different from mesh plus suture (p=0.011) regarding vascularization. Inflammation was significantly different between simple suture and bovine collagen membrane. Conclusion Silicon mesh, compared to other commercial products available, is a possible alternative for dural repair. More studies are necessary to confirm these findings. PMID:26761555

  17. Comparison and combination of several MeSH indexing approaches

    PubMed Central

    Yepes, Antonio Jose Jimeno; Mork, James G.; Demner-Fushman, Dina; Aronson, Alan R.

    2013-01-01

    MeSH indexing of MEDLINE is becoming a more difficult task for the group of highly qualified indexing staff at the US National Library of Medicine, due to the large yearly growth of MEDLINE and the increasing size of MeSH. Since 2002, this task has been assisted by the Medical Text Indexer or MTI program. We extend previous machine learning analysis by adding a more diverse set of MeSH headings targeting examples where MTI has been shown to perform poorly. Machine learning algorithms exceed MTI’s performance on MeSH headings that are used very frequently and headings for which the indexing frequency is very low. We find that when we combine the MTI suggestions and the prediction of the learning algorithms, the performance improves compared to any single method for most of the evaluated MeSH headings. PMID:24551371

  18. Comparison and combination of several MeSH indexing approaches.

    PubMed

    Yepes, Antonio Jose Jimeno; Mork, James G; Demner-Fushman, Dina; Aronson, Alan R

    2013-01-01

    MeSH indexing of MEDLINE is becoming a more difficult task for the group of highly qualified indexing staff at the US National Library of Medicine, due to the large yearly growth of MEDLINE and the increasing size of MeSH. Since 2002, this task has been assisted by the Medical Text Indexer or MTI program. We extend previous machine learning analysis by adding a more diverse set of MeSH headings targeting examples where MTI has been shown to perform poorly. Machine learning algorithms exceed MTI's performance on MeSH headings that are used very frequently and headings for which the indexing frequency is very low. We find that when we combine the MTI suggestions and the prediction of the learning algorithms, the performance improves compared to any single method for most of the evaluated MeSH headings.

  19. Quantum superposition at the half-metre scale.

    PubMed

    Kovachy, T; Asenbaum, P; Overstreet, C; Donnelly, C A; Dickerson, S M; Sugarbaker, A; Hogan, J M; Kasevich, M A

    2015-12-24

    The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger's cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov-Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity.

  20. Towards a new multiscale air quality transport model using the fully unstructured anisotropic adaptive mesh technology of Fluidity (version 4.1.9)

    NASA Astrophysics Data System (ADS)

    Zheng, J.; Zhu, J.; Wang, Z.; Fang, F.; Pain, C. C.; Xiang, J.

    2015-10-01

    An integrated method of advanced anisotropic hr-adaptive mesh and discretization numerical techniques, based on a discontinuous Galerkin/control volume discretization on unstructured meshes, has been applied for the first time to the modelling of multiscale advection-diffusion problems. Compared with existing air quality models, which are typically based on static structured grids with local nesting, the anisotropic hr-adaptive model has the ability to adapt the mesh according to the evolving pollutant distribution and flow features. That is, the mesh resolution can be adjusted dynamically to simulate the pollutant transport process accurately and effectively. To illustrate the capability of the anisotropic adaptive unstructured mesh model, three benchmark numerical experiments have been set up for two-dimensional (2-D) advection phenomena. Comparisons have been made between the results obtained using uniform resolution meshes and anisotropic adaptive resolution meshes. Performance achieved in 3-D simulation of power plant plumes indicates that this new adaptive multiscale model has the potential to provide accurate air quality modelling solutions effectively.

  1. Highly flexible transparent electrodes based on mesh-patterned rigid indium tin oxide.

    PubMed

    Sakamoto, Kosuke; Kuwae, Hiroyuki; Kobayashi, Naofumi; Nobori, Atsuki; Shoji, Shuichi; Mizuno, Jun

    2018-02-12

    We developed highly bendable transparent indium tin oxide (ITO) electrodes with a mesh pattern for use in flexible electronic devices. The mesh patterns lowered tensile stress and hindered propagation of cracks. Simulations using the finite element method confirmed that the mesh patterns decreased tensile stress by over 10% because of the escaped strain to the flexible film when the electrodes were bent. The proposed patterned ITO electrodes were simply fabricated by photolithography and wet etching. The resistance increase ratio of a mesh-patterned ITO electrode after bending 1000 times was at least two orders of magnitude lower than that of a planar ITO electrode. In addition, crack propagation was stopped by the mesh pattern of the patterned ITO electrode. A mesh-patterned ITO electrode was used in a liquid-based organic light-emitting diode (OLED). The OLED displayed the same current density-voltage-luminance (J-V-L) curves before and after bending 100 times. These results indicate that the developed mesh-patterned ITO electrodes are attractive for use in flexible electronic devices.

  2. A Finite Element Method for Simulation of Compressible Cavitating Flows

    NASA Astrophysics Data System (ADS)

    Shams, Ehsan; Yang, Fan; Zhang, Yu; Sahni, Onkar; Shephard, Mark; Oberai, Assad

    2016-11-01

    This work focuses on a novel approach for finite element simulations of multi-phase flows that involve an evolving interface with phase change. Modeling problems such as cavitation requires addressing multiple challenges, including compressibility of the vapor phase and interface physics caused by mass, momentum and energy fluxes. We have developed a mathematically consistent and robust computational approach to address these problems. We use stabilized finite element methods on unstructured meshes to solve for the compressible Navier-Stokes equations. An arbitrary Lagrangian-Eulerian formulation is used to handle the interface motions. Our method uses a mesh adaptation strategy to preserve the quality of the volumetric mesh, while the interface mesh moves along with the interface. The interface jump conditions are accurately represented using a discontinuous Galerkin method on the conservation laws. Condensation and evaporation rates at the interface are thermodynamically modeled to determine the interface velocity. We will present initial results on bubble cavitation and the behavior of an attached cavitation zone in a separated boundary layer. We acknowledge support from the Army Research Office (ARO) under ARO Grant W911NF-14-1-0301.

  3. An overlapped grid method for multigrid, finite volume/difference flow solvers: MaGGiE

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Lessard, Victor R.

    1990-01-01

    The objective is to develop a domain decomposition method via overlapping/embedding the component grids, which is to be used by upwind, multi-grid, finite volume solution algorithms. A computer code, given the name MaGGiE (Multi-Geometry Grid Embedder), is developed to meet this objective. MaGGiE takes independently generated component grids as input, and automatically constructs the composite mesh and interpolation data, which can be used by the finite volume solution methods with or without multigrid convergence acceleration. Six demonstrative examples showing various aspects of the overlap technique are presented and discussed. These cases are used for developing the procedure for overlapping grids of different topologies, and to evaluate the grid connection and interpolation data for finite volume calculations on a composite mesh. Time fluxes are transferred between mesh interfaces using a trilinear interpolation procedure. Conservation losses are minimal at the interfaces using this method. The multi-grid solution algorithm, using the coarser grid connections, improves the convergence time history as compared to the solution on the composite mesh without multi-gridding.
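
    A generic trilinear interpolation inside a unit cube, the kind of operation mentioned above for transferring data between overlapped meshes; this is not MaGGiE's implementation, and the corner ordering is an assumption.

```python
import numpy as np

def trilinear(corner_vals, xi, eta, zeta):
    """Trilinear interpolation inside a unit cube.

    corner_vals : values at the 8 corners, ordered
                  (0,0,0),(1,0,0),(0,1,0),(1,1,0),(0,0,1),(1,0,1),(0,1,1),(1,1,1)
    (xi, eta, zeta) : local coordinates in [0, 1]^3
    """
    c = np.asarray(corner_vals, dtype=float)
    w = np.array([(1 - xi) * (1 - eta) * (1 - zeta),
                  xi * (1 - eta) * (1 - zeta),
                  (1 - xi) * eta * (1 - zeta),
                  xi * eta * (1 - zeta),
                  (1 - xi) * (1 - eta) * zeta,
                  xi * (1 - eta) * zeta,
                  (1 - xi) * eta * zeta,
                  xi * eta * zeta])
    return w @ c

# usage: interpolating the linear field f(x,y,z) = x + 2y + 3z is exact
corners = [0, 1, 2, 3, 3, 4, 5, 6]           # f at the 8 corners, in the order above
print(trilinear(corners, 0.25, 0.5, 0.75))   # expect 0.25 + 1.0 + 2.25 = 3.5
```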

  4. An electrostatic Particle-In-Cell code on multi-block structured meshes

    NASA Astrophysics Data System (ADS)

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; Vernon, Louis J.; Moulton, J. David

    2017-12-01

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. Despite the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma-material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.
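
    A two-dimensional analogue of the physical-to-logical mapping idea described above: each block cell is the image of the unit square under a bilinear map, and locating a particle in logical space requires inverting that map (here by Newton iteration). The cell corners are hypothetical, and the paper's 3-D curvilinear blocks and asynchronous particle mover are not reproduced.

```python
import numpy as np

def to_physical(corners, xi, eta):
    """Bilinear map from the unit square (logical space) to a quad cell."""
    c00, c10, c11, c01 = [np.asarray(c, float) for c in corners]
    return ((1 - xi) * (1 - eta) * c00 + xi * (1 - eta) * c10
            + xi * eta * c11 + (1 - xi) * eta * c01)

def to_logical(corners, x, iters=10):
    """Invert the bilinear map with Newton's method, starting at the cell centre."""
    xi = np.array([0.5, 0.5])
    c00, c10, c11, c01 = [np.asarray(c, float) for c in corners]
    for _ in range(iters):
        r = to_physical(corners, *xi) - x
        # Jacobian d(physical)/d(logical)
        dxi = (1 - xi[1]) * (c10 - c00) + xi[1] * (c11 - c01)
        deta = (1 - xi[0]) * (c01 - c00) + xi[0] * (c11 - c10)
        J = np.column_stack([dxi, deta])
        xi = xi - np.linalg.solve(J, r)
    return xi

# usage: a distorted quad; map a logical point out and recover it by inversion
quad = [(0, 0), (2, 0.2), (2.3, 1.8), (-0.1, 1.5)]
x = to_physical(quad, 0.3, 0.7)
print(x, to_logical(quad, x))    # recovered logical coordinates ~ (0.3, 0.7)
```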

  5. An electrostatic Particle-In-Cell code on multi-block structured meshes

    DOE PAGES

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca; ...

    2017-09-14

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.

  6. An electrostatic Particle-In-Cell code on multi-block structured meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meierbachtol, Collin S.; Svyatskiy, Daniil; Delzanno, Gian Luca

    We present an electrostatic Particle-In-Cell (PIC) code on multi-block, locally structured, curvilinear meshes called Curvilinear PIC (CPIC). Multi-block meshes are essential to capture complex geometries accurately and with good mesh quality, something that would not be possible with single-block structured meshes that are often used in PIC and for which CPIC was initially developed. In spite of the structured nature of the individual blocks, multi-block meshes resemble unstructured meshes in a global sense and introduce several new challenges, such as the presence of discontinuities in the mesh properties and coordinate orientation changes across adjacent blocks, and polyjunction points where an arbitrary number of blocks meet. In CPIC, these challenges have been met by an approach that features: (1) a curvilinear formulation of the PIC method: each mesh block is mapped from the physical space, where the mesh is curvilinear and arbitrarily distorted, to the logical space, where the mesh is uniform and Cartesian on the unit cube; (2) a mimetic discretization of Poisson's equation suitable for multi-block meshes; and (3) a hybrid (logical-space position/physical-space velocity), asynchronous particle mover that mitigates the performance degradation created by the necessity to track particles as they move across blocks. The numerical accuracy of CPIC was verified using two standard plasma–material interaction tests, which demonstrate good agreement with the corresponding analytic solutions. Compared to PIC codes on unstructured meshes, which have also been used for their flexibility in handling complex geometries but whose performance suffers from issues associated with data locality and indirect data access patterns, PIC codes on multi-block structured meshes may offer the best compromise for capturing complex geometries while also maintaining solution accuracy and computational efficiency.

  7. The effect of different propolis harvest methods on its lead contents determined by ET AAS and UV-visS.

    PubMed

    Sales, A; Alvarez, A; Areal, M Rodriguez; Maldonado, L; Marchisio, P; Rodríguez, M; Bedascarrasbure, E

    2006-10-11

    Argentinean propolis is exported to different countries, especially Japan. The market demands propolis quality control according to international standards. The analytical determination of some metals, such as lead, in food is very important because of their high toxicity even at low concentrations and because of their harmful effects on health. Flavonoids, the main bioactive compounds of propolis, tend to chelate metals such as lead, which becomes one of the main polluting agents of propolis. The lead found in propolis may come from the atmosphere or it may be incorporated in the harvest, extraction and processing methods. The aim of this work is to evaluate lead levels in Argentinean propolis determined by electrothermal atomic absorption spectrometry (ET AAS) and UV-vis spectrophotometry (UV-visS) methods, as well as the effect of harvest methods on those contents. A randomized test with three different treatments of collection was made to evaluate the effect of harvest methods. These procedures were: separating wedges (traditional), netting plastic meshes and stamping out plastic meshes. By means of the analysis of variance technique for multiple comparisons (ANOVA) it was possible to conclude that there are significant differences between scraped and mesh methods (stamped out and mosquito netting meshes). The results obtained in the present test would allow us to conclude that mesh methods are more advisable than scraped ones in order to obtain innocuous and safe propolis with lower lead contents. A statistical comparison of lead determination by both the ET AAS and UV-visS methods demonstrated that there is no significant difference in the results achieved with the two analytical techniques employed.

  8. Preclinical evaluation of the effect of the combined use of the Ethicon Securestrap® Open Absorbable Strap Fixation Device and Ethicon Physiomesh™ Open Flexible Composite Mesh Device on surgeon stress during ventral hernia repair

    PubMed Central

    Sutton, Nadia; MacDonald, Melinda H; Lombard, John; Ilie, Bodgan; Hinoul, Piet; Granger, Douglas A

    2018-01-01

    Aim To evaluate whether performing ventral hernia repairs using the Ethicon Physiomesh™ Open Flexible Composite Mesh Device in conjunction with the Ethicon Securestrap® Open Absorbable Strap Fixation Device reduces surgical time and surgeon stress levels, compared with traditional surgical repair methods. Methods To repair a simulated ventral incisional hernia, two surgeries were performed by eight experienced surgeons using a live porcine model. One procedure involved traditional suture methods and a flat mesh, and the other procedure involved a mechanical fixation device and a skirted flexible composite mesh. A Surgery Task Load Index questionnaire was administered before and after the procedure to establish the surgeons’ perceived stress levels, and saliva samples were collected before, during, and after the surgical procedures to assess the biologically expressed stress (cortisol and salivary alpha amylase) levels. Results For mechanical fixation using the Ethicon Physiomesh Open Flexible Composite Mesh Device in conjunction with the Ethicon Securestrap Open Absorbable Strap Fixation Device, surgeons reported a 46.2% reduction in perceived workload stress. There was also a lower physiological reactivity to the intraoperative experience and the total surgical procedure time was reduced by 60.3%. Conclusions This study provides preliminary findings suggesting that the combined use of a mechanical fixation device and a skirted flexible composite mesh in an open intraperitoneal onlay mesh repair has the potential to reduce surgeon stress. Additional studies are needed to determine whether a reduction in stress is observed in a clinical setting and, if so, confirm that this results in improved clinical outcomes. PMID:29296101

  9. Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings

    NASA Astrophysics Data System (ADS)

    Tsai, F.; Chang, H.; Lin, Y.-W.

    2017-08-01

    This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantages of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and these boundary points are projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets in different levels of detail according to user and system requirements in different applications.

  10. Establishing mesh topology in multi-material cells: enabling technology for robust and accurate multi-material simulations

    DOE PAGES

    Kikinzon, Evgeny; Shashkov, Mikhail Jurievich; Garimella, Rao Veerabhadra

    2018-05-29

    Real world problems are typically multi-material, combining materials such as gases, liquids and solids that have very different properties. The material interfaces may be fixed in time or can be a part of the solution, as in fluid-structure interactions or air-water dynamics, and therefore move and change shape. In such problems the computational mesh may be non-conformal to interfaces due to complexity of these interfaces, presence of small fractions of materials, or because the mesh does not move with the flow, as in the arbitrary Lagrangian–Eulerian (ALE) methods. In order to solve problems of interest on such meshes, interface reconstruction methods are usually used to recover an approximation of material regions within the cells. For a cell intersecting multiple material regions, these approximations of contained subregions can be considered as single-material subcells in a local mesh that we call a minimesh. In this paper, we discuss some of the requirements that discretization methods have on topological information in the resulting hierarchical meshes and present an approach that allows incorporating the buildup of sufficiently detailed topology into the nested dissections based PLIC-type reconstruction algorithms (e.g. Volume-of-Fluid, Moment-of-Fluid) in an efficient and robust manner. Specifically, we describe the X-MOF interface reconstruction algorithm in 2D, which extends the Moment-Of-Fluid (MOF) method to include the topology of minimeshes created inside of multi-material cells and parent-child relations between corresponding mesh entities on different hierarchy levels. X-MOF retains the property of being local to a cell and not requiring external communication, which makes it suitable for massively parallel applications. Here, we demonstrate some scaling results for the X-MOF implementation in Tangram, a modern interface reconstruction framework for exascale computing.
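
    A generic single-cell PLIC (piecewise linear interface calculation) step of the family the abstract refers to: given an interface normal and a target volume fraction in a unit-square cell, the line offset is found by bisection on the clipped area. This is not the X-MOF algorithm and builds no minimesh topology; the cell shape and tolerance are assumptions.

```python
import numpy as np

def clip_halfplane(poly, n, d):
    """Sutherland-Hodgman clip of polygon `poly` against the half-plane n.x <= d."""
    out = []
    m = len(poly)
    for i in range(m):
        p, q = np.asarray(poly[i], float), np.asarray(poly[(i + 1) % m], float)
        pin, qin = np.dot(n, p) <= d, np.dot(n, q) <= d
        if pin:
            out.append(p)
        if pin != qin:                                   # edge crosses the line
            t = (d - np.dot(n, p)) / np.dot(n, q - p)
            out.append(p + t * (q - p))
    return out

def polygon_area(poly):
    a = 0.0
    for i in range(len(poly)):
        x0, y0 = poly[i]; x1, y1 = poly[(i + 1) % len(poly)]
        a += x0 * y1 - x1 * y0
    return 0.5 * abs(a)

def plic_offset(n, vof, cell=((0, 0), (1, 0), (1, 1), (0, 1)), tol=1e-10):
    """Find the line n.x = d that cuts off the prescribed volume fraction `vof`
    of the cell: bisection on d, a standard single-cell PLIC step."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    ds = [np.dot(n, v) for v in cell]
    lo, hi = min(ds), max(ds)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        frac = polygon_area(clip_halfplane(cell, n, mid)) / polygon_area(cell)
        lo, hi = (mid, hi) if frac < vof else (lo, mid)
    return 0.5 * (lo + hi)

# usage: a 45-degree interface capturing 50% of the cell passes through the centre
d = plic_offset(n=(1.0, 1.0), vof=0.5)
print(d, np.dot(np.array([1.0, 1.0]) / np.sqrt(2.0), [0.5, 0.5]))   # both ~ 0.7071
```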

  11. Establishing mesh topology in multi-material cells: enabling technology for robust and accurate multi-material simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kikinzon, Evgeny; Shashkov, Mikhail Jurievich; Garimella, Rao Veerabhadra

    Real world problems are typically multi-material, combining materials such as gases, liquids and solids that have very different properties. The material interfaces may be fixed in time or can be a part of the solution, as in fluid-structure interactions or air-water dynamics, and therefore move and change shape. In such problems the computational mesh may be non-conformal to interfaces due to complexity of these interfaces, presence of small fractions of materials, or because the mesh does not move with the flow, as in the arbitrary Lagrangian–Eulerian (ALE) methods. In order to solve problems of interest on such meshes, interface reconstruction methods are usually used to recover an approximation of material regions within the cells. For a cell intersecting multiple material regions, these approximations of contained subregions can be considered as single-material subcells in a local mesh that we call a minimesh. In this paper, we discuss some of the requirements that discretization methods have on topological information in the resulting hierarchical meshes and present an approach that allows incorporating the buildup of sufficiently detailed topology into the nested dissections based PLIC-type reconstruction algorithms (e.g. Volume-of-Fluid, Moment-of-Fluid) in an efficient and robust manner. Specifically, we describe the X-MOF interface reconstruction algorithm in 2D, which extends the Moment-Of-Fluid (MOF) method to include the topology of minimeshes created inside of multi-material cells and parent-child relations between corresponding mesh entities on different hierarchy levels. X-MOF retains the property of being local to a cell and not requiring external communication, which makes it suitable for massively parallel applications. Here, we demonstrate some scaling results for the X-MOF implementation in Tangram, a modern interface reconstruction framework for exascale computing.

  12. On the implementation of an accurate and efficient solver for convection-diffusion equations

    NASA Astrophysics Data System (ADS)

    Wu, Chin-Tien

    In this dissertation, we examine several different aspects of computing the numerical solution of the convection-diffusion equation. The solution of this equation often exhibits sharp gradients due to Dirichlet outflow boundaries or discontinuities in boundary conditions. Because of the singularly perturbed nature of the equation, numerical solutions often have severe oscillations when grid sizes are not small enough to resolve sharp gradients. To overcome such difficulties, the streamline diffusion discretization method can be used to obtain an accurate approximate solution in regions where the solution is smooth. To increase accuracy of the solution in the regions containing layers, adaptive mesh refinement and mesh movement based on a posteriori error estimations can be employed. An error-adapted mesh refinement strategy based on a posteriori error estimations is also proposed to resolve layers. For solving the sparse linear systems that arise from discretization, geometric multigrid (MG) and algebraic multigrid (AMG) are compared. In addition, both methods are also used as preconditioners for Krylov subspace methods. We derive some convergence results for MG with line Gauss-Seidel smoothers and bilinear interpolation. Finally, while considering adaptive mesh refinement as an integral part of the solution process, it is natural to set a stopping tolerance for the iterative linear solvers on each mesh stage so that the difference between the approximate solution obtained from iterative methods and the finite element solution is bounded by an a posteriori error bound. Here, we present two stopping criteria. The first is based on a residual-type a posteriori error estimator developed by Verfurth. The second is based on an a posteriori error estimator, using local solutions, developed by Kay and Silvester. Our numerical results show the refined mesh obtained from the iterative solution that satisfies the second criterion is similar to the refined mesh obtained from the finite element solution.
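
    A schematic of the stopping-criterion idea above, assuming the discretization-error estimate is already available as a scalar `eta_disc` (the values used below are hypothetical, not Verfurth's or Kay-Silvester's estimators): the Krylov solver is stopped once its algebraic residual is a small fraction of that estimate, since further iterations cannot improve the overall accuracy.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def solve_to_discretization_accuracy(A, b, eta_disc, safety=0.1):
    """Stop the Krylov solver once its residual norm falls below a small
    fraction of the a posteriori discretization-error estimate `eta_disc`."""
    iters = {"n": 0}
    def count(xk):                      # callback only counts iterations
        iters["n"] += 1
    x, info = cg(A, b, atol=safety * eta_disc, callback=count)
    return x, iters["n"]

# usage: a simple sparse SPD test system; a looser (hypothetical) error
# estimate lets the solver stop earlier, a tighter one demands more iterations
n = 200
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
for eta in (1e-1, 1e-2):
    _, its = solve_to_discretization_accuracy(A, b, eta_disc=eta)
    print(f"eta_disc = {eta:g} -> CG iterations: {its}")
```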

  13. Monte Carlo charged-particle tracking and energy deposition on a Lagrangian mesh.

    PubMed

    Yuan, J; Moses, G A; McKenty, P W

    2005-10-01

    A Monte Carlo algorithm for alpha particle tracking and energy deposition on a cylindrical computational mesh in a Lagrangian hydrodynamics code used for inertial confinement fusion (ICF) simulations is presented. The straight line approximation is used to follow propagation of "Monte Carlo particles" which represent collections of alpha particles generated from thermonuclear deuterium-tritium (DT) reactions. Energy deposition in the plasma is modeled by the continuous slowing down approximation. The scheme addresses various aspects arising in the coupling of Monte Carlo tracking with Lagrangian hydrodynamics, such as non-orthogonal, severely distorted mesh cells, particle relocation on the moving mesh, and particle relocation after rezoning. A comparison with the flux-limited multi-group diffusion transport method is presented for a polar direct drive target design for the National Ignition Facility. Simulations show the Monte Carlo transport method predicts earlier ignition than the diffusion method and generates a higher hot spot temperature. Nearly linear speed-up is achieved for multi-processor parallel simulations.
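
    An illustrative 2-D Cartesian version of straight-line Monte Carlo tracking with a constant continuous-slowing-down stopping power; the cylindrical Lagrangian mesh, DT alpha physics, and rezoning logic of the paper are not reproduced, and all numbers (stopping power, grid, source) are hypothetical.

```python
import numpy as np

def track_particle(x0, direction, energy, grid, dEdx=2.0, step_cap=0.05):
    """Straight-line tracking of one Monte Carlo particle on a uniform 2-D grid.
    Energy is deposited along the path with a constant continuous-slowing-down
    stopping power dE/dx (a stand-in for a real plasma stopping model)."""
    nx, ny, L = grid["nx"], grid["ny"], grid["L"]
    dep = np.zeros((nx, ny))
    x = np.asarray(x0, float)
    d = np.asarray(direction, float); d /= np.linalg.norm(d)
    E = energy
    while E > 0.0 and 0.0 <= x[0] < L and 0.0 <= x[1] < L:
        step = min(step_cap, E / dEdx)        # do not deposit more than remains
        i = min(int(x[0] / L * nx), nx - 1)   # cell containing the current point
        j = min(int(x[1] / L * ny), ny - 1)
        dep[i, j] += dEdx * step
        E -= dEdx * step
        x = x + step * d
    return dep, E

rng = np.random.default_rng(1)
grid = {"nx": 10, "ny": 10, "L": 1.0}
total = np.zeros((10, 10))
for _ in range(1000):                          # 1000 "Monte Carlo particles"
    theta = rng.uniform(0.0, 2.0 * np.pi)      # isotropic emission from the centre
    dep, _ = track_particle([0.5, 0.5], [np.cos(theta), np.sin(theta)], 1.0, grid)
    total += dep
print("energy deposited on the mesh:", total.sum())   # ~1000, minus any leakage
```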

  14. An adaptive simplex cut-cell method for high-order discontinuous Galerkin discretizations of elliptic interface problems and conjugate heat transfer problems

    NASA Astrophysics Data System (ADS)

    Sun, Huafei; Darmofal, David L.

    2014-12-01

    In this paper we propose a new high-order solution framework for interface problems on non-interface-conforming meshes. The framework consists of a discontinuous Galerkin (DG) discretization, a simplex cut-cell technique, and an output-based adaptive scheme. We first present a DG discretization with a dual-consistent output evaluation for elliptic interface problems on interface-conforming meshes, and then extend the method to handle multi-physics interface problems, in particular conjugate heat transfer (CHT) problems. The method is then applied to non-interface-conforming meshes using a cut-cell technique, where the interface definition is completely separate from the mesh generation process. No assumption is made on the interface shape (other than Lipschitz continuity). We then equip our strategy with an output-based adaptive scheme for an accurate output prediction. Through numerical examples, we demonstrate high-order convergence for elliptic interface problems and CHT problems with both smooth and non-smooth interface shapes.

  15. Data Assimilation Methods on a Non-conservative Adaptive Mesh

    NASA Astrophysics Data System (ADS)

    Guider, Colin Thomas; Rabatel, Matthias; Carrassi, Alberto; Jones, Christopher K. R. T.

    2017-04-01

    Adaptive mesh methods are used to model a wide variety of physical phenomena. Some of these models, in particular those of sea ice movement, are particularly interesting in that they use a remeshing process to remove and insert mesh points at various points in their evolution. This presents a challenge in developing compatible data assimilation schemes, as the dimension of the state space we wish to estimate can change over time when these remeshings occur. In this work, we first describe a remeshing scheme for an adaptive mesh in one dimension. We then develop advanced data assimilation methods that are appropriate for such a moving and remeshed grid. We hope to extend these techniques to two-dimensional models, like the Lagrangian sea ice model neXtSIM (P. Rampal, S. Bouillon, E. Ólason, and M. Morlighem, "neXtSIM: a new Lagrangian sea ice model", The Cryosphere, 10(3): 1055-1073, 2016).

  16. An efficient Adaptive Mesh Refinement (AMR) algorithm for the Discontinuous Galerkin method: Applications for the computation of compressible two-phase flows

    NASA Astrophysics Data System (ADS)

    Papoutsakis, Andreas; Sazhin, Sergei S.; Begg, Steven; Danaila, Ionut; Luddens, Francky

    2018-06-01

    We present an Adaptive Mesh Refinement (AMR) method suitable for hybrid unstructured meshes that allows for local refinement and de-refinement of the computational grid during the evolution of the flow. The adaptive implementation of the Discontinuous Galerkin (DG) method introduced in this work (ForestDG) is based on a topological representation of the computational mesh by a hierarchical structure consisting of oct-, quad- and binary trees. Adaptive mesh refinement (h-refinement) enables us to increase the spatial resolution of the computational mesh in the vicinity of the points of interest such as interfaces, geometrical features, or flow discontinuities. The local increase in the expansion order (p-refinement) at areas of high strain rates or vorticity magnitude results in an increase of the order of accuracy in the region of shear layers and vortices. A graph of unitarian-trees, representing hexahedral, prismatic and tetrahedral elements, is used for the representation of the initial domain. The ancestral elements of the mesh can be split into self-similar elements, allowing each tree to grow branches to an arbitrary level of refinement. The connectivity of the elements, their genealogy and their partitioning are described by linked lists of pointers. An explicit calculation of these relations, presented in this paper, facilitates the on-the-fly splitting, merging and repartitioning of the computational mesh by rearranging the links of each node of the tree with a minimal computational overhead. The modal basis used in the DG implementation facilitates the mapping of the fluxes across the non-conformal faces. The AMR methodology is presented and assessed using a series of inviscid and viscous test cases. Also, the AMR methodology is used for the modelling of the interaction between droplets and the carrier phase in a two-phase flow. This approach is applied to the analysis of a spray injected into a chamber of quiescent air, using the Eulerian-Lagrangian approach. This enables us to refine the computational mesh in the vicinity of the droplet parcels and accurately resolve the coupling between the two phases.
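
    The parent-child bookkeeping described above can be sketched with a small illustrative tree-of-elements class; this is a generic toy, not the ForestDG data structure, and the class and method names are invented. Each element keeps links to its parent and children, so local splitting and merging only rearrange pointers, and the active mesh is simply the set of leaves.

      class TreeElement:
          """Toy element node in a refinement tree: each node keeps links to its
          parent and its children, so splitting and merging only rearrange local
          pointers."""
          def __init__(self, level=0, parent=None):
              self.level = level
              self.parent = parent
              self.children = []

          def is_leaf(self):
              return not self.children

          def refine(self, n_children=4):
              # h-refinement: split this element into self-similar children
              if self.is_leaf():
                  self.children = [TreeElement(self.level + 1, self) for _ in range(n_children)]

          def coarsen(self):
              # de-refinement: drop the children and make this element active again
              self.children = []

          def leaves(self):
              # the active computational mesh is the set of leaf elements
              if self.is_leaf():
                  return [self]
              return [leaf for child in self.children for leaf in child.leaves()]

      root = TreeElement()              # ancestral element of the initial mesh
      root.refine()                     # split it into four children
      root.children[0].refine()         # refine one child to a deeper level
      print(len(root.leaves()), max(e.level for e in root.leaves()))   # -> 7 2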

  17. Practical implementation of tetrahedral mesh reconstruction in emission tomography

    PubMed Central

    Boutchko, R.; Sitek, A.; Gullberg, G. T.

    2014-01-01

    This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise. PMID:23588373
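
    The ML-EM update used for the node intensities can be sketched in a few lines of Python; the tiny dense system matrix below is a made-up stand-in for the analytically computed one described above.

      import numpy as np

      def ml_em(A, projections, n_iter=200):
          """Standard ML-EM iteration: x <- x / (A^T 1) * A^T (p / (A x)), where A
          maps image coefficients (e.g. mesh-node intensities) to projection bins."""
          x = np.ones(A.shape[1])                       # non-negative initial estimate
          sensitivity = A.T @ np.ones(A.shape[0])       # column sums of the system matrix
          for _ in range(n_iter):
              forward = A @ x
              ratio = np.where(forward > 0.0, projections / forward, 0.0)
              x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
          return x

      # hypothetical 4-bin, 3-unknown system matrix standing in for the analytical one
      A = np.array([[1.0, 0.5, 0.0],
                    [0.2, 1.0, 0.3],
                    [0.0, 0.4, 1.0],
                    [0.6, 0.1, 0.2]])
      true_intensities = np.array([2.0, 1.0, 3.0])
      projections = A @ true_intensities                # noiseless projection data
      print(np.round(ml_em(A, projections), 3))         # approximately recovers [2, 1, 3]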

  18. Simulation of geothermal water extraction in heterogeneous reservoirs using dynamic unstructured mesh optimisation

    NASA Astrophysics Data System (ADS)

    Salinas, P.; Pavlidis, D.; Jacquemyn, C.; Lei, Q.; Xie, Z.; Pain, C.; Jackson, M.

    2017-12-01

    It is well known that the pressure gradient into a production well increases with decreasing distance to the well. To properly capture the local pressure drawdown into the well, a high grid or mesh resolution is required; moreover, the location of the well must be captured accurately. In conventional simulation models, the user must interact with the model to modify grid resolution around wells of interest, and the well location is approximated on a grid defined early in the modelling process. We report a new approach for improved simulation of near wellbore flow in reservoir-scale models through the use of dynamic mesh optimisation and the recently presented double control volume finite element method. Time is discretized using an adaptive, implicit approach. Heterogeneous geologic features are represented as volumes bounded by surfaces. Within these volumes, termed geologic domains, the material properties are constant. Up-, cross- or down-scaling of material properties during dynamic mesh optimization is not required, as the properties are uniform within each geologic domain. A given model typically contains numerous such geologic domains. Wells are implicitly coupled with the domain, and the fluid flow inside the wells is modelled. The method is novel for two reasons. First, a fully unstructured tetrahedral mesh is used to discretize space, and the spatial location of the well is specified via a line vector, ensuring its location even if the mesh is modified during the simulation. The well location is therefore accurately captured, and the approach allows complex well trajectories and wells with many laterals to be modelled. Second, computational efficiency is increased by use of dynamic mesh optimization, in which an unstructured mesh adapts in space and time to key solution fields (preserving the geometry of the geologic domains), such as pressure, velocity or temperature; this also increases the quality of the solutions by placing higher resolution where required to reduce an error metric based on the Hessian of the field. This allows the local pressure drawdown to be captured without user-driven modification of the mesh. We demonstrate that the method has wide application in reservoir-scale models of geothermal fields, and regional models of groundwater resources.

  19. Practical implementation of tetrahedral mesh reconstruction in emission tomography

    NASA Astrophysics Data System (ADS)

    Boutchko, R.; Sitek, A.; Gullberg, G. T.

    2013-05-01

    This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel beam geometry. Tetrahedral mesh built on a point cloud is a convenient image representation method, intrinsically three-dimensional and with a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For the given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two stage process. First, a noisy image is reconstructed on a finely-spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination. Nodes are removed in constant intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames. This system matrix is then used to reconstruct the individual time frame projections. Performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral mesh-based images are compared to the standard voxel-based reconstruction for both high and low signal-to-noise ratio projection datasets. The results demonstrate that the reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while allowing substantial reduction in the number of unknown intensities to be reconstructed and reducing the noise.

  20. Interface projection techniques for fluid-structure interaction modeling with moving-mesh methods

    NASA Astrophysics Data System (ADS)

    Tezduyar, Tayfun E.; Sathe, Sunil; Pausewang, Jason; Schwaab, Matthew; Christopher, Jason; Crabtree, Jason

    2008-12-01

    The stabilized space-time fluid-structure interaction (SSTFSI) technique developed by the Team for Advanced Flow Simulation and Modeling (T★AFSM) was applied to a number of 3D examples, including arterial fluid mechanics and parachute aerodynamics. Here we focus on the interface projection techniques that were developed as supplementary methods targeting the computational challenges associated with the geometric complexities of the fluid-structure interface. Although these supplementary techniques were developed in conjunction with the SSTFSI method and in the context of air-fabric interactions, they can also be used in conjunction with other moving-mesh methods, such as the Arbitrary Lagrangian-Eulerian (ALE) method, and in the context of other classes of FSI applications. The supplementary techniques currently consist of using split nodal values for pressure at the edges of the fabric and incompatible meshes at the air-fabric interfaces, the FSI Geometric Smoothing Technique (FSI-GST), and the Homogenized Modeling of Geometric Porosity (HMGP). Using split nodal values for pressure at the edges and incompatible meshes at the interfaces stabilizes the structural response at the edges of the membrane used in modeling the fabric. With the FSI-GST, the fluid mechanics mesh is sheltered from the consequences of the geometric complexity of the structure. With the HMGP, we bypass the intractable complexities of the geometric porosity by approximating it with an “equivalent”, locally-varying fabric porosity. As test cases demonstrating how the interface projection techniques work, we compute the air-fabric interactions of windsocks, sails and ringsail parachutes.

  1. An Approach to Quad Meshing Based On Cross Valued Maps and the Ginzburg-Landau Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viertel, Ryan; Osting, Braxton

    2017-08-01

    A generalization of vector fields, referred to as N-direction fields or cross fields when N=4, has been recently introduced and studied for geometry processing, with applications in quadrilateral (quad) meshing, texture mapping, and parameterization. We make the observation that cross field design for two-dimensional quad meshing is related to the well-known Ginzburg-Landau problem from mathematical physics. This identification yields a variety of theoretical tools for efficiently computing boundary-aligned quad meshes, with provable guarantees on the resulting mesh, for example, the number of mesh defects and bounds on the defect locations. The procedure for generating the quad mesh is to (i) find a complex-valued "representation" field that minimizes the Dirichlet energy subject to a boundary constraint, (ii) convert the representation field into a boundary-aligned, smooth cross field, (iii) use separatrices of the cross field to partition the domain into four sided regions, and (iv) mesh each of these four-sided regions using standard techniques. Under certain assumptions on the geometry of the domain, we prove that this procedure can be used to produce a cross field whose separatrices partition the domain into four sided regions. To solve the energy minimization problem for the representation field, we use an extension of the Merriman-Bence-Osher (MBO) threshold dynamics method, originally conceived as an algorithm to simulate motion by mean curvature, to minimize the Ginzburg-Landau energy for the optimal representation field. Lastly, we demonstrate the method on a variety of test domains.
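
    The key identification above, that a cross (four directions 90 degrees apart) can be encoded as the single complex number exp(4i*theta) so that smoothing or interpolating crosses reduces to working with a complex-valued representation field, can be illustrated directly. The following is a generic Python sketch, not the paper's solver.

      import numpy as np

      def cross_to_rep(theta):
          """Encode a cross (directions theta + k*pi/2) as one complex number."""
          return np.exp(4j * theta)

      def rep_to_cross(u):
          """Recover a representative cross angle in [0, pi/2) from the representation field."""
          return np.mod(np.angle(u) / 4.0, np.pi / 2.0)

      # Two crosses that differ by 90 degrees are the *same* cross:
      print(np.isclose(cross_to_rep(0.2), cross_to_rep(0.2 + np.pi / 2)))   # True

      # Averaging representation values gives a meaningful "average cross", which is
      # why a smooth (Dirichlet-energy-minimizing) complex field is a convenient proxy.
      u_avg = 0.5 * (cross_to_rep(0.10) + cross_to_rep(0.30))
      print(rep_to_cross(u_avg))    # approximately 0.20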

  2. Stress adapted embroidered meshes with a graded pattern design for abdominal wall hernia repair

    NASA Astrophysics Data System (ADS)

    Hahn, J.; Bittrich, L.; Breier, A.; Spickenheuer, A.

    2017-10-01

    Abdominal wall hernias are one of the most relevant injuries of the digestive system with 25 million patients in 2013. Surgery is recommended primarily using allogenic non-absorbable warp-knitted meshes. These meshes have in common that their stress-strain behaviour is not adapted to the anisotropic behaviour of native abdominal wall tissue. The ideal mesh should possess an adequate mechanical behaviour and a suitable porosity at the same time. An alternative fabrication method to warp-knitting is the embroidery technology, with a high flexibility in pattern design and adaptation of mechanical properties. In this study, a pattern generator was created for pattern designs consisting of a base and a reinforcement pattern. The embroidered mesh structures demonstrated different structural and mechanical characteristics. Additionally, the investigation of the mechanical properties exhibited an anisotropic mechanical behaviour for the embroidered meshes. As a result, the investigated pattern generator and the embroidery technology allow the production of stress adapted mesh structures that are a promising approach for hernia reconstruction.

  3. An immersed-shell method for modelling fluid–structure interactions

    PubMed Central

    Viré, A.; Xiang, J.; Pain, C. C.

    2015-01-01

    The paper presents a novel method for numerically modelling fluid–structure interactions. The method consists of solving the fluid-dynamics equations on an extended domain, where the computational mesh covers both fluid and solid structures. The fluid and solid velocities are relaxed to one another through a penalty force. The latter acts on a thin shell surrounding the solid structures. Additionally, the shell is represented on the extended domain by a non-zero shell-concentration field, which is obtained by conservatively mapping the shell mesh onto the extended mesh. The paper outlines the theory underpinning this novel method, referred to as the immersed-shell approach. It also shows how the coupling between a fluid- and a structural-dynamics solver is achieved. At this stage, results are shown for cases of fundamental interest. PMID:25583857

  4. Lubrication and cooling for high speed gears

    NASA Technical Reports Server (NTRS)

    Townsend, D. P.

    1985-01-01

    The problems and failures occurring with the operation of high speed gears are discussed. The gearing losses associated with high speed gearing such as tooth mesh friction, bearing friction, churning, and windage are discussed with various ways shown to help reduce these losses and thereby improve efficiency. Several different methods of oil jet lubrication for high speed gearing are given such as into mesh, out of mesh, and radial jet lubrication. The experiments and analytical results for the various methods of oil jet lubrication are shown with the strengths and weaknesses of each method discussed. The analytical and experimental results of gear lubrication and cooling at various test conditions are presented. These results show the very definite need of improved methods of gear cooling at high speed and high load conditions.

  5. Fictitious domain method for fully resolved reacting gas-solid flow simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Longhui; Liu, Kai; You, Changfu

    2015-10-01

    Fully resolved simulation (FRS) for gas-solid multiphase flow considers solid objects as finite sized regions in flow fields, and their behaviours are predicted by solving equations in both fluid and solid regions directly. Fixed mesh numerical methods, such as the fictitious domain method, are preferred in solving FRS problems and have been widely researched. However, for reacting gas-solid flows no suitable fictitious domain numerical method has been developed. This work presents a new fictitious domain finite element method for FRS of reacting particulate flows. Low Mach number reacting flow governing equations are solved sequentially on a regular background mesh. Particles are immersed in the mesh and driven by their surface forces and torques integrated on immersed interfaces. Additional treatments on energy and surface reactions are developed. Several numerical test cases validated the method, and a simulation of a falling array of burning carbon particles demonstrated its capability for solving moving reacting particle cluster problems.

  6. Dynamic Rupture Modeling in Three Dimensions on Unstructured Meshes Using a Discontinuous Galerkin Method

    NASA Astrophysics Data System (ADS)

    Pelties, C.; Käser, M.

    2010-12-01

    We will present recent developments concerning the extensions of the ADER-DG method to solve three dimensional dynamic rupture problems on unstructured tetrahedral meshes. The simulation of earthquake rupture dynamics and seismic wave propagation using a discontinuous Galerkin (DG) method in 2D was recently presented by J. de la Puente et al. (2009). A considerable feature of this study regarding spontaneous rupture problems was the combination of the DG scheme and a time integration method using Arbitrarily high-order DERivatives (ADER) to provide high accuracy in space and time with the discretization on unstructured meshes. In the resulting discrete velocity-stress formulation of the elastic wave equations variables are naturally discontinuous at the interfaces between elements. The so-called Riemann problem can then be solved to obtain well defined values of the variables at the discontinuity itself. This is in particular valid for the fault at which a certain friction law has to be evaluated. Hence, the fault’s geometry is honored by the computational mesh. This way, complex fault planes can be modeled adequately with small elements while fast mesh coarsening is possible with increasing distance from the fault. Due to the strict locality of the scheme using only direct neighbor communication, excellent parallel behavior can be observed. A further advantage of the scheme is that it avoids spurious high-frequency contributions in the slip rate spectra and therefore does not require artificial Kelvin-Voigt damping or filtering of synthetic seismograms. In order to test the accuracy of the ADER-DG method the Southern California Earthquake Center (SCEC) benchmark for spontaneous rupture simulations was employed. Reference: J. de la Puente, J.-P. Ampuero, and M. Käser (2009), Dynamic rupture modeling on unstructured meshes using a discontinuous Galerkin method, JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 114, B10302, doi:10.1029/2008JB006271

  7. Design and simulation of a superposition compound eye system based on hybrid diffractive-refractive lenses.

    PubMed

    Zhang, Shuqing; Zhou, Luyang; Xue, Changxi; Wang, Lei

    2017-09-10

    Compound eyes offer a promising field of miniaturized imaging systems. In one application of a compound eye, superposition of compound eye systems forms a composite image by superposing the images produced by different channels. The geometric configuration of superposition compound eye systems is achieved by three micro-lens arrays with different pitches and focal lengths. High resolution is indispensable for the practicability of superposition compound eye systems. In this paper, hybrid diffractive-refractive lenses are introduced into the design of a compound eye system for this purpose. With the help of ZEMAX, two superposition compound eye systems with and without hybrid diffractive-refractive lenses were separately designed. Then, we demonstrate the effectiveness of using a hybrid diffractive-refractive lens to improve the image quality.

  8. A computational method for sharp interface advection.

    PubMed

    Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje

    2016-11-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM ® extension and is published as open source.
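
    The central quantity in the isoAdvector idea, the time integral of the submerged face area within a time step, can be illustrated with a toy 1-D sweep of a planar interface across a unit square face. The constant-speed motion law below is an assumed simplification for illustration, not the general polyhedral-face geometry handled by the actual method.

      import numpy as np

      def submerged_area(t, x0, speed):
          """Fraction of a unit square face lying below the interface when a planar
          interface sweeps across the face at constant speed (toy model)."""
          return np.clip(x0 + speed * t, 0.0, 1.0)

      def transported_volume(u_face, x0, speed, dt, n_sub=200):
          """Toy version of the face flux: dV = u_face * integral over the time step
          of the instantaneous submerged face area."""
          t = np.linspace(0.0, dt, n_sub)
          area = submerged_area(t, x0, speed)
          integral = np.sum(0.5 * (area[:-1] + area[1:]) * np.diff(t))  # trapezoid rule
          return u_face * integral

      # the submerged fraction grows linearly from 0.2 to 0.7 over the step,
      # so the exact transported volume is u_face * 0.45
      print(transported_volume(u_face=1.0, x0=0.2, speed=0.5, dt=1.0))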

  9. A new parallelization scheme for adaptive mesh refinement

    DOE PAGES

    Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.; ...

    2016-05-06

    Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e. wall time x processor count) of subcycling in time, but with the runtime performance (i.e. smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has less parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high performance clusters. For the class of problem considered in this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.

  10. A new parallelization scheme for adaptive mesh refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loffler, Frank; Cao, Zhoujian; Brandt, Steven R.

    Here, we present a new method for parallelization of adaptive mesh refinement called Concurrent Structured Adaptive Mesh Refinement (CSAMR). This new method offers the lower computational cost (i.e. wall time x processor count) of subcycling in time, but with the runtime performance (i.e. smaller wall time) of evolving all levels at once using the time step of the finest level (which does more work than subcycling but has less parallelism). We demonstrate our algorithm's effectiveness using an adaptive mesh refinement code, AMSS-NCKU, and show performance on Blue Waters and other high performance clusters. For the class of problem considered in this paper, our algorithm achieves a speedup of 1.7-1.9 when the processor count for a given AMR run is doubled, consistent with our theoretical predictions.

  11. On Multiscale Modeling: Preserving Energy Dissipation Across the Scales with Consistent Handshaking Methods

    NASA Technical Reports Server (NTRS)

    Pineda, Evan J.; Bednarcyk, Brett A.; Arnold, Steven M.; Waas, Anthony M.

    2013-01-01

    A mesh objective crack band model was implemented within the generalized method of cells micromechanics theory. This model was linked to a macroscale finite element model to predict post-peak strain softening in composite materials. Although a mesh objective theory was implemented at the microscale, it does not preclude pathological mesh dependence at the macroscale. To ensure mesh objectivity at both scales, the energy density and the energy release rate must be preserved identically across the two scales. This requires a consistent characteristic length or localization limiter. The effects of scaling (or not scaling) the dimensions of the microscale repeating unit cell (RUC), according to the macroscale element size, in a multiscale analysis was investigated using two examples. Additionally, the ramifications of the macroscale element shape, compared to the RUC, was studied.

  12. A fully consistent and conservative vertically adaptive coordinate system for SLIM 3D v0.4 with an application to the thermocline oscillations of Lake Tanganyika

    NASA Astrophysics Data System (ADS)

    Delandmeter, Philippe; Lambrechts, Jonathan; Legat, Vincent; Vallaeys, Valentin; Naithani, Jaya; Thiery, Wim; Remacle, Jean-François; Deleersnijder, Eric

    2018-03-01

    The discontinuous Galerkin (DG) finite element method is well suited for the modelling, with a relatively small number of elements, of three-dimensional flows exhibiting strong velocity or density gradients. Its performance can be highly enhanced by having recourse to r-adaptivity. Here, a vertical adaptive mesh method is developed for DG finite elements. This method, originally designed for finite difference schemes, is based on the vertical diffusion of the mesh nodes, with the diffusivity controlled by the density jumps at the mesh element interfaces. The mesh vertical movement is determined by means of a conservative arbitrary Lagrangian-Eulerian (ALE) formulation. Though conservativity is naturally achieved, tracer consistency is obtained by a suitable construction of the mesh vertical velocity field, which is defined in such a way that it is fully compatible with the tracer and continuity equations at a discrete level. The vertically adaptive mesh approach is implemented in the three-dimensional version of the geophysical and environmental flow Second-generation Louvain-la-Neuve Ice-ocean Model (SLIM 3D; www.climate.be/slim). Idealised benchmarks, aimed at simulating the oscillations of a sharp thermocline, are dealt with. Then, the relevance of the vertical adaptivity technique is assessed by simulating thermocline oscillations of Lake Tanganyika. The results are compared to measured vertical profiles of temperature, showing similar stratification and outcropping events.
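
    A toy 1-D version of the node-diffusion idea (not the SLIM 3D implementation) is sketched below: interior node positions are diffused in index space with a diffusivity that grows with the local density jump, so the spacing becomes inversely proportional to the diffusivity and levels cluster around a sharp thermocline. All parameters are arbitrary.

      import numpy as np

      def adapt_column(z, density_of_z, n_steps=4000, dt=0.05, kappa0=0.05, kappa1=5.0):
          """Diffuse interior node positions, with the diffusivity of each element
          controlled by the density jump across it (toy vertical r-adaptivity)."""
          z = z.copy()
          for _ in range(n_steps):
              jump = np.abs(np.diff(density_of_z(z)))           # density jump per element
              kappa = kappa0 + kappa1 * jump / (jump.max() + 1e-12)
              flux = kappa * np.diff(z)                         # "flux" of node position
              z[1:-1] += dt * np.diff(flux)                     # top and bottom nodes stay fixed
          return z

      # idealized two-layer water column: a sharp density jump near z = -30 m
      rho = lambda z: 1000.0 + 2.0 / (1.0 + np.exp((z + 30.0) / 0.5))
      z0 = np.linspace(-100.0, 0.0, 41)                         # initially uniform levels
      z_adapted = adapt_column(z0, rho)
      spacing = np.diff(z_adapted)
      print(z_adapted[np.argmin(spacing)], spacing.min(), spacing.max())
      # the finest spacing ends up close to the density jump at about -30 m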

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slattery, Stuart R.

    In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
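
    A minimal serial sketch of compactly supported radial-basis-function interpolation for mesh-free data transfer is given below; the Wendland C2 kernel, the support radius and the tiny random point clouds are illustrative choices, and none of the parallel sparse-linear-algebra machinery discussed above is reproduced.

      import numpy as np

      def wendland_c2(r, radius):
          """Compactly supported Wendland C2 kernel: zero for r >= radius."""
          q = np.clip(r / radius, 0.0, 1.0)
          return (1.0 - q) ** 4 * (4.0 * q + 1.0)

      def rbf_transfer(src_pts, src_vals, tgt_pts, radius):
          """Fit RBF weights on the source cloud, then evaluate at the target cloud."""
          d_ss = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
          d_ts = np.linalg.norm(tgt_pts[:, None, :] - src_pts[None, :, :], axis=-1)
          weights = np.linalg.solve(wendland_c2(d_ss, radius), src_vals)
          return wendland_c2(d_ts, radius) @ weights

      rng = np.random.default_rng(0)
      src = rng.random((200, 3))                        # source point cloud in the unit cube
      field = lambda p: np.sin(p[:, 0]) + p[:, 1] ** 2  # field to be transferred
      tgt = rng.random((50, 3))                         # target point cloud
      approx = rbf_transfer(src, field(src), tgt, radius=0.5)
      print(np.max(np.abs(approx - field(tgt))))        # transfer error (small for dense clouds)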

  14. Pattern Classifications Using Grover's and Ventura's Algorithms in a Two-qubits System

    NASA Astrophysics Data System (ADS)

    Singh, Manu Pratap; Radhey, Kishori; Rajput, B. S.

    2018-03-01

    Carrying out the classification of patterns in a two-qubit system by separately using Grover's and Ventura's algorithms on different possible superpositions, it has been shown that the exclusion superposition and the phase-invariance superposition are the most suitable search states obtained from two-pattern start-states and one-pattern start-states, respectively, for the simultaneous classifications of patterns. The higher effectiveness of Grover's algorithm for large search states has been verified, but the higher effectiveness of Ventura's algorithm for smaller databases has been contradicted in two-qubit systems, and it has been demonstrated that the unknown patterns (not present in the concerned database) are classified more efficiently than the known ones (present in the database) in both the algorithms. It has also been demonstrated that different states of Singh-Rajput MES obtained from the corresponding self-single-pattern start states are the most suitable search states for the classification of patterns |00>, |01>, |10> and |11>, respectively, on the second iteration of Grover's method or the first operation of Ventura's algorithm.

  15. User's guide to PMESH: A grid-generation program for single-rotation and counterrotation advanced turboprops

    NASA Technical Reports Server (NTRS)

    Warsi, Saif A.

    1989-01-01

    A detailed operating manual is presented for a grid generating program that produces 3-D meshes for advanced turboprops. The code uses both algebraic and elliptic partial differential equation methods to generate single rotation and counterrotation, H or C type meshes for the z - r planes and H type for the z - theta planes. The code allows easy specification of geometrical constraints (such as blade angle, location of bounding surfaces, etc.), mesh control parameters (point distribution near blades and nacelle, number of grid points desired, etc.), and it has good runtime diagnostics. An overview is provided of the mesh generation procedure, sample input dataset with detailed explanation of all input, and example meshes.

  16. Anisotropic mesh adaptation for marine ice-sheet modelling

    NASA Astrophysics Data System (ADS)

    Gillet-Chaulet, Fabien; Tavard, Laure; Merino, Nacho; Peyaud, Vincent; Brondex, Julien; Durand, Gael; Gagliardini, Olivier

    2017-04-01

    Improving forecasts of the ice-sheet contribution to sea-level rise requires, amongst other things, correctly modelling the dynamics of the grounding line (GL), i.e. the line where the ice detaches from its underlying bed and goes afloat on the ocean. Many numerical studies, including the intercomparison exercises MISMIP and MISMIP3D, have shown that grid refinement in the GL vicinity is a key component to obtain reliable results. Improving model accuracy while keeping the computational cost affordable has therefore been an important target for the development of marine ice-sheet models. Adaptive mesh refinement (AMR) is a method where the accuracy of the solution is controlled by spatially adapting the mesh size. It has become popular in models using the finite element method as they naturally deal with unstructured meshes, but block-structured AMR has also been successfully applied to model GL dynamics. The main difficulty with AMR is to find efficient and reliable estimators of the numerical error to control the mesh size. Here, we use the estimator proposed by Frey and Alauzet (2015). Based on the interpolation error, it has been found effective in practice to control the numerical error, and has some flexibility, such as its ability to combine metrics for different variables, that makes it attractive. Routines to compute the anisotropic metric defining the mesh size have been implemented in the finite element ice flow model Elmer/Ice (Gagliardini et al., 2013). The mesh adaptation is performed using the freely available library MMG (Dapogny et al., 2014) called from Elmer/Ice. Using a setup based on the inter-comparison exercise MISMIP+ (Asay-Davis et al., 2016), we study the accuracy of the solution when the mesh is adapted using various variables (ice thickness, velocity, basal drag, …). We show that combining these variables allows the number of mesh nodes to be reduced by more than one order of magnitude, for the same numerical accuracy, when compared to uniform mesh refinement. For transient solutions where the GL is moving, we have implemented an algorithm in which the computation is reiterated, allowing the GL displacement to be anticipated and the mesh adapted to the transient solution. We discuss the performance and robustness of this algorithm.

  17. Antecedent Synoptic Environments Conducive to North American Polar/Subtropical Jet Superpositions

    NASA Astrophysics Data System (ADS)

    Winters, A. C.; Keyser, D.; Bosart, L. F.

    2017-12-01

    The atmosphere often exhibits a three-step pole-to-equator tropopause structure, with each break in the tropopause associated with a jet stream. The polar jet stream (PJ) typically resides in the break between the polar and subtropical tropopause and is positioned atop the strongly baroclinic, tropospheric-deep polar front around 50°N. The subtropical jet stream (STJ) resides in the break between the subtropical and the tropical tropopause and is situated on the poleward edge of the Hadley cell around 30°N. On occasion, the latitudinal separation between the PJ and the STJ can vanish, resulting in a vertical jet superposition. Prior case study work indicates that jet superpositions are often attended by a vigorous transverse vertical circulation that can directly impact the production of extreme weather over North America. Furthermore, this work suggests that there is considerable variability among antecedent environments conducive to the production of jet superpositions. These considerations motivate a comprehensive study to examine the synoptic-dynamic mechanisms that operate within the double-jet environment to produce North American jet superpositions. This study focuses on the identification of North American jet superposition events in the CFSR dataset during November-March 1979-2010. Superposition events will be classified into three characteristic types: "Polar Dominant" events will consist of events during which only the PJ is characterized by a substantial excursion from its climatological latitude band; "Subtropical Dominant" events will consist of events during which only the STJ is characterized by a substantial excursion from its climatological latitude band; and "Hybrid" events will consist of those events characterized by an excursion of both the PJ and STJ from their climatological latitude bands. Following their classification, frequency distributions of jet superpositions will be constructed to highlight the geographical locations most often associated with jet superpositions for each event type. PV inversion and composite analysis will also be performed on each event type in an effort to illustrate the antecedent environments and the dominant synoptic-dynamic mechanisms that favor the production of North American jet superpositions for each event type.

  18. Floating shock fitting via Lagrangian adaptive meshes

    NASA Technical Reports Server (NTRS)

    Vanrosendale, John

    1994-01-01

    In recent works we have formulated a new approach to compressible flow simulation, combining the advantages of shock-fitting and shock-capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM) is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence. Shock-capturing algorithms like this, which warp the mesh to yield shock-fitted accuracy, are new and relatively untried. However, their potential is clear. In the context of sonic booms, accurate calculation of near-field sonic boom signatures is critical to the design of the High Speed Civil Transport (HSCT). SLAM should allow computation of accurate N-wave pressure signatures on comparatively coarse meshes, significantly enhancing our ability to design low-boom configurations for high-speed aircraft.

  19. Effect of ground control mesh on dust sampling and explosion mitigation.

    PubMed

    Alexander, D W; Chasko, L L

    2015-07-01

    Researchers from the National Institute for Occupational Safety and Health's Office of Mine Safety and Health Research conducted an assessment of the effects that ground control mesh might have on rock and float coal dust distribution in a coal mine. The increased use of mesh to control roof and rib spall introduces additional elevated surfaces on which rock or coal dust can collect. It is possible to increase the potential for dust explosion propagation if any float coal dust is not adequately inerted. In addition, the mesh may interfere with the collection of representative dust samples when using the pan-and-brush sampling method developed by the U.S. Bureau of Mines and used by the Mine Safety and Health Administration for band sampling. This study estimates the additional coal or rock dust that could accumulate on mesh and develops a means to collect representative dust samples from meshed entries.

  20. The continuing challenge of parastomal hernia: failure of a novel polypropylene mesh repair.

    PubMed Central

    Morris-Stiff, G.; Hughes, L. E.

    1998-01-01

    In an attempt to reduce the high recurrence rate after repair of parastomal hernia, a technique was devised in which non-absorbable mesh was used to provide a permanent closure of the gap between the emerging bowel and abdominal wall. Seven patients were treated during the period 1990-1992. Five-year follow-up has given disappointing results, with recurrent hernia in 29% of cases and serious complications, including obstruction and dense adhesions to the intra-abdominal mesh, in 57% and a mesh-related abscess in 15% of cases. This study highlights a dual problem--failure of a carefully sutured mesh to maintain an occlusive position, and complications of the mesh itself. The poor results obtained with this technique together with the disappointing results with other methods described in the literature confirm that parastomal hernia presents a continuing challenge. PMID:9682640

  1. Effect of ground control mesh on dust sampling and explosion mitigation

    PubMed Central

    Alexander, D.W.; Chasko, L.L.

    2017-01-01

    Researchers from the National Institute for Occupational Safety and Health’s Office of Mine Safety and Health Research conducted an assessment of the effects that ground control mesh might have on rock and float coal dust distribution in a coal mine. The increased use of mesh to control roof and rib spall introduces additional elevated surfaces on which rock or coal dust can collect. It is possible to increase the potential for dust explosion propagation if any float coal dust is not adequately inerted. In addition, the mesh may interfere with the collection of representative dust samples when using the pan-and-brush sampling method developed by the U.S. Bureau of Mines and used by the Mine Safety and Health Administration for band sampling. This study estimates the additional coal or rock dust that could accumulate on mesh and develops a means to collect representative dust samples from meshed entries. PMID:28936000

  2. 3D active shape models of human brain structures: application to patient-specific mesh generation

    NASA Astrophysics Data System (ADS)

    Ravikumar, Nishant; Castro-Mateos, Isaac; Pozo, Jose M.; Frangi, Alejandro F.; Taylor, Zeike A.

    2015-03-01

    The use of biomechanics-based numerical simulations has attracted growing interest in recent years for computer-aided diagnosis and treatment planning. With this in mind, a method for automatic mesh generation of brain structures of interest, using statistical models of shape (SSM) and appearance (SAM), for personalised computational modelling is presented. SSMs are constructed as point distribution models (PDMs) while SAMs are trained using intensity profiles sampled from a training set of T1-weighted magnetic resonance images. The brain structures of interest are the cortical surface (cerebrum, cerebellum & brainstem), lateral ventricles and falx-cerebri membrane. Two methods for establishing correspondences across the training set of shapes are investigated and compared (based on SSM quality): the Coherent Point Drift (CPD) point-set registration method and the B-spline mesh-to-mesh registration method. The MNI-305 (Montreal Neurological Institute) average brain atlas is used to generate the template mesh, which is deformed and registered to each training case, to establish correspondence over the training set of shapes. 18 healthy patients' T1-weighted MR images form the training set used to generate the SSM and SAM. Both model-training and model-fitting are performed over multiple brain structures simultaneously. Compactness and generalisation errors of the BSpline-SSM and CPD-SSM are evaluated and used to quantitatively compare the SSMs. Leave-one-out cross validation is used to evaluate SSM quality in terms of these measures. The mesh-based SSM is found to generalise better and is more compact, relative to the CPD-based SSM. The quality of the best-fit model instances from the trained SSMs to the test cases is evaluated using the Hausdorff distance (HD) and mean absolute surface distance (MASD) metrics.
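
    The two surface-distance metrics mentioned above, the Hausdorff distance (HD) and the mean absolute surface distance (MASD), can be computed between two point-sampled surfaces in a few lines; the random point clouds below merely stand in for the fitted and reference surfaces.

      import numpy as np
      from scipy.spatial import cKDTree

      def surface_distances(pts_a, pts_b):
          """Symmetric Hausdorff distance (HD) and mean absolute surface distance
          (MASD) between two point-sampled surfaces."""
          d_ab, _ = cKDTree(pts_b).query(pts_a)   # nearest-neighbour distances A -> B
          d_ba, _ = cKDTree(pts_a).query(pts_b)   # nearest-neighbour distances B -> A
          hausdorff = max(d_ab.max(), d_ba.max())
          masd = 0.5 * (d_ab.mean() + d_ba.mean())
          return hausdorff, masd

      rng = np.random.default_rng(1)
      surface = rng.normal(size=(1000, 3))                  # stand-in for a reference surface
      fitted = surface + 0.01 * rng.normal(size=(1000, 3))  # stand-in for a fitted model instance
      print(surface_distances(surface, fitted))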

  3. A Spectral Element Discretisation on Unstructured Triangle / Tetrahedral Meshes for Elastodynamics

    NASA Astrophysics Data System (ADS)

    May, Dave A.; Gabriel, Alice-A.

    2017-04-01

    The spectral element method (SEM) defined over quadrilateral and hexahedral element geometries has proven to be a fast, accurate and scalable approach to study wave propagation phenomena. In the context of regional scale seismology and or simulations incorporating finite earthquake sources, the geometric restrictions associated with hexahedral elements can limit the applicability of the classical quad./hex. SEM. Here we describe a continuous Galerkin spectral element discretisation defined over unstructured meshes composed of triangles (2D), or tetrahedra (3D). The method uses a stable, nodal basis constructed from PKD polynomials and thus retains the spectral accuracy and low dispersive properties of the classical SEM, in addition to the geometric versatility provided by unstructured simplex meshes. For the particular basis and quadrature rule we have adopted, the discretisation results in a mass matrix which is not diagonal, thereby mandating linear solvers be utilised. To that end, we have developed efficient solvers and preconditioners which are robust with respect to the polynomial order (p), and possess high arithmetic intensity. Furthermore, we also consider using implicit time integrators, together with a p-multigrid preconditioner to circumvent the CFL condition. Implicit time integrators become particularly relevant when considering solving problems on poor quality meshes, or meshes containing elements with a widely varying range of length scales - both of which frequently arise when meshing non-trivial geometries. We demonstrate the applicability of the new method by examining a number of two- and three-dimensional wave propagation scenarios. These scenarios serve to characterise the accuracy and cost of the new method. Lastly, we will assess the potential benefits of using implicit time integrators for regional scale wave propagation simulations.

  4. Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation.

    PubMed

    Jimeno-Yepes, Antonio J; McInnes, Bridget T; Aronson, Alan R

    2011-06-02

    Evaluation of Word Sense Disambiguation (WSD) methods in the biomedical domain is difficult because the available resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We present a method that can be used to automatically develop a WSD test collection using the Unified Medical Language System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. We demonstrate the use of this method by developing such a data set, called MSH WSD. In our method, the Metathesaurus is first screened to identify ambiguous terms whose possible senses consist of two or more MeSH headings. We then use each ambiguous term and its corresponding MeSH heading to extract MEDLINE citations where the term and only one of the MeSH headings co-occur. The term found in the MEDLINE citation is automatically assigned the UMLS CUI linked to the MeSH heading. Each instance has been assigned a UMLS Concept Unique Identifier (CUI). We compare the characteristics of the MSH WSD data set to the previously existing NLM WSD data set. The resulting MSH WSD data set consists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203 ambiguous entities. For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from MEDLINE. We evaluated the reliability of the MSH WSD data set using existing knowledge-based methods and compared their performance to that of the results previously obtained by these algorithms on the pre-existing data set, NLM WSD. We show that the knowledge-based methods achieve different results but keep their relative performance except for the Journal Descriptor Indexing (JDI) method, whose performance is below the other methods. The MSH WSD data set allows the evaluation of WSD algorithms in the biomedical domain. Compared to previously existing data sets, MSH WSD contains a larger number of biomedical terms/abbreviations and covers the largest set of UMLS Semantic Types. Furthermore, the MSH WSD data set has been generated automatically reusing already existing annotations and, therefore, can be regenerated from subsequent UMLS versions.
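
    The labelling rule described above (keep a citation only when the ambiguous term co-occurs with exactly one of its candidate MeSH headings, and assign that heading's CUI as the sense label) can be sketched as follows; the example records, headings and identifiers are illustrative, not taken from the actual data set.

      # candidate senses of a hypothetical ambiguous term, as MeSH heading -> UMLS CUI
      SENSES = {"Cold Temperature": "C0009264", "Common Cold": "C0009443"}

      def label_citations(term, citations):
          """Yield (pmid, CUI) for citations that mention the term and whose MeSH
          indexing contains exactly one of the candidate headings."""
          for cit in citations:
              if term.lower() not in cit["text"].lower():
                  continue
              hits = [h for h in cit["mesh_headings"] if h in SENSES]
              if len(hits) == 1:                       # unambiguous indexing -> usable instance
                  yield cit["pmid"], SENSES[hits[0]]

      citations = [
          {"pmid": "1", "text": "Exposure to cold increased ...",
           "mesh_headings": ["Cold Temperature", "Humans"]},
          {"pmid": "2", "text": "Treatment of the common cold ...",
           "mesh_headings": ["Common Cold", "Adult"]},
          {"pmid": "3", "text": "Cold stress and rhinovirus ...",
           "mesh_headings": ["Cold Temperature", "Common Cold"]},  # both senses -> skipped
      ]
      print(list(label_citations("cold", citations)))  # [('1', 'C0009264'), ('2', 'C0009443')]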

  5. Quality assessment of two- and three-dimensional unstructured meshes and validation of an upwind Euler flow solver

    NASA Technical Reports Server (NTRS)

    Woodard, Paul R.; Yang, Henry T. Y.; Batina, John T.

    1992-01-01

    Quality assessment procedures are described for two-dimensional and three-dimensional unstructured meshes. The procedures include measurement of minimum angles, element aspect ratios, stretching, and element skewness. Meshes about the ONERA M6 wing and the Boeing 747 transport configuration are generated using an advancing front method grid generation package of programs. Solutions of Euler's equations for these meshes are obtained at low angle-of-attack, transonic conditions. Results for these cases, obtained as part of a validation study demonstrate the accuracy of an implicit upwind Euler solution algorithm.

  6. A method for rapidly marking adult varroa mites for use in brood inoculation experiments

    USDA-ARS?s Scientific Manuscript database

    We explored a method for marking varroa mites using correction fluid (PRESTO!TM Jumbo Correction Pen, Pentel Co., Ltd., Japan). Individual mites were placed on a piece of nylon mesh (165 mesh) to prevent the mites from moving during marking. A small piece of nylon fishing line (diameter = 0.30 mm)...

  7. Implicit solvers for unstructured meshes

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.; Mavriplis, Dimitri J.

    1991-01-01

    Implicit methods were developed and tested for unstructured mesh computations. The approximate system which arises from the Newton linearization of the nonlinear evolution operator is solved by using the preconditioned GMRES (Generalized Minimum Residual) technique. Three different preconditioners were studied, namely, the incomplete LU factorization (ILU), block diagonal factorization, and the symmetric successive over relaxation (SSOR). The preconditioners were optimized to have good vectorization properties. SSOR and ILU were also studied as iterative schemes. The various methods are compared over a wide range of problems. Ordering of the unknowns, which affects the convergence of these sparse matrix iterative methods, is also studied. Results are presented for inviscid and turbulent viscous calculations on single and multielement airfoil configurations using globally and adaptively generated meshes.
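
    A minimal sketch of ILU-preconditioned GMRES, in the spirit of the solvers compared above, using SciPy's sparse linear algebra on a small made-up system; the actual flow-solver matrices and orderings are of course not reproduced here.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # small made-up nonsymmetric sparse system standing in for a Newton linearization
      n = 500
      A = sp.diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csc")
      b = np.ones(n)

      # incomplete LU factorization wrapped as a preconditioner for GMRES
      ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
      M = spla.LinearOperator((n, n), matvec=ilu.solve)

      x, info = spla.gmres(A, b, M=M, restart=30)
      print(info, np.linalg.norm(A @ x - b))   # info == 0 indicates convergence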

  8. A Molecular Dynamic Modeling of Hemoglobin-Hemoglobin Interactions

    NASA Astrophysics Data System (ADS)

    Wu, Tao; Yang, Ye; Sheldon Wang, X.; Cohen, Barry; Ge, Hongya

    2010-05-01

    In this paper, we present a study of hemoglobin-hemoglobin interaction with model reduction methods. We begin with a simple spring-mass system with given parameters (mass and stiffness). With this known system, we compare the mode superposition method with Singular Value Decomposition (SVD) based Principal Component Analysis (PCA). Through PCA we are able to recover the principal direction of this system, namely the model direction. This model direction will be matched with the eigenvector derived from mode superposition analysis. The same technique will be implemented in a much more complicated hemoglobin-hemoglobin molecule interaction model, in which thousands of atoms in hemoglobin molecules are coupled with tens of thousands of T3 water molecule models. In this model, complex inter-atomic and inter-molecular potentials are replaced by nonlinear springs. We employ the same method to get the most significant modes and their frequencies of this complex dynamical system. More complex physical phenomena can then be further studied by these coarse grained models.
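
    The comparison described above, between the eigenvectors of a small spring-mass system and the principal directions recovered by SVD-based PCA from its trajectory, can be reproduced with a toy three-mass chain; all parameters below are arbitrary and unrelated to the hemoglobin model.

      import numpy as np

      # three unit masses in a chain fixed at both ends (toy spring-mass system)
      K = np.array([[ 2.0, -1.0,  0.0],
                    [-1.0,  2.0, -1.0],
                    [ 0.0, -1.0,  2.0]])
      omega2, modes = np.linalg.eigh(K)        # mode superposition: K phi = omega^2 phi (unit masses)

      # synthesize a free-vibration trajectory dominated by the lowest mode
      t = np.linspace(0.0, 200.0, 4000)
      x = (1.0 * np.outer(modes[:, 0], np.cos(np.sqrt(omega2[0]) * t)) +
           0.1 * np.outer(modes[:, 1], np.cos(np.sqrt(omega2[1]) * t)))

      # SVD-based PCA of the snapshots: the leading principal direction should match
      # the lowest mode shape from the eigen-analysis
      U, s, _ = np.linalg.svd(x - x.mean(axis=1, keepdims=True), full_matrices=False)
      print(round(abs(U[:, 0] @ modes[:, 0]), 4))   # close to 1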

  9. Polypropylene Surgical Mesh Coated with Extracellular Matrix Mitigates the Host Foreign Body Response

    PubMed Central

    Wolf, Matthew T.; Carruthers, Christopher A.; Dearth, Christopher L.; Crapo, Peter M.; Huber, Alexander; Burnsed, Olivia A.; Londono, Ricardo; Johnson, Scott A.; Daly, Kerry A.; Stahl, Elizabeth C.; Freund, John M.; Medberry, Christopher J.; Carey, Lisa E.; Nieponice, Alejandro; Amoroso, Nicholas J.; Badylak, Stephen F.

    2013-01-01

    Surgical mesh devices composed of synthetic materials are commonly used for ventral hernia repair. These materials provide robust mechanical strength and are quickly incorporated into host tissue; factors which contribute to reduced hernia recurrence rates. However, such mesh devices cause a foreign body response with the associated complications of fibrosis and patient discomfort. In contrast, surgical mesh devices composed of naturally occurring extracellular matrix (ECM) are associated with constructive tissue remodeling, but lack the mechanical strength of synthetic materials. A method for applying a porcine dermal ECM hydrogel coating to a polypropylene mesh is described herein with the associated effects upon the host tissue response and biaxial mechanical behavior. Uncoated and ECM coated heavy-weight BARD™ Mesh were compared to the light-weight ULTRAPRO™ and BARD™ Soft Mesh devices in a rat partial thickness abdominal defect overlay model. The ECM coated mesh attenuated the pro-inflammatory response compared to all other devices, with a reduced cell accumulation and fewer foreign body giant cells. The ECM coating degraded by 35 days, and was replaced with loose connective tissue compared to the dense collagenous tissue associated with the uncoated polypropylene mesh device. Biaxial mechanical characterization showed that all of the mesh devices were of similar isotropic stiffness. Upon explantation, the light-weight mesh devices were more compliant than the coated or uncoated heavy-weight devices. The present study shows that an ECM coating alters the default host response to a polypropylene mesh, but not the mechanical properties in an acute in vivo abdominal repair model. PMID:23873846

  10. Exploiting Superconvergence in Discontinuous Galerkin Methods for Improved Time-Stepping and Visualization

    DTIC Science & Technology

    2016-09-08

    Accuracy Conserving (SIAC) filter when applied to nonuniform meshes; 2) theoretical and numerical demonstration of the 2k+1 order accuracy of the SIAC... Establishing a more theoretical and numerical understanding of a computationally efficient scaling for the SIAC filter for nonuniform meshes [7]; 2) Li, “SIAC Filtering of DG Methods – Boundary and Nonuniform Mesh”, International Conference on Spectral and Higher Order Methods (ICOSAHOM

  11. Electric current locator

    DOEpatents

    King, Paul E [Corvallis, OR; Woodside, Charles Rigel [Corvallis, OR

    2012-02-07

    The disclosure herein provides an apparatus for locating a quantity of current vectors in an electrical device, where each current vector has a known direction and a known magnitude relative to an input current supplied to the electrical device. Mathematical constants used in Biot-Savart superposition equations are determined for the electrical device, the orientation of the apparatus, and the relative magnitude of the current vector and the input current, and the apparatus utilizes magnetic field sensors oriented to a sensing plane to provide current vector location based on the solution of the Biot-Savart superposition equations. The required orientations between the apparatus and the electrical device are described, and various methods of determining the mathematical constants are presented.
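
    As a rough illustration of the superposition idea behind such a locator, the sketch below fits the position of a single long straight conductor to in-plane field readings at a few sensors; with several conductors the sensor readings would simply be the sum of such contributions. The sensor layout, current value and grid-search fit are assumptions for the example, not the patented procedure.

      import numpy as np

      MU0 = 4.0e-7 * np.pi   # vacuum permeability [T*m/A]

      def field_at(sensor_xy, wire_xy, current):
          """In-plane B field [T] of a long straight wire crossing the sensing plane."""
          r = sensor_xy - wire_xy
          d2 = float(np.dot(r, r)) + 1.0e-12            # avoid division by zero
          # |B| = mu0 I / (2 pi d); direction perpendicular to r (right-hand rule).
          return MU0 * current / (2.0 * np.pi * d2) * np.array([-r[1], r[0]])

      sensors = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
      true_wire, current = np.array([0.04, 0.07]), 25.0
      readings = np.array([field_at(s, true_wire, current) for s in sensors])

      # Brute-force least-squares fit of the wire location to the sensor readings.
      xs = ys = np.linspace(-0.05, 0.15, 201)
      best, best_err = None, np.inf
      for x in xs:
          for y in ys:
              model = np.array([field_at(s, np.array([x, y]), current) for s in sensors])
              err = float(np.sum((model - readings) ** 2))
              if err < best_err:
                  best, best_err = (round(x, 3), round(y, 3)), err
      print("estimated wire location:", best)   # recovers (0.04, 0.07)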

  12. On the superposition principle in interference experiments.

    PubMed

    Sinha, Aninda; H Vijay, Aravind; Sinha, Urbasi

    2015-05-14

    The superposition principle is usually applied incorrectly in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the naive application of the principle. We have found excellent agreement between the analytic distribution and those estimated earlier by numerical integration as well as by resource-intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations and to the non-relativistic Schrödinger equation.
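
    For reference, the usual three-slit form of the quantity involved is sketched below; this is a standard textbook expression, and the normalization convention used in the paper may differ.

      % With P_X denoting the detection probability (or intensity) recorded when
      % only the slit combination X is open, the Born rule predicts that the
      % three-path combination vanishes identically:
      \varepsilon \,=\, P_{ABC} - P_{AB} - P_{BC} - P_{AC} + P_{A} + P_{B} + P_{C} \,=\, 0 .
      % A nonzero value of \varepsilon (usually reported after normalization by a
      % reference intensity; conventions vary) quantifies the deviation from the
      % naive application of the superposition principle.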

  13. Study of odor recorder using Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Miura, Tomohiro; Nakamoto, Takamichi; Moriizumi, Toyosaka

    To realize an odor recorder that records and reproduces a target odor, its recipe must be determined with sufficient accuracy. We studied a recipe measurement method for a target odor using mass spectrometry. It was confirmed that linear superposition was valid when a binary mixture of apple-flavor components, isobutyric acid and ethyl valerate, was measured. The superposition of mass spectrum patterns may therefore make recipe determination of a multi-component odor straightforward. In this research, we succeeded in determining the recipes of an orange flavor made up of 14 component odors when its typical recipe and the equalized, citral-enhanced and citronellol-enhanced variants were measured.
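
    The linear-superposition assumption lends itself to a simple recipe estimate by non-negative least squares; the sketch below uses made-up six-channel spectra purely for illustration and is not the authors' procedure (it assumes NumPy and SciPy are available).

      import numpy as np
      from scipy.optimize import nnls

      # Columns: reference mass spectra of two components (e.g. isobutyric acid
      # and ethyl valerate), normalized to unit total intensity. Values are mock.
      components = np.array([
          [0.40, 0.05],
          [0.25, 0.10],
          [0.15, 0.30],
          [0.10, 0.25],
          [0.07, 0.20],
          [0.03, 0.10],
      ])

      true_recipe = np.array([0.7, 0.3])
      mixture = components @ true_recipe                       # superposed spectrum
      mixture += 0.005 * np.random.default_rng(0).standard_normal(6)  # noise

      recipe, residual = nnls(components, mixture)
      print("estimated recipe:", recipe / recipe.sum())        # close to [0.7, 0.3]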

  14. Procedure for Adapting Direct Simulation Monte Carlo Meshes

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Wilmoth, Richard G.; Carlson, Ann B.; Rault, Didier F. G.

    1992-01-01

    A technique is presented for adapting computational meshes used in the G2 version of the direct simulation Monte Carlo method. The physical ideas underlying the technique are discussed, and adaptation formulas are developed for use on solutions generated from an initial mesh. The effect of statistical scatter on adaptation is addressed, and results demonstrate the ability of this technique to achieve more accurate results without increasing necessary computational resources.

  15. Kinetic solvers with adaptive mesh in phase space

    NASA Astrophysics Data System (ADS)

    Arslanbekov, Robert R.; Kolobov, Vladimir I.; Frolova, Anna A.

    2013-12-01

    An adaptive mesh in phase space (AMPS) methodology has been developed for solving multidimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a “tree of trees” (ToT) data structure. The r mesh is automatically generated around embedded boundaries, and is dynamically adapted to local solution properties. The v mesh is created on-the-fly in each r cell. Mappings between neighboring v-space trees are implemented for the advection operator in r space. We have developed algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations were used to calculate the discrete Boltzmann collision integral with dynamically adaptive v mesh: the importance sampling, multipoint projection, and variance reduction methods. We have developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic collisions of hot light particles in a Lorentz gas. Our AMPS technique has been demonstrated for simulations of hypersonic rarefied gas flows, ion and electron kinetics in weakly ionized plasma, radiation and light-particle transport through thin films, and electron streaming in semiconductors. We have shown that AMPS allows minimizing the number of cells in phase space to reduce the computational cost and memory usage for solving challenging kinetic problems.

  16. Meshed doped silicon photonic crystals for manipulating near-field thermal radiation

    NASA Astrophysics Data System (ADS)

    Elzouka, Mahmoud; Ndao, Sidy

    2018-01-01

    The ability to control and manipulate heat flow is of great interest for thermal management and for thermal logic and memory devices. In particular, near-field thermal radiation presents a unique opportunity to enhance heat transfer while tailoring its characteristics (e.g., spectral selectivity). However, achieving the nanometric gaps necessary for the near field has been and remains a formidable challenge. Here, we demonstrate significant enhancement of near-field heat transfer through meshed photonic crystals with separation gaps above 0.5 μm. Using a first-principles method, we investigate the meshed photonic structures numerically via the finite-difference time-domain (FDTD) technique along with the Langevin approach. Results for doped-silicon meshed structures show a significant enhancement in heat transfer: 26 times over the non-meshed corrugated structures. This is especially important for thermal management and thermal rectification applications. The results also support the premise that thermal radiation at the micro scale is a bulk (rather than a surface) phenomenon; the increase in heat transfer between two meshed-corrugated surfaces compared to the flat surface (a factor of 8.2) was not proportional to the increase in surface area due to the corrugations (a factor of 9). Results were further validated through good agreement between the resonant modes predicted from the dispersion relation (calculated using a finite-element method) and the transmission factors (calculated from FDTD).

  17. Kinetic solvers with adaptive mesh in phase space.

    PubMed

    Arslanbekov, Robert R; Kolobov, Vladimir I; Frolova, Anna A

    2013-12-01

    An adaptive mesh in phase space (AMPS) methodology has been developed for solving multidimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a "tree of trees" (ToT) data structure. The r mesh is automatically generated around embedded boundaries, and is dynamically adapted to local solution properties. The v mesh is created on-the-fly in each r cell. Mappings between neighboring v-space trees are implemented for the advection operator in r space. We have developed algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations were used to calculate the discrete Boltzmann collision integral with dynamically adaptive v mesh: the importance sampling, multipoint projection, and variance reduction methods. We have developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic collisions of hot light particles in a Lorentz gas. Our AMPS technique has been demonstrated for simulations of hypersonic rarefied gas flows, ion and electron kinetics in weakly ionized plasma, radiation and light-particle transport through thin films, and electron streaming in semiconductors. We have shown that AMPS allows minimizing the number of cells in phase space to reduce the computational cost and memory usage for solving challenging kinetic problems.

  18. Utilization of flax fibers for biomedical applications.

    PubMed

    Michel, Sophie A A X; Vogels, Ruben R M; Bouvy, Nicole D; Knetsch, Menno L W; van den Akker, Nynke M S; Gijbels, Marion J J; van der Marel, Cees; Vermeersch, Jan; Molin, Daniel G M; Koole, Leo H

    2014-04-01

    Over the past decades, a large number of animal-derived materials have been introduced for several biomedical applications. Surprisingly, the use of plant-based materials has lagged behind. To study the feasibility of plant-derived biomedical materials, we chose flax (Linum usitatissimum). Flax fibers possess excellent physical-mechanical properties, are nonbiodegradable, and there is extensive know-how on weaving/knitting of them. One area where they could be useful is as implantable mesh structures in surgery, in particular for the repair of incisional hernias of the abdominal wall. Starting with a bleached flax thread, a prototype mesh was specifically knitted for this study, and its cytocompatibility was studied in vitro and in vivo. The experimental data revealed that application of flax in surgery first requires a robust method to remove endotoxins and purify the flax fiber. Such a method was developed, and purified meshes did not cause loss of cell viability in vitro. In addition, endotoxins determined using limulus amebocyte lysate test were at acceptable levels. In vivo, the flax meshes showed only mild inflammation, comparable to commercial polypropylene meshes. This study revealed that plant-derived biomaterials can provide a new class of implantable materials that could be used as surgical meshes or for other biomedical applications. Copyright © 2013 Wiley Periodicals, Inc.

  19. Non-classical State via Superposition of Two Opposite Coherent States

    NASA Astrophysics Data System (ADS)

    Ren, Gang; Du, Jian-ming; Yu, Hai-jun

    2018-04-01

    We study the non-classical properties of states generated by superpositions of two opposite coherent states with arbitrary relative phase factors. We show that the relative phase factors play an important role in these superpositions. We demonstrate this result by discussing their squeezing properties, quantum statistical properties and fidelity in principle.
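
    The state family in question has a standard closed form, shown below with its normalization; this is a textbook expression, not quoted from the paper.

      % Superposition of two opposite coherent states with relative phase \varphi,
      % normalized using the overlap <alpha|-alpha> = exp(-2|alpha|^2):
      |\psi(\varphi)\rangle \,=\, N_{\varphi}\bigl(\,|\alpha\rangle + e^{i\varphi}\,|{-\alpha}\rangle\,\bigr),
      \qquad
      N_{\varphi} \,=\, \bigl[\,2 + 2\cos\varphi\, e^{-2|\alpha|^{2}}\,\bigr]^{-1/2}.
      % \varphi = 0 and \varphi = \pi give the even and odd ("cat") superpositions,
      % whose photon statistics and squeezing behaviour differ markedly.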

  20. Ultrafast creation of large Schrödinger cat states of an atom.

    PubMed

    Johnson, K G; Wong-Campos, J D; Neyenhuis, B; Mizrahi, J; Monroe, C

    2017-09-26

    Mesoscopic quantum superpositions, or Schrödinger cat states, are widely studied for fundamental investigations of quantum measurement and decoherence as well as applications in sensing and quantum information science. The generation and maintenance of such states relies upon a balance between efficient external coherent control of the system and sufficient isolation from the environment. Here we create a variety of cat states of a single trapped atom's motion in a harmonic oscillator using ultrafast laser pulses. These pulses produce high fidelity impulsive forces that separate the atom into widely separated positions, without restrictions that typically limit the speed of the interaction or the size and complexity of the resulting motional superposition. This allows us to quickly generate and measure cat states larger than previously achieved in a harmonic oscillator, and create complex multi-component superposition states in atoms. Generation of mesoscopic quantum superpositions requires both reliable coherent control and isolation from the environment. Here, the authors succeed in creating a variety of cat states of a single trapped atom, mapping spin superpositions into spatial superpositions using ultrafast laser pulses.

  1. Array-based, parallel hierarchical mesh refinement algorithms for unstructured meshes

    DOE PAGES

    Ray, Navamita; Grindeanu, Iulian; Zhao, Xinglin; ...

    2016-08-18

    In this paper, we describe an array-based hierarchical mesh refinement capability through uniform refinement of unstructured meshes for efficient solution of PDEs using finite element methods and multigrid solvers. A multi-degree, multi-dimensional and multi-level framework is designed to generate the nested hierarchies from an initial coarse mesh that can be used for a variety of purposes such as in multigrid solvers/preconditioners, to do solution convergence and verification studies and to improve overall parallel efficiency by decreasing I/O bandwidth requirements (by loading smaller meshes and in memory refinement). We also describe a high-order boundary reconstruction capability that can be used to project the new points after refinement using high-order approximations instead of linear projection in order to minimize and provide more control on geometrical errors introduced by curved boundaries. The capability is developed under the parallel unstructured mesh framework "Mesh Oriented dAtaBase" (MOAB, Tautges et al. (2004)). We describe the underlying data structures and algorithms to generate such hierarchies in parallel and present numerical results for computational efficiency and effect on mesh quality. Furthermore, we also present results to demonstrate the applicability of the developed capability to study convergence properties of different point projection schemes for various mesh hierarchies and to a multigrid finite-element solver for elliptic problems.
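
    A toy illustration of the uniform refinement step at the heart of such a hierarchy is sketched below (each triangle split into four through its edge midpoints); it is not the MOAB implementation and uses simple Python containers rather than array-based storage.

      import numpy as np

      def refine_uniform(vertices, triangles):
          """vertices: (n, 2) array; triangles: list of 3-tuples of vertex indices."""
          vertices = [tuple(v) for v in vertices]
          midpoint_cache = {}            # edge (i, j), i < j  ->  new vertex index

          def midpoint(i, j):
              key = (min(i, j), max(i, j))
              if key not in midpoint_cache:
                  vertices.append(tuple((np.array(vertices[i]) + np.array(vertices[j])) / 2.0))
                  midpoint_cache[key] = len(vertices) - 1
              return midpoint_cache[key]

          fine = []
          for a, b, c in triangles:
              ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
              fine += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
          return np.array(vertices), fine

      verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
      tris = [(0, 1, 2)]
      for level in range(3):                       # build a 3-level nested hierarchy
          verts, tris = refine_uniform(verts, tris)
      print(len(tris), "triangles on the finest level")   # 4**3 = 64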

  2. Effects of Titanium Mesh Surfaces-Coated with Hydroxyapatite/β-Tricalcium Phosphate Nanotubes on Acetabular Bone Defects in Rabbits

    PubMed Central

    Nguyen, Thuy-Duong Thi; Bae, Tae-Sung; Yang, Dae-hyeok; Park, Myung-sik; Yoon, Sun-jung

    2017-01-01

    The management of severe acetabular bone defects in revision reconstructive orthopedic surgery is challenging. In this study, cyclic precalcification (CP) treatment was used on both nanotube-surface Ti-mesh and a bone graft substitute for the acetabular defect model, and its effects were assessed in vitro and in vivo. Nanotube-Ti mesh coated with hydroxyapatite/β-tricalcium phosphate (HA/β-TCP) was manufactured by an anodizing and a sintering method, respectively. An 8 mm diameter defect was created on each acetabulum of eight rabbits, then treated by grafting materials and covered by Ti meshes. At four and eight weeks, postoperatively, biopsies were performed for histomorphometric analyses. The newly-formed bone layers under cyclic precalcified anodized Ti (CP-AT) meshes were superior with regard to the mineralized area at both four and eight weeks, as compared with that under untreated Ti meshes. Active bone regeneration at 2–4 weeks was stronger than at 6–8 weeks, particularly with treated biphasic ceramic (p < 0.05). CP improved the bioactivity of Ti meshes and biphasic grafting materials. Moreover, the precalcified nanotubular Ti meshes could enhance early contact bone formation on the mesh and, therefore, may reduce the collapse of Ti meshes into the defect, increasing the sufficiency of acetabular reconstruction. Finally, cyclic precalcification did not affect bone regeneration by biphasic grafting materials in vivo. PMID:28686210

  3. Unstructured mesh adaptivity for urban flooding modelling

    NASA Astrophysics Data System (ADS)

    Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.

    2018-05-01

    Over the past few decades, urban floods have been gaining more attention due to their increasing frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain imposes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, thus providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and the wetting and drying front while reducing the computational cost. Complex topographic features are represented accurately during the flooding process; for example, high-resolution meshes are placed around buildings and steep regions as the flood water reaches them. In this work, a flooding event that occurred in 2002 in Glasgow, Scotland, United Kingdom has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with previously published 2D and 3D results. The comparison shows that the 2D adaptive mesh model provides accurate results at a low computational cost.

  4. The optimization of high resolution topographic data for 1D hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Ales, Ronovsky; Michal, Podhoranyi

    2016-06-01

    The main focus of the research presented in this paper is to optimize and use high resolution topographical data (HRTD) for hydrological modelling. HRTD is optimized by generating an adaptive mesh: the distance between a coarse mesh and the surface of the dataset is measured, and the mesh is adapted so that the geometry stays as close to the initial resolution as possible. The technique described in this paper enables the computation of very accurate 1-D hydrodynamic models. In the paper, we use the HEC-RAS software as a solver. For comparison, we have chosen the number of generated cells/grid elements (in the whole discretization domain and in selected cross sections) with respect to preservation of the accuracy of the computational domain. Generation of the mesh for hydrodynamic modelling is strongly dependent on domain size and domain resolution. The topographical dataset used in this paper was created using the LiDAR method and captures a 5.9 km long section of a catchment of the river Olše. We studied crucial changes in topography for the generated mesh. Assessment was done by commonly used statistical and visualization methods.

  5. Local meshing plane analysis as a source of information about the gear quality

    NASA Astrophysics Data System (ADS)

    Mączak, Jędrzej

    2013-07-01

    In this paper the local meshing plane concept is discussed and applied to detecting tooth degradation due to fatigue and to overall gear quality assessment. Knowing the kinematic properties of the machine (i.e. the gear tooth numbers), it is possible to modify the diagnostic signal in such a manner that its fragments are linked to different rotating parts. This allows either the raw or the processed gearbox signal to be presented in the form of a three-dimensional map on the plane "pinion teeth × gear teeth", called the local meshing plane. The meshing plane in Cartesian coordinates z1×z2 allows precise location and assessment of gear faults in terms of the meshing quality of consecutive tooth pairs. Although the method was applied to simulated signals generated by a gearbox model, similar results were obtained for measurement signals recorded during a back-to-back test stand experiment. The described method could be used for assessing the manufacturing quality of gears and the assembly quality, as well as for gear failure evaluation during normal operation.

  6. An improved time-varying mesh stiffness model for helical gear pairs considering axial mesh force component

    NASA Astrophysics Data System (ADS)

    Wang, Qibin; Zhao, Bo; Fu, Yang; Kong, Xianguang; Ma, Hui

    2018-06-01

    An improved time-varying mesh stiffness (TVMS) model of a helical gear pair is proposed, in which the total mesh stiffness contains not only the common transverse tooth bending stiffness, transverse tooth shear stiffness, transverse tooth radial compressive stiffness, transverse gear foundation stiffness and Hertzian contact stiffness, but also the axial tooth bending stiffness, axial tooth torsional stiffness and axial gear foundation stiffness proposed in this paper. In addition, a rapid TVMS calculation method is proposed. Considering each stiffness component, the TVMS can be calculated by the integration along the tooth width direction. Then, three cases are applied to validate the developed model. The results demonstrate that the proposed analytical method is accurate, effective and efficient for helical gear pairs and the axial mesh stiffness should be taken into consideration in the TVMS of a helical gear pair. Finally, influences of the helix angle on TVMS are studied. The results show that the improved TVMS model is effective for any helix angle and the traditional TVMS model is only effective under a small helix angle.
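
    The slice-and-integrate idea can be illustrated with a short sketch: the face width is cut into thin spur-like slices, each slice combines its component stiffnesses in series, and the slices act in parallel along the contact line. The per-unit-width stiffness expressions, helix angle and dimensions below are placeholders, not the analytical formulas of the paper.

      import numpy as np

      def slice_stiffness_per_width(roll_angle):
          """Series combination of the component stiffnesses of one thin slice
          (per unit face width); the numbers are placeholders, not gear formulas."""
          k_bending    = 4.0e8 * (1.0 + 0.10 * np.cos(roll_angle))
          k_shear      = 6.0e8 * (1.0 + 0.05 * np.cos(roll_angle))
          k_axial      = 9.0e8             # axial bending/torsional terms lumped here
          k_foundation = 8.0e8
          k_hertz      = 1.2e9
          return 1.0 / (1.0 / k_bending + 1.0 / k_shear + 1.0 / k_axial
                        + 1.0 / k_foundation + 1.0 / k_hertz)

      def helical_mesh_stiffness(mesh_phase, face_width=0.05, beta=np.radians(15.0),
                                 base_radius=0.04, n_slices=200):
          """Total stiffness of one tooth pair at a given mesh phase [rad]."""
          z = np.linspace(0.0, face_width, n_slices)
          dz = z[1] - z[0]
          # Because of the helix, each slice meets the contact line at a shifted phase.
          phase_shift = z * np.tan(beta) / base_radius
          k_per_width = slice_stiffness_per_width(mesh_phase + phase_shift)
          return np.sum(k_per_width) * dz   # slices act in parallel across the width

      phases = np.linspace(0.0, 2.0 * np.pi, 9)
      k_total = np.array([helical_mesh_stiffness(p) for p in phases])
      print(np.round(k_total / 1.0e6, 2))   # MN/m, placeholder magnitudes only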

  7. An embedded mesh method using piecewise constant multipliers with stabilization: mathematical and numerical aspects

    DOE PAGES

    Puso, M. A.; Kokko, E.; Settgast, R.; ...

    2014-10-22

    An embedded mesh method using piecewise constant multipliers originally proposed by Puso et al. (CMAME, 2012) is analyzed here to determine effects of the pressure stabilization term and small cut cells. The approach is implemented for transient dynamics using the central difference scheme for the time discretization. It is shown that the resulting equations of motion are a stable linear system with a condition number independent of mesh size. Furthermore, we show that the constraints and the stabilization terms can be recast as non-proportional damping such that the time integration of the scheme is provably stable with a critical time step computed from the undamped equations of motion. Effects of small cuts are discussed throughout the presentation. A mesh study is conducted to evaluate the effects of the stabilization on the discretization error and conditioning and is used to recommend an optimal value for stabilization scaling parameter. Several nonlinear problems are also analyzed and compared with comparable conforming mesh results. Finally, we show several demanding problems highlighting the robustness of the proposed approach.

  8. The optimization of high resolution topographic data for 1D hydrodynamic models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ales, Ronovsky, E-mail: ales.ronovsky@vsb.cz; Michal, Podhoranyi

    2016-06-08

    The main focus of the research presented in this paper is to optimize and use high resolution topographical data (HRTD) for hydrological modelling. HRTD is optimized by generating an adaptive mesh: the distance between a coarse mesh and the surface of the dataset is measured, and the mesh is adapted so that the geometry stays as close to the initial resolution as possible. The technique described in this paper enables the computation of very accurate 1-D hydrodynamic models. In the paper, we use the HEC-RAS software as a solver. For comparison, we have chosen the number of generated cells/grid elements (in the whole discretization domain and in selected cross sections) with respect to preservation of the accuracy of the computational domain. Generation of the mesh for hydrodynamic modelling is strongly dependent on domain size and domain resolution. The topographical dataset used in this paper was created using the LiDAR method and captures a 5.9 km long section of a catchment of the river Olše. We studied crucial changes in topography for the generated mesh. Assessment was done by commonly used statistical and visualization methods.

  9. A cute and highly contrast-sensitive superposition eye - the diurnal owlfly Libelloides macaronius.

    PubMed

    Belušič, Gregor; Pirih, Primož; Stavenga, Doekele G

    2013-06-01

    The owlfly Libelloides macaronius (Insecta: Neuroptera) has large bipartite eyes of the superposition type. The spatial resolution and sensitivity of the photoreceptor array in the dorsofrontal eye part was studied with optical and electrophysiological methods. Using structured illumination microscopy, the interommatidial angle in the central part of the dorsofrontal eye was determined to be Δϕ=1.1 deg. Eye shine measurements with an epi-illumination microscope yielded an effective superposition pupil size of about 300 facets. Intracellular recordings confirmed that all photoreceptors were UV-receptors (λmax=350 nm). The average photoreceptor acceptance angle was 1.8 deg, with a minimum of 1.4 deg. The receptor dynamic range was two log units, and the Hill coefficient of the intensity-response function was n=1.2. The signal-to-noise ratio of the receptor potential was remarkably high and constant across the whole dynamic range (root mean square r.m.s. noise=0.5% Vmax). Quantum bumps could not be observed at any light intensity, indicating low voltage gain. Presumably, the combination of large aperture superposition optics feeding an achromatic array of relatively insensitive receptors with a steep intensity-response function creates a low-noise, high spatial acuity instrument. The sensitivity shift to the UV range reduces the clutter created by clouds within the sky image. These properties of the visual system are optimal for detecting small insect prey as contrasting spots against both clear and cloudy skies.

  10. MultiSETTER: web server for multiple RNA structure comparison.

    PubMed

    Čech, Petr; Hoksza, David; Svozil, Daniel

    2015-08-12

    Understanding the architecture and function of RNA molecules requires methods for comparing and analyzing their tertiary and quaternary structures. While structural superposition of short RNAs is achievable in a reasonable time, large structures represent a much bigger challenge. Therefore, we have developed a fast and accurate algorithm for RNA pairwise structure superposition called SETTER and implemented it in the SETTER web server. However, though biological relationships can be inferred by a pairwise structure alignment, key features preserved by evolution can be identified only from a multiple structure alignment. Thus, we extended the SETTER algorithm to the alignment of multiple RNA structures and developed the MultiSETTER algorithm. In this paper, we present the updated version of the SETTER web server that implements a user-friendly interface to the MultiSETTER algorithm. The server accepts RNA structures either as a list of PDB IDs or as user-defined PDB files. After the superposition is computed, structures are visualized in 3D and several reports and statistics are generated. To the best of our knowledge, the MultiSETTER web server is the first publicly available tool for multiple RNA structure alignment. The MultiSETTER server offers visual inspection of an alignment in 3D space, which may reveal structural and functional relationships not captured by other multiple alignment methods based either on a sequence or on secondary structure motifs.
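
    For readers unfamiliar with the term, the rigid-body superposition underlying such comparisons can be sketched with the generic Kabsch algorithm below; SETTER and MultiSETTER use their own RNA-specific procedure, so this is only a conceptual illustration with mock coordinates.

      import numpy as np

      def kabsch_superpose(P, Q):
          """Return the RMSD after optimally rotating/translating P (n,3) onto Q (n,3)."""
          Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
          H = Pc.T @ Qc                             # covariance of centred coordinates
          U, S, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))    # avoid an improper rotation
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          P_fit = Pc @ R.T
          return np.sqrt(np.mean(np.sum((P_fit - Qc) ** 2, axis=1)))

      rng = np.random.default_rng(1)
      A = rng.standard_normal((50, 3))              # 50 corresponding atoms (mock data)
      angle = np.radians(30.0)
      rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                      [np.sin(angle),  np.cos(angle), 0.0],
                      [0.0, 0.0, 1.0]])
      B = A @ rot.T + np.array([1.0, -2.0, 0.5]) + 0.05 * rng.standard_normal((50, 3))
      print("RMSD after superposition:", round(kabsch_superpose(A, B), 3))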

  11. Variations in Medical Subject Headings (MeSH) mapping: from the natural language of patron terms to the controlled vocabulary of mapped lists*

    PubMed Central

    Gault, Lora V.; Shultz, Mary; Davies, Kathy J.

    2002-01-01

    Objectives: This study compared the mapping of natural language patron terms to the Medical Subject Headings (MeSH) across six MeSH interfaces for the MEDLINE database. Methods: Test data were obtained from search requests submitted by patrons to the Library of the Health Sciences, University of Illinois at Chicago, over a nine-month period. Search request statements were parsed into separate terms or phrases. Using print sources from the National Library of Medicine, each parsed patron term was assigned corresponding MeSH terms. Each patron term was entered into each of the selected interfaces to determine how effectively they mapped to MeSH. Data were collected for mapping success, accessibility of the MeSH term within the mapped list, and the total number of MeSH choices within each list. Results: The selected MEDLINE interfaces do not map the same patron term in the same way, nor do they consistently lead to what is considered the appropriate MeSH term. Conclusions: If searchers utilize the MEDLINE database to its fullest potential by mapping to MeSH, the results of the mapping will vary between interfaces. This variance may ultimately impact the search results. These differences should be considered when choosing a MEDLINE interface and when instructing end users.

  12. A matrix dependent/algebraic multigrid approach for extruded meshes with applications to ice sheet modeling

    DOE PAGES

    Tuminaro, Raymond S.; Perego, Mauro; Tezaur, Irina Kalashnikova; ...

    2016-10-06

    A multigrid method is proposed that combines ideas from matrix dependent multigrid for structured grids and algebraic multigrid for unstructured grids. It targets problems where a three-dimensional mesh can be viewed as an extrusion of a two-dimensional, unstructured mesh in a third dimension. Our motivation comes from the modeling of thin structures via finite elements and, more specifically, the modeling of ice sheets. Extruded meshes are relatively common for thin structures and often give rise to anisotropic problems when the thin direction mesh spacing is much smaller than the broad direction mesh spacing. Within our approach, the first few multigrid hierarchy levels are obtained by applying matrix dependent multigrid to semicoarsen in a structured thin direction fashion. After sufficient structured coarsening, the resulting mesh contains only a single layer corresponding to a two-dimensional, unstructured mesh. Algebraic multigrid can then be employed in a standard manner to create further coarse levels, as the anisotropic phenomenon is no longer present in the single layer problem. The overall approach remains fully algebraic, with the minor exception that some additional information is needed to determine the extruded direction. Furthermore, this facilitates integration of the solver with a variety of different extruded mesh applications.

  13. Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh

    NASA Astrophysics Data System (ADS)

    Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru

    2017-11-01

    We developed a framework for a distributed-memory parallel computer that enables dynamic data management for adaptive mesh refinement and load balancing. We employed the simple data structure of the building cube method (BCM), where a computational domain is divided into multi-level cubic domains and each cube has the same number of grid points inside, realising a multi-level block-structured Cartesian mesh. Solution adaptive mesh refinement, which works efficiently with the help of the dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing and the parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement where a non-dimensional wall distance y+ near the sphere is used as the criterion for mesh refinement. The results showed that the load imbalance due to y+ adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested cube-wise algorithm switching, where explicit and implicit time integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.

  14. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  15. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann; Usab, William J., Jr.

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  16. A computational method for the coupled solution of reaction-diffusion equations on evolving domains and manifolds: Application to a model of cell migration and chemotaxis.

    PubMed

    MacDonald, G; Mackenzie, J A; Nolan, M; Insall, R H

    2016-03-15

    In this paper, we devise a moving mesh finite element method for the approximate solution of coupled bulk-surface reaction-diffusion equations on an evolving two dimensional domain. Fundamental to the success of the method is the robust generation of bulk and surface meshes. For this purpose, we use a novel moving mesh partial differential equation (MMPDE) approach. The developed method is applied to model problems with known analytical solutions; these experiments indicate second-order spatial and temporal accuracy. Coupled bulk-surface problems occur frequently in many areas; in particular, in the modelling of eukaryotic cell migration and chemotaxis. We apply the method to a model of the two-way interaction of a migrating cell in a chemotactic field, where the bulk region corresponds to the extracellular region and the surface to the cell membrane.

  17. Communication: A novel implementation to compute MP2 correlation energies without basis set superposition errors and complete basis set extrapolation.

    PubMed

    Dixit, Anant; Claudot, Julien; Lebègue, Sébastien; Rocca, Dario

    2017-06-07

    By using a formulation based on the dynamical polarizability, we propose a novel implementation of second-order Møller-Plesset perturbation (MP2) theory within a plane wave (PW) basis set. Because of the intrinsic properties of PWs, this method is not affected by basis set superposition errors. Additionally, results are converged without relying on complete basis set extrapolation techniques; this is achieved by using the eigenvectors of the static polarizability as an auxiliary basis set to compactly and accurately represent the response functions involved in the MP2 equations. Summations over the large number of virtual states are avoided by using a formalism inspired by density functional perturbation theory, and the Lanczos algorithm is used to include dynamical effects. To demonstrate this method, applications to three weakly interacting dimers are presented.
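
    For context, the correction that atom-centred basis sets normally require (and that the plane wave formulation avoids by construction) is the standard Boys-Bernardi counterpoise scheme, sketched below; this is a textbook expression, not taken from the paper.

      % Boys-Bernardi counterpoise estimate of the interaction energy of a dimer AB:
      % subscripts denote the system, superscripts the basis set used (the full
      % dimer basis in every term), and the argument the geometry of the fragment.
      E_{\mathrm{int}}^{\mathrm{CP}}
        \,=\, E_{AB}^{AB}(AB) \;-\; E_{A}^{AB}(A) \;-\; E_{B}^{AB}(B).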

  18. Long-distance measurement-device-independent quantum key distribution with coherent-state superpositions.

    PubMed

    Yin, H-L; Cao, W-F; Fu, Y; Tang, Y-L; Liu, Y; Chen, T-Y; Chen, Z-B

    2014-09-15

    Measurement-device-independent quantum key distribution (MDI-QKD) with the decoy-state method is believed to be secure against various hacking attacks in practical quantum key distribution systems. Recently, coherent-state superpositions (CSS) have emerged as an alternative to single-photon qubits for quantum information processing and metrology. In this Letter, CSS are exploited as the source in MDI-QKD. We present an analytical method that gives two tight formulas to estimate the lower bound of the yield and the upper bound of the bit error rate. We exploit standard statistical analysis and the Chernoff bound to perform the parameter estimation; the Chernoff bound provides good bounds in long-distance MDI-QKD. Our results show that with CSS, both the secure transmission distance and the secure key rate are significantly improved compared with those of weak coherent states in the finite-data case.

  19. Practical purification scheme for decohered coherent-state superpositions via partial homodyne detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Shigenari; Department of Electronics and Electrical Engineering, Keio University, 3-14-1, Hiyoshi, Kohoku-ku, Yokohama, 223-8522; Takeoka, Masahiro

    2006-04-15

    We present a simple protocol to purify a coherent-state superposition that has undergone a linear lossy channel. The scheme consists of only a single beam splitter and a homodyne detector, and is thus experimentally feasible. In practice, a superposition of coherent states is transformed into a classical mixture of coherent states by linear loss, which is usually the dominant decoherence mechanism in optical systems. We also address the possibility of producing a larger-amplitude superposition state from decohered states, and show that in most cases the decoherence of the states is amplified along with the amplitude.

  20. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, Thomas E.; Franke, O. Lehn; Bennett, Gordon D.

    1987-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that problem solutions can be added together to obtain composite solutions. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to ground-water hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader.
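
    A compact statement of the principle for a linear ground-water system is sketched below as a reminder; this is a standard textbook form, not quoted from the report.

      % If L is the linear operator of the governing ground-water flow equation and
      % h_1, h_2 are solutions for two separate stresses W_1, W_2 (e.g. two wells),
      L[h_1] = W_1, \qquad L[h_2] = W_2 ,
      % then any linear combination of the solutions satisfies the combined problem,
      L[c_1 h_1 + c_2 h_2] \,=\, c_1 W_1 + c_2 W_2 ,
      % so drawdowns computed for each stress separately may simply be added,
      % provided the boundary conditions are treated consistently.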

  1. The principle of superposition and its application in ground-water hydraulics

    USGS Publications Warehouse

    Reilly, T.E.; Franke, O.L.; Bennett, G.D.

    1984-01-01

    The principle of superposition, a powerful mathematical technique for analyzing certain types of complex problems in many areas of science and technology, has important applications in ground-water hydraulics and modeling of ground-water systems. The principle of superposition states that solutions to individual problems can be added together to obtain solutions to complex problems. This principle applies to linear systems governed by linear differential equations. This report introduces the principle of superposition as it applies to groundwater hydrology and provides background information, discussion, illustrative problems with solutions, and problems to be solved by the reader. (USGS)

  2. Implicit mesh discontinuous Galerkin methods and interfacial gauge methods for high-order accurate interface dynamics, with applications to surface tension dynamics, rigid body fluid-structure interaction, and free surface flow: Part I

    NASA Astrophysics Data System (ADS)

    Saye, Robert

    2017-09-01

    In this two-part paper, a high-order accurate implicit mesh discontinuous Galerkin (dG) framework is developed for fluid interface dynamics, facilitating precise computation of interfacial fluid flow in evolving geometries. The framework uses implicitly defined meshes-wherein a reference quadtree or octree grid is combined with an implicit representation of evolving interfaces and moving domain boundaries-and allows physically prescribed interfacial jump conditions to be imposed or captured with high-order accuracy. Part one discusses the design of the framework, including: (i) high-order quadrature for implicitly defined elements and faces; (ii) high-order accurate discretisation of scalar and vector-valued elliptic partial differential equations with interfacial jumps in ellipticity coefficient, leading to optimal-order accuracy in the maximum norm and discrete linear systems that are symmetric positive (semi)definite; (iii) the design of incompressible fluid flow projection operators, which except for the influence of small penalty parameters, are discretely idempotent; and (iv) the design of geometric multigrid methods for elliptic interface problems on implicitly defined meshes and their use as preconditioners for the conjugate gradient method. Also discussed is a variety of aspects relating to moving interfaces, including: (v) dG discretisations of the level set method on implicitly defined meshes; (vi) transferring state between evolving implicit meshes; (vii) preserving mesh topology to accurately compute temporal derivatives; (viii) high-order accurate reinitialisation of level set functions; and (ix) the integration of adaptive mesh refinement. In part two, several applications of the implicit mesh dG framework in two and three dimensions are presented, including examples of single phase flow in nontrivial geometry, surface tension-driven two phase flow with phase-dependent fluid density and viscosity, rigid body fluid-structure interaction, and free surface flow. A class of techniques known as interfacial gauge methods is adopted to solve the corresponding incompressible Navier-Stokes equations, which, compared to archetypical projection methods, have a weaker coupling between fluid velocity, pressure, and interface position, and allow high-order accurate numerical methods to be developed more easily. Convergence analyses conducted throughout the work demonstrate high-order accuracy in the maximum norm for all of the applications considered; for example, fourth-order spatial accuracy in fluid velocity, pressure, and interface location is demonstrated for surface tension-driven two phase flow in 2D and 3D. Specific application examples include: vortex shedding in nontrivial geometry, capillary wave dynamics revealing fine-scale flow features, falling rigid bodies tumbling in unsteady flow, and free surface flow over a submersed obstacle, as well as high Reynolds number soap bubble oscillation dynamics and vortex shedding induced by a type of Plateau-Rayleigh instability in water ripple free surface flow. These last two examples compare numerical results with experimental data and serve as an additional means of validation; they also reveal physical phenomena not visible in the experiments, highlight how small-scale interfacial features develop and affect macroscopic dynamics, and demonstrate the wide range of spatial scales often at play in interfacial fluid flow.

  3. Implicit mesh discontinuous Galerkin methods and interfacial gauge methods for high-order accurate interface dynamics, with applications to surface tension dynamics, rigid body fluid-structure interaction, and free surface flow: Part II

    NASA Astrophysics Data System (ADS)

    Saye, Robert

    2017-09-01

    In this two-part paper, a high-order accurate implicit mesh discontinuous Galerkin (dG) framework is developed for fluid interface dynamics, facilitating precise computation of interfacial fluid flow in evolving geometries. The framework uses implicitly defined meshes-wherein a reference quadtree or octree grid is combined with an implicit representation of evolving interfaces and moving domain boundaries-and allows physically prescribed interfacial jump conditions to be imposed or captured with high-order accuracy. Part one discusses the design of the framework, including: (i) high-order quadrature for implicitly defined elements and faces; (ii) high-order accurate discretisation of scalar and vector-valued elliptic partial differential equations with interfacial jumps in ellipticity coefficient, leading to optimal-order accuracy in the maximum norm and discrete linear systems that are symmetric positive (semi)definite; (iii) the design of incompressible fluid flow projection operators, which except for the influence of small penalty parameters, are discretely idempotent; and (iv) the design of geometric multigrid methods for elliptic interface problems on implicitly defined meshes and their use as preconditioners for the conjugate gradient method. Also discussed is a variety of aspects relating to moving interfaces, including: (v) dG discretisations of the level set method on implicitly defined meshes; (vi) transferring state between evolving implicit meshes; (vii) preserving mesh topology to accurately compute temporal derivatives; (viii) high-order accurate reinitialisation of level set functions; and (ix) the integration of adaptive mesh refinement. In part two, several applications of the implicit mesh dG framework in two and three dimensions are presented, including examples of single phase flow in nontrivial geometry, surface tension-driven two phase flow with phase-dependent fluid density and viscosity, rigid body fluid-structure interaction, and free surface flow. A class of techniques known as interfacial gauge methods is adopted to solve the corresponding incompressible Navier-Stokes equations, which, compared to archetypical projection methods, have a weaker coupling between fluid velocity, pressure, and interface position, and allow high-order accurate numerical methods to be developed more easily. Convergence analyses conducted throughout the work demonstrate high-order accuracy in the maximum norm for all of the applications considered; for example, fourth-order spatial accuracy in fluid velocity, pressure, and interface location is demonstrated for surface tension-driven two phase flow in 2D and 3D. Specific application examples include: vortex shedding in nontrivial geometry, capillary wave dynamics revealing fine-scale flow features, falling rigid bodies tumbling in unsteady flow, and free surface flow over a submersed obstacle, as well as high Reynolds number soap bubble oscillation dynamics and vortex shedding induced by a type of Plateau-Rayleigh instability in water ripple free surface flow. These last two examples compare numerical results with experimental data and serve as an additional means of validation; they also reveal physical phenomena not visible in the experiments, highlight how small-scale interfacial features develop and affect macroscopic dynamics, and demonstrate the wide range of spatial scales often at play in interfacial fluid flow.

  4. Combination of ray-tracing and the method of moments for electromagnetic radiation analysis using reduced meshes

    NASA Astrophysics Data System (ADS)

    Delgado, Carlos; Cátedra, Manuel Felipe

    2018-05-01

    This work presents a technique that allows a very noticeable relaxation of the computational requirements for full-wave electromagnetic simulations based on the Method of Moments. A ray-tracing analysis of the geometry is performed in order to extract the critical points with significant contributions. These points are then used to generate a reduced mesh, considering the regions of the geometry that surround each critical point and taking into account the electrical path followed from the source. The electromagnetic analysis of the reduced mesh produces very accurate results, requiring a fraction of the resources that the conventional analysis would utilize.
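
    A toy sketch of the reduced-mesh idea is given below: only the cells near the ray-tracing critical points are kept for the full-wave solve. The radius-based selection, the data layout and the numbers are assumptions for illustration, not the authors' criterion.

      import numpy as np

      def reduced_mesh(cell_centres, critical_points, capture_radius):
          """Return the indices of mesh cells retained for the full-wave (MoM) analysis."""
          keep = np.zeros(len(cell_centres), dtype=bool)
          for p in critical_points:
              dist = np.linalg.norm(cell_centres - p, axis=1)
              keep |= dist <= capture_radius
          return np.flatnonzero(keep)

      rng = np.random.default_rng(0)
      centres = rng.uniform(0.0, 10.0, size=(20000, 3))     # mock surface mesh cells
      criticals = np.array([[2.0, 2.0, 0.5],                # e.g. a specular point
                            [7.5, 1.0, 3.0]])               # e.g. a diffracting edge
      kept = reduced_mesh(centres, criticals, capture_radius=1.0)
      print(f"kept {kept.size} of {centres.shape[0]} cells for the MoM solve")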

  5. Unstructured Adaptive Meshes: Bad for Your Memory?

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Feng, Hui-Yu; VanderWijngaart, Rob

    2003-01-01

    This viewgraph presentation explores the need for a NASA Advanced Supercomputing (NAS) parallel benchmark for problems with irregular dynamical memory access. This benchmark is important and necessary because: 1) Problems with localized error source benefit from adaptive nonuniform meshes; 2) Certain machines perform poorly on such problems; 3) Parallel implementation may provide further performance improvement but is difficult. Some examples of problems which use irregular dynamical memory access include: 1) Heat transfer problem; 2) Heat source term; 3) Spectral element method; 4) Base functions; 5) Elemental discrete equations; 6) Global discrete equations. Nonconforming Mesh and Mortar Element Method are covered in greater detail in this presentation.

  6. A multi-block adaptive solving technique based on lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Xie, Jiahua; Li, Xiaoyue; Ma, Zhenghai; Zou, Jianfeng; Zheng, Yao

    2018-05-01

    In this paper, a parallel adaptive CFD algorithm is developed in-house by combining the multi-block Lattice Boltzmann Method (LBM) with Adaptive Mesh Refinement (AMR). The mesh refinement criterion of the algorithm is based on the density, velocity and vorticity of the flow field. The refined grid boundary is obtained by extending outward half a ghost cell from the coarse grid boundary, which makes the adaptive mesh more compact and the boundary treatment more convenient. Two numerical examples, flow separation behind a backward-facing step and unsteady flow around a circular cylinder, show that the method captures the vortex structures of the cold flow field accurately.
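
    A minimal sketch of such a refinement criterion is given below: cells with a steep density gradient or strong vorticity are flagged so the enclosing blocks can be refined. The synthetic fields, thresholds and finite differences are illustrative assumptions, not the paper's implementation.

      import numpy as np

      def refinement_flags(rho, ux, uy, dx, grad_tol=0.05, vort_tol=0.5):
          """Return a boolean array marking cells that should be refined."""
          drho = np.gradient(rho, dx)                  # gradients along both axes
          grad_rho = np.hypot(drho[0], drho[1])
          dux_dy = np.gradient(ux, dx, axis=0)         # axis 0 is the y direction here
          duy_dx = np.gradient(uy, dx, axis=1)         # axis 1 is the x direction here
          vorticity = np.abs(duy_dx - dux_dy)
          return (grad_rho > grad_tol) | (vorticity > vort_tol)

      # Synthetic flow field: a smooth density bump and a single vortex.
      n, dx = 128, 1.0 / 128
      y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
      rho = 1.0 + 0.1 * np.exp(-((x - 0.3) ** 2 + (y - 0.3) ** 2) / 0.005)
      ux = -(y - 0.7) * np.exp(-((x - 0.7) ** 2 + (y - 0.7) ** 2) / 0.01)
      uy = (x - 0.7) * np.exp(-((x - 0.7) ** 2 + (y - 0.7) ** 2) / 0.01)
      flags = refinement_flags(rho, ux, uy, dx)
      print(f"{flags.sum()} of {flags.size} cells flagged for refinement")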

  7. Video Vectorization via Tetrahedral Remeshing.

    PubMed

    Wang, Chuan; Zhu, Jie; Guo, Yanwen; Wang, Wenping

    2017-02-09

    We present a video vectorization method that generates a video in vector representation from an input video in raster representation. A vector-based video representation offers the benefits of vector graphics, such as compactness and scalability. The vector video we generate is represented by a simplified tetrahedral control mesh over the spatial-temporal video volume, with color attributes defined at the mesh vertices. We present novel techniques for simplification and subdivision of a tetrahedral mesh to achieve high simplification ratio while preserving features and ensuring color fidelity. From an input raster video, our method is capable of generating a compact video in vector representation that allows a faithful reconstruction with low reconstruction errors.

  8. The Space-Time Conservative Schemes for Large-Scale, Time-Accurate Flow Simulations with Tetrahedral Meshes

    NASA Technical Reports Server (NTRS)

    Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung

    2016-01-01

    Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.

  9. A mortar formulation including viscoelastic layers for vibration analysis

    NASA Astrophysics Data System (ADS)

    Paolini, Alexander; Kollmannsberger, Stefan; Rank, Ernst; Horger, Thomas; Wohlmuth, Barbara

    2018-05-01

    In order to reduce the transfer of sound and vibrations in structures such as timber buildings, thin elastomer layers can be embedded between their components. The influence of these elastomers on the response of the structures in the low frequency range can be determined accurately by using conforming hexahedral finite elements. Three-dimensional mesh generation, however, is yet a non-trivial task and mesh refinements which may be necessary at the junctions can cause a high computational effort. One remedy is to mesh the components independently from each other and to couple them using the mortar method. Further, the hexahedral mesh for the thin elastomer layer itself can be avoided by integrating its elastic behavior into the mortar formulation. The present paper extends this mortar formulation to take damping into account such that frequency response analyses can be performed more accurately. Finally, the proposed method is verified by numerical examples.

  10. Quality factors and local adaption (with applications in Eulerian hydrodynamics)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowley, W.P.

    1992-06-17

    Adapting the mesh to suit the solution is a technique commonly used for solving both ODEs and PDEs. For Lagrangian hydrodynamics, ALE and Free-Lagrange are examples of structured and unstructured adaptive methods. For Eulerian hydrodynamics the two basic approaches are the macro-unstructuring technique pioneered by Oliger and Berger and the micro-structuring technique due to Lohner and others. Here we will describe a new micro-unstructuring technique, LAM (for Local Adaptive Mesh), as applied to Eulerian hydrodynamics. The LAM technique consists of two independent parts: (1) the time advance scheme is a variation on the artificial viscosity method; (2) the adaption scheme uses a micro-unstructured mesh with quadrilateral mesh elements. The adaption scheme makes use of quality factors and the relation between these and truncation errors is discussed. The time advance scheme, the adaption strategy, and the effect of different adaption parameters on numerical solutions are described.

  12. 3D face analysis by using Mesh-LBP feature

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong

    2017-11-01

    Objective: Face recognition is one of the most widely used applications of image processing. Two-dimensional approaches are limited by pose and illumination changes, which to a certain extent restrict their accuracy and further development. How to overcome pose and illumination changes and the effects of self-occlusion is a research hotspot and difficulty, attracting a growing number of domestic and foreign researchers. 3D face recognition that fuses shape and texture descriptors has become a very promising research direction. Method: Our paper presents a 3D point-cloud-based mesh local binary pattern (Mesh-LBP) approach, with feature extraction for 3D face recognition performed by fusing shape and texture descriptors. 3D Mesh-LBP not only retains the integrity of the 3D geometry, it also reduces the need for normalization steps in the recognition process, because the triangular Mesh-LBP descriptor is calculated directly on the 3D mesh. On the other hand, given the advantage of multi-modal consistency in face recognition, the LBP construction can fuse shape and texture information on the triangular mesh. In this paper, several operators are used to extract the Mesh-LBP, such as the normal vectors of each triangular face and vertex, the Gaussian curvature, the mean curvature, and the Laplace operator. Conclusion: First, a Kinect device acquires the 3D point cloud of the face; after pretreatment and normalization it is transformed into a triangular mesh, and mesh local binary pattern features are extracted from the key salient parts of the face. For each local region, its Mesh-LBP feature is calculated with the Gaussian curvature, mean curvature, Laplace operator, and so on. Experiments on our research database show that the method is robust and achieves high recognition accuracy.
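
    The following is a minimal, illustrative sketch of the mesh-LBP idea described above: a per-facet scalar (standing in for mean curvature) is compared against an ordered ring of neighbouring facets and the sign bits are packed into a code, whose histogram serves as a descriptor. The function name, the fixed-size neighbour rings and the toy data are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def mesh_lbp(facet_values, facet_rings):
    """Minimal mesh-LBP sketch: for each facet, threshold an ordered ring of
    neighbouring facets against the centre facet and pack the sign bits into
    an integer code, in analogy with the image LBP operator.

    facet_values : (F,) array of a per-facet scalar (e.g. mean curvature).
    facet_rings  : (F, R) integer array of ordered neighbour facet indices.
    """
    centre = facet_values[:, None]                  # (F, 1)
    ring = facet_values[facet_rings]                # (F, R)
    bits = (ring >= centre).astype(np.int64)        # sign comparisons
    weights = 2 ** np.arange(facet_rings.shape[1])  # binary weights
    return bits @ weights                           # (F,) LBP codes

# Toy usage: 4 facets, each with a ring of 3 ordered neighbour facets.
vals = np.array([0.1, 0.5, 0.3, 0.2])
rings = np.array([[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]])
codes = mesh_lbp(vals, rings)
descriptor, _ = np.histogram(codes, bins=np.arange(2**3 + 1))  # code histogram
```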

  13. Integrating a novel shape memory polymer into surgical meshes to improve device performance during laparoscopic hernia surgery

    NASA Astrophysics Data System (ADS)

    Zimkowski, Michael M.

    About 600,000 hernia repair surgeries are performed each year. The use of laparoscopic minimally invasive techniques has become increasingly popular in these operations. Use of surgical mesh in hernia repair has shown lower recurrence rates compared to other repair methods. However, in many procedures, placement of surgical mesh can be challenging and even complicate the procedure, potentially leading to lengthy operating times. Various techniques have been attempted to improve mesh placement, including use of specialized systems to orient the mesh into a specific shape, with limited success and acceptance. In this work, a novel programmed Shape Memory Polymer (SMP) was integrated into commercially available polyester surgical meshes to add automatic unrolling and tissue conforming functionalities, while preserving the intrinsic structural properties of the original surgical mesh. Tensile testing and Dynamic Mechanical Analysis were performed on four different SMP formulas to identify appropriate mechanical properties for surgical mesh integration. In vitro testing involved monitoring the time required for a modified surgical mesh to deploy in a 37°C water bath. An acute porcine model was used to test the in vivo unrolling of SMP-integrated surgical meshes. The SMP-integrated surgical meshes produced an automated, temperature-activated, controlled deployment of surgical mesh on the order of several seconds, via laparoscopy in the animal model. A 30-day chronic rat model was used to test initial in vivo subcutaneous biocompatibility. To produce larger, more clinically relevant sizes of mesh, a mold was developed to facilitate manufacturing of SMP-integrated surgical mesh. The mold is capable of manufacturing mesh up to 361 cm², which is believed to accommodate the majority of clinical cases. Results indicate surgical mesh modified with SMP is capable of laparoscopic deployment in vivo, activated by body temperature, and possesses the necessary strength and biocompatibility to function as a suitable ventral hernia repair mesh, while offering a reduction in surgical operating time and improving mesh placement characteristics. Future work will include ball-burst tests similar to ASTM D3787-07, direct surgeon feedback studies, and a 30-day chronic porcine model to evaluate the SMP surgical mesh in a realistic hernia repair environment, using laparoscopic techniques for typical ventral hernia repair.

  14. Teleportation of Unknown Superpositions of Collective Atomic Coherent States

    NASA Astrophysics Data System (ADS)

    Zheng, Shi-Biao

    2001-06-01

    We propose a scheme to teleport an unknown superposition of two atomic coherent states with different phases. Our scheme is based on resonant and dispersive atom-field interaction. It provides, for the first time, a possibility of teleporting macroscopic superposition states of many atoms. The project was supported by the National Natural Science Foundation of China under Grant No. 60008003.

  15. Student Ability to Distinguish between Superposition States and Mixed States in Quantum Mechanics

    ERIC Educational Resources Information Center

    Passante, Gina; Emigh, Paul J.; Shaffer, Peter S.

    2015-01-01

    Superposition gives rise to the probabilistic nature of quantum mechanics and is therefore one of the concepts at the heart of quantum mechanics. Although we have found that many students can successfully use the idea of superposition to calculate the probabilities of different measurement outcomes, they are often unable to identify the…

  16. Nonclassical Properties of Q-Deformed Superposition Light Field State

    NASA Technical Reports Server (NTRS)

    Ren, Min; Shenggui, Wang; Ma, Aiqun; Jiang, Zhuohong

    1996-01-01

    In this paper, the squeezing effect, the bunching effect and the anti-bunching effect of a superposition light field state involving the q-deformed vacuum state and the q-Glauber coherent state are studied, and the controllable q-parameter dependence of the squeezing, bunching and anti-bunching effects of the q-deformed superposition light field state is obtained.

  17. Field collection of Nasonia (parasitoid wasp) using baits.

    PubMed

    Werren, John H; Loehlin, David W

    2009-10-01

    This protocol describes a standard method for collecting Nasonia wasps using "flyliver": liver remains fed upon by fly maggots (e.g., Sarcophaga) and collected after the mature larvae have dispersed. The flyliver, which contains a volatile substance that is very attractive to the wasps, is placed in a large (approximately 15-cm(2)) mesh bag hung in appropriate collection spots (e.g., near birds' nests or under culverts). Within the large mesh bag is a smaller mesh bag placed abutting the flyliver and containing four to six Sarcophaga pupae. These retain the wasps, which will enter the bag and begin stinging the hosts. The mesh bags can be made with standard nylon window screening, or any other material with mesh width large enough to permit entry of the wasps.

  18. Analyte separation utilizing temperature programmed desorption of a preconcentrator mesh

    DOEpatents

    Linker, Kevin L.; Bouchier, Frank A.; Theisen, Lisa; Arakaki, Lester H.

    2007-11-27

    A method and system for controllably releasing contaminants from a contaminated porous metallic mesh by thermally desorbing and releasing a selected subset of contaminants from a contaminated mesh by rapidly raising the mesh to a pre-determined temperature step or plateau that has been chosen beforehand to preferentially desorb a particular chemical specie of interest, but not others. By providing a sufficiently long delay or dwell period in-between heating pulses, and by selecting the optimum plateau temperatures, then different contaminant species can be controllably released in well-defined batches at different times to a chemical detector in gaseous communication with the mesh. For some detectors, such as an Ion Mobility Spectrometer (IMS), separating different species in time before they enter the IMS allows the detector to have an enhanced selectivity.
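
    As a rough illustration of the stepped protocol described in this record, the sketch below generates a plateau-and-dwell temperature schedule; the species names, plateau temperatures and timings are hypothetical placeholders, not values from the patent.

```python
# Hypothetical plateau temperatures (deg C) chosen per target species, with a
# fixed dwell between heating pulses; all numbers are illustrative only.
plateaus = [("species_A", 120.0), ("species_B", 180.0), ("species_C", 240.0)]
ramp_s = 1.0     # rapid rise to each plateau
hold_s = 5.0     # time held at the plateau while that species desorbs
dwell_s = 10.0   # delay so each released batch clears the detector

def schedule(plateaus, ramp_s, hold_s, dwell_s):
    """Return (time_s, temperature, label) breakpoints for a stepped
    temperature-programmed desorption run."""
    t, points = 0.0, []
    for label, temp in plateaus:
        t += ramp_s;  points.append((t, temp, f"ramp to {label} plateau"))
        t += hold_s;  points.append((t, temp, f"hold, release {label}"))
        t += dwell_s; points.append((t, temp, "dwell before next pulse"))
    return points

for time_s, temp, what in schedule(plateaus, ramp_s, hold_s, dwell_s):
    print(f"t={time_s:6.1f} s  T={temp:6.1f} C  {what}")
```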

  19. Investigation on Tensile Fatigue Characteristics of Meshed GUM Metal Plates for Bone Graft Applications

    NASA Astrophysics Data System (ADS)

    Sekiguchi, Koki; He, Jianmei

    2017-11-01

    GUM Metal has characteristics of low elastic rigidity, large elastic deformation, high strength, biocompatibility, etc. When it is used for implant applications, there is still a problem of overloading the natural bone because of its high rigidity compared with human bone. Therefore, the purpose of this study is to create more flexible meshed plates for implant applications from the viewpoints of elastic rigidity and volume density. Basic mesh shapes are designed, devised and applied to meshed GUM Metal plates using three-dimensional (3D) CAD tools. Experimental evaluation of the tensile fatigue characteristics of meshed GUM Metal plate specimens is carried out. Analytical approaches to stress evaluation are also carried out with the finite element method to obtain the S-N curve for fatigue characteristic evaluation.

  20. Molecular modelling studies on the ORL1-receptor and ORL1-agonists

    NASA Astrophysics Data System (ADS)

    Bröer, Britta M.; Gurrath, Marion; Höltje, Hans-Dieter

    2003-11-01

    The ORL1 ( opioid receptor like 1)- receptor is a member of the family of rhodopsin-like G protein-coupled receptors (GPCR) and represents an interesting new therapeutical target since it is involved in a variety of biomedical important processes, such as anxiety, nociception, feeding, and memory. In order to shed light on the molecular basis of the interactions of the GPCR with its ligands, the receptor protein and a dataset of specific agonists were examined using molecular modelling methods. For that purpose, the conformational space of a very potent non-peptide ORL1-receptor agonist (Ro 64-6198) with a small number of rotatable bonds was analysed in order to derive a pharmacophoric arrangement. The conformational analyses yielded a conformation that served as template for the superposition of a set of related analogues. Structural superposition was achieved by employing the program FlexS. Using the experimental binding data and the superposition of the ligands, a 3D-QSAR analysis applying the GRID/GOLPE method was carried out. After the ligand-based modelling approach, a 3D model of the ORL1-receptor has been constructed using homology modelling methods based on the crystal structure of bovine rhodopsin. A representative structure of the model taken from molecular dynamics simulations was used for a manual docking procedure. Asp-130 and Thr-305 within the ORL1-receptor model served as important hydrophilic interaction partners. Furthermore, a hydrophobic cavity was identified stabilizing the agonists within their binding site. The manual docking results were supported using FlexX, which identified the same protein-ligand interaction points.

  1. An ODE-Based Wall Model for Turbulent Flow Simulations

    NASA Technical Reports Server (NTRS)

    Berger, Marsha J.; Aftosmis, Michael J.

    2017-01-01

    Fully automated meshing for Reynolds-averaged Navier-Stokes simulations: mesh generation for complex geometry continues to be the biggest bottleneck in the RANS simulation process. Fully automated Cartesian methods are routinely used for inviscid simulations about arbitrarily complex geometry, but these methods lack an obvious and robust way to achieve near-wall anisotropy. Goal: extend these methods for RANS simulation without sacrificing automation, at an affordable cost. Note: nothing here is limited to Cartesian methods, and much becomes simpler in a body-fitted setting.

  2. Automatic construction of patient-specific finite-element mesh of the spine from IVDs and vertebra segmentations

    NASA Astrophysics Data System (ADS)

    Castro-Mateos, Isaac; Pozo, Jose M.; Lazary, Aron; Frangi, Alejandro F.

    2016-03-01

    Computational medicine aims at developing patient-specific models to help physicians in the diagnosis and treatment selection for patients. The spine, like other skeletal structures, is an articulated object, composed of rigid bones (vertebrae) and non-rigid parts (intervertebral discs (IVD), ligaments and muscles). These components are usually extracted from different image modalities, involving patient repositioning. In the case of the spine, these models require the segmentation of IVDs from MR and vertebrae from CT. In the literature, there exists a vast selection of segmentation methods, but there is a lack of approaches to align the vertebrae and IVDs. This paper presents a method to create patient-specific finite element meshes for biomechanical simulations, integrating rigid and non-rigid parts of articulated objects. First, the different parts are aligned in a complete surface model. Vertebrae extracted from CT are rigidly repositioned in between the IVDs, initially using the IVD locations and then refining the alignment using the MR image with a rigid active shape model algorithm. Finally, a mesh morphing algorithm, based on B-splines, is employed to map a template finite-element (volumetric) mesh to the patient-specific surface mesh. This morphing reduces possible misalignments and guarantees the convexity of the model elements. Results show that the accuracy of the method to align vertebrae to MR, together with IVDs, is similar to that of human observers. Thus, this method is a step forward towards the automation of patient-specific finite element models for biomechanical simulations.

  3. 50 CFR 648.80 - NE Multispecies regulated mesh areas and restrictions on gear and methods of fishing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 50 Wildlife and Fisheries 12 2013-10-01 2013-10-01 false NE Multispecies regulated mesh areas and restrictions on gear and methods of fishing. 648.80 Section 648.80 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES...

  4. 50 CFR 648.80 - NE Multispecies regulated mesh areas and restrictions on gear and methods of fishing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 50 Wildlife and Fisheries 12 2014-10-01 2014-10-01 false NE Multispecies regulated mesh areas and restrictions on gear and methods of fishing. 648.80 Section 648.80 Wildlife and Fisheries FISHERY CONSERVATION AND MANAGEMENT, NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE FISHERIES OF THE NORTHEASTERN UNITED STATES...

  5. Mesh fixation in endoscopic inguinal hernia repair: evaluation of methodology based on a systematic review of randomised clinical trials.

    PubMed

    Lederhuber, Hans; Stiede, Franziska; Axer, Stephan; Dahlstrand, Ursula

    2017-11-01

    The issue of mesh fixation in endoscopic inguinal hernia repair is frequently debated and still no conclusive data exist on differences between methods regarding long-term outcome and postoperative complications. The quantity of trials and the simultaneous lack of high-quality evidence raise the question how future trials should be planned. PubMed, EMBASE and the Cochrane Library were searched, using the filters "randomised clinical trials" and "humans". Trials that compared one method of mesh fixation with another fixation method or with non-fixation in endoscopic inguinal hernia repair were eligible. To be included, the trial was required to have assessed at least one of the following primary outcome parameters: recurrence; surgical site infection; chronic pain; or quality-of-life. Fourteen trials assessing 2161 patients and 2562 hernia repairs were included. Only two trials were rated as low risk for bias. Eight trials evaluated recurrence or surgical site infection; none of these could show significant differences between methods of fixation. Two of 11 trials assessing chronic pain described significant differences between methods of fixation. One of two trials evaluating quality-of-life showed significant differences between fixation methods in certain functions. High-quality evidence for differences between the assessed mesh fixation techniques is still lacking. From a socioeconomic and ethical point of view, it is necessary that future trials will be properly designed. As small- and medium-sized single-centre trials have proven unable to find answers, register studies or multi-centre studies with an evident focus on methodology and study design are needed in order to answer questions about mesh fixation in inguinal hernia repair.

  6. Recent advances in high-order WENO finite volume methods for compressible multiphase flows

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael

    2013-10-01

    We present two new families of better than second order accurate Godunov-type finite volume methods for the solution of nonlinear hyperbolic partial differential equations with nonconservative products. One family is based on a high order Arbitrary-Lagrangian-Eulerian (ALE) formulation on moving meshes, which allows the material contact wave to be resolved very sharply when the mesh is moved at the speed of the material interface. The other family of methods is based on a high order Adaptive Mesh Refinement (AMR) strategy, where the mesh can be strongly refined in the vicinity of the material interface. Both classes of schemes have several building blocks in common, in particular: a high order WENO reconstruction operator to obtain high order of accuracy in space; the use of an element-local space-time Galerkin predictor step which evolves the reconstruction polynomials in time and allows high order of accuracy in time to be reached in a single step; and the use of a path-conservative approach to treat the nonconservative terms of the PDE. We show applications of both methods to the Baer-Nunziato model for compressible multiphase flows.
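
    Since the common building block named above is a high-order WENO reconstruction, the sketch below shows the classic one-dimensional fifth-order WENO-JS reconstruction of a left-biased interface value from five cell averages; this is a textbook ingredient, not the authors' multidimensional ALE/AMR implementation.

```python
import numpy as np

def weno5_left(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    """Fifth-order WENO-JS reconstruction of the left-biased interface value
    v_{i+1/2} from the five cell averages v_{i-2}, ..., v_{i+2}."""
    # Third-order candidate reconstructions on the three sub-stencils.
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators.
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights built from the ideal linear weights (1/10, 6/10, 3/10).
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)

# Usage: cell averages of f(x) = x**3 on unit cells centred at -2, ..., 2;
# the reconstructed interface value at x = 0.5 should be close to 0.125.
avg = [((i + 0.5)**4 - (i - 0.5)**4) / 4.0 for i in range(-2, 3)]
v_half = weno5_left(*avg)
```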

  7. M-Adapting Low Order Mimetic Finite Differences for Dielectric Interface Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGregor, Duncan A.; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-03-07

    We consider a problem of reducing numerical dispersion for an electromagnetic wave in a domain with two materials separated by a flat interface in 2D with a factor of two difference in wave speed. The computational mesh in the homogeneous parts of the domain away from the interface consists of square elements. Here the method construction is based on m-adaptation construction in the homogeneous domain that leads to fourth-order numerical dispersion (vs. second order in the non-optimized method). The size of the elements in the two domains also differs by a factor of two, so as to preserve the same value of the Courant number in each. Near the interface where the two meshes merge, the mesh with larger elements consists of degenerate pentagons. We demonstrate that prior to m-adaptation the accuracy of the method falls from second to first order due to breaking of symmetry in the mesh. Next we develop an m-adaptation framework for the interface region and devise an optimization criterion. We prove that for the interface problem m-adaptation cannot produce an increase in method accuracy. This is in contrast to the homogeneous medium, where m-adaptation can increase accuracy by two orders.

  8. Airplane Mesh Development with Grid Density Studies

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Baker, Timothy J.; Thomas, Scott D.; Lawrence, Scott L.; Rimlinger, Mark J.

    1999-01-01

    Automatic Grid Generation Wish List: Geometry handling, including CAD clean up and mesh generation, remains a major bottleneck in the application of CFD methods. There is a pressing need for greater automation in several aspects of the geometry preparation in order to reduce set up time and eliminate user intervention as much as possible. Starting from the CAD representation of a configuration, there may be holes or overlapping surfaces which require an intensive effort to establish cleanly abutting surface patches, and collections of many patches may need to be combined for more efficient use of the geometrical representation. Obtaining an accurate and suitable body conforming grid with an adequate distribution of points throughout the flow-field, for the flow conditions of interest, is often the most time consuming task for complex CFD applications. There is a need for a clean unambiguous definition of the CAD geometry. Ideally this would be carried out automatically by smart CAD clean up software. One could also define a standard piece-wise smooth surface representation suitable for use by computational methods and then create software to translate between the various CAD descriptions and the standard representation. Surface meshing remains a time consuming, user intensive procedure. There is a need for automated surface meshing, requiring only minimal user intervention to define the overall density of mesh points. The surface mesher should produce well shaped elements (triangles or quadrilaterals) whose size is determined initially according to the surface curvature with a minimum size for flat pieces, and later refined by the user in other regions if necessary. Present techniques for volume meshing all require some degree of user intervention. There is a need for fully automated and reliable volume mesh generation. In addition, it should be possible to create both surface and volume meshes that meet guaranteed measures of mesh quality (e.g. minimum and maximum angle, stretching ratios, etc.).

  9. Adapting to life: ocean biogeochemical modelling and adaptive remeshing

    NASA Astrophysics Data System (ADS)

    Hill, J.; Popova, E. E.; Ham, D. A.; Piggott, M. D.; Srokosz, M.

    2014-05-01

    An outstanding problem in biogeochemical modelling of the ocean is that many of the key processes occur intermittently at small scales, such as the sub-mesoscale, that are not well represented in global ocean models. This is partly due to their failure to resolve sub-mesoscale phenomena, which play a significant role in vertical nutrient supply. Simply increasing the resolution of the models may be an inefficient computational solution to this problem. An approach based on recent advances in adaptive mesh computational techniques may offer an alternative. Here the first steps in such an approach are described, using the example of a simple vertical column (quasi-1-D) ocean biogeochemical model. We present a novel method of simulating ocean biogeochemical behaviour on a vertically adaptive computational mesh, where the mesh changes in response to the biogeochemical and physical state of the system throughout the simulation. We show that the model reproduces the general physical and biological behaviour at three ocean stations (India, Papa and Bermuda) as compared to a high-resolution fixed mesh simulation and to observations. The use of an adaptive mesh does not increase the computational error, but reduces the number of mesh elements by a factor of 2-3. Unlike previous work the adaptivity metric used is flexible and we show that capturing the physical behaviour of the model is paramount to achieving a reasonable solution. Adding biological quantities to the adaptivity metric further refines the solution. We then show the potential of this method in two case studies where we change the adaptivity metric used to determine the varying mesh sizes in order to capture the dynamics of chlorophyll at Bermuda and sinking detritus at Papa. We therefore demonstrate that adaptive meshes may provide a suitable numerical technique for simulating seasonal or transient biogeochemical behaviour at high vertical resolution whilst minimising the number of elements in the mesh. More work is required to move this to fully 3-D simulations.
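
    A generic one-dimensional analogue of metric-driven vertical adaptivity is sketched below: nodes are redistributed by equidistributing a gradient-based monitor function so that resolution concentrates where the profile varies rapidly. This is a simplified stand-in for illustration, not the adaptive-remeshing machinery used in the paper; the monitor weighting `alpha` and the toy profile are assumptions.

```python
import numpy as np

def equidistribute(z, u, n_new, alpha=1.0):
    """Redistribute n_new nodes over [z[0], z[-1]] so that each cell carries an
    equal share of the monitor M(z) = sqrt(1 + alpha*(du/dz)**2); nodes cluster
    where the solution varies rapidly."""
    dudz = np.gradient(u, z)
    monitor = np.sqrt(1.0 + alpha * dudz**2)
    # Cumulative trapezoidal integral of the monitor, normalised to [0, 1].
    cum = np.concatenate(([0.0],
                          np.cumsum(0.5*(monitor[1:] + monitor[:-1])*np.diff(z))))
    cum /= cum[-1]
    # Invert: place new nodes at equal increments of the cumulative monitor.
    return np.interp(np.linspace(0.0, 1.0, n_new), cum, z)

# Toy vertical profile with a sharp gradient near z = -50 m.
z = np.linspace(-200.0, 0.0, 400)
u = np.tanh((z + 50.0) / 5.0)
z_adapted = equidistribute(z, u, n_new=40)   # nodes concentrate around z = -50 m
```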

  10. Discrete differential geometry: The nonplanar quadrilateral mesh

    NASA Astrophysics Data System (ADS)

    Twining, Carole J.; Marsland, Stephen

    2012-06-01

    We consider the problem of constructing a discrete differential geometry defined on nonplanar quadrilateral meshes. Physical models on discrete nonflat spaces are of inherent interest, as well as being used in applications such as computation for electromagnetism, fluid mechanics, and image analysis. However, the majority of analysis has focused on triangulated meshes. We consider two approaches: discretizing the tensor calculus, and a discrete mesh version of differential forms. While these two approaches are equivalent in the continuum, we show that this is not true in the discrete case. Nevertheless, we show that it is possible to construct mesh versions of the Levi-Civita connection (and hence the tensorial covariant derivative and the associated covariant exterior derivative), the torsion, and the curvature. We show how discrete analogs of the usual vector integral theorems are constructed in such a way that the appropriate conservation laws hold exactly on the mesh, rather than only as approximations to the continuum limit. We demonstrate the success of our method by constructing a mesh version of classical electromagnetism and discuss how our formalism could be used to deal with other physical models, such as fluids.

  11. Cart3D Simulations for the Second AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Anderson, George R.; Aftosmis, Michael J.; Nemec, Marian

    2017-01-01

    Simulation results are presented for all test cases prescribed in the Second AIAA Sonic Boom Prediction Workshop. For each of the four nearfield test cases, we compute pressure signatures at specified distances and off-track angles, using an inviscid, embedded-boundary Cartesian-mesh flow solver with output-based mesh adaptation. The cases range in complexity from an axisymmetric body to a full low-boom aircraft configuration with a powered nacelle. For efficiency, boom carpets are decomposed into sets of independent meshes and computed in parallel. This also facilitates the use of more effective meshing strategies - each off-track angle is computed on a mesh with good azimuthal alignment, higher aspect ratio cells, and more tailored adaptation. The nearfield signatures generally exhibit good convergence with mesh refinement. We introduce a local error estimation procedure to highlight regions of the signatures most sensitive to mesh refinement. Results are also presented for the two propagation test cases, which investigate the effects of atmospheric profiles on ground noise. Propagation is handled with an augmented Burgers' equation method (NASA's sBOOM), and ground noise metrics are computed with LCASB.

  12. Testing the quantum superposition principle: matter waves and beyond

    NASA Astrophysics Data System (ADS)

    Ulbricht, Hendrik

    2015-05-01

    New technological developments allow the quantum properties of very complex systems to be explored, bringing the question of whether macroscopic systems also share such features within experimental reach. Interest in this question is increased by the fact that, on the theory side, many suggest that the quantum superposition principle is not exact, with departures from it being larger the more macroscopic the system. Testing the superposition principle intrinsically also means testing suggested extensions of quantum theory, so-called collapse models. We will report on three new proposals to experimentally test the superposition principle with nanoparticle interferometry, optomechanical devices and spectroscopic experiments in the frequency domain. We will also report on the status of optical levitation and cooling experiments with nanoparticles in our labs, towards an Earth-bound matter-wave interferometer to test the superposition principle for a particle mass of one million amu (atomic mass units).

  13. Ray tracing through a hexahedral mesh in HADES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Henderson, G L; Aufderheide, M B

    In this paper we describe a new ray tracing method targeted for inclusion in HADES. The algorithm tracks rays through three-dimensional tetrakis hexahedral mesh objects, like those used by the ARES code to model inertial confinement experiments.
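
    One standard ingredient of tracking rays through hexahedral cells is the "slab" ray/box intersection test; the sketch below shows it for an axis-aligned cell as a simplified illustration and is not the HADES algorithm itself. The function name and toy numbers are assumptions.

```python
import numpy as np

def ray_box_interval(origin, direction, box_min, box_max):
    """Standard slab test: return the parametric interval [t_near, t_far] over
    which the ray origin + t*direction lies inside an axis-aligned box, or
    None if the ray misses it."""
    direction = np.where(direction == 0.0, 1e-300, direction)  # avoid /0
    t1 = (box_min - origin) / direction
    t2 = (box_max - origin) / direction
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    if t_far < max(t_near, 0.0):
        return None
    return max(t_near, 0.0), t_far

# Toy usage: path length of a ray through one cell of a regular hex grid.
hit = ray_box_interval(np.array([-1.0, 0.5, 0.5]), np.array([1.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0]))
if hit is not None:
    t_near, t_far = hit
    path_length = t_far - t_near   # contributes to the attenuation integral
```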

  14. Mesh Displacement After Bilateral Inguinal Hernia Repair With No Fixation

    PubMed Central

    Rocha, Gabriela Moreira; Campos, Antonio Carlos Ligocki; Paulin, João Augusto Nocera; Coelho, Julio Cesar Uili

    2017-01-01

    Background and Objectives: About 20% of patients with inguinal hernia present with bilateral hernias at diagnosis. In these cases, the laparoscopic procedure is considered the gold standard approach. Mesh fixation is considered an important step toward avoiding recurrence. However, because of cost and the risk of pain, the real need for mesh fixation has been debated. For bilateral inguinal hernias, there are few specific data about nonfixation and mesh displacement. We assessed mesh movement in patients who had undergone laparoscopic bilateral inguinal hernia repair without mesh fixation and compared the results with those obtained in patients with unilateral hernia. Methods: From January 2012 through May 2014, 20 consecutive patients with bilateral inguinal hernia underwent TEP repair with no mesh fixation. Results were compared with those of 50 consecutive patients with unilateral inguinal hernia surgically repaired with a similar technique. The mesh was marked with 3 clips. Mesh movements were measured by comparing initial radiography performed at the end of surgery with a second radiographic scan performed 30 days later. Results: Mean movements of all 3 clips in the bilateral nonfixation (NF) group were 0.15–0.4 cm compared with 0.1–0.3 cm in the unilateral NF group. Overall displacement of the bilateral and unilateral NF groups did not show a significant difference. Mean overall displacement was 1.9 cm versus 1.8 cm in the bilateral and unilateral NF groups, respectively (P = .78). Conclusions: TEP with no mesh fixation is safe in bilateral inguinal repairs. Early mesh displacement is minimal. This technique can be safely used in most patients with inguinal hernia. PMID:28904521

  15. A computational method for sharp interface advection

    PubMed Central

    Bredmose, Henrik; Jasak, Hrvoje

    2016-01-01

    We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619
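
    A one-dimensional caricature of the geometric VOF flux idea underlying the method is sketched below: the donor cell's sharp interface position is reconstructed from its volume fraction, and the fluid volume swept across a face during a time step is the fluid part of the swept slab. This only illustrates the flux concept and is not isoAdvector's isosurface and face-intersection machinery; names and numbers are assumptions.

```python
def vof_face_flux_1d(x_left, x_right, alpha, u, dt, fluid_on_left=True):
    """One-dimensional geometric VOF flux. The donor cell [x_left, x_right]
    holds a sharp interface reconstructed from its volume fraction `alpha`
    (fluid occupying the left portion if fluid_on_left). For face velocity
    u > 0 at x_right, return the volume (per unit area) of fluid swept across
    that face during dt, i.e. the fluid part of the slab [x_right - u*dt, x_right]."""
    dx = x_right - x_left
    x_interface = x_left + alpha * dx if fluid_on_left else x_right - alpha * dx
    x_swept = x_right - u * dt                   # upstream edge of the swept slab
    if fluid_on_left:
        fluid_lo, fluid_hi = x_left, x_interface
    else:
        fluid_lo, fluid_hi = x_interface, x_right
    overlap = max(0.0, min(fluid_hi, x_right) - max(fluid_lo, x_swept))
    return overlap

# Example: cell [0, 1] half full of fluid on the left; u*dt sweeps 0.7 of the
# cell, of which only 0.2 is fluid.
flux = vof_face_flux_1d(0.0, 1.0, alpha=0.5, u=0.7, dt=1.0)   # -> 0.2
```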

  16. Nanowire mesh solar fuels generator

    DOEpatents

    Yang, Peidong; Chan, Candace; Sun, Jianwei; Liu, Bin

    2016-05-24

    This disclosure provides systems, methods, and apparatus related to a nanowire mesh solar fuels generator. In one aspect, a nanowire mesh solar fuels generator includes (1) a photoanode configured to perform water oxidation and (2) a photocathode configured to perform water reduction. The photocathode is in electrical contact with the photoanode. The photoanode may include a high surface area network of photoanode nanowires. The photocathode may include a high surface area network of photocathode nanowires. In some embodiments, the nanowire mesh solar fuels generator may include an ion conductive polymer infiltrating the photoanode and the photocathode in the region where the photocathode is in electrical contact with the photoanode.

  17. GPU-accelerated Monte Carlo convolution/superposition implementation for dose calculation.

    PubMed

    Zhou, Bo; Yu, Cedric X; Chen, Danny Z; Hu, X Sharon

    2010-11-01

    Dose calculation is a key component in radiation treatment planning systems. Its performance and accuracy are crucial to the quality of treatment plans as emerging advanced radiation therapy technologies are exerting ever tighter constraints on dose calculation. A common practice is to choose either a deterministic method such as the convolution/superposition (CS) method for speed or a Monte Carlo (MC) method for accuracy. The goal of this work is to boost the performance of a hybrid Monte Carlo convolution/superposition (MCCS) method by devising a graphics processing unit (GPU) implementation so as to make the method practical for day-to-day usage. Although the MCCS algorithm combines the merits of MC fluence generation and CS fluence transport, it is still not fast enough to be used as a day-to-day planning tool. To alleviate the speed issue of MC algorithms, the authors adopted MCCS as their target method and implemented a GPU-based version. In order to fully utilize the GPU computing power, the MCCS algorithm is modified to match the GPU hardware architecture. The performance of the authors' GPU-based implementation on an Nvidia GTX260 card is compared to a multithreaded software implementation on a quad-core system. A speedup in the range of 6.7-11.4x is observed for the clinical cases used. The less than 2% statistical fluctuation also indicates that the accuracy of the authors' GPU-based implementation is in good agreement with the results from the quad-core CPU implementation. This work shows that GPU is a feasible and cost-efficient solution compared to other alternatives such as using cluster machines or field-programmable gate arrays for satisfying the increasing demands on computation speed and accuracy of dose calculation. But there are also inherent limitations of using GPU for accelerating MC-type applications, which are also analyzed in detail in this article.
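
    The generic convolution/superposition idea referred to above can be illustrated with a toy calculation in which a TERMA distribution is convolved with an isotropic, made-up energy-deposition kernel; real CS kernels are anisotropic, polyenergetic and density-scaled, and this sketch is not the authors' MCCS or GPU code.

```python
import numpy as np
from scipy.ndimage import convolve

# Toy 3D TERMA distribution (total energy released per unit mass): a narrow
# "beam" of energy release along the z axis of a small water-like phantom.
terma = np.zeros((32, 32, 32))
terma[14:18, 14:18, :] = 1.0

# Hypothetical isotropic energy-deposition kernel with exponential fall-off.
r = np.linalg.norm(np.indices((9, 9, 9)) - 4, axis=0)
kernel = np.exp(-1.5 * r)
kernel /= kernel.sum()          # the kernel deposits all released energy

# Dose as a superposition of point-spread kernels centred on each TERMA voxel.
dose = convolve(terma, kernel, mode="constant")
```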

  18. Anisotropic diffusion in mesh-free numerical magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2017-04-01

    We extend recently developed mesh-free Lagrangian methods for numerical magnetohydrodynamics (MHD) to arbitrary anisotropic diffusion equations, including: passive scalar diffusion, Spitzer-Braginskii conduction and viscosity, cosmic ray diffusion/streaming, anisotropic radiation transport, non-ideal MHD (Ohmic resistivity, ambipolar diffusion, the Hall effect) and turbulent 'eddy diffusion'. We study these as implemented in the code GIZMO for both new meshless finite-volume Godunov schemes (MFM/MFV). We show that the MFM/MFV methods are accurate and stable even with noisy fields and irregular particle arrangements, and recover the correct behaviour even in arbitrarily anisotropic cases. They are competitive with state-of-the-art AMR/moving-mesh methods, and can correctly treat anisotropic diffusion-driven instabilities (e.g. the MTI and HBI, Hall MRI). We also develop a new scheme for stabilizing anisotropic tensor-valued fluxes with high-order gradient estimators and non-linear flux limiters, which is trivially generalized to AMR/moving-mesh codes. We also present applications of some of these improvements for SPH, in the form of a new integral-Godunov SPH formulation that adopts a moving-least squares gradient estimator and introduces a flux-limited Riemann problem between particles.

  19. A novel partitioning method for block-structured adaptive meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fu, Lin, E-mail: lin.fu@tum.de; Litvinov, Sergej, E-mail: sergej.litvinov@aer.mw.tum.de; Hu, Xiangyu Y., E-mail: xiangyu.hu@tum.de

    We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.

  20. A novel partitioning method for block-structured adaptive meshes

    NASA Astrophysics Data System (ADS)

    Fu, Lin; Litvinov, Sergej; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2017-07-01

    We propose a novel partitioning method for block-structured adaptive meshes utilizing the meshless Lagrangian particle concept. With the observation that an optimum partitioning has high analogy to the relaxation of a multi-phase fluid to steady state, physically motivated model equations are developed to characterize the background mesh topology and are solved by multi-phase smoothed-particle hydrodynamics. In contrast to well established partitioning approaches, all optimization objectives are implicitly incorporated and achieved during the particle relaxation to stationary state. Distinct partitioning sub-domains are represented by colored particles and separated by a sharp interface with a surface tension model. In order to obtain the particle relaxation, special viscous and skin friction models, coupled with a tailored time integration algorithm are proposed. Numerical experiments show that the present method has several important properties: generation of approximately equal-sized partitions without dependence on the mesh-element type, optimized interface communication between distinct partitioning sub-domains, continuous domain decomposition which is physically localized and implicitly incremental. Therefore it is particularly suitable for load-balancing of high-performance CFD simulations.

  1. Implementation of Implicit Adaptive Mesh Refinement in an Unstructured Finite-Volume Flow Solver

    NASA Technical Reports Server (NTRS)

    Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.

    2013-01-01

    This paper explores the implementation of adaptive mesh refinement in an unstructured, finite-volume solver. Unsteady and steady problems are considered. The effect on the recovery of high-order numerics is explored and the results are favorable. Important to this work is the ability to provide a path for efficient, implicit time advancement. A method using a simple refinement sensor based on undivided differences is discussed and applied to a practical problem: a shock-shock interaction on a hypersonic, inviscid double-wedge. Cases are compared to uniform grids without the use of adapted meshes in order to assess error and computational expense. Discussion of difficulties, advances, and future work prepare this method for additional research. The potential for this method in more complicated flows is described.

  2. MeSHLabeler: improving the accuracy of large-scale MeSH indexing by integrating diverse evidence.

    PubMed

    Liu, Ke; Peng, Shengwen; Wu, Junqiu; Zhai, Chengxiang; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2015-06-15

    Medical Subject Headings (MeSHs) are used by National Library of Medicine (NLM) to index almost all citations in MEDLINE, which greatly facilitates the applications of biomedical information retrieval and text mining. To reduce the time and financial cost of manual annotation, NLM has developed a software package, Medical Text Indexer (MTI), for assisting MeSH annotation, which uses k-nearest neighbors (KNN), pattern matching and indexing rules. Other types of information, such as prediction by MeSH classifiers (trained separately), can also be used for automatic MeSH annotation. However, existing methods cannot effectively integrate multiple evidence for MeSH annotation. We propose a novel framework, MeSHLabeler, to integrate multiple evidence for accurate MeSH annotation by using 'learning to rank'. Evidence includes numerous predictions from MeSH classifiers, KNN, pattern matching, MTI and the correlation between different MeSH terms, etc. Each MeSH classifier is trained independently, and thus prediction scores from different classifiers are incomparable. To address this issue, we have developed an effective score normalization procedure to improve the prediction accuracy. MeSHLabeler won the first place in Task 2A of 2014 BioASQ challenge, achieving the Micro F-measure of 0.6248 for 9,040 citations provided by the BioASQ challenge. Note that this accuracy is around 9.15% higher than 0.5724, obtained by MTI. The software is available upon request. © The Author 2015. Published by Oxford University Press.
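
    The integration idea can be illustrated with a small sketch in which each evidence source's raw scores for a citation's candidate MeSH headings are rank-normalized to a common scale and combined with per-source weights before thresholding. The weights, threshold, helper names and toy data here are illustrative assumptions, not the trained MeSHLabeler learning-to-rank model.

```python
import numpy as np

def rank_normalize(scores):
    """Map one evidence source's raw scores for the candidate MeSH terms of a
    citation to [0, 1] by rank, so sources with incomparable scales can mix."""
    ranks = np.argsort(np.argsort(scores))
    return ranks / max(len(scores) - 1, 1)

def combine_evidence(evidence, weights):
    """evidence: dict source -> raw scores over the same candidate list.
    weights: dict source -> importance. Returns combined candidate scores."""
    combined = np.zeros(len(next(iter(evidence.values()))))
    for source, scores in evidence.items():
        combined += weights[source] * rank_normalize(np.asarray(scores, float))
    return combined

# Toy citation with 4 candidate MeSH headings and three evidence sources.
evidence = {
    "classifier": [0.9, 0.2, 0.6, 0.1],   # per-term binary classifiers
    "knn":        [3.0, 1.0, 0.0, 2.0],   # neighbour vote counts
    "mti":        [1.0, 0.0, 1.0, 0.0],   # recommended by MTI or not
}
weights = {"classifier": 0.5, "knn": 0.3, "mti": 0.2}   # illustrative weights
scores = combine_evidence(evidence, weights)
predicted = [i for i, s in enumerate(scores) if s > 0.5]  # hypothetical cutoff
```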

  3. Tetrahedral-Mesh Simulation of Turbulent Flows with the Space-Time Conservative Schemes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji; Cheng, Gary C.

    2015-01-01

    Direct numerical simulations of turbulent flows are predominantly carried out using structured, hexahedral meshes despite decades of development in unstructured mesh methods. Tetrahedral meshes offer ease of mesh generation around complex geometries and the potential of an orientation free grid that would provide un-biased small-scale dissipation and more accurate intermediate scale solutions. However, due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for triangular and tetrahedral meshes at the cell interfaces, numerical issues exist when flow discontinuities or stagnation regions are present. The space-time conservative conservation element solution element (CESE) method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to more accurately simulate turbulent flows using unstructured tetrahedral meshes. To pave the way towards accurate simulation of shock/turbulent boundary-layer interaction, a series of wave and shock interaction benchmark problems that increase in complexity, are computed in this paper with triangular/tetrahedral meshes. Preliminary computations for the normal shock/turbulence interactions are carried out with a relatively coarse mesh, by direct numerical simulations standards, in order to assess other effects such as boundary conditions and the necessity of a buffer domain. The results indicate that qualitative agreement with previous studies can be obtained for flows where, strong shocks co-exist along with unsteady waves that display a broad range of scales, with a relatively compact computational domain and less stringent requirements for grid clustering near the shock. With the space-time conservation properties, stable solutions without any spurious wave reflections can be obtained without a need for buffer domains near the outflow/farfield boundaries. Computational results for the isotropic turbulent flow decay, at a relatively high turbulent Mach number, show a nicely behaved spectral decay rate for medium to high wave numbers. The high-order CESE schemes offer very robust solutions even with the presence of strong shocks or widespread shocklets. The explicit formulation in conjunction with a close to unity theoretical upper Courant number bound has the potential to offer an efficient numerical framework for general compressible turbulent flow simulations with unstructured meshes.

  4. A simple finite element method for non-divergence form elliptic equation

    DOE PAGES

    Mu, Lin; Ye, Xiu

    2017-03-01

    Here, we develop a simple finite element method for solving second order elliptic equations in non-divergence form by combining the least squares concept with discontinuous approximations. This simple method has a symmetric and positive definite system and can be easily analyzed and implemented. General meshes with polytopal elements and hanging nodes can also be used in the method. We prove that our finite element solution approaches the true solution as the mesh size approaches zero. Numerical examples are presented that demonstrate the robustness and flexibility of the method.

  5. A simple finite element method for non-divergence form elliptic equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Ye, Xiu

    Here, we develop a simple finite element method for solving second order elliptic equations in non-divergence form by combining the least squares concept with discontinuous approximations. This simple method has a symmetric and positive definite system and can be easily analyzed and implemented. General meshes with polytopal elements and hanging nodes can also be used in the method. We prove that our finite element solution approaches the true solution as the mesh size approaches zero. Numerical examples are presented that demonstrate the robustness and flexibility of the method.

  6. Quantum state engineering by a coherent superposition of photon subtraction and addition

    NASA Astrophysics Data System (ADS)

    Lee, Su-Yong; Nha, Hyunchul

    2011-10-01

    We study a coherent superposition tâ + râ† of the field annihilation and creation operators acting on continuous variable systems and propose its application for quantum state engineering. We propose an experimental scheme to implement this elementary coherent operation and discuss its usefulness for producing an arbitrary superposition of number states involving up to two photons.
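
    A minimal numerical sketch of such a coherent operation on a truncated Fock space is given below: the annihilation matrix is built explicitly, the combination tâ + râ† is applied to a (truncated) coherent state, and the result is renormalized. The truncation level and the values of t, r and α are illustrative assumptions.

```python
import numpy as np
from math import factorial

def annihilation(n_max):
    """Matrix of the annihilation operator on the truncated Fock basis
    {|0>, ..., |n_max>}: a|n> = sqrt(n)|n-1>."""
    a = np.zeros((n_max + 1, n_max + 1))
    for n in range(1, n_max + 1):
        a[n - 1, n] = np.sqrt(n)
    return a

n_max = 20
a = annihilation(n_max)
adag = a.T

# Coherent operation t*a + r*adag with |t|^2 + |r|^2 = 1 (illustrative values).
t, r = np.sqrt(0.7), np.sqrt(0.3)
op = t * a + r * adag

# Apply it to a truncated coherent state |alpha> and renormalize the result.
alpha = 1.0
n = np.arange(n_max + 1)
coh = np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt([factorial(k) for k in n])
out = op @ coh
out /= np.linalg.norm(out)   # state after the heralded non-Gaussian operation
```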

  7. Comparison of linear and square superposition hardening models for the surface nanoindentation of ion-irradiated materials

    NASA Astrophysics Data System (ADS)

    Xiao, Xiazi; Yu, Long

    2018-05-01

    Linear and square superposition hardening models are compared for the surface nanoindentation of ion-irradiated materials. Hardening mechanisms of both dislocations and defects within the plasticity affected region (PAR) are considered. Four sets of experimental data for ion-irradiated materials are adopted to compare with theoretical results of the two hardening models. It is indicated that both models describe experimental data equally well when the PAR is within the irradiated layer; whereas, when the PAR is beyond the irradiated region, the square superposition hardening model performs better. Therefore, the square superposition model is recommended to characterize the hardening behavior of ion-irradiated materials.
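
    For concreteness, the two superposition rules compared in the paper can be written as simple functions, with the linear rule adding the dislocation and defect contributions and the square rule adding them in quadrature; the hardening values below are placeholders, not the paper's fitted data.

```python
import numpy as np

def linear_superposition(dH_dislocation, dH_defects):
    """Linear rule: hardening contributions simply add."""
    return dH_dislocation + dH_defects

def square_superposition(dH_dislocation, dH_defects):
    """Square (root-sum-square) rule: contributions add in quadrature."""
    return np.sqrt(dH_dislocation**2 + dH_defects**2)

# Illustrative hardening increments (GPa) versus indentation depth; the
# numbers are placeholders, not fitted to the experiments cited above.
depth_nm = np.linspace(50, 500, 10)
dH_dis = 2.0 / np.sqrt(depth_nm)        # indentation-size-effect-like term
dH_def = np.full_like(depth_nm, 0.15)   # constant defect hardening in the PAR

H_linear = linear_superposition(dH_dis, dH_def)
H_square = square_superposition(dH_dis, dH_def)   # always <= the linear result
```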

  8. Composite and case study analyses of the large-scale environments associated with West Pacific Polar and subtropical vertical jet superposition events

    NASA Astrophysics Data System (ADS)

    Handlos, Zachary J.

    Though considerable research attention has been devoted to examination of the Northern Hemispheric polar and subtropical jet streams, relatively little has been directed toward understanding the circumstances that conspire to produce the relatively rare vertical superposition of these usually separate features. This dissertation investigates the structure and evolution of large-scale environments associated with jet superposition events in the northwest Pacific. An objective identification scheme, using NCEP/NCAR Reanalysis 1 data, is employed to identify all jet superpositions in the west Pacific (30-40°N, 135-175°E) for boreal winters (DJF) between 1979/80 - 2009/10. The analysis reveals that environments conducive to west Pacific jet superposition share several large-scale features usually associated with East Asian Winter Monsoon (EAWM) northerly cold surges, including the presence of an enhanced Hadley Cell-like circulation within the jet entrance region. It is further demonstrated that several EAWM indices are statistically significantly correlated with jet superposition frequency in the west Pacific. The life cycle of EAWM cold surges promotes interaction between tropical convection and internal jet dynamics. Low potential vorticity (PV), high theta e tropical boundary layer air, exhausted by anomalous convection in the west Pacific lower latitudes, is advected poleward towards the equatorward side of the jet in upper tropospheric isentropic layers resulting in anomalous anticyclonic wind shear that accelerates the jet. This, along with geostrophic cold air advection in the left jet entrance region that drives the polar tropopause downward through the jet core, promotes the development of the deep, vertical PV wall characteristic of superposed jets. West Pacific jet superpositions preferentially form within an environment favoring the aforementioned characteristics regardless of EAWM seasonal strength. Post-superposition, it is shown that the west Pacific jet extends eastward and is associated with an upper tropospheric cyclonic (anticyclonic) anomaly in its left (right) exit region. A downstream ridge is present over northwest Canada, and within the strong EAWM environment, a wavier flow over North America is observed relative to the neutral EAWM environment. Preliminary investigation of the two weak EAWM season superpositions reveals a Kona Low type feature post-superposition. This is associated with anomalous convection reminiscent of an atmospheric river southwest of Mexico.

  9. Final Report of the Project "From the finite element method to the virtual element method"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manzini, Gianmarco; Gyrya, Vitaliy

    The Finite Element Method (FEM) is a powerful numerical tool that is being used in a large number of engineering applications. The FEM is constructed on triangular/tetrahedral and quadrilateral/hexahedral meshes. Extending the FEM to general polygonal/polyhedral meshes in a straightforward way turns out to be extremely difficult and leads to very complex and computationally expensive schemes. The reason for this failure is that the construction of the basis functions on elements with a very general shape is a non-trivial and complex task. In this project we developed a new family of numerical methods, dubbed the Virtual Element Method (VEM), for the numerical approximation of partial differential equations (PDE) of elliptic type suitable for polygonal and polyhedral unstructured meshes. We successfully formulated, implemented and tested these methods and studied both theoretically and numerically their stability, robustness and accuracy for diffusion problems, convection-reaction-diffusion problems, the Stokes equations and the biharmonic equation.

  10. Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation

    PubMed Central

    2011-01-01

    Background Evaluation of Word Sense Disambiguation (WSD) methods in the biomedical domain is difficult because the available resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We present a method that can be used to automatically develop a WSD test collection using the Unified Medical Language System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. We demonstrate the use of this method by developing such a data set, called MSH WSD. Methods In our method, the Metathesaurus is first screened to identify ambiguous terms whose possible senses consist of two or more MeSH headings. We then use each ambiguous term and its corresponding MeSH heading to extract MEDLINE citations where the term and only one of the MeSH headings co-occur. The term found in the MEDLINE citation is automatically assigned the UMLS CUI linked to the MeSH heading. Each instance has been assigned a UMLS Concept Unique Identifier (CUI). We compare the characteristics of the MSH WSD data set to the previously existing NLM WSD data set. Results The resulting MSH WSD data set consists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203 ambiguous entities. For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from MEDLINE. We evaluated the reliability of the MSH WSD data set using existing knowledge-based methods and compared their performance to that of the results previously obtained by these algorithms on the pre-existing data set, NLM WSD. We show that the knowledge-based methods achieve different results but keep their relative performance except for the Journal Descriptor Indexing (JDI) method, whose performance is below the other methods. Conclusions The MSH WSD data set allows the evaluation of WSD algorithms in the biomedical domain. Compared to previously existing data sets, MSH WSD contains a larger number of biomedical terms/abbreviations and covers the largest set of UMLS Semantic Types. Furthermore, the MSH WSD data set has been generated automatically reusing already existing annotations and, therefore, can be regenerated from subsequent UMLS versions. PMID:21635749
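
    The labelling rule described in the Methods can be sketched as follows: a MEDLINE citation is kept as an instance for an ambiguous term only if exactly one of the term's candidate MeSH headings is assigned to that citation, and the instance is labelled with the CUI linked to that heading, up to a per-sense cap. The data structures, example CUIs and helper name are illustrative assumptions, not the actual MSH WSD build scripts.

```python
def build_wsd_instances(citations, ambiguous_terms, max_per_sense=100):
    """citations: list of dicts with 'pmid', 'text', 'mesh' (set of headings).
    ambiguous_terms: dict term -> {mesh_heading: cui} for its candidate senses.
    Returns labelled instances following the co-occurrence rule."""
    instances, counts = [], {}
    for term, senses in ambiguous_terms.items():
        for cit in citations:
            if term.lower() not in cit["text"].lower():
                continue
            present = [h for h in senses if h in cit["mesh"]]
            if len(present) != 1:          # zero or several senses: skip
                continue
            cui = senses[present[0]]
            key = (term, cui)
            if counts.get(key, 0) >= max_per_sense:
                continue
            counts[key] = counts.get(key, 0) + 1
            instances.append({"pmid": cit["pmid"], "term": term, "label": cui})
    return instances

# Toy example with one ambiguous term "cold" (illustrative CUIs).
ambiguous_terms = {"cold": {"Common Cold": "C0009443", "Cold Temperature": "C0009264"}}
citations = [
    {"pmid": "1", "text": "Zinc and the common cold.", "mesh": {"Common Cold", "Zinc"}},
    {"pmid": "2", "text": "Cold exposure in rats.", "mesh": {"Cold Temperature", "Rats"}},
]
data = build_wsd_instances(citations, ambiguous_terms)
```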

  11. Compatible, total energy conserving and symmetry preserving arbitrary Lagrangian-Eulerian hydrodynamics in 2D rz - Cylindrical coordinates

    NASA Astrophysics Data System (ADS)

    Kenamond, Mack; Bement, Matthew; Shashkov, Mikhail

    2014-07-01

    We present a new discretization for 2D arbitrary Lagrangian-Eulerian hydrodynamics in rz geometry (cylindrical coordinates) that is compatible, total energy conserving and symmetry preserving. In the first part of the paper, we describe the discretization of the basic Lagrangian hydrodynamics equations in axisymmetric 2D rz geometry on general polygonal meshes. It exactly preserves planar, cylindrical and spherical symmetry of the flow on meshes aligned with the flow. In particular, spherical symmetry is preserved on polar equiangular meshes. The discretization conserves total energy exactly up to machine round-off on any mesh. It has a consistent definition of kinetic energy in the zone that is exact for a velocity field with constant magnitude. The method for discretization of the Lagrangian equations is based on ideas presented in [2,3,7], where the authors use a special procedure to distribute zonal mass to corners of the zone (subzonal masses). The momentum equation is discretized in its “Cartesian” form with a special definition of “planar” masses (area-weighted). The principal contributions of this part of the paper are as follows: a definition of “planar” subzonal mass for nodes on the z axis (r=0) that does not require a special procedure for movement of these nodes; proof of conservation of the total energy; formulated for general polygonal meshes. We present numerical examples that demonstrate the robustness of the new method for Lagrangian equations on a variety of grids and test problems including polygonal meshes. In particular, we demonstrate the importance of conservation of total energy for correctly modeling shock waves. In the second part of the paper we describe the remapping stage of the arbitrary Lagrangian-Eulerian algorithm. The general idea is based on the following papers [25-28], where it was described for Cartesian coordinates. We describe a distribution-based algorithm for the definition of remapped subzonal densities and a local constrained-optimization-based approach for each zone to find the subzonal mass fluxes. In this paper we give a systematic and complete description of the algorithm for the axisymmetric case and provide justification for our approach. The ALE algorithm conserves total energy on arbitrary meshes and preserves symmetry when remapping from one equiangular polar mesh to another. The principal contributions of this part of the paper are the extension of this algorithm to general polygonal meshes and 2D rz geometry with requirement of symmetry preservation on special meshes. We present numerical examples that demonstrate the robustness of the new ALE method on a variety of grids and test problems including polygonal meshes and some realistic experiments. We confirm the importance of conservation of total energy for correctly modeling shock waves.

  12. An Immersed Boundary-Lattice Boltzmann Method for Simulating Particulate Flows

    NASA Astrophysics Data System (ADS)

    Zhang, Baili; Cheng, Ming; Lou, Jing

    2013-11-01

    A two-dimensional momentum exchange-based immersed boundary-lattice Boltzmann method developed by X.D. Niu et al. (2006) has been extended to three dimensions for solving fluid-particle interaction problems. This method combines the most desirable features of the lattice Boltzmann method and the immersed boundary method by using a regular Eulerian mesh for the flow domain and a Lagrangian mesh for the moving particles in the flow field. The no-slip boundary conditions for the fluid and the particles are enforced by adding a force density term into the lattice Boltzmann equation, and the forcing term is simply calculated from the momentum exchange of the boundary particle density distribution functions, which are interpolated by Lagrangian polynomials from the underlying Eulerian mesh. This method preserves the advantages of the lattice Boltzmann method in tracking a group of particles and, at the same time, provides an alternative approach for treating solid-fluid boundary conditions. Numerical validations show that the present method is very accurate and efficient. The present method will be further developed to simulate more complex problems with particle deformation, particle-bubble and particle-droplet interactions.

  13. Automated variance reduction for MCNP using deterministic methods.

    PubMed

    Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B

    2005-01-01

    In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
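
    The relationship sketched below is the widely used CADIS-style prescription connecting an adjoint (importance) flux to mesh-based weight-window lower bounds; it is offered as an illustration of the idea rather than MCNP5's internal implementation, and the function name, arguments and normalization choice are assumptions.

      import numpy as np

      def weight_window_lower_bounds(adjoint_flux, source_cell, upper_to_lower_ratio=5.0):
          """Mesh-based weight-window lower bounds from an adjoint flux (CADIS-style).

          The target weight in a cell is inversely proportional to the adjoint flux,
          normalized so that a unit-weight source particle is born at the centre of
          the window in the source cell."""
          phi = np.asarray(adjoint_flux, dtype=float)
          target_weight = phi[source_cell] / phi
          return 2.0 * target_weight / (1.0 + upper_to_lower_ratio)

      # Example: a decaying adjoint flux over a 1D mesh with the source in cell 0.
      ww = weight_window_lower_bounds(np.array([1.0, 0.3, 0.1, 0.03, 0.01]), source_cell=0)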

  14. Finite cover method with mortar elements for elastoplasticity problems

    NASA Astrophysics Data System (ADS)

    Kurumatani, M.; Terada, K.

    2005-06-01

    The finite cover method (FCM) is extended to elastoplasticity problems. The FCM, which was originally developed under the name of the manifold method, has recently been recognized as one of the generalized versions of the finite element method (FEM). Since the mesh for the FCM can be regular and squared regardless of the geometry of the structures to be analyzed, structural analysts are relieved of the burdensome task of generating meshes that conform to physical boundaries. Numerical experiments are carried out to assess the performance of the FCM with such discretization in elastoplasticity problems. In particular, to achieve this accurately, so-called mortar elements are introduced to impose displacement boundary conditions on the essential boundaries, and displacement compatibility conditions on material interfaces of two-phase materials or on joint surfaces between mutually incompatible meshes. The validity of the mortar approximation is also demonstrated in the elastic-plastic FCM.

  15. A Framework for Parallel Unstructured Grid Generation for Complex Aerodynamic Simulations

    NASA Technical Reports Server (NTRS)

    Zagaris, George; Pirzadeh, Shahyar Z.; Chrisochoides, Nikos

    2009-01-01

    A framework for parallel unstructured grid generation targeting both shared memory multi-processors and distributed memory architectures is presented. The two fundamental building blocks of the framework are: (1) the Advancing-Partition (AP) method used for domain decomposition and (2) the Advancing Front (AF) method used for mesh generation. Starting from the surface mesh of the computational domain, the AP method is applied recursively to generate a set of sub-domains. Next, the sub-domains are meshed in parallel using the AF method. The recursive nature of the domain decomposition maps naturally to a divide-and-conquer algorithm that exhibits inherent parallelism. For the parallel implementation, the Master/Worker pattern is employed to dynamically balance the varying workloads of each task on the set of available CPUs. Performance results obtained with this approach are presented and discussed in detail, along with future work and improvements.
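
    A runnable skeleton of the Master/Worker orchestration described above follows. It is only a schematic illustration, not the framework's code: the domain, the Advancing-Partition split and the Advancing-Front meshing are reduced to trivial stand-ins, and only the recursive decomposition plus the dynamic assignment of sub-domains to a worker pool is meant to be representative.

      from concurrent.futures import ProcessPoolExecutor

      def advancing_partition(domain, max_size):
          """Recursively split the domain (trivial stand-in for the AP method)."""
          if len(domain) <= max_size:
              return [domain]
          mid = len(domain) // 2
          return (advancing_partition(domain[:mid], max_size)
                  + advancing_partition(domain[mid:], max_size))

      def advancing_front_mesh(subdomain):
          """Stand-in for AF meshing of one sub-domain."""
          return {"n_input_faces": len(subdomain), "n_elements": 4 * len(subdomain)}

      def mesh_in_parallel(domain, max_size=1000, workers=4):
          subdomains = advancing_partition(domain, max_size)
          # Workers pull sub-domains as they finish, balancing uneven workloads.
          with ProcessPoolExecutor(max_workers=workers) as pool:
              return list(pool.map(advancing_front_mesh, subdomains))

      if __name__ == "__main__":
          surface = list(range(10_000))             # stand-in for a surface mesh
          print(len(mesh_in_parallel(surface)))     # number of sub-domain meshes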

  16. The lowest-order weak Galerkin finite element method for the Darcy equation on quadrilateral and hybrid meshes

    NASA Astrophysics Data System (ADS)

    Liu, Jiangguo; Tavener, Simon; Wang, Zhuoran

    2018-04-01

    This paper investigates the lowest-order weak Galerkin finite element method for solving the Darcy equation on quadrilateral and hybrid meshes consisting of quadrilaterals and triangles. In this approach, the pressure is approximated by constants in element interiors and on edges. The discrete weak gradients of these constant basis functions are specified in local Raviart-Thomas spaces, specifically RT0 for triangles and unmapped RT[0] for quadrilaterals. These discrete weak gradients are used to approximate the classical gradient when solving the Darcy equation. The method produces continuous normal fluxes and is locally mass-conservative regardless of mesh quality, and it attains optimal-order convergence in pressure, velocity, and normal flux when the quadrilaterals are asymptotically parallelograms. Implementation is straightforward and results in symmetric positive-definite discrete linear systems. We present numerical experiments and comparisons with other existing methods.
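
    For reference, the defining relation of the discrete weak gradient used in lowest-order weak Galerkin methods can be written as follows (notation chosen here; see the paper for the precise local spaces on quadrilaterals). For a weak function v = {v0, vb}, with v0 constant in the element interior E and vb constant on each edge, the discrete weak gradient lies in the local Raviart-Thomas space RT0(E) and satisfies

      \[
        \int_E (\nabla_w v)\cdot \mathbf{q}\, dA
          \;=\; -\int_E v_0\,(\nabla\cdot\mathbf{q})\, dA
          \;+\; \int_{\partial E} v_b\,(\mathbf{q}\cdot\mathbf{n})\, ds
          \qquad \text{for all } \mathbf{q}\in RT_0(E),
      \]

    and these weak gradients replace the classical gradient when the Darcy bilinear form is assembled element by element.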

  17. Advances and applications of ABCI

    NASA Astrophysics Data System (ADS)

    Chin, Y. H.

    1993-05-01

    ABCI (Azimuthal Beam Cavity Interaction) is a computer program which solves the Maxwell equations directly in the time domain when a Gaussian beam goes through an axi-symmetrical structure on or off axis. Many new features have been implemented in the new version of ABCI (presently version 6.6), including a 'moving mesh' and Napoly's method for the calculation of wake potentials. The mesh is now generated only for the part of the structure inside a window and moves together with the window frame. This moving mesh option reduces the number of mesh points considerably, and very fine meshes can be used. Napoly's integration method makes it possible to compute wake potentials in a structure such as a collimator, where parts of the cavity material are at smaller radii than that of the beam pipes, in such a way that the contribution from the beam pipes vanishes. For the monopole wake potential, ABCI can be applied even to structures with unequal beam pipe radii. Furthermore, the radial mesh size can be varied over the structure, permitting the use of a fine mesh only where it is actually needed. With these improvements, the program allows computation of wake fields for structures far too complicated for older codes. Plots of a cavity shape and wake potentials can be obtained in the form of a Top Drawer file. The program can also calculate and plot the impedance of a structure and/or the distribution of the deposited energy as a function of frequency from Fourier transforms of the wake potentials. Its usefulness is illustrated by showing some numerical examples.

  18. Clinical observation of a modified surgical method: posterior vaginal mesh suspension of female rectocele with intractable constipation.

    PubMed

    Hong, Ling; Li, Huai-Fang; Sun, Jing; Zhu, Jian-Long; Ai, Gui-hai; Li, Li; Zhang, Bo; Chi, Feng-li; Tong, Xiao-Wen

    2012-01-01

    To explore the feasibility and effectiveness of a modified posterior vaginal mesh suspension method in treating female rectocele with intractable constipation. Descriptive study (Canadian Task Force classification II-3). The study was performed in the Study Center for Female Pelvic Dysfunction Disease, Department of Obstetrics and Gynecology, Tongji Hospital, Tongji University School of Medicine, Shanghai, China. The Study Center includes 15 physicians, most of whom have received advanced training in pelvic floor dysfunctional disease and can skillfully perform many types of operations in patients with such disease. Almost 1500 operations to treat pelvic floor dysfunctional disease are performed every year at the center. Thirty-six women with rectocele and intractable constipation. Posterior vaginal mesh suspension. All patients were followed up for 15 to 36 months. In 29 patients, the condition was cured completely; in 5 patients it had improved; and in 2 patients, the intervention had no effect. Counting both cured and improved patients, the overall effectiveness rate was 94.4%. Posterior vaginal mesh suspension is an effective, harmless, and convenient method for treatment of female rectocele with intractable constipation. It has positive short-term curative effects, with few complications and sequelae. However, the long-term effects of posterior vaginal mesh suspension should be evaluated. Copyright © 2012 AAGL. Published by Elsevier Inc. All rights reserved.

  19. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-13

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium’s fine-scale heterogeneity and the source’s frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.
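
    In formula form (with notation chosen here, not necessarily the authors'), the constant-density acoustic Helmholtz equation reads

      \[
        \Delta u(\mathbf{x}) + \frac{\omega^2}{c^2(\mathbf{x})}\, u(\mathbf{x}) = f(\mathbf{x}),
      \]

    and if the fine-mesh discretization is written as A u = f while the multiscale basis functions are collected column-wise in a sparse matrix \Phi, the coarse problem solved instead is the Galerkin projection

      \[
        (\Phi^{T} A\, \Phi)\, u_H = \Phi^{T} f, \qquad u \approx \Phi\, u_H,
      \]

    whose dimension is the number of coarse degrees of freedom rather than the fine-mesh size.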

  1. Semi-automatic sparse preconditioners for high-order finite element methods on non-uniform meshes

    NASA Astrophysics Data System (ADS)

    Austin, Travis M.; Brezina, Marian; Jamroz, Ben; Jhurani, Chetan; Manteuffel, Thomas A.; Ruge, John

    2012-05-01

    High-order finite elements often have a higher accuracy per degree of freedom than the classical low-order finite elements. However, in the context of implicit time-stepping methods, high-order finite elements present challenges to the construction of efficient simulations due to the high cost of inverting the denser finite element matrix. There are many cases where simulations are limited by the memory required to store the matrix and/or the algorithmic components of the linear solver. We are particularly interested in preconditioned Krylov methods for linear systems generated by discretization of elliptic partial differential equations with high-order finite elements. Using a preconditioner like Algebraic Multigrid can be costly in terms of memory due to the need to store matrix information at the various levels. We present a novel method for defining a preconditioner for systems generated by high-order finite elements that is based on a much sparser system than the original high-order finite element system. We investigate the performance for non-uniform meshes on a cube and a cubed sphere mesh, showing that the sparser preconditioner is more efficient and uses significantly less memory. Finally, we explore new methods to construct the sparse preconditioner and examine their effectiveness for non-uniform meshes. We compare results to a direct use of Algebraic Multigrid as a preconditioner and to a two-level additive Schwarz method.
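
    The general idea of preconditioning a high-order system with a much sparser surrogate operator can be sketched as follows. This uses SciPy's conjugate-gradient solver with an incomplete-LU factorization of the sparse surrogate as a stand-in for the algebraic multigrid setup discussed above; the matrices are toy placeholders, not finite element operators.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      def solve_with_sparse_preconditioner(A_high_order, A_sparse, b):
          """Solve A_high_order x = b with CG, preconditioned by a factorization of
          the sparser surrogate A_sparse (ILU here; the paper discusses AMG)."""
          ilu = spla.spilu(sp.csc_matrix(A_sparse))
          M = spla.LinearOperator(A_high_order.shape, matvec=ilu.solve)
          x, info = spla.cg(A_high_order, b, M=M)
          if info != 0:
              raise RuntimeError(f"CG did not converge (info={info})")
          return x

      # Toy usage: a 1D Laplacian as the "sparse" operator and a slightly shifted
      # copy standing in for the denser high-order system.
      n = 200
      lap = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
      A_ho = (lap + 0.01 * sp.eye(n)).tocsr()
      x = solve_with_sparse_preconditioner(A_ho, lap, np.ones(n))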

  2. TAS: A Transonic Aircraft/Store flow field prediction code

    NASA Technical Reports Server (NTRS)

    Thompson, D. S.

    1983-01-01

    A numerical procedure has been developed that has the capability to predict the transonic flow field around an aircraft with an arbitrarily located, separated store. The TAS code, the product of a joint General Dynamics/NASA ARC/AFWAL research and development program, will serve as the basis for a comprehensive predictive method for aircraft with arbitrary store loadings. This report describes the numerical procedures employed to simulate the flow field around a configuration of this type. The validity of TAS code predictions is established by comparison with existing experimental data. In addition, future areas of development of the code are outlined. A brief description of code utilization is also given in the Appendix. The aircraft/store configuration is simulated using a mesh embedding approach. The computational domain is discretized by three meshes: (1) a planform-oriented wing/body fine mesh, (2) a cylindrical store mesh, and (3) a global Cartesian crude mesh. This embedded mesh scheme enables simulation of stores with fins of arbitrary angular orientation.

  3. Single fiber model of particle retention in an acoustically driven porous mesh.

    PubMed

    Grossner, Michael T; Penrod, Alan E; Belovich, Joanne M; Feke, Donald L

    2003-03-01

    A method for the capture of small particles (tens of microns in diameter) from a continuously flowing suspension has recently been reported. This technique relies on a standing acoustic wave resonating in a rectangular chamber filled with a high-porosity mesh. Particles are retained in this chamber via a complex interaction between the acoustic field and the porous mesh. Although the mesh has a pore size two orders of magnitude larger than the particle diameter, collection efficiencies of 90% have been measured. A mathematical model has been developed to understand the experimentally observed phenomena and to be able to predict filtration performance. By examining a small region (a single fiber) of the porous mesh, the model has duplicated several experimental events such as the focusing of particles near an element of the mesh and the levitation of particles within pores. The single-fiber analysis forms the basis of modeling the overall performance of the particle filtration system. Copyright 2002 Elsevier Science B.V.

  4. An Efficient Radial Basis Function Mesh Deformation Scheme within an Adjoint-Based Aerodynamic Optimization Framework

    NASA Astrophysics Data System (ADS)

    Poirier, Vincent

    Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method, but at the cost of introducing errors into the parameterization, since the exact displacement of all surface points is no longer recovered. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining the lift of a wing-body configured Boeing-747 and an Onera-M6 wing. As well, an inverse pressure design is executed on the Onera-M6 wing and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.
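
    A minimal numpy sketch of the basic RBF mesh-movement step follows: radial-basis weights are fitted to the prescribed displacements of a (possibly reduced) set of surface control points, and the interpolant is then evaluated at the volume nodes. The Wendland C2 kernel and all names are illustrative choices, not the thesis's exact scheme, and the secondary corrective movement is omitted.

      import numpy as np

      def rbf_kernel(r, radius=1.0):
          """Wendland C2 compactly supported kernel, a common choice for mesh movement."""
          xi = np.clip(r / radius, 0.0, 1.0)
          return (1.0 - xi) ** 4 * (4.0 * xi + 1.0)

      def rbf_mesh_deformation(surface_pts, surface_disp, volume_pts, radius=1.0):
          """Propagate surface displacements to volume mesh points via RBF interpolation.

          surface_pts  : (ns, 3) control-point coordinates (possibly a reduced subset)
          surface_disp : (ns, 3) prescribed displacements at those points
          volume_pts   : (nv, 3) interior mesh node coordinates
          """
          # Interpolation system: Phi_ss w = d (one right-hand side per coordinate).
          d_ss = np.linalg.norm(surface_pts[:, None, :] - surface_pts[None, :, :], axis=-1)
          weights = np.linalg.solve(rbf_kernel(d_ss, radius), surface_disp)

          # Evaluate the interpolant at the volume nodes: Phi_vs w.
          d_vs = np.linalg.norm(volume_pts[:, None, :] - surface_pts[None, :, :], axis=-1)
          return rbf_kernel(d_vs, radius) @ weights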

  5. Graded meshes in bio-thermal problems with transmission-line modeling method.

    PubMed

    Milan, Hugo F M; Carvalho, Carlos A T; Maia, Alex S C; Gebremedhin, Kifle G

    2014-10-01

    In this study, the transmission-line modeling (TLM) method applied to bio-thermal problems was improved by incorporating several novel computational techniques, including the application of graded meshes, which made the computation 9 times faster and required only a fraction (16%) of the computational resources used by regular meshes in analyzing heat flow through heterogeneous media. Graded meshes, unlike regular meshes, allow heat sources to be modeled in all segments of the mesh. A new boundary condition that considers thermal properties, and thus results in a more realistic modeling of complex problems, is introduced. Also, a new way of calculating an error parameter is introduced. The calculated temperatures between nodes were compared against results obtained from the literature and agreed within less than 1% difference. It is reasonable, therefore, to conclude that the improved TLM model described herein has great potential for modeling heat transfer in biological systems. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Stability of phases of a square-well fluid within superposition approximation

    NASA Astrophysics Data System (ADS)

    Piasecki, Jarosław; Szymczak, Piotr; Kozak, John J.

    2013-04-01

    The analytic and numerical methods introduced previously to study the phase behavior of hard sphere fluids starting from the Yvon-Born-Green (YBG) equation under the Kirkwood superposition approximation (KSA) are adapted to the square-well fluid. We are able to show conclusively that the YBG equation under the KSA closure when applied to the square-well fluid: (i) predicts the existence of an absolute stability limit corresponding to freezing where undamped oscillations appear in the long-distance behavior of correlations, (ii) in accordance with earlier studies reveals the existence of a liquid-vapor transition by the appearance of a "near-critical region" where monotonically decaying correlations acquire very long range, although the system never loses stability.

  7. A unified monolithic approach for multi-fluid flows and fluid-structure interaction using the Particle Finite Element Method with fixed mesh

    NASA Astrophysics Data System (ADS)

    Becker, P.; Idelsohn, S. R.; Oñate, E.

    2015-06-01

    This paper describes a strategy to solve multi-fluid and fluid-structure interaction (FSI) problems using Lagrangian particles combined with a fixed finite element (FE) mesh. Our approach is an extension of the fluid-only PFEM-2 (Idelsohn et al., Eng Comput 30(2):2-2, 2013; Idelsohn et al., J Numer Methods Fluids, 2014) which uses explicit integration over the streamlines to improve accuracy. As a result, the convective term does not appear in the set of equations solved on the fixed mesh. Enrichments in the pressure field are used to improve the description of the interface between phases.

  8. A network medicine approach to quantify distance between hereditary disease modules on the interactome

    NASA Astrophysics Data System (ADS)

    Caniza, Horacio; Romero, Alfonso E.; Paccanaro, Alberto

    2015-12-01

    We introduce a MeSH-based method that accurately quantifies similarity between heritable diseases at molecular level. This method effectively brings together the existing information about diseases that is scattered across the vast corpus of biomedical literature. We prove that sets of MeSH terms provide a highly descriptive representation of heritable disease and that the structure of MeSH provides a natural way of combining individual MeSH vocabularies. We show that our measure can be used effectively in the prediction of candidate disease genes. We developed a web application to query more than 28.5 million relationships between 7,574 hereditary diseases (96% of OMIM) based on our similarity measure.
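
    As a schematic stand-in for the similarity measure (the actual measure exploits the MeSH hierarchy and information content, which this sketch does not), two diseases can be compared as weighted sets of MeSH terms; all names below are hypothetical.

      import math
      from collections import Counter

      def idf_weights(disease_to_terms):
          """Inverse document frequency of each MeSH term over all diseases,
          a crude stand-in for information content."""
          n = len(disease_to_terms)
          df = Counter(t for terms in disease_to_terms.values() for t in set(terms))
          return {t: math.log(n / df[t]) for t in df}

      def disease_similarity(terms_a, terms_b, idf):
          """Weighted-overlap (cosine-style) similarity between two MeSH term sets."""
          a, b = set(terms_a), set(terms_b)
          overlap = sum(idf.get(t, 0.0) ** 2 for t in a & b)
          norm_a = math.sqrt(sum(idf.get(t, 0.0) ** 2 for t in a))
          norm_b = math.sqrt(sum(idf.get(t, 0.0) ** 2 for t in b))
          return overlap / (norm_a * norm_b) if norm_a and norm_b else 0.0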

  9. Development of quadrilateral spline thin plate elements using the B-net method

    NASA Astrophysics Data System (ADS)

    Chen, Juan; Li, Chong-Jun

    2013-08-01

    The quadrilateral discrete Kirchhoff thin plate bending element DKQ is based on the isoparametric element Q8; however, the accuracy of isoparametric quadrilateral elements drops significantly under mesh distortion. In a previous work, we constructed an 8-node quadrilateral spline element L8 using the triangular area coordinates and the B-net method, which is insensitive to mesh distortions and possesses second-order completeness in the Cartesian coordinates. In this paper, a thin plate spline element is developed based on the spline element L8 and the refined technique. Numerical examples show that the present element indeed possesses higher accuracy than the DKQ element for distorted meshes.

  10. Grid adaption using Chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1993-01-01

    The objective of this paper is to perform grid adaptation using composite over-lapping meshes in regions of large gradient to capture the salient features accurately during computation. The Chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using tri-linear interpolation. Applications to the Euler equations for shock reflections and to a shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well resolved.
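
    Inter-grid communication in overset methods of this kind boils down to interpolating donor-grid values at receptor points; a minimal sketch of tri-linear interpolation on a uniform donor grid is shown below (an illustration only, not the Chimera implementation; the point is assumed to lie strictly inside the grid).

      import numpy as np

      def trilinear_interpolate(field, x, y, z, dx=1.0):
          """Trilinear interpolation of a uniform-grid field at point (x, y, z)."""
          i, j, k = (int(np.floor(c / dx)) for c in (x, y, z))
          fx, fy, fz = x / dx - i, y / dx - j, z / dx - k
          c000, c100 = field[i, j, k],     field[i+1, j, k]
          c010, c110 = field[i, j+1, k],   field[i+1, j+1, k]
          c001, c101 = field[i, j, k+1],   field[i+1, j, k+1]
          c011, c111 = field[i, j+1, k+1], field[i+1, j+1, k+1]
          c00 = c000 * (1 - fx) + c100 * fx
          c10 = c010 * (1 - fx) + c110 * fx
          c01 = c001 * (1 - fx) + c101 * fx
          c11 = c011 * (1 - fx) + c111 * fx
          c0 = c00 * (1 - fy) + c10 * fy
          c1 = c01 * (1 - fy) + c11 * fy
          return c0 * (1 - fz) + c1 * fz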

  11. On randomized algorithms for numerical solution of applied Fredholm integral equations of the second kind

    NASA Astrophysics Data System (ADS)

    Voytishek, Anton V.; Shipilov, Nikolay M.

    2017-11-01

    In this paper, the systematization of numerical (computer-implemented) randomized functional algorithms for approximating the solution of a Fredholm integral equation of the second kind is carried out. Three types of such algorithms are distinguished: the projection, the mesh and the projection-mesh methods. The possibilities of using these algorithms to solve practically important problems are investigated in detail. The disadvantages of the mesh algorithms, related to the necessity of calculating values of the kernels of the integral equations at fixed points, are identified. In practice, these kernels have integrable singularities, and calculation of their values is impossible. Thus, for applied problems related to solving Fredholm integral equations of the second kind, it is expedient to use not the mesh algorithms but the projection and projection-mesh randomized algorithms.
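
    As an illustration of the kind of randomized functional algorithm discussed above, the sketch below estimates u(x0) for u(x) = f(x) + lam * ∫_0^1 K(x, y) u(y) dy with a generic Neumann-series random-walk estimator. The kernel, domain, sampling density and absorption probability are assumptions chosen here; this is not the authors' specific projection or projection-mesh construction.

      import random

      def mc_fredholm_estimate(x0, f, K, lam=0.5, n_walks=100_000, p_absorb=0.5):
          """Unbiased Monte Carlo estimate of u(x0) via the Neumann series, using a
          uniform transition density on [0, 1] and absorption with probability
          p_absorb at each step."""
          total = 0.0
          for _ in range(n_walks):
              x, weight, contrib = x0, 1.0, f(x0)
              while random.random() > p_absorb:
                  y = random.random()                        # uniform transition density q(y) = 1
                  weight *= lam * K(x, y) / (1.0 - p_absorb)
                  contrib += weight * f(y)
                  x = y
              total += contrib
          return total / n_walks

      # Toy usage: K(x, y) = x*y, f(x) = 1, lam = 0.5 has the exact solution
      # u(x) = 1 + 3x/10, so u(0.5) = 1.15.
      u_est = mc_fredholm_estimate(0.5, f=lambda x: 1.0, K=lambda x, y: x * y)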

  12. Recent Enhancements To The FUN3D Flow Solver For Moving-Mesh Applications

    NASA Technical Reports Server (NTRS)

    Biedron, Robert T.; Thomas, James L.

    2009-01-01

    An unsteady Reynolds-averaged Navier-Stokes solver for unstructured grids has been extended to handle general mesh movement involving rigid, deforming, and overset meshes. Mesh deformation is achieved through analogy to elastic media by solving the linear elasticity equations. A general method for specifying the motion of moving bodies within the mesh has been implemented that allows for inherited motion through parent-child relationships, enabling simulations involving multiple moving bodies. Several example calculations are shown to illustrate the range of potential applications. For problems in which an isolated body is rotating at a fixed rate, a noninertial reference-frame formulation is available. An example calculation for a tilt-wing rotor is used to demonstrate that the time-dependent moving grid and noninertial formulations produce the same results in the limit of zero time-step size.
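
    The inherited-motion idea can be illustrated with homogeneous transforms: a child's motion is specified relative to its parent, and its world motion is obtained by composing transforms up the parent chain. The sketch below is a generic illustration; the body names and transform choices are hypothetical and unrelated to FUN3D's actual input format.

      import numpy as np

      def rotation_z(angle_rad):
          """Homogeneous 4x4 rotation about the z-axis."""
          c, s = np.cos(angle_rad), np.sin(angle_rad)
          return np.array([[c, -s, 0, 0],
                           [s,  c, 0, 0],
                           [0,  0, 1, 0],
                           [0,  0, 0, 1]])

      def translation(t):
          T = np.eye(4)
          T[:3, 3] = t
          return T

      def world_transform(body, local_transforms, parents):
          """Compose a body's world transform by walking up the parent chain,
          so child motion is inherited from (expressed relative to) its parent."""
          T = np.eye(4)
          while body is not None:
              T = local_transforms[body] @ T
              body = parents[body]
          return T

      # Example: a flap rotating relative to a translating wing.
      local = {"wing": translation([10.0, 0.0, 0.0]), "flap": rotation_z(np.deg2rad(15))}
      parents = {"wing": None, "flap": "wing"}
      flap_world = world_transform("flap", local, parents)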

  13. Grid adaptation using chimera composite overlapping meshes

    NASA Technical Reports Server (NTRS)

    Kao, Kai-Hsiung; Liou, Meng-Sing; Chow, Chuen-Yen

    1994-01-01

    The objective of this paper is to perform grid adaptation using composite overlapping meshes in regions of large gradient to accurately capture the salient features during computation. The chimera grid scheme, a multiple overset mesh technique, is used in combination with a Navier-Stokes solver. The numerical solution is first converged to a steady state based on an initial coarse mesh. Solution-adaptive enhancement is then performed by using a secondary fine grid system which oversets on top of the base grid in the high-gradient region, but without requiring the mesh boundaries to join in any special way. Communications through boundary interfaces between those separated grids are carried out using trilinear interpolation. Application to the Euler equations for shock reflections and to shock wave/boundary layer interaction problem are tested. With the present method, the salient features are well-resolved.

  15. Boundary element based multiresolution shape optimisation in electrostatics

    NASA Astrophysics Data System (ADS)

    Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan

    2015-09-01

    We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh with increasingly finer control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.

  16. An engineering closure for heavily under-resolved coarse-grid CFD in large applications

    NASA Astrophysics Data System (ADS)

    Class, Andreas G.; Yu, Fujiang; Jordan, Thomas

    2016-11-01

    Even though high performance computing allows a very detailed description of a wide range of scales in scientific computations, engineering simulations used for design studies commonly resolve only the large scales, thus speeding up simulation time. The coarse-grid CFD (CGCFD) methodology is developed for flows with repeated flow patterns, as often observed in heat exchangers or porous structures. It is proposed to use the inviscid Euler equations on a very coarse numerical mesh. This coarse mesh need not conform to the geometry in all details. To reinstate the physics on all smaller scales, inexpensive subgrid models are employed. Subgrid models are systematically constructed by analyzing well-resolved generic representative simulations. By varying the flow conditions in these simulations, correlations are obtained. These comprise, for each individual coarse-mesh cell, a volume force vector and a volume porosity. Moreover, for all vertices, surface porosities are derived. CGCFD is related to the immersed boundary method, as both exploit volume forces and non-body-conformal meshes. Yet CGCFD differs with respect to the coarser mesh and the use of the Euler equations. We describe the methodology based on a simple test case and the application of the method to a 127-pin wire-wrap fuel bundle.

  17. Evolution of the concentration PDF in random environments modeled by global random walk

    NASA Astrophysics Data System (ADS)

    Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter

    2013-04-01

    The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and speeds up the computation by orders of magnitude. The approach is illustrated for the transport of passive scalars in heterogeneous aquifers, with hydraulic conductivity modeled as a random field.
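
    The computational advantage of the global random walk can be seen in a one-dimensional diffusion sketch: all particles occupying a lattice site are redistributed in a single binomial draw, so the cost per step scales with the number of lattice sites rather than with the number of particles. This is only an illustration of the lattice-based bookkeeping under assumptions chosen here, not the full GRW-PDF algorithm in physical and composition space.

      import numpy as np

      def grw_diffusion_step(counts, rng):
          """One global-random-walk step for unbiased diffusion on a 1D lattice.
          Each of the n particles at a site jumps left or right with probability 1/2,
          but all n jumps are realized with a single binomial draw."""
          new_counts = np.zeros_like(counts)
          for i, n in enumerate(counts):
              if n == 0:
                  continue
              right = rng.binomial(n, 0.5)                  # one draw redistributes all n particles
              new_counts[min(i + 1, len(counts) - 1)] += right
              new_counts[max(i - 1, 0)] += n - right
          return new_counts

      rng = np.random.default_rng(0)
      counts = np.zeros(201, dtype=int)
      counts[100] = 10**7                                   # ten million particles in one cell
      for _ in range(500):
          counts = grw_diffusion_step(counts, rng)          # cost ~ number of sites, not particles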

  18. Ligated Metal Clusters - Structures, Energy and Reactivity

    DTIC Science & Technology

    2016-04-01

    ...projection superposition approximation (PSA) algorithm through a more careful consideration of how to calculate cross sections for elongated molecules... superposition approximation (PSA) is now complete. We have made it available free of charge to the scientific community on a dedicated website at UCSB. We... by AFOSR.

  19. Multichannel Polarization-Controllable Superpositions of Orbital Angular Momentum States.

    PubMed

    Yue, Fuyong; Wen, Dandan; Zhang, Chunmei; Gerardot, Brian D; Wang, Wei; Zhang, Shuang; Chen, Xianzhong

    2017-04-01

    A facile metasurface approach is shown to realize polarization-controllable multichannel superpositions of orbital angular momentum (OAM) states with various topological charges. By manipulating the polarization state of the incident light, four kinds of superpositions of OAM states are realized using a single metasurface consisting of space-variant arrays of gold nanoantennas. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
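
    A generic polarization-controlled superposition of two OAM states can be written as (an illustrative form with notation chosen here, not the paper's exact expression)

      \[
        \psi(r,\phi) \;\propto\; \alpha\, e^{i\ell_1 \phi} + \beta\, e^{i\ell_2 \phi},
        \qquad |\alpha|^2 + |\beta|^2 = 1,
      \]

    where the complex amplitudes (α, β) are set by the polarization state of the incident light and ℓ1, ℓ2 are the topological charges imparted by the metasurface.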

  20. Towards hybrid mesh generation for realistic design environments

    NASA Astrophysics Data System (ADS)

    McMorris, Harlan Tom

    Two different techniques that allow hybrid mesh generation to be easily used in the design environment are presented. The purpose of this research is to allow hybrid meshes to be used during the design process, where the geometry is being varied. The first technique, modular hybrid mesh generation, allows for the replacement of portions of a geometry with a new design shape. The mesh is maintained for the portions of the geometry that have not changed during the design process. A new mesh is generated for the new part of the geometry and this piece is added to the existing mesh. The new mesh must match the remaining portions of the geometry, and the element sizes must match exactly across the interfaces. The second technique, hybrid mesh movement, is used when the basic geometry remains the same with only small variations to portions of the geometry. These small variations include changing the cross-section of a wing, twisting a blade or changing the length of some portion of the geometry. The mesh for the original geometry is moved onto the new geometry one step at a time, beginning with the curves of the surface, continuing with the surface mesh and ending with the interior points of the mesh. The validity of the hybrid mesh is maintained by applying corrections to the motion of the points. Finally, the quality of the new hybrid mesh is improved to ensure that it maintains the quality of the original hybrid mesh. Both design techniques are applied to typical example cases from the fields of turbomachinery, aerospace and offshore technology. The example test cases demonstrate the ability of the two techniques to reuse the majority of an existing hybrid mesh for typical design changes. Modular mesh generation is used to change the shape of a piece of a seafloor pipeline geometry to a completely different configuration. The hybrid mesh movement technique is used to change the twist of a turbomachinery blade, the tip clearance of a rotor blade and to simulate the aeroelastic bending of a wing. Finally, the movement technique is applied to an offshore application where the solution for the original configuration is used as a starting point for the solution for a new configuration. The application of both techniques shows that the methods can be a powerful addition to the design environment and will facilitate rapid turnaround when the design geometry changes.
